Artificial Intelligence and our Future

Guest contribution by Dr. Andrea Herrmann | 06.06.2019 | Software development

Artificial intelligence can make a lasting difference to our lives. In the future, not only manual but also intellectually demanding activities will be automated. Is artificial intelligence merely the continuation of a known development or will it revolutionize human life as such? An opinion.

Rationalisation as a motive

Steam-driven looms put countless weavers out of work and left their families hungry. The invention of sound film took away the livelihood of all the musicians who had previously accompanied silent films in theaters. If chatbots answer customers’ questions automatically, 80% of service staff can look for a new profession. And so on…

At the same time, this automation and digitisation has not brought higher profits to entrepreneurs alone. Increased productivity did not automatically lead to higher output and lower prices. But investment in the new technologies only paid off above a certain number of units. As a result, producers came under pressure to sell more, which they achieved partly through lower prices. Handmade goods were pushed into the niche of high-priced luxury items. Manufacturers then waged ruinous price wars against one another, and passed some of this pressure on to their remaining employees, their suppliers and their dealers. All in all, however, prices for daily necessities fell and the standard of living rose. Almost all of us have benefited from this development.

That is how rationalisation has proceeded so far. Above all, it was possible to automate manual activities and simple intellectual activities, such as looking up answers from a standard list of questions.

Now the hope is that networked means of production, autonomous systems and artificial intelligence (AI) will automate intellectually more demanding activities as well. Even the implementers of this development, the software developers, are not spared. Thanks to automatic speech analysis, requirements can be extracted from arbitrary text; a tool generates code from models at the push of a button; and the resulting software can be tested automatically. (In fact, it will be a while before this really works.) People still have to intervene for quality assurance and control, just like the few remaining technicians in automated production halls. But in principle, hardly any profession can stay as it is when, for example, image recognition classifies tumors better than a hospital physician who has been up all night, and robots can eventually take over even surgery and geriatric care.

As a lecturer, I am already slowly suffering the same fate as the theater musicians. Online videos let me reach more learners, but nowadays videos can even be narrated by avatars. The trend is towards distance learning, and as the subject-matter expert I merely create the course material. A seminar that once took several days shrinks to a two-hour review session. Multiple-choice tests give learners feedback on whether they have understood the material correctly. Today even the highly qualified are relieved of work by machines, or digitalisation makes them more efficient and therefore less in demand.

Now artificial intelligence could be seen as a continuation of the previous development: more and more complex tasks can be performed by machines. So nothing new? No reason to get excited? After all, the world was not destroyed by the introduction of sound film.

New era through artificial intelligence

Unfortunately, AI could usher in a new era that we had better not enter. I used to think the main problem was the fallibility of software. Indeed, numerous current studies show that computers find it very, very difficult to simulate human intelligence profitably: they don’t really understand what they’re doing, they lack a view of and feel for the big picture, they have been trained with inadequate data, or they simply imitate human errors. On the other hand, we have always been accustomed to new technologies killing people. Despite all safety measures, people still die from electric shocks, smouldering fires and perhaps even electrosmog. Nevertheless, we accept these risks as a side effect of a useful invention from which we all benefit. We do not want to live without electricity, and we probably could not. Hardly anyone even thinks about it anymore; and if we did, we would weigh those losses against the great benefits of electricity and the deaths it prevents. Analogously, AI could also be something that kills people now and then. That is not the main problem.

It is about responsibility. So far, technology has only helped us do our jobs better: it provides tools for human work. In principle, I am still doing the same thing as twenty years ago, only faster and better. I no longer have to go to the library for research and haul heavy volumes around to copy journal articles page by page; I format and lay out my texts while writing (no typesetting is necessary anymore); and I can detect and correct spelling mistakes in no time without retyping the whole page. I don’t have to write mathematical formulas by hand in calligraphy and glue them into the right place on the paper. Hand-made presentation slides were not really easy to read either, but at least they were more efficient than blackboard writing, which had to be redone each time and cost a lot of lecture time. I can dictate texts into speech recognition and thus note down my thoughts more quickly. Thanks to technology, I can concentrate on content more than ever before.

Artificial intelligence, on the other hand, is something completely different. It makes autonomous decisions and thus assumes responsibility. The question of whether it can do that is less important than the fundamental ethical question of whether we want to, or may, hand over responsibility to machines. That is the central question we must ask ourselves.

The autonomy of AI seems useful at first, because in practice it means the AI does not have to be reprogrammed for every application. It needs no case-specific algorithm but uses data to work out how to proceed: it analyses the results of expert work and imitates them. Apart from procuring this data, there is little initial effort, because the system learns on its own. Afterwards, we are sometimes surprised by the results. Yet the error usually lies less in the AI than in the data fed into it. To err is human, and so the AI will err too. But since it can process more information more objectively in less time, it will quickly accumulate the experience of several professional lifetimes.

Do we want to hand over responsibilities to AI?

But we give up responsibility when AI decides autonomously. I don’t believe that AI will intentionally seize world domination, but sovereignty over our world will slip away from us, or we will carelessly hand it over to save time and money. At some point we will no longer know how to pay an invoice manually; perhaps the bank will no longer even support it. Students will receive detailed machine-written assessments of their master’s theses, but in case of misunderstandings there will be no human contact for complaints. And what could that contact say anyway? He has no insight into the AI’s way of thinking and must rely on the objectivity of the machine. The stock market collapses because all trading bots switch to crisis mode at the same moment. Two trains head towards each other for hours, but nobody can stop them. Wars are waged because of software errors.

I am not concerned here with the financial damage involved. What is destroyed, rather, is the network of people that has formed our society until now. People produce goods for people, people learn from people, people help people. That connects us all with each other. That will then stop. Instead, devices network to form the Internet of Things. Wherever machines serve us, whether well or badly, the interpersonal Internet of Humans disintegrates. No one is responsible to anyone anymore. What are we still doing then? Sitting alone in a one-room apartment all day, being looked after by machines? Are interpersonal relationships simulated virtually? Then we are living in the Matrix!

I like going shopping because I come into contact with people. A massage comes with a nice chat; am I supposed to make small talk with a massage robot? And if I no longer make decisions and take responsibility at work, but only reboot the correction machines, what is the point? Or a search engine informs me about a new version of a standard, another piece of software automatically updates the corresponding course documentation, and I merely maintain the software tools?

In principle, the latest EU guidelines on AI argue in the same direction: AI should be lawful, ethical and robust. The four ethical principles mentioned are

  • respect for human autonomy,
  • prevention of harm,
  • fairness and
  • explainability.

It is expressly emphasised that AI should support people in their decisions while preserving human autonomy. I very much hope that this will be taken into account at all times, both during development and when deciding whether to use AI.

Dr. Andrea Herrmann

Dr. Andrea Herrmann has been a freelance trainer and consultant for software engineering since 2012. She has more than 20 years of professional experience in practice and research, including substitute and visiting professorships. She has published more than 100 professional articles and regularly speaks at conferences. Dr. Herrmann is an official supporter of the IREB Board and co-author of the IREB curriculum and handbook for CPRE Advanced Level certification in requirements management.
