The best chess players, Go masters and Jeopardy champions have gone up against the artificial intelligence of machines and programs – called Deep Blue, AlphaGo and Watson – and stood no chance. That, of course, is not why analysts at Gartner put "artificial intelligence" (AI) and self-learning machines at the top of their list of trends for 2017. Such assistance systems are no longer limited to playful recreation: artificial intelligence frees knowledge from the old-fashioned silo mentality of learning and makes expertise available at any time and in any place – quickly and easily.
The areas of application for AI are diverse: AI can automatically process unstructured data, such as images or videos, and derive insights from it. Handwriting, voice and face recognition are based on artificial intelligence just as much as autonomous driving or machine translation. Watson, AlphaGo and others can converse with callers as well as any human call center employee; with their help, doctors can speed up their diagnostics, and bank advisors use smart programs to assess the creditworthiness of their customers. Soon, Watson will be handling damage assessments for a Japanese insurer. According to a study by TNS Infratest, German decision-makers expect the degree of automation through intelligent machines and robots used in companies to increase from 20 to more than 85 percent over the next ten years. Machine learning is also becoming part of our everyday lives: assistance systems such as Apple's Siri or Google Now already rely on AI.
Only training leads to cleverness
What are the learning processes of machines based on? How can their intelligence quotient be raised? When are they truly smart? A prerequisite for artificial intelligence is that programs can learn and thus solve problems independently; it is not enough to program a computer for just one specific application area. Machine learning is inspired by human thought processes. Layers of artificial neurons are linked with each other – to the layman, much like the networked nerve cells in our brain – which is why we refer to them as artificial neural networks. It takes a lot of training, however, to make such machines smart. The computers are fed huge amounts of data. AlphaGo, for example, first analyzed 100,000 games of amateurs and played millions of times against itself before the machine was ready to challenge the world's best Go player – and promptly beat him.
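The principle behind this training can be shown in miniature. The sketch below is a deliberately simplified illustration, not the technique behind AlphaGo or Watson: a single artificial neuron (a perceptron) with invented example data learns the logical AND function. Repeated exposure to the examples nudges its weights until its answers match the desired outputs – the same "learning from data" idea, stripped to its bare minimum.

```python
# Minimal illustration of machine learning: one artificial neuron
# (a perceptron) learns the logical AND function from examples.
# All names and numbers here are illustrative.

training_data = [
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, passed through a simple threshold.
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# "Training": show the examples repeatedly and adjust the weights
# a little whenever the neuron's answer is wrong.
for epoch in range(20):
    for x, target in training_data:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in training_data])  # → [0, 0, 0, 1]
```

Real neural networks stack many layers of such neurons and train on millions of examples, but the core loop – predict, compare with the desired answer, adjust – remains the same.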
Humans as teachers
Hospitals and medical practices are important areas of application for AI. Here, too, humans act as the machines' teachers. Experienced radiologists show medical computers how to interpret magnetic resonance imaging (MRI) scans, for example. Once the machine has been trained sufficiently, it can locate tumors faster and more precisely than any doctor, because it can factor more relevant data into its calculations. Whether self-learning machines are really clever depends on whether they can discover new links, patterns and solutions by themselves.
Often it is difficult to understand how a machine arrives at its solution, which is why many people feel uneasy about smart machines. According to an article in the German newspaper 'Die Zeit', even the digital avant-garde shares the concerns of many laymen: "In fact, thinkers such as physicist Stephen Hawking warn about the potentially destructive force of artificial intelligence if it is used incorrectly. Tesla founder Elon Musk views an unregulated development of artificial intelligence as being 'our biggest existential threat', and Bill Gates states that he does not understand people who are not concerned about the rapid development of AI." Moreover, 58 percent of the respondents to a TNS Infratest study are in favor of imposing clear limits on the research and development of artificial intelligence.
Will robots take our jobs?
For a long time, euphoria about what is feasible dominated the discussion; only gradually is the impact of artificial intelligence on the social, cultural and political environment being examined. The fears are reinforced almost daily: in social media as well as the traditional media, the question is being asked whether the rise of the robots will cost us our jobs. With regard to autonomous driving, many people also find it difficult to accept the idea that in the future all decisions will be left to machines or assistance systems – especially as, under current law, the driver still bears full responsibility. On the other hand, co-existence with machines could also have positive aspects and could, for example, simply be fun.
Artificial intelligence can also be prejudiced
In her article, Kate Crawford, Visiting Professor at MIT and Principal Researcher at Microsoft Research, focuses on risks of a completely different nature. Contrary to conventional thinking, artificial intelligence is no more free from prejudice than the human brain. Predictive-policing systems built on past crime data, for example, lead police to disproportionately monitor disadvantaged groups in society. And according to a May 2016 study by the investigative journalists of ProPublica, the algorithms often used – but not publicly disclosed – in U.S. courts mistakenly predict recidivism for black defendants almost twice as often as for white ones. Kate Crawford comments: "Autonomous systems will change jobs, infrastructures and schools. We need to ensure that such changes are beneficial to us before they influence our daily lives even more."
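The kind of disparity ProPublica measured can be made concrete with a small calculation. The sketch below uses invented numbers, not ProPublica's actual data: for each group, it computes the false positive rate – the share of defendants who did not reoffend but were nevertheless labeled high risk. A score can look accurate overall while this error rate differs sharply between groups.

```python
# Hypothetical illustration (invented numbers, not ProPublica's data)
# of a group-wise false positive rate for a risk score.
# Each record: (group, labeled_high_risk, actually_reoffended).
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", False, False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", False, False), ("B", False, False), ("B", True,  True),
]

def false_positive_rate(group):
    # Among defendants of this group who did NOT reoffend:
    # what share was nevertheless labeled high risk?
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for g in ("A", "B"):
    print(g, false_positive_rate(g))  # → A 0.4, B 0.2
```

In this invented example, group A is wrongly flagged twice as often as group B – precisely the kind of imbalance that stays invisible as long as the algorithm and its error rates are not publicly disclosed.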