Realistic? Sure, of course! But certainly not revolutionary.
Every Google search query generates an individually assembled web page with personalized advertising in real time. Why? Because the search immediately triggers an auction to display an ad matching the query in the sidebar next to the results. With the help of machine learning, hundreds of thousands of profiles are matched against hundreds of thousands of products – which is highly sophisticated, above all because of the speed: nobody wants to wait for their search results just because the matching advertising is still being produced. The real challenges surrounding artificial intelligence lie elsewhere.
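Roughly – and purely as an illustration, not as Google's actual system – such an auction can be sketched as ranking candidate ads by bid times a machine-learned click probability. All names and numbers below are invented:

```python
# Minimal sketch of a real-time ad auction: rank candidate ads by
# expected revenue (bid * predicted click probability). All numbers
# and names are made up for illustration.

def predicted_ctr(user_profile, ad):
    """Toy stand-in for a machine-learned click-through-rate model:
    score an ad by how many of the user's interests it matches."""
    overlap = len(user_profile["interests"] & ad["keywords"])
    return min(0.05 * (1 + overlap), 1.0)

def run_auction(user_profile, candidate_ads):
    """Pick the ad with the highest expected revenue per impression."""
    scored = sorted(
        candidate_ads,
        key=lambda ad: ad["bid"] * predicted_ctr(user_profile, ad),
        reverse=True,
    )
    return scored[0]

user = {"interests": {"running", "travel"}}
ads = [
    {"name": "shoe_ad",  "bid": 0.40, "keywords": {"running", "shoes"}},
    {"name": "hotel_ad", "bid": 0.90, "keywords": {"travel"}},
    {"name": "bank_ad",  "bid": 1.20, "keywords": {"finance"}},
]
print(run_auction(user, ads)["name"])  # -> hotel_ad
```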
Bringing machine learning to all the small devices around us in the Internet of Things – the keyword is edge computing. Data is collected from devices with limited computing, power, and storage resources. Some are so severely constrained, to save energy, that they cannot even process floating-point numbers. Such ultra-low-power devices are used, for example, as intelligent consignment notes in logistics.
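What integer-only computation looks like can be sketched as follows – a minimal example, assuming a small linear model is trained in floating point elsewhere and quantized to fixed-point before deployment; the scale and weights are illustrative, not the group's actual method:

```python
# Sketch: running a tiny learned model with integer arithmetic only,
# as a device without floating-point hardware would have to.

SCALE = 256  # fixed-point scale: value x is stored as round(x * SCALE)

# A linear model w.x + b, trained elsewhere in floating point ...
weights_float = [0.75, -0.31, 1.20]
bias_float = 0.05

# ... and quantized once before deployment.
weights_q = [round(w * SCALE) for w in weights_float]
bias_q = round(bias_float * SCALE * SCALE)  # matches the x*w scale

def predict_q(x_q):
    """Integer-only inference: inputs already quantized to SCALE."""
    acc = bias_q
    for w, x in zip(weights_q, x_q):
        acc += w * x            # each product is at scale SCALE*SCALE
    return acc // SCALE         # back to scale SCALE

# A sensor reading quantized on the device:
reading = [round(v * SCALE) for v in (1.0, 2.0, 0.5)]
print(predict_q(reading) / SCALE)  # float used here only for display
```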
Without analysis, data is of no use. On such a small device, machine learning first has to preprocess the sensor stream. The results are then sent continuously to a central computer, where a predictive model is learned from the many sensor streams – a model that can then be executed on the small device itself.
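A minimal sketch of this division of labor, with invented data and a deliberately simple stand-in for real model training: the device condenses its stream into summaries, the central computer fits a model, and the device executes the result:

```python
# Sketch of the edge-learning loop described above. Each device reduces
# its raw sensor stream to a compact summary; a central computer learns
# a predictive model from all summaries; the small fitted model is then
# executed back on the devices. Data and model choice are illustrative.
from statistics import mean, stdev

def summarize(stream, window=5):
    """On-device preprocessing: replace raw readings with per-window
    (mean, spread) pairs - far fewer bytes to transmit."""
    windows = [stream[i:i+window] for i in range(0, len(stream), window)]
    return [(mean(w), stdev(w)) for w in windows if len(w) == window]

def fit_threshold(summaries, labels):
    """Central computer: learn a single decision threshold on the
    spread feature (a stand-in for real model training)."""
    anomalous = [s for s, y in zip(summaries, labels) if y == 1]
    normal = [s for s, y in zip(summaries, labels) if y == 0]
    return (min(s[1] for s in anomalous) + max(s[1] for s in normal)) / 2

def on_device_predict(window_summary, threshold):
    """Back on the device: one integer-cheap comparison per window."""
    return window_summary[1] > threshold

raw = [10, 11, 10, 12, 11,  10, 30, 5, 28, 2]   # second window is erratic
summaries = summarize(raw)
threshold = fit_threshold(summaries, labels=[0, 1])
print([on_device_predict(s, threshold) for s in summaries])  # [False, True]
```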
“Hardly anyone understands the possibilities of artificial intelligence. And almost everyone imagines something wrong: robots!”
We’ve specifically developed solutions for using machine learning on very limited devices. We have even managed to get highly complex predictive models learned and executed on ultra-low-power devices – with theoretical guarantees! A breakthrough.
This way, the device can perform part of the data analysis on-site, so it ultimately sends less data, which in turn saves energy. Above all, though, we did not just program it haphazardly: it rests on a clear theoretical foundation, so users know how reliable and accurate the learned model is.
In astrophysics, for example, where we search for the needle in the haystack: how do we find the traces of extremely rare gamma rays in the vast amounts of data collected by Cherenkov telescopes, in order to reconstruct events that took place thousands of years ago outside our galaxy?
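As a hedged sketch of this needle-in-a-haystack task – with synthetic data standing in for real telescope features and an off-the-shelf classifier standing in for the actual analysis chain – rare-event separation might look like this:

```python
# Sketch of separating rare gamma events from an overwhelming hadronic
# background. Synthetic data stands in for real Cherenkov telescope
# features; the classifier choice is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_background, n_gamma = 50_000, 500      # heavy class imbalance

# Two toy image-shape features per event (e.g. shower width/length).
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_background, 2))
gammas = rng.normal(loc=[1.5, 1.5], scale=0.8, size=(n_gamma, 2))

X = np.vstack([background, gammas])
y = np.array([0] * n_background + [1] * n_gamma)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" keeps the rare class from being ignored.
clf = RandomForestClassifier(
    n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```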
Katharina Morik, 63, is a professor of computer science and heads the Department of Artificial Intelligence at the Technical University of Dortmund. She earned her doctorate in Hamburg in 1981 with research on belief systems in artificial intelligence, became a professor of machine learning at the Technical University of Berlin in 1988, and accepted a position at the University of Dortmund in 1991. Today she is one of the world’s most renowned experts in artificial intelligence, with a research focus on machine learning and data mining.
Sure: we’re examining traffic in Dublin and Warsaw. Sensors measure the traffic flow in the streets – in Warsaw also in the streetcars – and count the number of mobile phones registered in each radio cell. We have also evaluated recent social media posts: for example, when someone posts that they are stuck in a traffic jam, or when people on Twitter discuss upcoming concerts or sporting events that will have an impact on traffic.
Through machine learning, we are able to predict when and where jams will occur – and not just for passenger cars, but also for streetcars. This allows us to predict traffic anywhere in the city at any time and divert road users around the jams in real time.
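Purely as an illustration of such a prediction – synthetic data, not the Dublin or Warsaw systems – one can learn expected delay from sensor counts, time of day, and an event flag:

```python
# Sketch of city-wide jam prediction: learn travel delay from sensor
# counts, time of day and an "event nearby" flag gleaned from social
# media. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5000
hour = rng.integers(0, 24, n)                 # time of day
vehicles = rng.poisson(lam=80, size=n)        # loop-sensor count
event = rng.integers(0, 2, n)                 # concert/match nearby?

# Hidden "true" delay: rush hour, load and events all add delay.
rush = ((7 <= hour) & (hour <= 9)) | ((16 <= hour) & (hour <= 18))
delay = 2 + 5 * rush + 0.05 * vehicles + 4 * event + rng.normal(0, 1, n)

X = np.column_stack([hour, vehicles, event])
model = GradientBoostingRegressor().fit(X, delay)

# Predicted delay at 5 pm, busy road, concert nearby vs. quiet 3 am:
print(model.predict([[17, 120, 1], [3, 30, 0]]))
```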
Correct. This is why we are working on an auction-type model that distributes different route recommendations: each driver receives an individual route, so that the diversion itself does not create new traffic jams.
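One simplified way to picture such a distribution – not the actual auction mechanism – is to spread diverted drivers over alternative routes in proportion to each route's spare capacity. Routes and numbers below are invented:

```python
# Sketch: distribute diverted drivers over alternative routes in
# proportion to spare capacity, so no single diversion route clogs up.

def distribute(drivers, routes):
    """routes: {name: spare_capacity}. Returns {name: assigned drivers}."""
    total_spare = sum(routes.values())
    shares = {r: drivers * c // total_spare for r, c in routes.items()}
    # Hand out the rounding remainder to the routes with most capacity.
    left = drivers - sum(shares.values())
    for r in sorted(routes, key=routes.get, reverse=True):
        if left == 0:
            break
        shares[r] += 1
        left -= 1
    return shares

print(distribute(100, {"ring_road": 50, "riverside": 30, "old_town": 10}))
# -> {'ring_road': 56, 'riverside': 33, 'old_town': 11}
```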
Engineers plan processes according to the laws of nature; that is, under known conditions, the behavior of a machine or process is firmly defined. We, on the other hand, collect the data of each individual process – in Industry 4.0, for example, to forecast whether a production run will yield the desired result. Many factors are not fully known or controllable, such as the weather, room temperature, and humidity. How all these parameters affect a manufacturing process can only be recognized from the data of the individual processes. The model that is machine-learned from the data thus complements the engineering model and can readjust and control the process in real time.
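A minimal sketch of this complementarity, with synthetic data: the engineering model supplies a baseline from the controlled parameter, and a learned model corrects for the uncontrolled conditions:

```python
# Sketch of how a machine-learned model can complement an engineering
# model: the physics model gives a baseline prediction, and a model
# learned from process data corrects for uncontrolled factors such as
# room temperature and humidity. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 2000
power = rng.uniform(10, 20, n)          # controlled process parameter
room_temp = rng.normal(22, 3, n)        # uncontrolled
humidity = rng.uniform(30, 70, n)       # uncontrolled

def engineering_model(power):
    """Idealized physics: output depends on power alone."""
    return 3.0 * power

# The real outcome also drifts with the uncontrolled conditions.
outcome = engineering_model(power) - 0.4 * (room_temp - 22) \
          - 0.05 * (humidity - 50) + rng.normal(0, 0.5, n)

# Learn only the residual that the physics model cannot explain.
residual = outcome - engineering_model(power)
correction = Ridge().fit(np.column_stack([room_temp, humidity]), residual)

def hybrid_predict(power, room_temp, humidity):
    return engineering_model(power) + correction.predict(
        [[room_temp, humidity]])[0]

print(hybrid_predict(15.0, 28.0, 65.0))  # physics baseline, ML-adjusted
```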
Founded as a discipline at Dartmouth in 1956, artificial intelligence (AI) caused a public stir by winning games that require intelligence. AI focuses on games, planning, computer-assisted proofs, robotics, and the comprehension and generation of natural language, images, and videos. Machine learning can improve all these capabilities and is the fastest-growing sub-discipline of AI.
Areas? No – there are always ways to improve processes in real time: as described for Dublin in transport and logistics, but also in production, where data analysis helps to identify anomalies early and make quality forecasts that reduce resource consumption. Or in medicine, where we examine genetic data for therapy profiles, for example in the fight against neuroblastoma, a cancer that occurs mainly in very early childhood. Even in analog industries such as steel production, machine learning helps to improve processes.
For example, we looked at a specific type of furnace used in steelmaking and examined four targets: the tapping temperature of the pig iron, the carbon and phosphorus content of the melt at the end of the process, and the iron content of the slag. Using a machine learning method, we established a correlation between different combinations of multiple measured variables on the one hand and the targets on the other. As a result, the moment at which the melting process should be terminated can be determined even more precisely – which saves a lot of energy, and money, every single day!
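The correlation step can be pictured roughly like this – synthetic measurements stand in for the real furnace data, and a multi-output regressor stands in for the actual learning method; the specification thresholds are invented:

```python
# Sketch of the furnace example: learn all four targets at once from
# process measurements, then check the predicted melt state against a
# specification to decide when to stop. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 3000
minutes = rng.uniform(0, 40, n)          # time since the process began
oxygen = rng.uniform(100, 200, n)        # oxygen supplied so far
scrap = rng.uniform(5, 20, n)            # scrap charged

X = np.column_stack([minutes, oxygen, scrap])

# Four synthetic targets: temperature, %C and %P in the melt, %Fe in slag.
temp = 1300 + 8 * minutes + 0.3 * oxygen + rng.normal(0, 10, n)
carbon = np.clip(4.0 - 0.12 * minutes, 0.02, None) + rng.normal(0, 0.05, n)
phos = np.clip(0.10 - 0.0025 * minutes, 0.005, None) + rng.normal(0, 0.005, n)
slag_fe = 15 + 0.2 * minutes + rng.normal(0, 1, n)

Y = np.column_stack([temp, carbon, phos, slag_fe])
model = RandomForestRegressor(n_estimators=50).fit(X, Y)

# Stop as soon as the predicted melt meets the (invented) specification.
t_pred, c_pred, p_pred, _ = model.predict([[35.0, 180.0, 12.0]])[0]
print("stop" if t_pred > 1600 and c_pred < 0.1 and p_pred < 0.02
      else "keep going")
```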
Feature selection is itself part of machine learning, and there are algorithms for it. In this case, however, it was a newly designed feature that significantly improved the model – and for designing a new feature, only one thing helps: reflection.
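The difference between scoring raw features algorithmically and designing a new one can be illustrated as follows, assuming (hypothetically) that the useful signal is a ratio of two raw measurements:

```python
# Sketch: algorithmic feature scoring rates the raw measurements, but a
# hand-designed feature (here: a ratio of two raw ones) can carry far
# more signal than any raw input alone. Data is synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
n = 4000
a = rng.uniform(1, 10, n)
b = rng.uniform(1, 10, n)
noise = rng.normal(0, 0.1, n)

# The target truly depends on the ratio a/b, not on a or b alone.
y = a / b + noise

raw = np.column_stack([a, b])
designed = (a / b).reshape(-1, 1)

print("raw features :", mutual_info_regression(raw, y, random_state=0))
print("designed a/b :", mutual_info_regression(designed, y, random_state=0))
```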
Whether it’s a Google search, Amazon’s Alexa in the living room, or shorter working hours thanks to greater efficiency at work: people happily use the advantages of artificial intelligence – as long as it isn’t called "artificial intelligence." Professor Katharina Morik, a computer scientist and head of the Department of Artificial Intelligence at the Technical University of Dortmund, is fascinated by this ambivalent attitude, which she regularly encounters outside the university. In this interview, she makes it clear that fear of the technology is not really justified: a lack of information and false assumptions fuel the criticism, which fizzles out as soon as the advantages of artificial intelligence become clear.
(Video in German)