A free ride in a self-driving taxi on the freeway. Would you get in? This is no longer a fantasy, and it is only the beginning. One thing is clear: it won't work without trust. Particularly when artificial intelligence makes sensitive and critical decisions, it has to be reliable and robust. Here you can find out where the risks of using AI lurk and how they can be minimized.
AI can process large amounts of data faster and more thoroughly than humans, recognize connections, and find errors. But value judgments can only be made by humans, and that shows the limits of AI.
Pavol Bauer, Senior Data Scientist, T-Systems International GmbH
In our personal lives and in our work, trust is the lubricant that guarantees smooth interaction. We are generally quite good at knowing whom we can trust and whom we should avoid. We need these same capabilities when dealing with AI. One thing is clear: we train AI with data, which enables it to improve continuously. But how does it make decisions, and why? Why is one person rejected for a job and another turned down for a loan? Before I set foot in an autonomous car, I want to know how the AI will cope in a dicey traffic situation. So we have to take a closer look at the algorithms.
Algorithms have neither ideology nor ulterior motives. We humans bear the responsibility, because it is we who shape them. But who decides what is right or wrong? The fact is that we need standards, guidelines, norms, and security for the entire lifecycle of an AI; otherwise things become critical. Values and guidelines are particularly important in healthcare and the public sector, and the same applies to security. It is therefore crucial that purchased data for an AI is tested in detail, because ML algorithms can only be trained to recognize patterns if they are fed high-quality, precise data. If you are traveling in an autonomous car, you have to be able to trust that your vehicle will reliably recognize every traffic sign. For this reason, nothing can be allowed to go wrong in data labeling, where a human explains to the AI system what is shown in a photo, for example. Last but not least, AI must also be protected against hackers to prevent external manipulation.
When it comes to artificial intelligence, trust is a must, not just an optional extra. (...) Our provisions will be future-proof and open to innovation, and will only intervene where absolutely necessary, that is, when the safety and basic rights of EU citizens come into play.
Margrethe Vestager, EU Commissioner for Competition