
What mistakes is AI allowed to make?

AI steers cars and puts the brakes on job applicants. Of course I want to know whether I can trust its decisions.

July 19, 2022 · Pavol Bauer

Can we trust AI?

A free ride in a self-driving taxi on the freeway. Well, would you get in? This is no longer a fantasy, and it is only the beginning. It's clear that none of this will work without trust. Particularly when artificial intelligence makes sensitive and critical decisions, it has to be reliable and robust. You can find out here where the risks of using AI lurk and how they can be minimized.

Why is trust important?


In our personal lives and in our work, trust is the lubricant that guarantees smooth interaction. We are generally quite good at knowing whom we can trust and whom we should avoid. We need these same capabilities when dealing with AI. One thing is clear: we train AI with data, and that is what enables it to improve continuously. But how does it make its decisions, and why? Why does one person fail the selection criteria for a job while another is turned down for a loan? Before I set foot in an autonomous car, I want to know how the AI will cope in a dicey traffic situation. So we have to take a closer look at the algorithms.

The algorithm of trust

Algorithms have neither ideology nor ulterior motives. We humans bear the responsibility, because it is we who can manipulate them. But who decides what's right or wrong? The fact is that we need standards, guidelines, norms, and security for the entire lifecycle of an AI; otherwise, things become critical. Values and guidelines are particularly important in healthcare and the public sector. The same applies to security. It is therefore crucial that data purchased for an AI is tested in detail, because ML algorithms can only be trained to recognize patterns if they are fed high-quality, precise data. If you are traveling in an autonomous car, you have to be able to trust that your vehicle reliably recognizes every traffic sign. For this reason, nothing can be allowed to go wrong in data labeling, where a human explains to the AI system what is in a photo, for example. Last but not least, AI must also be protected against hackers to prevent external manipulation.
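
How might such a data check look in practice? Here is a minimal sketch of one possible labeling quality gate, assuming each image has been labeled independently by several annotators; the record format and the 90% agreement threshold are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of a labeling quality gate (illustrative, not a real API):
# every image is assumed to have been labeled by several annotators, and we
# only accept images where a clear majority agrees on the label.
from collections import Counter

REQUIRED_AGREEMENT = 0.9  # hypothetical threshold: 90% of annotators must agree


def majority_agreement(labels: list[str]) -> float:
    """Fraction of annotators that voted for the most common label."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)


def audit_labels(records: dict[str, list[str]]) -> list[str]:
    """Return IDs of images whose annotations are too inconsistent to train on."""
    return [
        image_id
        for image_id, labels in records.items()
        if majority_agreement(labels) < REQUIRED_AGREEMENT
    ]


# Example: the annotators disagree on whether the second image shows a stop sign.
records = {
    "img_0041": ["stop", "stop", "stop"],
    "img_0042": ["stop", "yield", "stop"],
}
print(audit_labels(records))  # ['img_0042'] -> send back for re-labeling
```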

Good AI or bad AI?

When is it OK to trust artificial intelligence? What properties must it have to count as trustworthy? The EU Parliament is in the process of paving the way for AI regulation. In 2019, the EU Commission presented its Ethics Guidelines for Trustworthy AI. The proposed AI Act of 2021 refers to these guidelines, according to which trustworthy AI has three components:

  • It is lawful and complies with all applicable laws and regulations.
  • It is ethical and guarantees adherence to ethical principles and values.
  • It is robust, both in technical and social terms, and does not cause any damage, whether deliberate or unintentional.

Only when all three components are in place across an AI system's whole lifecycle can we talk of trusted AI.

Recipe for a trusted AI

A trustworthy artificial intelligence that is to keep pace with future regulation must satisfy further criteria in individual high-risk cases. The AI in question must be explainable: the methods by which it makes decisions must be traceable. The systems must not be black boxes; they must be transparent. Fairness also plays a very important role: dependable algorithms do not discriminate against certain groups or individuals because of learned bias. Trustworthy models respect human autonomy, as we want to be able to correct their decisions. Beyond this, there is also security and data privacy, in particular the protection of personal data. Not every aspect is equally relevant for every AI application; we always need to check what risks each system carries.
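
Fairness, at least, can be made measurable with simple arithmetic. Below is a minimal sketch of one common check, demographic parity, which compares positive-decision rates between groups; the loan data and the 0.2 tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a demographic parity check (illustrative data and threshold):
# compare the rate of positive decisions (1 = approved) between groups defined
# by a protected attribute and flag the model if the gap is too large.
def positive_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)


def demographic_parity_gap(decisions_by_group: dict[str, list[int]]) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)


# Hypothetical loan decisions, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
gap = demographic_parity_gap(decisions)
print(f"parity gap: {gap:.3f}")  # 0.375
if gap > 0.2:  # hypothetical tolerance
    print("learned bias suspected: review the model before deployment")
```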

Threat detected...

... threat averted? For all my fascination with innovative technologies and, above all, AI, I have always been aware of their risks, and I scrutinize AI technologies very closely depending on the area of application. Let's come back to the example of autonomous driving. From a technical perspective, we have already come a long way. Today, assistance systems are already making our driving more efficient, planning our routes more intelligently and, thanks to AI technology in logistics, deploying HGVs at a higher frequency. So why aren't we all driving autonomously yet? Because the legal framework is not yet finalized and there is a lack of trust. We can't currently answer every question: can I be sure that an autonomously operated car will detect every obstacle? Will it respond and make appropriate decisions in every situation? And who is responsible when something goes wrong? Of course, humans also make mistakes when they drive cars. But we don't forgive machines their mistakes.

When it comes to artificial intelligence, trust is a must and not just an optional extra. (...) Our provisions will be future-proof and open to innovation and will only intervene where this is absolutely necessary, that is when safety and the basic rights of EU citizens come into play.

Margrethe Vestager, EU Commissioner for Competition

Rules for AI, or rule by AI?

What should AI even be allowed to do? There is no shortage of criteria catalogs that formulate requirements for the secure use of machine learning methods and concern themselves with the ethics of machines. But the EU AI Act will be the first piece of EU legislation to regulate what artificial intelligence is and isn't allowed to do. The goal: to reduce the risk of implementing AI and thus strengthen trust in the algorithms. The law is expected to come into force in 2023, likely with a year's grace period so that companies have time to respond to the new requirements. I am already recommending that companies start addressing these provisions now. That is to say: don't just discuss ethics, transparency, security, and trustworthiness; integrate these aspects into all AI projects right from the start. This will make "AI made in Europe" more trustworthy and more secure, prevent discrimination, and improve the competitive advantage of European providers.

It is all about risk

For me, the AI Act is a particularly important tool: it provides a regulatory framework with global reach and offers greater reliability. This is precisely the right way to strengthen trust in artificial intelligence. The AI Act divides AI applications into different risk categories on a sliding scale. If the risk is unacceptable, the use of the AI solution is banned; this includes, for example, social scoring by governments. If an application is classified as high-risk, the AI has to comply with certain requirements; the bulk of regulated applications fall into this category, including, for example, all solutions related to autonomous driving. Ultimately, this is all about protecting our integrity as humans. In contrast, the stakes are much lower for AI with limited risk (e.g., chatbots) or minimal risk (e.g., video games). The sketch below illustrates this sliding scale.
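
To make the scale tangible, here is a small sketch of the four tiers as I read them from the proposal; the example applications and their mapping are my own illustration, not legal advice.

```python
# Sketch of the AI Act's sliding risk scale as described above. The tier names
# follow the proposal; the example applications and their mapping are an
# illustrative reading, not legal advice.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed only under strict requirements"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"


EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "autonomous driving": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video game AI": RiskTier.MINIMAL,
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.name} -> {tier.value}")
```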

Do companies need AI guidelines?

The answer is a definite yes! And by that I mean companies must develop their own binding ethical AI guidelines today. I will only buy a company's products and services, or recommend them to others, if I trust them. We, too, are committed to Telekom's digital ethics guidelines, which form the basis of our artificial intelligence safety net. And we go one step further: during development, we already take our own compliance guidelines into consideration alongside existing or future regulations such as the General Data Protection Regulation, the EU guidelines, or the AI Act. When gathering data, we address data protection issues as well as possible biases in model training; during operation, we protect the AI from hacker attacks and misuse; and during maintenance and further development, we make sure that all standards and norms are complied with. This ensures that trustworthiness becomes the asset with which European providers and companies can score points.

Ready for your audit

We openly admit that putting ethical principles and rules into practice is not easy. This is why, in future, we want to provide companies with even more support in this area, for example by integrating the processes for developing a trustworthy AI into MLOps tools. If the requirements of test catalogs are already taken into account during development, providing evidence in a later audit becomes easier and the continuous further development of a trustworthy artificial intelligence is guaranteed. We also offer certified solutions such as smart voicebots and chatbots from our Conversational AI Suite, which we were one of the first companies to have tested according to the BSI criteria catalog for trustworthy AI (AIC4). Please don't hesitate to contact me directly if you want to know more.
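
What could baking such test catalogs into an MLOps pipeline look like? Here is a rough sketch of the idea: each requirement becomes an automated release gate, and the results are stored as audit evidence. The check names and thresholds are hypothetical placeholders, not actual AIC4 controls.

```python
# Rough sketch of audit-ready release gates in an MLOps pipeline. Each entry in
# `checks` stands for one requirement from a test catalog; the names and the
# thresholds are hypothetical placeholders, not actual AIC4 controls.
import json
from datetime import datetime, timezone


def run_release_gates(model_metadata: dict, checks: dict) -> bool:
    """Run every check, persist the results as audit evidence, and
    return True only if the model may be released."""
    evidence = []
    for name, check in checks.items():
        evidence.append({
            "check": name,
            "passed": check(model_metadata),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    with open("audit_evidence.json", "w") as f:
        json.dump(evidence, f, indent=2)  # evidence for a later audit
    return all(item["passed"] for item in evidence)


checks = {
    "bias_gap_below_threshold": lambda m: m.get("parity_gap", 1.0) < 0.2,
    "explainability_report_present": lambda m: "explanation_report" in m,
}
model_metadata = {"parity_gap": 0.05, "explanation_report": "attributions.html"}
print(run_release_gates(model_metadata, checks))  # True -> release may proceed
```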

Tips for further reading on AI

And, last but not least, here's some more reading material: René Phan's article on the possibilities of the technology and how to master its application is an absolute must. Or, if you are starting to suspect that we might be having some scams foisted upon us in the form of "fake AI", then this article might interest you. Because not everything that is labeled AI is actually AI.

On this note, see you next time. Pavol Bauer, signing off.

About the author

Pavol Bauer

Senior Data Scientist, T-Systems International GmbH

