Artificial intelligence (AI) has become an integral part of our everyday lives. Whether in medical diagnostics, customer communication or text translation, AI systems are taking on more and more tasks. But with this development, our human responsibility in the use and development of AI is also growing, because AI can not only support but also cause harm – for example, through discriminatory decisions, opaque processes or insecure data processing.
According to a study by Ernst & Young1, around a quarter of Germans do not check the results provided by AI – they trust the systems almost blindly. Worldwide, the figure is around 30 per cent.
Trust is central in the context of artificial intelligence because it forms the basis for acceptance, security and responsible use. Trust in AI is not a ‘nice-to-have’ but a must-have. It determines whether AI systems are accepted, used and further developed – or whether they meet with resistance. Trustworthy AI is therefore not only an ethical ideal but also a strategic success factor.
Over the past decade, the European Commission has been working on the framework conditions for trustworthy AI and has driven the development of the ‘Ethics Guidelines for Trustworthy AI’ (2018)2. These guidelines are based on fundamental rights and propose four ethical principles for dealing with AI: respect for human autonomy, prevention of harm, fairness and explainability. In addition, they define seven requirements for the implementation and realisation of trustworthy AI, listed below. Together, these form the basis for AI that is not only powerful but also socially acceptable.
1. Human agency and oversight: AI systems should support human autonomy and decision-making. This requires them to serve a democratic, prosperous and just society, and to enable human oversight.
2. Technical robustness and safety: AI systems must be developed with a preventive approach to risk and behave reliably in accordance with their intended purpose – even in changed operating environments or in the presence of other agents (see the sketch after this list).
3. Privacy and data governance: Protecting people's privacy requires appropriate data governance, including data quality management, when using AI systems.
4. Transparency: AI systems must operate in a manner that is traceable and explainable at all times – in relation to the data, the system and the underlying business model.
5. Diversity, non-discrimination and fairness: Inclusion and diversity must be guaranteed throughout the entire life cycle of the AI system. All affected stakeholders must be involved in the entire process and have equal access.
6. Societal and environmental well-being: Society and the environment should be taken into account throughout the entire AI life cycle. The sustainability and environmental responsibility of AI systems should be promoted, as should research into AI solutions to global challenges. Ideally, AI should benefit all people, including future generations.
7. Accountability: Precautions must be taken to ensure responsibility and accountability for AI systems and their results, both before and after implementation.
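To make the robustness requirement more tangible, here is a minimal, purely illustrative sketch: it probes how a trained classifier's accuracy degrades when its operating environment shifts, simulated by adding Gaussian noise to the test inputs. The dataset, model and noise levels are assumptions chosen for demonstration, not a prescribed or certified test procedure.

```python
# Minimal, illustrative robustness probe (not a certified test):
# measure how a classifier's accuracy degrades when the operating
# environment shifts, simulated here as Gaussian noise on the inputs.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_level in (0.0, 0.1, 0.5, 1.0):
    # Scale the noise by each feature's spread so all features
    # are perturbed comparably.
    noise = rng.normal(0.0, 1.0, X_test.shape) * X_test.std(axis=0) * noise_level
    accuracy = model.score(X_test + noise, y_test)
    print(f"noise level {noise_level:.1f}: accuracy {accuracy:.3f}")
```

In practice, such simple probes would be paired with domain-specific stress tests and clear acceptance thresholds before a system is declared robust.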
‘Trust in artificial intelligence is not a luxury, but a strategic necessity.’
– Dr. Frank Wisselink, Executive Product and Project Manager for AI at T-Systems
Deutsche Telekom implemented almost all of these European Commission principles in its own AI guidelines back in 2018. After all, trust is one of the cornerstones of Deutsche Telekom's business. Those guidelines were exemplary then and remain so today.
Deutsche Telekom has therefore placed great emphasis on the trustworthiness of AI for over seven years. The company puts its AI guidelines into practice and has integrated them into its governance structures and processes – for example, the privacy and security process and the specially developed Digital Ethics Assessment, a standardised, group-wide procedure that all of Deutsche Telekom's AI systems must undergo. It centrally records and reviews all AI use cases and models.
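As a purely hypothetical illustration of what centrally recording AI use cases can mean in practice – the field names below are assumptions, not Deutsche Telekom's actual Digital Ethics Assessment schema, which is internal – a minimal registry record might look like this:

```python
# Hypothetical sketch of a central AI use-case record; field names
# are illustrative assumptions, not any company's actual schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    name: str
    owner: str
    purpose: str
    personal_data_used: bool  # would trigger the privacy/security process
    registered_on: date = field(default_factory=date.today)
    review_findings: list[str] = field(default_factory=list)

record = AIUseCaseRecord(
    name="Customer churn prediction",
    owner="Marketing Analytics",
    purpose="Prioritise retention offers",
    personal_data_used=True,
)
record.review_findings.append("Explainability report required before launch")
print(record)
```

The point of such a register is less the data structure itself than the discipline it enforces: no AI system goes live without being recorded and reviewed.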
Implementing the principles and guidelines for trustworthy AI can be challenging in some cases. From a technical perspective, many AI models are complex ‘black boxes’ whose decisions are difficult to understand. The explainability of deep learning models remains a challenge to this day.
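As one example of how practitioners begin to open such black boxes, the sketch below uses permutation importance from scikit-learn, a model-agnostic technique: it shuffles each input feature in turn and measures how much the model's accuracy drops. The dataset and model are illustrative assumptions; for deep learning models, dedicated tools such as SHAP or LIME are typically used instead.

```python
# Minimal sketch of probing a "black box" model with permutation
# importance: features whose shuffling hurts accuracy most are the
# ones driving the model's decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and average the resulting score drop.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Techniques like this do not make a model fully transparent, but they give reviewers and affected users a concrete starting point for questioning its decisions.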
Furthermore, the general public often lacks a technical understanding of AI systems, which breeds uncertainty and mistrust. At the same time, companies are under pressure to innovate: rapid market launches and competitive advantages often conflict with ethical standards and regulatory requirements.
From an economic perspective, trustworthy AI does not come about by itself either. Developing transparent, fair and secure systems requires time, resources and interdisciplinary expertise. In my view, however, this is an investment that pays off in the long term.
With the EU AI Act, the EU took another step towards trust-building regulation in 2024: for the first time, a comprehensive legal framework for AI exists in Europe. This regulation creates not only legal certainty but also trust – among users and companies alike.
Trustworthy AI is not an optional feature – it is a prerequisite for sustainable innovation. Designing AI systems responsibly is possible, but it requires clear principles, regulatory guidelines and the will to implement them.
The future of AI will be decided not only in data centres, but also by whether we can trust it.
1 EY study ‘AI Sentiment Index 2025’ (German only): https://www.ey.com/de_de/newsroom/2025/05/ey-ai-sentiment-index-2025
2 European Commission: ‘Ethics Guidelines for Trustworthy AI’: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai