Digital sovereignty, data protection and technological independence determine how safely and responsibly artificial intelligence (AI) can be used. T-Systems shows why sovereign AI goes far beyond the cloud – and how companies benefit from European, value-based AI.
Digital sovereignty has long been more than just a buzzword. Since the US Patriot Act (2001), the Snowden revelations (2013) and the US Cloud Act (2018), Europe has faced a central question: how can we maintain control over our data, technologies and infrastructures? Since the release of ChatGPT at the latest, AI has evolved from a research field into an everyday social and economic concern. With this new relevance, awareness of the risks – from geopolitical dependencies to regulatory uncertainty – is also growing. Especially in times of global crises, disrupted supply chains and digital monopolies, it is clear that sovereignty is not an option but a prerequisite for resilience.
Sovereignty means control and independence – over data, operations and technology. Artificial intelligence encompasses much more than just the model that generates answers. It comprises an entire technology stack: infrastructure, data, models, applications and operational processes. True sovereignty in AI therefore means more than developing European models – it requires control over the entire digital value chain. Europe is already well positioned in some areas: with cloud infrastructures such as the T Cloud, there are trustworthy platforms operated under EU law that can securely host AI applications. With initiatives such as the EU AI Act and secure data spaces, Europe is also setting standards in data storage, data protection and governance. But data sovereignty does not end with European laws – it is also shaped by non-European legal frameworks.
The “Clarifying Lawful Overseas Use of Data Act” (Cloud Act) obliges US providers to disclose data upon order from US authorities – even if that data is stored on servers outside the United States. For European companies, this means that data stored in a US-managed cloud can potentially fall under American law and thus under the US Cloud Act. This conflicts with European principles of data protection and sovereignty – and makes clear why building cloud infrastructures operated under EU law is so crucial.
Dependence remains high in other areas as well: high-performance chips and GPUs come predominantly from non-European suppliers, and US and Asian players dominate generative AI models. This shows that Europe is on the way, but still far from complete digital sovereignty. Every company that uses AI should therefore ask itself some central questions: Who operates my infrastructure? Where is my data stored? How transparent and controllable is my model? If you cannot answer these questions clearly, you have not really gained control over your AI environment – and therefore not over your digital future either.
Sovereign AI therefore means knowing who operates the infrastructure, where the data is stored, and how transparent and controllable the models and their operation are.
This control can vary in intensity – sovereignty is not a binary yes or no, but a question of degree. Three levels can be distinguished.
At the lowest level are proprietary models that can only be used via APIs, such as GPT-4 or Claude. They remain black boxes, operated on foreign infrastructure. One step above are so-called “open-weight models” such as Llama 3: they provide access to the model weights and open up initial possibilities for customization and for hosting in Europe. Even more control is offered by open-source models, whose complete source code is accessible. If these are also developed locally, as with Apertus from ETH Zurich and EPFL, the highest level of sovereignty is reached. Apertus is considered the first generative model that fully complies with the transparency requirements of the EU AI Act. Another example is Teuken-7B from the Fraunhofer Institute, the first large language model trained in all 24 official EU languages. Such models form the core of a truly European AI landscape: open, comprehensible and value-based.
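To make the difference tangible, here is a minimal sketch in Python. The first part shows API-only access to a proprietary model, where every prompt leaves your environment and the weights remain a black box; the second part loads an open-weight model onto infrastructure you control, using the Hugging Face transformers library (with accelerate installed). The endpoint, token and model name are illustrative placeholders, not recommendations, and a production setup would add authentication, GPU sizing, logging and monitoring.

```python
import requests
from transformers import AutoModelForCausalLM, AutoTokenizer

# Level 1 – proprietary model behind a vendor API (placeholder URL, not a real endpoint):
# weights, infrastructure and logs stay with the provider, and every prompt
# (including any data it contains) leaves your own environment.
response = requests.post(
    "https://api.example-vendor.com/v1/chat",            # hypothetical endpoint
    headers={"Authorization": "Bearer <API_TOKEN>"},      # placeholder credential
    json={"model": "proprietary-model",
          "messages": [{"role": "user", "content": "Summarise our data policy."}]},
    timeout=30,
)
print(response.json())

# Levels 2/3 – open-weight or open-source model hosted on your own (EU-based) infrastructure:
# the weights are downloaded once and inference runs entirely under your control.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"          # example open-weight model (gated, licence acceptance required)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarise our data policy.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is not the specific model but where it runs: only in the second variant do the questions “Who operates my infrastructure?” and “Where is my data stored?” have answers that lie within the company's own control.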
The following events have put the discussion of sovereignty at the heart of European technology policy:
The US Patriot Act – global surveillance and extraterritorial data use
The Snowden revelations – awareness of data sovereignty
GDPR vs. US Cloud Act – conflict between two jurisdictions
Pandemic, chip crisis – supply dependencies and technological vulnerability
Ukraine war, energy crisis, AI Regulation – sovereignty has emerged as Europe's strategic goal
Europe has chosen to go its own way – with clear rules, transparency and ethical guardrails. With initiatives such as Gaia-X, the EU Chips Act and massive investments in infrastructure and AI (“Made for Germany”), the EU is laying the foundation for a self-determined digital future.
The challenge: the balance between sovereignty, costs and functionality – the so-called “Triangle of Needs”. The more control companies take, the greater the effort and complexity. The right mix depends on the specific use case – a chatbot that only accesses public data needs less sovereignty than a security-critical AI system in the healthcare industry.
Sovereignty is not a binary yes or no, but a question of degree.
Dr. Maja-Olivia Himmer, AI & Sovereignty Strategy Lead at T-Systems
T-Systems supports companies on their way to sovereign AI with an end-to-end approach – from data integration to model monitoring.
A sovereign AI landscape combines European values, transparency and control with innovative strength. Sovereignty does not mean isolation, but freedom of action – it creates trust, reduces dependencies and thus strengthens Europe's competitiveness. It is crucial to develop AI in line with our values, while maintaining the technological momentum necessary for Europe's future. After all, Europe has the opportunity to become a pioneer for responsible, safe and powerful AI – if it succeeds in combining value orientation and innovative strength.