Artificial intelligence (AI) is changing the way we work across nearly every industry. But the rules aren’t the same everywhere. Legal professionals in law firms, legal departments, public administration, and the judiciary handle some of the most sensitive information there is: client confidences, court proceedings, and government decisions. When AI is used in these contexts, it carries a responsibility to clients, to the law, and ultimately to the rule of law itself.
First, this responsibility concerns the quality and accuracy of AI outputs. An AI that misquotes a statute or confuses the legal regimes of two countries can cause real harm. Many AI models available today are designed for broad applicability, which makes them versatile but not necessarily reliable for legal work. In a legal setting, what matters is not how elegant a phrase sounds, but whether the sources are trustworthy, the argumentation is transparent, and the result is traceable. A reliable data foundation is therefore one of the fundamental pillars of a specialized Legal AI application.
The second foundational pillar of specialized Legal AI concerns security and sovereignty. After all, legal professionals are entrusted with confidential information and must comply with strict standards of professional secrecy, criminal law, professional conduct, and data protection. When AI is used in law firms, companies, courts, agencies, or ministries, a question arises that may sound technical at first but is politically and legally fundamental: where and how is this data processed, and who has access to it?
This question should have occupied us much more intensively, much earlier. The Clarifying Lawful Overseas Use of Data Act, better known as the US CLOUD Act, is already eight years old. It allows US authorities to demand data from US cloud providers, regardless of where the servers are physically located. The impact varies by application area. For the legal sector, which plays a critical role in the rule of law and democracy and can be considered critical infrastructure, digital sovereignty is not a “nice-to-have”, but a prerequisite.
The CLOUD Act’s legal baseline has not fundamentally changed since then. In light of geopolitical volatility, however, a new awareness has grown alongside a willingness to address this issue concretely—and thankfully so.
Digital sovereignty also represents a major opportunity for European digital companies, provided sovereignty doesn’t become an empty marketing claim. A particularly relevant factor is cloud infrastructure. High-performance AI in Europe requires a cloud infrastructure that is fast, resilient, secure, and sovereign. But, as is often the case, it’s worth looking closely. Digital sovereignty isn’t just a matter of geographic location in Europe, because the US CLOUD Act also applies to American cloud providers’ data centers located in Europe. If Europe takes digital sovereignty seriously, US cloud providers are effectively ruled out.
For Noxtua, Europe’s sovereign Legal AI, the Industrial AI Cloud from Deutsche Telekom, operated under European control, is therefore a crucial building block in our sovereignty strategy. After all, this AI factory is not only compliant with the EU AI Act; it also meets the stringent compliance requirements of highly regulated sectors such as the judiciary, with privacy and security embedded from the start. At the same time, it supports our scaling and expansion across the European continent. As a Legal AI built for Europe, we develop country- and jurisdiction-specific versions in close collaboration with leading legal publishers so that legal professionals across Europe can work in a more practical, secure, and sovereign way.
“AI systems must be built from the ground up to keep data where it belongs, under European control, and under European law.”
Dr. Leif-Nissen Lundbæk, CEO and co-founder of Noxtua
Our data have value: economic value, but also strategic value. And the more sensitive the context, the more important it is to ask whom we entrust that value to. This matters especially in the legal field: whenever AI is used to draft a legal document, compose official correspondence, or review a contract, highly sensitive information may be disclosed, sometimes without anyone realizing where the data ends up or how it will be used.
Data protection is not a bureaucratic detail, but a fundamental right. As the use of generative AI grows in sensitive domains, this right takes on a new dimension. It’s no longer enough to accept a privacy policy. AI systems must be built from the ground up to keep data where it belongs, under European control, and under European law.
Europe has set important regulatory anchors in recent years. What must come next is investment in homegrown technology. I hope more European companies will dare to pursue this path, and that demand for trusted European technology will grow.