People, and even some companies, tend to put complex tasks on the back burner and pass on responsibility. The latest example: the question of the ethics and sustainability of the AI solutions that companies use to automate their processes. EU regulations such as the AI Act also require us to take a close look at the opportunities and risks of AI, so that responsible AI makes our lives better.
In our T-Systems Co-Creation Community, we interviewed executives from IT and business departments across various industries on this topic. In doing so, we noticed an interesting dichotomy, which I call "two-speed AI": companies are finding it difficult to reconcile their desire to experiment with regulatory requirements. Many respondents worry that German and European companies could lose out if they have to comply with too many requirements, and uncertainty is spreading. On the one hand, the technology is developing rapidly; just think of ChatGPT and generative AI more generally. On the other hand, companies find it difficult to estimate what regulation will demand of them. This mixture becomes dangerous when it slows down all the companies that want to use AI and implement it in their business processes.
AI is far too often a black box and can be abused for hoaxes and deepfakes. In response, the AI Act, which will likely take effect in 2026, calls for greater oversight of AI use and more reliability, transparency, and controllability. It classifies AI systems into four risk levels: unacceptable, high, limited, or low. The goal is binding design and development standards. Those who are sloppy in data selection or training risk data breaches, discrimination, or the reproduction of stereotypes. Violations could result in penalties of up to six percent of sales. One more reason to aim for human-centric AI and transparent corporate responsibility.