
How do companies strike a smart balance with AI?

ChatGPT, hyperautomation, robotics – AI enables many innovations. But when dealing with AI, we need to think in a new way.

05 September 2023 · Stephan De Haas

Rethink AI

People, and even some companies, tend to put complex tasks on the back burner and pass on responsibility. The latest example: the question of the ethics and sustainability of the AI solutions that companies use to automate their processes. EU regulations such as the AI Act also require us to take a close look at the opportunities and risks of AI, so that responsible AI makes our lives better.

AI is good – everything’s good?

With the Artificial Intelligence Act passed by the European Parliament, the EU wants to regulate artificial intelligence. It applies to companies that provide or use AI systems in the EU market, regardless of where they are based. For many companies, the AI Act was the necessary wake-up call to address “Responsible AI”. That is the first result of our latest T-Systems Research project on AI. I was surprised to learn that companies are already taking advantage of AI’s opportunities but tend to neglect its responsible use. Yet that is exactly what is so critical with AI. Rethink AI: for me, this means that AI must serve the well-being of us humans and of the planet. So we should all be asking ourselves now how we can make the transformation to a responsible, AI-driven company.

Our balance in dealing with AI


In our T-Systems Co-Creation Community, we interviewed executives from IT and business departments across various industries on this topic. In doing so, we noticed an interesting dichotomy, which I call “two-speed AI”: companies find it difficult to reconcile their desire to experiment with regulatory requirements. Many respondents worry that German and European companies could lose out if they have to comply with too many requirements. Uncertainty is spreading: on the one hand, technology is developing rapidly, just think of ChatGPT and generative AI more generally; on the other hand, it is difficult for companies to estimate what regulation will mean for them. It gets dangerous when this mixture slows down all those companies that want to use AI and implement it in their business processes.

The top 5 for human-centric AI

It is no longer a question of if, but of how companies implement AI correctly, because AI will continue to change our world. And it is still up to us humans to decide how the smart algorithms develop: we need to think and decide in the right direction for AI to act the way we want it to. If we want to achieve human-centric, responsible AI, we must take the following precautions.

  1. We train AI responsibly and consider ethical standards.
  2. We understand AI as an extension of human capabilities. AI experts call this augmentation.
  3. We reduce risk by having humans review and control AI results (see the sketch after this list).
  4. We do not equate responsibility with ethics; we only speak of “Responsible AI” when it also conserves natural resources and protects the climate. We must therefore reduce AI’s ecological footprint.
  5. We train employees in dealing with AI and take their fears seriously.
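
To make item 3 concrete, here is a minimal Python sketch of such a human-in-the-loop gate. The names and the confidence threshold are hypothetical, not a T-Systems API; the point is simply that low-confidence AI output is routed to a person instead of being released automatically.

```python
from dataclasses import dataclass

@dataclass
class AIResult:
    text: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def release(result: AIResult, human_review) -> str:
    """Auto-release only high-confidence output; route the rest to a person."""
    if result.confidence >= 0.95:  # illustrative threshold, not a standard
        return result.text
    # Augmentation, not replacement: a human checks and may correct the result.
    return human_review(result)

# Example reviewer callback; in practice this could open a ticket in a review tool.
approved = release(
    AIResult("Claim approved, payout EUR 1,200", confidence=0.71),
    human_review=lambda r: r.text + " [human-reviewed]",
)
print(approved)
```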

Out of the black box


AI is far too often a black box and can be abused for hoaxes and deepfakes. The AI Act, which will likely take effect in 2026, therefore calls for greater oversight of AI use and for more reliability, transparency, and controllability. Each AI system is classified into one of four risk levels: unacceptable, high, limited, or low. The goal is binding design and development standards. Those who are sloppy in data selection or training risk data breaches, discrimination, or the reproduction of stereotypes. Violations could result in penalties of up to six percent of annual turnover. One more reason to aim for human-centric AI and transparent corporate responsibility.
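
To illustrate what this tiering could look like in a company’s own AI inventory, here is a minimal Python sketch. The four levels come from the AI Act as described above; the example systems and their mapping are assumptions for illustration, not legal advice.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations: conformity assessment, logging, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that users interact with an AI"
    LOW = "no additional obligations"

# Hypothetical inventory: the systems and their classification are assumptions.
inventory = {
    "social-scoring engine": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.LOW,
}

for system, level in inventory.items():
    print(f"{system}: {level.name} ({level.value})")
```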

Can AI make our world greener?

The current proposal for the EU AI Act also emphasizes the role of AI in climate change and for the environment. The main goals are safety and market development, but environmental benefits are also a focus. I understand this as a clear mandate, because the digital footprint of the ICT sector is growing. Training the GPT-3 language model alone consumed 1.287 gigawatt hours of electricity, according to a study involving the University of California, Berkeley; that is as much as about 120 average U.S. households consume in a year. We should therefore not only focus on Green AI, i.e. AI processes that make the world greener and serve to protect the climate, nature, and species, but also pay more attention, and move faster, in developing and operating the machine-learning processes themselves in a way that conserves resources. That is why we are also increasingly involved in the UN Climate Change Global Innovation Hub and in Green Digital Action at COP28 within the scope of our Co-Creation Advisory Board for Sustainability, and we want to present concrete solutions at COP29 in 2024.
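
As a quick plausibility check of these figures: assuming an average U.S. household consumes roughly 10,600 kWh of electricity per year (an approximation of the published U.S. average, and an assumption for this sketch), the arithmetic matches the comparison above.

```python
# The training-energy figure is quoted in the article; the household average
# is an assumption (about 10,600 kWh per year, near the published U.S. mean).
gpt3_training_kwh = 1_287_000      # 1.287 GWh reported for training GPT-3
household_kwh_per_year = 10_600    # approximate average U.S. household

households = gpt3_training_kwh / household_kwh_per_year
print(f"Roughly {households:.0f} average U.S. households for one year")  # ~121
```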

How do we become more sustainable together?

That’s exactly why I have such high hopes for the co-creation approach, with which we bring together idea generators from all industries. One of the projects we are driving forward as part of the Co-Creation Advisory Board for Sustainability is the “Sustainability Academy”. We are cooperating on this initiative with the Environmental Campus Birkenfeld and the start-up IGNAITE, which specializes in combining AI and the environment. We provide companies with customized learning modules on sustainable transformation that they can use to train their workforce. Sustainability chatbots are also high on the corporate agenda, and we see promising AI deployment opportunities in ESG reporting. For this purpose, our #Experience Sessions offer a regular digital exchange with experts.

The early bird catches the worm

This saying captures the responsible handling of AI quite well. The issue is not new for T-Systems either: we can build on Telekom’s AI guidelines and are currently expanding our risk management system with a view to the AI Act. Detecon’s Digital Ethics Framework also offers guidance for developing AI further towards Responsible AI. The Manifesto on the Use of AI and our involvement in the EU’s AI Act underline how intensively our Group is addressing the issue. I am convinced that many opportunities will present themselves to companies that engage with the new regulatory environment. After all, it is not just about meeting external requirements: those who help shape the process can ensure that their own employees internalize the importance and implementation of responsible AI at an early stage, and they will find it easier to exploit the potential of data and AI and to design a strategic roadmap for the responsible use of artificial intelligence.

What’s next for Responsible AI?

We want to make a difference with our company and the co-creation approach, focusing not only on ethics and responsibility but also on sustainability when it comes to AI. When we talk about human-centric AI, we are extending our previous customer centricity: technology should benefit people as well as the environment and the climate. Our goal is called “AI for People and Planet”. Based on our research findings, we therefore want to develop a checklist for “Responsible AI” that explains to our customers what they need to pay attention to, for example when dealing with data models. I am convinced that together we can create AI for good. Interested? I look forward to working with you.

About the author
Stephan De Haas

Head of Co-Creation & Client Consulting, T-Systems International GmbH
