
Is there a prescription available for digital ethics?

The EU AI Act proves it: in the healthcare sector, digital ethics are business critical. What we have to do now

June 23, 2022 | Steffen Kuhn

E-health with ethics: how does it work?

Anyone digitalizing and implementing technologies such as artificial intelligence has to tackle ethical issues. Companies should not only master the technology but also be able to assess its consequences. Just as security by design has been established for a while, ethics by design should become the rule. In the healthcare sector in particular, unwanted side effects of AI and big data need to be prevented.

Technology as medicine


Do you use a bonus program from your health insurer and get rewarded because you use your smartphone to prove that you jog regularly? Do you think this offer is ethically sound? I have nothing against it. These sports offers are ethically sound because they use modern technology to reward the extra mile rather than penalize deficits. And while we're on the subject: the digitalization of healthcare shifts the focus of care from conventional therapy to prediction, prevention and precision medicine. Modern technologies such as AI and big data are a blessing because they open up new treatment methods. Many people, however, still have reservations. If we first test whether an application is ethically sound and develop a stronger awareness of ethical issues, we will strengthen the acceptance of disruptive technologies, and AI can become the most effective medication.

If artificial intelligence provokes fear, we need to talk about what an ethical framework for its use would look like.

Tim Höttges 

Digital technologies as medicine

What can digital technologies already do? Thanks to machine learning, smartphones can detect dementia, autism and Covid-19; AI discovers tumors at early stages; connected pacemakers send automated warning notifications to the treating doctors. Digital technologies allow medical treatments to be developed which are tailored to the patient, their genome and their individual needs. Augmented reality and robotics enable remote operations. According to the International Federation of Robotics, global sales of medical robots are expected to increase to USD 11.3 billion this year, more than double the 2019 figure. And what do all these health solutions have in common? They all require patient data. Since this information is highly sensitive, data privacy and anonymization play a major role. Transparency is paramount: it must always be clear which information is used for what purpose.

Required: immune system for data

We are thus obliged to take up the question of digital ethics. But what exactly does this mean? Companies that want to take advantage of digitalization must behave ethically. They have to concern themselves with the moral consequences of digital transformation for individuals and for companies, and pursue a comprehensive system of values. Otherwise they risk liability and damage to their reputation. Their policies cover, for example, data protection and IT security, the protection of personal rights and privacy, and the autonomy of individuals. These companies and institutions are transparent about which data they use for AI applications, because they want to prevent bias and thus discrimination against certain groups of people. They develop digital ethics guidelines for their employees, development teams and service providers. And before they implement digital solutions to improve health, they check whether these conflict with ethical principles.

What is the threat of ethical violations?

Digital ethics are not an extra; they are now business critical.

  • Only value-based companies gain the trust of their customers, business partners and investors.
  • When it comes to legal violations, companies are not only liable; their reputation also suffers, with drastic long-term consequences.

Up to four percent of global annual turnover or up to EUR 20 million, whichever is higher, is what the General Data Protection Regulation provides for in the case of a violation. And it's not just the healthcare sector that needs to brace itself for further regulation. Since experts see AI applications as a major test for data protection, the European Commission presented a draft bill for trustworthy artificial intelligence in April 2021 which aims to punish violations with fines of up to six percent of global annual turnover. This EU AI Act is expected to regulate all AI usage by 2024 and group it into risk classes.
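To illustrate the scale of these sanctions, here is a minimal Python sketch using only the figures above: the GDPR cap of four percent of global annual turnover or EUR 20 million (whichever is higher, per GDPR Article 83), and the six-percent ceiling from the 2021 AI Act draft. The turnover figure in the example is purely illustrative.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: 4% of global annual turnover
    or EUR 20 million, whichever is higher (GDPR Art. 83)."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

def ai_act_draft_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound under the April 2021 EU AI Act draft:
    6% of global annual turnover."""
    return 0.06 * annual_turnover_eur

# Illustrative example: EUR 2 billion in global annual turnover
turnover = 2_000_000_000
print(gdpr_max_fine(turnover))       # 80000000.0 (4% exceeds the EUR 20M floor)
print(ai_act_draft_max_fine(turnover))  # 120000000.0
```

For smaller companies the EUR 20 million floor dominates: with EUR 100 million in turnover, four percent would only be EUR 4 million, so the maximum GDPR fine remains EUR 20 million.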

Development of digital ethics

With a view to digital ethics, companies in every sector should answer the following questions:

1. Which technologies could impact the company's ethics and reputation in the future, and how can such threats be identified?

2. How does the company define its own ethical standard – beyond laws and regulation? How is it possible to create company-wide awareness?

3. What should practical instructions and standards for employees working with the new technologies look like?

4. How can technologies be monitored to guarantee that they comply with the company's own ethics?

On the subject of digital ethics and data centered innovation, the compliance division should take on a key role and develop a legally compliant framework which translates values and laws into practical codes of conduct.

This is how to set up a compliance framework 

  • Identify and analyze the potential risks of new technologies and processes for liability or reputation
  • Define the guidelines and principles
    Develop a vision for handling technologies. This vision outlines principles which are based on the fundamentals of the law but also on the company's own values.
  • Design user-friendly and practical measures, tools and methods
    Companies must put their guidelines into practice. Pay attention to clear codes of conduct and use creative methods: chatbots provide faster advice than handbooks.
  • Develop monitoring measures
    You must be able to check at any time whether previously determined guidelines are actually being implemented in the relevant technology. Ethical seals of approval for the use of AI are conceivable.

Compliance as a Driver for Data-Driven Innovation?

Find out what really matters and how you can drive data-driven innovations effectively and in compliance with the law.

When can we allow ourselves to trust AI?

In the future, whenever sensitive data and systems are involved, ethics by design will be a must-have. Ethics are the prerequisite for societal acceptance of a brand and its digital products and services. Companies no longer score on innovative products alone but increasingly also on their value system. They therefore need to familiarize themselves with the possible pitfalls of modern technology. AI in particular is a hot topic in this regard, because it offers incredible opportunities, yet we nevertheless have to contain it. Precisely how this works, why companies need guidelines for their smart algorithms, and what mistakes AI is allowed to make are all covered by my colleague Pavol Bauer in his blog article, which is worth a read.

If you would like to know where the blind spots are in your company, please feel free to contact me directly: Steffen.Kuhn@detecon.de

About the author

Steffen Kuhn, Managing Partner, Detecon
