The idea of Artificial Intelligence (AI) is not new. Since 1984, the Terminator films have popularized the notion of machines taking over the human world in apocalyptic fashion. While the real world has not reached such extremes, the closest machine intelligence has come to beating human intelligence was when IBM’s “Deep Blue” computer defeated reigning world chess champion Garry Kasparov on May 11, 1997.
To put it simply, during that phenomenal game the computer did what any human chess player does: it thought, anticipated its opponent’s moves, and played. It had learned from a huge database of moves made and games played by previous Grandmasters.
Fast-forward to present times: the scope of AI is rapidly expanding and spans across different industries. But its approach remains the same — learn from data and experience, and then make decisions on behalf of humans.
This is helping improve efficiency, enable a growth agenda, boost differentiation, manage risk and regulatory needs, and positively influence customer experience.
With the growing adoption of technologies such as Cloud, IoT, 5G, and distributed ledger, AI is able to create multiplicative value. However, as AI has become ubiquitous, the need for transparency in its usage has intensified, and the need for ethical use of AI is felt more intensely than ever. Today, good intentions alone are not enough for companies trying to earn their customers’ trust and avoid reputational and financial repercussions: tangible checks and measures need to be built into the processes that govern how data is handled.
In Europe, and especially in Germany, data protection is taken very seriously. The Federal Financial Supervisory Authority (better known as BaFin) has recently formulated general principles for the use of Big Data and Artificial Intelligence (BDAI) in finance. These principles cover the entire data processing lifecycle – from the creation of an algorithm to its application. Let’s deep-dive into the details.
It is crucial for companies to understand that while intelligent and self-sufficient algorithms enable better decision-making, the responsibility for those decisions rests with the key management roles within the organization. The use of BDAI will grow exponentially due to the increasing use of open banking and open finance, the adoption of newer technologies, and the rise of FinTech companies. It can be damaging for companies to lose sight of their core responsibilities towards the handling of data, customer privacy, and regulations.
To enable the previous principle, a risk management system and an outsourcing management system are necessary to implement transparent reporting and monitoring measures. These would help prepare the organization for scenarios such as erroneous outcomes from AI systems, technical failures, data corruption, or cyber security threats. For example, wearable devices such as smartwatches, along with telematics, are today sources of new data points that help determine the surcharges or discounts offered to customers over base rates.
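The surcharge/discount idea can be sketched in a few lines. This is a purely hypothetical illustration — the thresholds, the adjustment factors, and the idea of a normalized wearable "activity score" are all assumptions, not anything prescribed by BaFin or used by a real insurer:

```python
# Hypothetical sketch: adjusting an insurance base rate using a
# wearable-derived activity score. All thresholds and factors are
# illustrative assumptions, not real actuarial values.

def adjusted_premium(base_rate: float, activity_score: float) -> float:
    """Apply a surcharge or discount to the base rate.

    activity_score is assumed normalized to [0, 1], where higher
    means healthier behavior as reported by a wearable device.
    """
    if not 0.0 <= activity_score <= 1.0:
        raise ValueError("activity_score must be in [0, 1]")
    if activity_score >= 0.8:        # very active: 10% discount
        factor = 0.90
    elif activity_score >= 0.5:      # moderately active: 5% discount
        factor = 0.95
    elif activity_score >= 0.3:      # baseline: no adjustment
        factor = 1.00
    else:                            # sedentary: 10% surcharge
        factor = 1.10
    return round(base_rate * factor, 2)
```

Note that exactly because such rules move money, they fall under the reporting and monitoring measures described above.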
When using data for AI, companies should ensure that characteristics leading to any kind of discrimination are not factored in, especially when calculating risk and pricing. Such a risk can also arise if these characteristics are merely replaced with an approximation. The repercussions can be legal, financial, and reputational. Companies should therefore establish (statistical) verification processes to rule out discrimination. For example, the Finnish credit institution Svea Ekonomi AB refused a loan to a male applicant; the decision was found to rely on surrogate variables – in this case, gender, native language, age, and place of residence – which constitute direct or indirect statistical discrimination.
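One common statistical verification is the "four-fifths rule" (adverse-impact ratio), which flags a decision process when one group's approval rate falls below 80% of another's. The sketch below uses made-up data — the group labels and decisions are not from the BaFin text or the Svea Ekonomi case:

```python
# Illustrative fairness check: the four-fifths rule (adverse-impact
# ratio) comparing approval rates across groups. Sample data is invented.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest; a value
    below 0.8 flags potential disparate impact under the common
    four-fifths heuristic."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Group A: 8 of 10 approved; group B: 5 of 10 approved.
sample = [("A", True)] * 8 + [("A", False)] * 2 + \
         [("B", True)] * 5 + [("B", False)] * 5
ratio = adverse_impact_ratio(sample)   # 0.5 / 0.8 = 0.625 -> flagged
```

The same check can be run against proxy variables (postcode, language) to catch indirect discrimination of the kind found in the Svea Ekonomi case.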
A good data strategy will help companies ensure the optimal quality and quantity of their data and, eventually, good results from their AI applications. The data strategy must be implemented within a data governance system, with responsibilities clearly defined. With the volume of data coming from different sources, channels, and devices, it is also prudent to consider the time value of the data, i.e., how old or new a data feed is and how relevant it remains for a specific use case.
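One simple way to operationalize the "time value of data" is recency weighting: discounting older observations with an exponential decay so fresher data carries more weight. A minimal sketch, where the 90-day half-life is an assumption for illustration:

```python
# Minimal sketch of recency weighting for the "time value" of data.
# The half-life is an illustrative assumption, not a prescribed value.

def recency_weight(age_days: float, half_life_days: float = 90.0) -> float:
    """Weight in (0, 1]; halves every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def weighted_mean(values_with_age, half_life_days: float = 90.0) -> float:
    """values_with_age: iterable of (value, age_in_days) pairs."""
    num = den = 0.0
    for value, age in values_with_age:
        w = recency_weight(age, half_life_days)
        num += w * value
        den += w
    return num / den
```

The right half-life depends on the use case: fraud signals age in days, while demographic features may stay relevant for years.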
Compliance with the applicable data protection regulations is non-negotiable. Regulations like GDPR (and any other applicable local laws) require companies to define their data privacy policies and make them easily accessible. The corresponding data subjects should be made aware, through proper disclosure, of how their data is being used and what is being done with it. With such a significant amount of data being collected and processed in present times, it is both a responsibility and a challenge for companies to protect customer data and to be transparent about its usage. Violations of the laws can be expensive: Caixabank of Spain, for example, was fined €6 million on January 13, 2021, for GDPR violations regarding the use of customers’ personal data and for violating transparency requirements.
Sufficient documentation is required to ensure that the algorithms and the underlying models can be verified – by the company itself and by authorized auditors and supervisors. This includes model selection, model calibration and training, and model validation.
The AI application should go through a rigorous validation process before it is deployed into production. After deployment, it should be checked regularly. It is essential to identify the factors that trigger ad-hoc validation of the algorithms and thus potentially lead to an algorithm being recalibrated, or an alternative algorithm being selected. Such factors include a systematic change in input data, external (macroeconomic) shocks, changes to the legal requirements under which an algorithm operates, and feedback from the output phase, such as a threshold being crossed.
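A systematic change in input data can be detected with the Population Stability Index (PSI), a common drift measure used as exactly this kind of validation trigger. A sketch, where the 0.2 threshold is a widely used rule of thumb rather than a regulatory value:

```python
# Sketch: detecting input-data drift with the Population Stability
# Index (PSI) as a trigger for ad-hoc model validation. The 0.2
# threshold is a common rule of thumb, not a BaFin-mandated value.

import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI between two binned distributions (fractions summing to ~1)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

def needs_revalidation(expected_fracs, actual_fracs, threshold=0.2):
    return psi(expected_fracs, actual_fracs) > threshold

# Baseline distribution of a feature at training time vs. in production.
baseline = [0.25, 0.25, 0.25, 0.25]
shifted  = [0.10, 0.20, 0.30, 0.40]
```

When `needs_revalidation` fires, the actions described above follow: recalibrate the algorithm or select an alternative one.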
The AI applications should ensure that the outcomes are accurate and can be reproduced at a later point in time. This helps both internally, to verify the original outcomes, and externally from any compliance perspective, e.g. in audits or litigations.
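Reproducibility in practice means pinning down every source of variation (random seeds, model version, exact inputs) and logging a fingerprint that lets an auditor re-derive the outcome later. A minimal sketch with an invented toy scoring function — the field names and version string are illustrative, not from the source:

```python
# Sketch of reproducible scoring: fix the seed and log a fingerprint
# (model version + hash of the inputs) so an outcome can be re-derived
# later, e.g. during an audit. The scoring formula is a toy example.

import hashlib
import json
import random

def score(applicant: dict, seed: int = 42) -> float:
    """Toy score; the seed pins down any stochastic component."""
    rng = random.Random(seed)
    noise = rng.uniform(-0.01, 0.01)
    return round(0.5 * applicant["income_band"]
                 + 0.3 * applicant["tenure_years"] + noise, 4)

def audit_record(applicant: dict, model_version: str, seed: int = 42) -> dict:
    """Everything needed to reproduce and verify the outcome later."""
    payload = json.dumps(applicant, sort_keys=True).encode()
    return {
        "model_version": model_version,
        "seed": seed,
        "input_sha256": hashlib.sha256(payload).hexdigest(),
        "score": score(applicant, seed),
    }

applicant = {"income_band": 3, "tenure_years": 5}
r1 = audit_record(applicant, "v1.2.0")
r2 = audit_record(applicant, "v1.2.0")   # identical record both times
```

Storing such records alongside decisions supports both internal verification and external audits or litigation, as the principle requires.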
AI algorithms learn from the available data, which is one of the critical factors driving performance and decision-making. Therefore, the focus should be on the selection and accurate documentation of this data. If the data is not relevant for the use case, or its coverage is not optimal, it could lead to a modelling bias. Depending on the scope and riskiness of the decision for which an algorithm is used, various measures should be taken to ensure that the calibration and validation can subsequently be understood and verified.
Bias is an inclination or prejudice towards or against an object, person, or position. It is therefore important to identify where the risk of bias may occur, take its root cause into account, and either eliminate or at least mitigate that risk. For example, a bank's fraud prevention system could make the legitimacy of a customer's transaction dependent solely on the neighborhood the customer resides in.
While leveraging AI, companies should be pragmatic about involving humans in decision making, especially for critical use cases. Although BDAI is continuously evolving, there are still some things algorithms cannot achieve without the application of human cognitive skills. It is also useful to define the time frames in which a decision can still be reversed and humans can intervene.
The approval process should be clearly defined in advance in a risk-oriented manner wherever AI-based decisions are made. This could be in the form of tiered rules set up for outcomes, which govern the approval process. This reduces the risk of erroneous decisions reached in an algorithmic decision-making process and can improve the quality of results in the long term, thanks to an ongoing feedback mechanism.
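Tiered, risk-oriented approval rules can be as simple as routing each model outcome by score band: auto-approve clear cases, send borderline ones to a human, and auto-decline the rest. The bands below are hypothetical:

```python
# Illustrative tiered approval routing for algorithmic decisions.
# The score bands are hypothetical, not regulatory values.

def route_decision(score: float) -> str:
    """Map a model score in [0, 1] to an approval path."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.85:
        return "auto-approve"
    if score >= 0.40:
        return "human-review"    # human in the loop for borderline cases
    return "auto-decline"
```

Logging the human reviewers' overrides of `human-review` cases provides the ongoing feedback mechanism that improves result quality over time.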
Companies should set out alternative measures whereby business operations can continue to run if problems arise in the algorithm-based decision-making processes.
Once the algorithm is implemented, ongoing evaluations, validations, and adjustments will ensure that it remains aligned with the objective of the use case, without discrepancies creeping in. Continuous checks are important because they allow new or unforeseeable internal or external risks – ones not factored in while the algorithm was being created – to be addressed swiftly.
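Continuous checking can be implemented as a rolling monitor over recent outcomes that raises a flag when an error rate drifts past a tolerance. A sketch, where the window size and tolerance are illustrative assumptions:

```python
# Sketch of ongoing outcome monitoring: track a rolling error rate and
# flag the model for review when it exceeds a tolerance. Window size
# and tolerance are illustrative, not prescribed values.

from collections import deque

class OutcomeMonitor:
    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.errors = deque(maxlen=window)   # keeps only recent outcomes
        self.max_error_rate = max_error_rate

    def record(self, was_erroneous: bool) -> None:
        self.errors.append(was_erroneous)

    @property
    def error_rate(self) -> float:
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def needs_review(self) -> bool:
        return self.error_rate > self.max_error_rate

monitor = OutcomeMonitor(window=10, max_error_rate=0.2)
for outcome in [False] * 7 + [True] * 3:   # 30% recent error rate
    monitor.record(outcome)
```

A flag from such a monitor feeds directly back into the validation and recalibration triggers described earlier.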
BaFin understands and acknowledges the need to work along with other national and international supervisory authorities and standard setters. These include, but are not limited to, the European Supervisory Authorities, the EIOPA InsurTech Task Force, the ESMA Financial Innovation Standing Committee, the Basel Committee on Banking Supervision (BCBS), and the International Association of Insurance Supervisors (IAIS).
To conclude, it would be fitting to sum up the importance of supervisory principles for BDAI with this statement by Felix Hufeld, former President of BaFin, from a recent report: “What I find interesting in this context is the fundamental question of where the limits of data collection and analysis should be in the case of BDAI. At what point does a marginal improvement in risk assessment justify the collection of more data? Which data can we categorize as offering real, long-term and material advantages, while ensuring a balance between the information that needs to be obtained and other objectives such as data minimization (Datensparsamkeit)? I think we need to have a broad dialog with all those concerned – but we also need to ask ourselves, as a society, where we want red lines to be drawn in the brave new world of data.”