The UK financial services regulators, namely the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) (collectively, the regulators), jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper is to facilitate a public debate on the safe and responsible adoption of AI in UK financial services.
In summary, DP5/22 examines:
- the potential merits of a regulatory definition for AI;
- the benefits, risks and harms associated with the use of AI and machine learning, which could significantly affect or even transform the way financial services and markets operate; and
- how the current regulatory framework for AI might apply.
The regulators have also posed discussion questions for stakeholder input, seeking to understand whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI and how additional interventions could support the safe and responsible adoption of AI in UK financial services.
The regulators have not proposed a new legal framework or set out their intended future approach to regulating the use of AI and machine learning in UK financial services. However, the discussion paper provides a valuable platform for regulators, experts and stakeholders to work together and collectively assess whether the current regulatory framework can adequately regulate AI technology in a way that advances each regulator’s objectives, while encouraging innovation in UK financial services.
This consultation comes in parallel with the UK Government’s ongoing work to develop its own cross-sectoral approach to regulating AI technology and will therefore make a valuable contribution to this broader policy debate.
Possible benefits of a regulatory definition for AI
Despite the challenges of defining AI, regulators point out that a precise definition of AI brings benefits including: (i) creating a common language for companies and regulators, which can reduce uncertainty; (ii) supporting a consistent and harmonized regulatory response to AI; and (iii) providing a basis for determining whether or not certain use cases might be covered under certain rules and policies.
The regulators also point to the merits of distinguishing between AI and non-AI systems, both to provide clarity on what constitutes AI in the context of a particular regulatory regime and to manage risks and expectations.
Benefits and risks associated with using AI in financial services
The benefits and risks of using AI are categorized in the discussion paper by reference to the regulators’ statutory objectives, namely consumer protection, competition, the safety and soundness of firms, policyholder protection, financial stability and market integrity.
- Consumer Protection (FCA): AI can help identify consumer characteristics and preferences by processing large amounts of data, which in turn can enable tailored and personalized services, for example providing financial services to consumers with non-standard histories. However, there is a risk that AI may produce biased results and discriminate against consumers on the basis of protected characteristics such as race, religion or gender.
- Competition (FCA): Consumer-focused AI systems, such as those used in open banking, can improve competition in a marketplace by improving consumers’ ability to assess, access, and act on information. However, AI systems could facilitate collusion between sellers by making price changes more easily recognizable, and the high cost of entry in terms of data, labor and AI technology could hamper competition.
- Safety and Soundness (PRA and FCA): AI enables financial services firms to develop more accurate decision-making tools, generate new insights, create safer products and services for consumers, and improve operational efficiency. However, AI could also amplify prudential risks (credit, liquidity, market, operational, reputational, etc.) and threaten the safety and soundness of firms.
- Policyholder Protection (PRA and FCA): AI can automate data collection, underwriting and claims processing, helping firms offer policyholders more personalized insurance products. However, biased or unrepresentative input data can cause AI systems to treat certain policyholders unfairly, which can lead to inappropriate pricing and marketing.
- Financial Stability and Market Integrity (BoE and FCA): AI can be used to process large amounts of data and information more efficiently, particularly in relation to credit decisions, insurance contracts and customer interactions, which can contribute to an overall more efficient financial system. However, as a growing number of financial services firms deploy AI technology using similar datasets and AI algorithms, and rely on the same third-party providers, AI may amplify existing financial stability and systemic risks.
Existing legal requirements for the use of AI
In the discussion paper, the regulators set out current and future legal requirements and guidance relevant to mitigating the risks associated with AI, including but not limited to the FCA Consumer Duty rules, the UK General Data Protection Regulation (UK GDPR), the Equality Act 2010 and the Senior Managers and Certification Regime (SM&CR).
These and other relevant regulations and guidance will be examined in more detail in the next article in our upcoming series on AI and machine learning in financial services.
The discussion paper closes on 10 February 2023 and stakeholders can send comments or inquiries to [email protected] before the deadline. We will be closely monitoring any reactions to this discussion paper and the UK Government’s future approach to regulating AI.
As mentioned above, this article kicks off an upcoming series of articles that we will be publishing about the range of regulations and areas of law affected by AI and machine learning.