AI could amplify risks and disadvantage customers, say FS leaders

The use of artificial intelligence (AI) and machine learning (ML) represents a step change in complexity, speed, and scale, all of which could amplify existing risks in financial services, the Bank of England's deputy governor for markets and banking said at a recent event.

Speaking at the Artificial Intelligence Public-Private Forum, hosted by the Bank of England and the Financial Conduct Authority (FCA), co-chair Dave Ramsden highlighted the importance of model risk and model risk management (MRM) as a primary framework for addressing risks related to AI.

The forum, attended by representatives from across the financial services sector, was set up to help the industry better understand the impact of the technology on financial services.

In attendance were members from Google Cloud, Amazon Web Services, Experian, Mastercard, Credit Suisse, Capital One UK, Visa, Starling Bank, Microsoft UK, Royal Bank of Canada, National Australia Bank, University College London, Truera, Datactics, and the Alan Turing Institute.

Sheldon Mills, executive director at the FCA, said that while MRM may seem a very technical area, it could provide a basis from which to develop a broader regulatory approach to AI.

Mills added that it would be useful to consider existing MRM frameworks and how effective they are in relation to AI, including whether they need adjusting; how MRM practice differs between banking and other areas of financial services; whether there are general MRM principles, from financial services and other sectors, that can be applied to AI; and how to strike the right balance between a framework that provides certainty, regulatory effectiveness and transparency, and one that allows for beneficial innovation.

Members identified the risks arising from AI models, categorising them into three broad areas: risks to the consumer, risks to the firm, and systemic risks. Key risks included deterioration of model performance due to incorrect training data, operational risk exposures and change management problems, tacit collusion, and amplification of herd behaviour.

Guests agreed that the majority of AI-related risks in the industry already exist in other frameworks and sectors, but said that the scale at which AI is beginning to be deployed and the complexity of the models are new to the industry.

Attendees also said an overarching theme was emerging around shifting power relationships between individuals, groups and institutions. In some cases these shifts create new power relationships; in others they widen existing misalignments.

It was suggested that AI has given firms the capacity to influence, profile and target consumers in a way that was not previously technically possible. The group warned that, at the extreme, this shift in power could significantly disadvantage customers.

One member questioned whether this could have implications for life insurance underwriting and the pooling of risk, for example, since insurers could potentially know everything about an individual, including aspects that could not be analysed in the past.

Another member said that the use of AI models in life insurance underwriting provides a more concise, though not fully holistic, picture of customer behaviour.

One attendee spoke about the systemic risks and the potential for networks or clusters of AI models to have a significant and unpredictable impact on wholesale market structure, which may in turn have implications for consumers, firms and the system as a whole.

Several members agreed that inadvertent risks can emerge because there are many unknowns with AI, especially when multiple models interact within a network.

A further challenge highlighted by members was around identifying when model outputs shift or degrade, especially with reinforcement learning models that can change their behaviour over time. This challenge is often amplified because models are trained separately and in isolation, so it can be very difficult to understand how they will interact and what emergent behaviour may look like.

In addition, one member said there could be an increase in cybersecurity risks both to and from AI models, which could in turn become systemic.
