AI could amplify risks and disadvantage customers, say FS leaders

The use of artificial intelligence (AI) and machine learning (ML) represents a step change in complexity, speed, and scale, all of which could amplify existing risks in financial services, the Bank of England’s deputy governor for markets and banking said at a recent event.

Speaking at the Artificial Intelligence Public-Private Forum, hosted by the Bank of England and the Financial Conduct Authority (FCA), co-chair Dave Ramsden highlighted the importance of model risk and model risk management (MRM) as a primary framework for addressing risks related to AI.

The forum, attended by representatives from across the financial services sector, was set up to help the industry better understand the impact of the technology on financial services.

In attendance were members from Google Cloud, Amazon Web Services, Experian, Mastercard, Credit Suisse, Capital One UK, Visa, Starling Bank, Microsoft UK, Royal Bank of Canada, National Australia Bank, University College London, Truera, Datactics, and the Alan Turing Institute.

Sheldon Mills, executive director at the FCA, said that while MRM may seem a very technical area, it could provide a basis from which to develop a broader regulatory approach to AI.

Mills added that it would be useful to consider existing MRM frameworks and how effective they are for AI, including whether they need adjusting; how MRM practice differs between banking and other areas of financial services; whether there are general MRM principles, from financial services and other sectors, that can be applied to AI; and how to strike the right balance between a framework that provides certainty, regulatory effectiveness and transparency, and one that allows for beneficial innovation.

Members identified the risks arising from AI models, categorising them into three broad areas: risks to the consumer, risks to the firm, and systemic risks. Key risks included deterioration of model performance due to incorrect training data, operational risk exposures and change management problems, tacit collusion, and amplification of herd behaviour.

Guests agreed that most of the risks related to AI already exist in other frameworks and sectors, but said that the scale at which AI is beginning to be deployed and the complexity of the models are new to the industry.

Attendees also said there was an overarching theme emerging around shifting power relationships between individuals, groups and institutions. In some cases, these shifts involve the creation of new power relationships and in others they can widen existing misalignments.

It was suggested that AI has given firms the capacity to influence, profile and target consumers in ways that were not previously technically possible. The group warned that, at the extreme, this shift in power could significantly disadvantage customers.

One member questioned whether this could have implications for life insurance underwriting and the pooling of risk, for example, since insurers could potentially know everything about an individual, including aspects that could not be analysed in the past.

Another member said that the use of AI models in life insurance underwriting provides a more concise, though not fully holistic, picture of customer behaviour.

One attendee spoke about the systemic risks and the potential for networks or clusters of AI models to have a significant and unpredictable impact on wholesale market structure, which may in turn have implications for consumers, firms and the system as a whole.

Several members agreed that inadvertent risks can emerge because there are many unknowns with AI, especially when multiple models interact within a network.

A further challenge highlighted by members was around identifying when model outputs shift or degrade, especially with reinforcement learning models that can change their behaviour over time. This challenge is often amplified because models are trained separately and in isolation, so it can be very difficult to understand how they will interact and what emergent behaviour may look like.
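To illustrate the kind of monitoring this implies (an editorial sketch, not something presented at the forum): one common approach is to compare a recent window of a model's outputs against a reference window captured at validation time, using a two-sample statistical test, and to alert when the distributions diverge. The sketch below uses SciPy's Kolmogorov-Smirnov test; the function name, window sizes and significance threshold are all illustrative assumptions rather than recommended values.

    import numpy as np
    from scipy.stats import ks_2samp

    def output_drift_detected(reference: np.ndarray,
                              recent: np.ndarray,
                              alpha: float = 0.01) -> bool:
        """Flag drift if recent model outputs no longer match the
        reference distribution (two-sample Kolmogorov-Smirnov test).
        `alpha` is an illustrative threshold, not a production value."""
        statistic, p_value = ks_2samp(reference, recent)
        return p_value < alpha

    # Hypothetical usage: scores logged at validation time versus
    # scores from the most recent scoring window.
    rng = np.random.default_rng(0)
    reference_scores = rng.normal(0.0, 1.0, size=5_000)
    recent_scores = rng.normal(0.3, 1.2, size=1_000)  # shifted distribution
    print(output_drift_detected(reference_scores, recent_scores))  # True

Even this simple check is complicated by reinforcement learning models, since the "reference" distribution itself moves as the policy updates, which is precisely the difficulty members raised.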

In addition, one member said there could be an increase in cybersecurity risks both to and from AI models, which could become systemic.
