FCA proposes AI transparency framework

The Financial Conduct Authority (FCA) and The Alan Turing Institute have proposed a high-level framework for thinking about artificial intelligence (AI) transparency in financial markets.

As part of a year-long collaboration on the subject, the regulator and thinktank have presented an initial framework for thinking about transparency needs in relation to machine learning in financial markets.

Henrike Mueller, technical specialist in the Innovation Division at the FCA, and Florian Ostmann, who leads the public policy programme at The Alan Turing Institute, suggested that transparency can play a key role in the pursuit of responsible innovation.

A recent survey on machine learning published by the FCA and the Bank of England highlighted that financial services are witnessing rapidly growing interest in AI, but while it has the potential to enable positive transformations, the technology also raises important ethical and regulatory questions.

The FCA followed this last month by starting work to better understand how developments in AI and machine learning (ML) are driving change in financial markets, including business models, products, services and consumer engagement.

“Especially when they have a significant impact on consumers, AI systems must be designed and implemented in ways that are safe and ethical,” read the blog post. “From a public policy perspective, there is a role for government and regulators to help define what these objectives mean in practice.”

The Information Commissioner’s Office also yesterday launched its own consultation on the use of AI, with draft proposals on how to audit risk, governance and accountability in AI applications.

“One important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems,” the pair wrote. “Providing information may, for instance, address concerns about a particular AI system’s performance, reliability and robustness; discrimination and unfair treatment; data management and privacy; or user competence and accountability.”

For instance, transparency may enable customers to understand and - where appropriate - challenge the basis of particular outcomes, with the post giving the example of an unfavourable loan decision based on an algorithmic creditworthiness assessment that involved factually incorrect information.

“Information about the factors that determine outcomes may also enable customers to make informed choices about their behaviour with a view to achieving favourable outcomes,” stated Ostmann and Mueller. “An illustration for this rationale would be the value to customers of knowing that credit scores depend on the frequency of late payments.”

The post noted that many common concerns raise process-related questions. Information about the quality of the data that was used in developing an algorithmic decision-support tool, for example, can play an important role in addressing concerns about bias.

“Rather than narrowly focusing on questions of model transparency, a balanced perspective on transparency needs will thus be based on a broader assessment of possible transparency measures that involve model-related as well as process-related information,” the pair explained.

The post suggested that decision-makers may find it helpful to develop a ‘transparency matrix’ that, for a particular use case, maps different types of relevant information against different types of relevant stakeholders.

The matrix can then be used to structure a systematic assessment of transparency interests: stakeholder types are considered one by one, their respective reasons for caring about transparency are identified, and the case for making each type of information listed in the matrix accessible to that stakeholder type is evaluated.
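As a minimal illustrative sketch (not part of the FCA/Turing proposal itself), such a matrix could be represented as a simple lookup of access decisions per stakeholder type and information type. The stakeholder types, information types and access decisions below are hypothetical examples chosen to echo the post's distinction between model-related and process-related information:

```python
# Hypothetical 'transparency matrix' for one use case (e.g. algorithmic
# credit scoring): maps stakeholder types against the kinds of information
# they might be given access to. All entries are illustrative assumptions.

information_types = [
    "input features used",     # model-related
    "model logic",             # model-related
    "training data quality",   # process-related
    "governance and sign-off", # process-related
]

# Access decision per (stakeholder type, information type) pair.
transparency_matrix = {
    "customer": {
        "input features used": True,
        "model logic": False,
        "training data quality": False,
        "governance and sign-off": False,
    },
    "regulator": {
        "input features used": True,
        "model logic": True,
        "training data quality": True,
        "governance and sign-off": True,
    },
    "internal auditor": {
        "input features used": True,
        "model logic": True,
        "training data quality": True,
        "governance and sign-off": True,
    },
}

def information_for(stakeholder: str) -> list:
    """Return the information types a given stakeholder type would see."""
    row = transparency_matrix[stakeholder]
    return [info for info in information_types if row[info]]

print(information_for("customer"))   # only model inputs, per this sketch
print(information_for("regulator"))  # full model- and process-related view
```

Walking the matrix stakeholder by stakeholder in this way mirrors the assessment the post describes: each row records, for one stakeholder type, which information the decision-maker has judged should be made accessible.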

Ostmann and Mueller concluded that the opportunities and risks associated with the use of AI models depend on context and vary from use case to use case.

“In the absence of a one-size-fits-all approach to AI transparency, a systematic framework can assist in identifying transparency needs and deciding how best to respond to them, bringing into focus the respective roles of process-related and model-related information in demonstrating trustworthiness and contributing to beneficial innovation.”
