FCA helps launch AI hub to guide developers on regulation

The Financial Conduct Authority (FCA) is joining other key regulators on a pilot scheme designed to help companies developing AI to meet existing regulatory standards.

The Digital Regulation Cooperation Forum (DRCF), which includes the FCA, the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), and Ofcom, will give companies access to informal advice to support them in complying with the different regulatory regimes that govern the development and deployment of AI models.

The announcement comes ahead of a government deadline of 30 April, which calls on UK regulators to outline their strategic approach to AI.

The deadline also requires UK watchdogs to explain the steps they are taking following the publication of a government white paper last year, which proposed a new framework for governing AI based on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

It is up to the regulators to interpret and apply these principles to AI use cases within their remits.

While the framework is non-statutory, the government said it may become necessary to enforce these standards at a later point.

Speaking about the launch of the new hub, the technology secretary described AI as the "defining technology of our generation".

“Through the AI and Digital Hub, we can bring groundbreaking innovators together with our expert regulators to streamline the process of harnessing the technology’s incredible potential," said secretary of state for science, innovation, and technology, Michelle Donelan.
“Our regulatory approach to AI places innovation at its heart, and this pilot scheme will play a vital role in helping us to refine that approach both now and in the years to come.”

The tech department said that the hub will give regulators the opportunity to gain first-hand insights and feedback from innovators, helping them to refine their regulatory regimes for AI models, strengthen the UK’s overall regulatory approach, and "inform new guidance".

It also delivers on the Vallance review recommendation to establish an AI Regulation Sandbox, which will invite applications from tech developers across the economy.


