The head of digital policy at Barclays has said that disparate cross-sector regulatory frameworks for AI could pose a "potential challenge" for the UK.
During a panel discussion on Tuesday about what is next for AI regulation across the UK, EU and US, Nicole Sandler told delegates that with differing frameworks and definitions of AI, firms need to prepare for the "most onerous" regulation.
Sandler, who was attending City & Financial Global's AI conference at City Week, said that while she was not criticising the UK's approach to AI regulation, she acknowledged that differing policymaker guidance across sectors could lead to divergent approaches to AI and "fragmentation".
The UK has taken a different approach from the EU, whose AI Act is set to launch next month.
In contrast with the bloc's legislation, Britain will take a cross-sector, outcomes-based approach to AI regulation focused on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Sandler said that with the UK not yet launching specific AI regulation, it is unclear how the different rules will "map together".
The EU approach
During the panel discussion, Kai Zenner, head of office and digital policy advisor for MEP Axel Voss (EPP Group), European Parliament, expressed support for some of the UK's approaches to AI regulation whilst criticising the EU’s AI Act – saying that the upcoming legislation is “extremely vague”.
“Right now, no one really knows how to fulfil the AI Act,” he said.
Zenner praised the UK's Digital Regulation Cooperation Forum (DRCF), which was launched in July 2020 to bring together the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and the Office of Communications (Ofcom).
He told delegates that the EU is missing an initiative like the DRCF, suggesting that the bloc works in silos which are creating "a lot of problems".
The advisor explained that it is unclear who will make decisions in the EU's AI Office, which is set to be the centre of AI expertise across the EU and aims to play a key role in implementing the new act.
Zenner went on to say that secondary legislation would help to address specific areas such as how to fulfil human oversight.
"This needs to be broken down into sectoral use cases," added Zenner, who said he hopes a second set of EU AI legislation will be launched in the next two or three years.
He said that the EU Commission is being questioned about guidelines on what is and is not prohibited, as well as what the high-risk use cases are, adding that he wants to see the centralisation of many of these points to avoid "GDPR scenarios happening again with 27 viewpoints".