Report warns AI could crash markets again
Written by Peter Walker
A new report into the use of artificial intelligence (AI) in finance has warned that regulators need to rein in the technology or face serious market instability caused by unaccountable algorithms.
Risk management consultancy Parker Fitzgerald stated that AI’s power to improve productivity in financial services is undeniable, while predictive analytics and machine learning have opened new possibilities in the detection of fraudulent activity and financial crime.
For example, report contributor Ayasdi is engaged with HSBC to improve its anti-money laundering systems, with intelligent segmentation of customers already reducing the number of false positives by 20 per cent, while enhancing the overall risk profile for the bank.
However, beyond the familiar focus on threats to job security, the more pressing concern around AI is its implications for financial stability.
One risk raised was new and unexpected forms of interconnectedness between financial markets. The use of AI may also make it difficult for human users at financial institutions and regulators to grasp how decisions, such as those on trading and investment, have been formulated.
Institutional interdependencies and risk correlations are central to financial crises and market crashes. The 2010 ‘flash crash’ was triggered by an automated algorithmic trade, with US stocks and futures markets losing 10 per cent of market value in a matter of minutes, only to recover hours later. In February this year, the Dow Jones collapsed by 1,000 points in 11 minutes – the biggest points fall in the benchmark’s history.
In a recent report examining the risk implications of AI, the international Financial Stability Board (FSB) highlighted that the lack of transparency and auditability of AI algorithms in trading poses macro-level risks.
The use of AI and machine learning also risks the creation of ‘black boxes’ in decision-making. The reasoning mechanism used by such tools may be incomprehensible to humans, posing monitoring challenges for human operators.
Another challenge relates to new sources of market concentration in financial services, especially with regard to third-party relationships. Word of mouth and the scalability of new technologies could cause the provision of AI to concentrate among a small number of advanced third-party providers, thereby increasing market concentration in some functions of the financial system, stated Parker Fitzgerald.
As the FSB noted, this may lead to “the emergence of new systemically important players that could fall outside the regulatory perimeter” and trigger systemic risks if a large technology provider were to face a major disruption or insolvency. This has prompted the Financial Conduct Authority (FCA) to heighten its supervisory efforts on third-party dependencies and supply.
A recent survey of 200 global tier one and tier two banks found that 83 per cent have evaluated AI and machine learning solutions, while 67 per cent have actively deployed them.
As the applications of AI continue to grow, the report set out three principles for managing the risk:
1. Regulators need to specify their ‘red lines’ for the use of AI by companies. Explainability, auditability and reproducibility are key in governing the use of AI and other technology in finance.
2. Greater RegTech use will be critical for improving regulatory efficiency. Further use of ‘tried and tested’ tools, such as the FCA’s FinTech Sandbox, could also prove effective.
3. Macro-level standards on AI and international data regulations will be integral to the responsible adoption of AI. This is particularly pertinent in the context of Brexit.
Parker Fitzgerald concluded that if AI is not properly understood, unexpected market jitters may in turn lead to bigger shocks to wider macro-financial stability. “Artificial intelligence can only be as beneficial as the supervisory systems in place allow it to be,” the authors commented.
“To encourage commitment to AI adoption and ensure that this benefits the financial industry both now and in the long-term, regulators across the globe need to provide clear-cut definitions of the allowed reach of this technology and avoid regulatory balkanisation for both AI and FinTech overall.”
In April, the UK government announced plans to collaborate with more than 50 businesses and organisations to develop a £1 billion deal to put the UK at the forefront of the AI industry.
However, shadow chancellor John McDonnell warned that effective regulation would be needed to realise the benefits of such technology.
“Perhaps the greatest single lesson of the last decade in finance is that deregulation of complex and essential activities like financial services will not lead to the best result for society,” he stated. “The market, left to its own devices, will not always produce the best possible outcome.”