Will AI create operational risk?

The Bank of England has released a paper setting out the challenges that Artificial Intelligence poses for financial institutions.

Fund Operator Editor | Posted on 15 April 2025


In a new paper, the UK’s central bank says Artificial Intelligence (AI) could be a great boon for industries across the economy, particularly finance, but that it comes with many risks – some of which are still not fully understood.

In its paper, “Financial Stability in Focus: Artificial intelligence in the financial system,” the bank sets out the Financial Policy Committee’s (FPC) view on specific topics related to financial stability.

It warns that “Operational risks in relation to AI service providers” exist, “bringing potential impacts on the operational delivery of vital services”.

“Financial institutions generally rely on providers outside the financial sector for AI-related services to develop and deploy AI (just as they do for various other IT services),” it said. “Reliance on a small number of providers for a given service could lead to systemic risks in the event of disruptions to them, especially if it is not feasible to migrate rapidly to alternative providers.”

The paper listed several key areas where operational risk from the overuse – or misuse – of AI could leave asset managers and other institutions, such as banks and insurance companies, exposed.

It specified that greater use of AI in financial markets could bring potential systemic risks.

“Greater use of AI to inform trading and investment decisions could help increase market efficiency,” it said. “But it could also lead market participants inadvertently to take actions collectively in such a way that reduces stability.”

For instance, it said, the potential future use of more advanced AI-based trading strategies could lead to companies taking increasingly correlated positions and acting in a similar way during a stress event, thereby amplifying shocks.

“Such market instability can then affect the availability and cost of funding the real economy.”

These concerns echo the views of Hens Steehouwer, Chief Innovation Officer at Ortec Finance, who recently told Fund Operator’s sister site Insurance Investor that AI could be of great value but needed to be handled carefully.

He said the key to successful deployment of AI is to apply it at the point in existing processes where it can add the most value and its operation can be most easily understood – and explained.

“You need to be able to explain and understand why things work well,” he said. “With AI you might think there's an explainability issue because we don't know what these algorithms do. If you pick and choose carefully what type of applications, it's not an issue.”

Steehouwer said that, at this stage of the journey, although AI can perform a wide range of functions, it should be applied to the parts of complicated modelling, risk, or investment processes where “explainability” to key stakeholders does not become an issue.

“We apply these algorithms in our traditional optimisation approach to come up with suggested alternative candidate portfolios in a smart way,” he said, explaining specifically how they use it in Strategic Asset Allocation (SAA) modelling. “But the end result is just an SAA, a portfolio composition, which is very transparent and visible. Yes, there are a lot of complex calculations in the background, but the end result is very tangible, and the analysts can look at it and drill down on it. In that way, explainability is no issue at all. You could say it's just another very sophisticated optimisation algorithm, which is efficient and comes up with answers that traditional algorithms cannot.”
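To make the pattern Steehouwer describes concrete, here is a minimal sketch, assuming a plain mean-variance objective and random candidate generation rather than Ortec Finance’s actual algorithm; the asset names, expected returns, covariance figures, and risk-aversion setting are hypothetical placeholders. An automated search proposes many candidate portfolios, but what reaches the analyst is simply a transparent allocation that can be inspected line by line.

```python
# Illustrative sketch only: a generic "propose candidates, score, return a
# transparent allocation" loop. The asset names, expected returns, covariance
# matrix, and risk-aversion parameter are hypothetical placeholders, not
# figures from Ortec Finance or the Bank of England paper.
import numpy as np

rng = np.random.default_rng(0)

assets = ["Equities", "Gov bonds", "Credit", "Real estate"]
expected_returns = np.array([0.065, 0.025, 0.040, 0.055])   # hypothetical
covariance = np.array([
    [0.0225, 0.0010, 0.0060, 0.0080],
    [0.0010, 0.0016, 0.0012, 0.0008],
    [0.0060, 0.0012, 0.0064, 0.0040],
    [0.0080, 0.0008, 0.0040, 0.0144],
])                                                           # hypothetical
risk_aversion = 3.0

def score(weights: np.ndarray) -> float:
    """Simple mean-variance utility: higher is better."""
    expected_return = weights @ expected_returns
    variance = weights @ covariance @ weights
    return expected_return - 0.5 * risk_aversion * variance

# Candidate generation: random draws on the simplex, standing in for whatever
# smarter search an AI-assisted optimiser might use to suggest portfolios.
candidates = rng.dirichlet(alpha=np.ones(len(assets)), size=5_000)
best = max(candidates, key=score)

# However complex the search, the end result is just a portfolio composition
# that an analyst can read and drill down on.
for asset, weight in zip(assets, best):
    print(f"{asset:<12} {weight:6.1%}")
```

The point of the sketch is that the complexity sits in the search step, while the output remains an ordinary asset-weight table – which is why, in Steehouwer’s terms, explainability need not be an issue.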

Much the same criteria apply to its use in operations: outputs should be easily digestible for the various stakeholders, and AI should not be deployed everywhere for its own sake but used judiciously where real benefits can be seen.

Already in operational use

The report highlighted that AI in financial services is already helping to improve operational efficiency and effectiveness, especially through assisting employees with routine tasks.

For instance, new research revealed that over half of fund administrators (55%) are struggling with data acquisition and governance due to the rapid growth of private markets, with AI listed as a possible solution.

It specified that the 2024 AI Survey indicates that the top near-term use cases for AI (over the next three years) include optimising internal processes, enhancing customer support, and combatting financial crime. Financial firms appear most willing to deploy AI in these operationally focused use cases, which, according to the same survey, are expected to be among those delivering the biggest benefits in three years’ time.

Fraud detection, cybersecurity, and customer support were all in the top five uses as well.

The FPC said it intends to build out its monitoring approach to enable it to track the development of AI-related risks to financial stability.

Whatever pace adoption takes, it is likely to create more work for operational teams in the interim before the benefits become fully apparent.

 
