Lawmakers Call for ‘AI Stress Tests’ to Safeguard Britain’s Financial Services

Britain’s financial watchdogs are facing criticism for their insufficient measures to prevent artificial intelligence (AI) from harming consumers or destabilizing markets. A cross-party group of lawmakers has urged regulators to abandon their “wait and see” approach, emphasizing the need for proactive regulation.
In a report on AI in financial services, the Treasury Committee recommended that the Financial Conduct Authority (FCA) and the Bank of England (BoE) implement AI-specific stress tests. These tests would help financial firms prepare for potential market shocks caused by automated systems.
The committee also urged the FCA to provide comprehensive guidance by the end of 2026 regarding how consumer protection rules apply to AI. This includes clarifying the extent to which senior managers should understand the AI systems they oversee.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying,” stated committee chair Meg Hillier.
Technology Carries ‘Significant Risks’
The FCA has previously warned that a race among banks to adopt agentic AI—capable of making decisions and taking autonomous actions—poses new risks for retail customers. Approximately three-quarters of UK financial firms are now utilizing AI, applying the technology across essential functions, from processing insurance claims to conducting credit assessments.
While the report acknowledges the advantages of AI, it also highlights “significant risks.” These include opaque credit decisions, the potential exclusion of vulnerable consumers through algorithmic tailoring, fraud, and the dissemination of unregulated financial advice via AI chatbots.
Experts contributing to the report pointed out additional threats to financial stability, particularly the reliance on a limited number of U.S. tech giants for AI and cloud services. Concerns were also raised that AI-driven trading systems could exacerbate herding behavior in markets, potentially leading to a financial crisis in extreme scenarios.
An FCA spokesperson expressed support for the focus on AI and indicated that the regulator would review the report. However, the FCA has previously stated that it does not favor AI-specific regulations due to the rapid pace of technological change.
The BoE has yet to respond to requests for comment on the matter.
Hillier further noted that increasingly sophisticated forms of generative AI are influencing financial decisions. “If something has gone wrong in the system, that could have a very big impact on the consumer,” she remarked.
In a related development, Britain’s finance ministry has appointed Starling Bank CIO Harriet Rees and Lloyds Banking Group’s Rohit Dhawan as “AI Champions” to guide the adoption of AI in financial services.
(Reporting by Phoebe Seers; editing by Tommy Reggiori Wilkes)
