
FCA Helps Firms to Test AI Safely

  • Writer: OpusDatum
  • Dec 3
  • 2 min read

The Financial Conduct Authority’s launch of AI Live Testing marks a significant step in shaping the safe deployment of artificial intelligence across UK financial services. As firms accelerate adoption of machine learning and generative AI, the regulator is positioning itself as both an enabler of innovation and a guardian of consumer protection. By providing a supervised, real-world environment for firms to test AI applications, the FCA is actively reducing uncertainty around compliance, model risk and governance expectations.


AI Live Testing is notable for being the first initiative within the UK financial sector that allows firms to test AI in live markets with regulatory oversight. Participating firms receive tailored support from the FCA’s regulatory specialists and from its technical partner, Advai, which provides independent evaluation and assurance of AI systems. This dual-track support is designed to help firms understand risk profiles, strengthen model governance frameworks and establish credible monitoring mechanisms before AI tools are deployed at scale.


The first cohort reflects a wide cross-section of the industry, including Gain Credit, Homeprotect (part of the Avantia Group), NatWest, Monzo, Santander, Scottish Widows (part of Lloyds Banking Group) and Snorkl. The breadth of participating organisations demonstrates both the appetite for AI adoption and the need for structured regulatory engagement as firms experiment with advanced technologies that directly affect consumers.


Many of the applications being tested focus on retail financial services where AI has high potential impact. Use cases include AI-enabled debt resolution, automated financial advice, enhanced customer engagement, streamlined complaints handling and tools designed to support more informed spending and saving decisions. These areas present both opportunities and risks: while AI may improve efficiency and outcomes for consumers, it also introduces challenges around bias, explainability, fairness and model drift. Live Testing seeks to address these issues early by building robust evaluation frameworks and risk controls into the development lifecycle.


Jessica Rusu, the FCA’s chief data, information and intelligence officer, highlighted the regulator’s commitment to ensuring that AI is used safely in UK markets. Her statement underscores a broader strategic direction in which the FCA supports responsible digital innovation without imposing additional layers of regulation. Instead, it aims to leverage existing supervisory frameworks while providing clarity on how they apply to emerging technologies.


The initiative also strengthens the FCA’s wider innovation ecosystem. AI Live Testing complements the Supercharged Sandbox, which supports firms in earlier discovery and experimentation phases. Together, these mechanisms create a pipeline through which AI concepts can progress from initial exploration to controlled real-world deployment with regulatory insight.


Applications for the second cohort open in January 2026, with testing to begin in April. As more firms engage with the programme, the FCA will build an increasingly detailed understanding of AI’s operational and market impacts. This insight will help shape future supervisory approaches, ensuring that regulation remains proportionate, principles-based and aligned with technological developments.


For industry, the message is clear: AI can be adopted at pace, provided it is accompanied by rigorous testing, transparent governance and continuous monitoring. For consumers, the expectation is that AI-driven services will be delivered with greater safety, fairness and accountability. And for the FCA, AI Live Testing offers a structured pathway to both support innovation and manage the risks associated with rapidly evolving technologies.


Read the press release here.
