FCA invites applications for second AI Live Testing cohort
The Financial Conduct Authority (FCA) has opened applications for the second cohort of its AI Live Testing program. The initiative allows firms to test AI-driven services in real-world conditions with regulatory support and oversight.
From perpetual pilots to market deployment
The FCA's AI Live Testing program targets firms with mature proofs of concept that are ready for imminent market deployment.
This initiative aims to move UK innovators beyond 'POC paralysis' by enabling testing in controlled market environments.
The FCA distinguishes between the AI system and the underlying AI model, asserting that risks and benefits are best understood within specific enterprise-level use cases.
Rather than focusing narrowly on the model itself, the program takes a holistic view, defining the AI system to include the model, its deployment context, core risks, governance, human-in-the-loop arrangements, evaluation techniques, and input/output controls.
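To illustrate what documenting an AI system along these dimensions might look like in practice, the sketch below models a hypothetical firm-side record. The field names and example values are illustrative assumptions only, not an FCA schema or requirement.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative record of an "AI system" under the broader
# definition described above. Field names are assumptions, not an FCA schema.
@dataclass
class AISystemRecord:
    model: str                                  # the underlying AI model
    deployment_context: str                     # where and how the system is used
    core_risks: list[str] = field(default_factory=list)
    governance: str = ""                        # accountability and oversight arrangements
    human_in_the_loop: str = ""                 # points of human review or override
    evaluation_techniques: list[str] = field(default_factory=list)
    input_output_controls: list[str] = field(default_factory=list)

# Example: an illustrative consumer-facing use case (values are invented).
example = AISystemRecord(
    model="gradient-boosted credit scoring model",
    deployment_context="pre-screening of retail loan applications",
    core_risks=["bias against protected groups", "model drift"],
    governance="model risk committee sign-off before each release",
    human_in_the_loop="adviser reviews every declined application",
    evaluation_techniques=["back-testing", "fairness metrics on holdout data"],
    input_output_controls=["input validation", "confidence threshold before auto-decision"],
)
```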
Regulatory insights from real-world challenges
Participating firms receive comprehensive technical and regulatory AI support from subject-matter experts across three sequential phases: Discovery, Framework validation, and AI system testing.
This collaborative process allows the FCA to deepen its understanding of safe and responsible AI, particularly how to translate regulatory principles into tangible outcomes for financial consumers and markets.
The program also gives the regulator crucial intelligence on the challenges industry faces in interpreting and aligning with the evolving regulatory landscape, enabling it to adapt productively to real-world industry behavior during a major technology shift.
Bridging the deployment chasm
The program addresses a critical gap in AI deployment, bridging the chasm between theoretical proofs of concept and real-world regulatory compliance.
While beneficial for participating firms, its broader impact hinges on the FCA's ability to translate specific learnings into scalable, transparent regulatory frameworks.
Without clear, generalizable outcomes, the initiative risks remaining a niche sandbox rather than a catalyst for widespread safe AI adoption.