Bowman: Fed adapts AI supervision, eyes global stability
Federal Reserve Vice Chair Michelle W. Bowman outlined the central bank's evolving supervisory approach to artificial intelligence in the banking system. Speaking at an FSOC roundtable, she also addressed AI's role in international financial stability.
Supervisory frameworks adapt to AI
Michelle W. Bowman, the Federal Reserve's Vice Chair for Supervision, discussed the rapid evolution of artificial intelligence (AI) and its integration into the financial system.
She noted that financial institutions are developing their own applications and implementing vendor-assisted tools, with AI becoming a "force multiplier" for efficiency and effectiveness.
For nearly a decade, Federal Reserve supervisors have engaged with banks to monitor AI use, evolving their approach to ensure responsible deployment while fostering innovation.
The Fed, along with the OCC and FDIC, recently amended its model risk management guidance to clarify that it does not apply to generative or agentic AI, recognizing that novel technologies may require a different approach.
This revised guidance now applies narrowly to traditional models and basic AI applications, with other risk-management and governance practices expected to support ongoing innovation.
Innovation's dual-edged sword
Bowman emphasized the need for adaptable supervisory guidance on AI, particularly around third-party risk management for vendor-provided tools and the model risk management implications of novel technologies.
She highlighted the dual nature of tools like Anthropic's Mythos AI model, which can identify cyber vulnerabilities for both protective and malicious purposes.
To address these emerging risks, Bowman stressed continued interagency coordination, noting a recent meeting convened by Secretary Bessent and Chair Powell with large banks to discuss Mythos's cybersecurity implications.
She added that regularly communicating novel risks to supervised institutions, and gathering industry feedback, is crucial for refining supervisory approaches.
Proactive, not prescriptive
Bowman's speech signals a pragmatic and proactive shift in regulatory thinking, moving beyond rigid frameworks to embrace adaptable oversight for AI.
The explicit exclusion of generative AI from traditional model risk guidance is a crucial acknowledgment of technological novelty.
This forward-looking stance is essential to foster responsible innovation while safeguarding financial stability in an increasingly AI-driven world.