Supervisors develop approaches for AI data use in financial services

A new Bank for International Settlements (BIS) paper explores emerging policy and supervisory approaches to artificial intelligence (AI) data use in financial services. It emphasizes the critical role of data for AI systems and identifies key challenges related to privacy, quality, and security.

AI's data dependency amplifies existing challenges

Artificial intelligence, particularly generative AI, is transforming the financial sector by amplifying the critical role of data across all stages of its life cycle.

Data are essential for training models, assessing performance, identifying biases, and refining outputs.

However, financial institutions consistently identify data management challenges as significant barriers to broader AI adoption.

Long-standing issues include the incompatibility of numerous, fragmented data sources, leading to inconsistent data quality.

The scaling of generative AI exacerbates these weaknesses and introduces new complexities, such as those related to synthetic and alternative data.

Pressing concerns include data privacy, quality, and security, which are further complicated by third-party dependencies.

These shortcomings can heighten consumer protection risks and micro- and macroprudential vulnerabilities, potentially eroding the anticipated benefits of generative AI in finance.

Navigating privacy, quality, and security

Data protection has received considerable attention, with cross-sectoral guidance emerging on how data requirements apply to AI practices.

These frameworks emphasize data privacy, focusing on individuals' control over personal data.

However, advanced AI systems, relying on extensive personal data, challenge core privacy principles like lawful basis, consent, and data minimization.

Data quality is equally central, with frameworks stressing accuracy, completeness, and representativeness to prevent biased or harmful AI outputs.

Data security, encompassing confidentiality, integrity, and availability, is another core aspect, requiring robust measures to safeguard data in AI systems, especially given the scale and sensitivity of processed information.

Sound data governance provides the backbone for managing these complexities, establishing clear roles, responsibilities, processes, and policies to ensure compliance and accountability.

A necessary but incomplete framework

The current policy framework struggles to keep pace with rapid AI advancements, creating persistent tensions between technological capabilities and data protection requirements.

Existing regulations, designed for traditional data uses, often fall short in addressing the complexities of generative AI ecosystems and third-party dependencies.

Tailored guidance and enhanced cross-authority collaboration are therefore crucial to bridge these gaps and safeguard financial stability.