Generative AI simulates payment app privacy views with caveats
A BIS working paper explores using ChatGPT to simulate survey responses on payment app privacy and benefits. The study finds that while AI-generated views align with real surveys in several respects, the synthetic responses lack human variability and overstate privacy concerns.
AI agents mirror human privacy attitudes
The study leverages ChatGPT to simulate survey responses regarding payment app usage, with a specific focus on privacy perceptions and perceived benefits.
When prompts are crafted to reflect real user characteristics, the AI-generated responses largely align with empirical findings from a Dutch consumer survey.
Specifically, AI agents expressing privacy concerns tend to view financial apps less favorably and perceive higher risks, mirroring human behavior.
Furthermore, the simulation reproduces the finding that payment app users hold more positive attitudes than non-users, even when these traits are not explicitly specified in the prompts.
This suggests generative AI can reproduce key behavioral patterns observed in real-world survey data, offering a novel approach to understanding consumer sentiment towards new payment technologies.
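The persona-conditioning approach described above can be sketched as follows. The attribute names, prompt wording, and survey question are illustrative assumptions, not the paper's actual prompts; a real pipeline would send the resulting string to an LLM API.

```python
# Illustrative sketch of persona-conditioned survey prompting. All attribute
# names and wording are assumptions, not the BIS paper's actual setup.

def build_persona_prompt(age: int, uses_payment_app: bool,
                         privacy_concerned: bool) -> str:
    """Compose a survey prompt for one simulated respondent."""
    traits = [
        f"You are a {age}-year-old Dutch consumer.",
        "You regularly use a mobile payment app." if uses_payment_app
        else "You do not use mobile payment apps.",
    ]
    if privacy_concerned:
        traits.append("You are concerned about how apps handle your personal data.")
    question = (
        "On a scale of 1 (very negative) to 5 (very positive), "
        "how do you rate the benefits of payment apps? Answer with a single number."
    )
    return " ".join(traits) + "\n" + question

# One synthetic respondent; in a full simulation this would be repeated
# across a demographic grid and each prompt sent to the model.
prompt = build_persona_prompt(age=34, uses_payment_app=True, privacy_concerned=False)
```

Varying the persona attributes across many such prompts is what lets the simulation mirror, for example, the more positive attitudes of app users.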
Variability and bias remain AI's blind spots
The study highlights significant caveats for generative AI in surveys.
A key limitation is the AI's inability to reproduce the wide variability of human responses, resulting in unnaturally low variance in synthetic data.
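This low-variance problem is easy to check mechanically by comparing the spread of synthetic and human answers. The ratings below are made-up numbers chosen to illustrate the pattern, not the study's data.

```python
from statistics import variance

# Hypothetical 1-5 Likert ratings: the human answers spread across the scale,
# while the synthetic answers cluster near the midpoint (illustrative numbers,
# not the study's data).
human_ratings = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
synthetic_ratings = [3, 3, 3, 4, 3, 3, 4, 3, 3, 4]

# A ratio well below 1 signals the unnaturally low variance of synthetic data.
var_ratio = variance(synthetic_ratings) / variance(human_ratings)
```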
ChatGPT also tends to overstate privacy concerns, classifying most agents as "privacy fundamentalists," a bias inconsistent with real survey distributions.
This bias proves challenging to rectify, even when altering simulated demographics.
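The "privacy fundamentalist" label follows Westin-style privacy segmentation (fundamentalists, pragmatists, unconcerned). A hypothetical scoring rule, with illustrative thresholds and made-up agent scores skewed high to echo the reported bias, might look like this:

```python
from collections import Counter

def westin_segment(concern_score: float) -> str:
    """Map a 0-1 privacy-concern score to a Westin-style segment.
    Thresholds are illustrative assumptions, not the paper's method."""
    if concern_score >= 0.7:
        return "fundamentalist"
    if concern_score >= 0.3:
        return "pragmatist"
    return "unconcerned"

# Hypothetical synthetic-agent scores clustered near the top of the scale,
# so most agents land in the "fundamentalist" segment.
scores = [0.9, 0.85, 0.8, 0.75, 0.72, 0.6, 0.9, 0.88, 0.4, 0.95]
counts = Counter(westin_segment(s) for s in scores)
```

Comparing such a segment distribution against the real survey's distribution is one way to quantify how far the synthetic sample is skewed.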
Furthermore, specifying many detailed features in prompts can cause ChatGPT to disproportionately weight certain characteristics, complicating the generation of accurate synthetic surveys.
These issues underscore the need for caution when using AI for perception studies.
Complementary, not a replacement
This research supports GenAI's potential as a complementary tool for market surveys, particularly for brainstorming questions and running preliminary simulations.
However, its inherent biases and lack of nuanced human variability mean it cannot fully replace real human data for comprehensive perception studies.
Policymakers and researchers must recognize these limitations, ensuring AI-generated insights are used judiciously and validated against actual consumer behavior.