
Charting the AI Perception Gap: Across 71 scenarios, AI experts (N = 119) and the public (N = 1,110) hold differing views on the risks, benefits, and value of AI. More importantly, AI experts discount the influence of risks more strongly than the public does when forming their value judgments. [R]



Abstract: Artificial intelligence (AI) is reshaping society, raising questions about trust, risks, and the asymmetries between public and academic perspectives. We examine how the German public (N = 1,110), comprising individuals who interact with or are affected by AI, and academic AI experts (N = 119, mainly from Germany), who contribute to research, educate practitioners, and inform policymaking, construct mental models of AI’s capabilities and impacts across 71 scenarios. These scenarios span diverse domains (including sustainability, healthcare, employment, inequality, art, and warfare) and were evaluated across four dimensions using the psychometric model: likelihood, perceived risk, perceived benefit, and overall value. Across scenarios, academic experts generally anticipated higher probabilities of occurrence, perceived lower risks, and reported greater benefits than the public, while also expressing more positive overall evaluations of AI. Beyond differences in absolute assessments, the two groups exhibited systematically different evaluative patterns: experts’ value judgments were driven primarily by perceived benefits, whereas public evaluations placed more weight on perceived risks, reflecting distinct risk–benefit trade-offs. Visual mappings indicate convergent domains (e.g., medical diagnoses and criminal use) and tension points (e.g., justice and political decision-making) that may warrant targeted communication or policy attention. While this study does not assess AI systems or design practices directly, the observed divergence in mental models suggests that the research, implementation, and use of AI may inadvertently neglect the risk-related priorities of the public. Such biases in research and implementation may yield “procrustean AI”—systems insufficiently aligned with the needs of the affected public (akin to the Bed of Procrustes). We address the socio-technical challenge of expert-centric governance and advocate for participatory practices.

Full article: https://link.springer.com/article/10.1007/s00146-026-03023-8

submitted by /u/lipflip


Tagged with

#artificial intelligence
#AI experts
#risk-benefit trade-offs
#risks
#benefits
#public perception
#mental models
#value judgments
#participatory practices
#sustainability
#healthcare
#employment