This section reviews the relevant literature. It begins with an overview of prior research on the perception of AI, followed by studies comparing how experts and the public perceive technology in general and AI in particular. The section concludes by identifying the research gaps and formulating the research questions that guide the present study.
2.1 Public perception of AI
While AI has existed for decades, the rapid adoption of generative tools like ChatGPT (Hu 2023) has intensified academic focus on how the public perceives AI’s implications across diverse domains. These perceptions are not a monolithic reaction to technology but a multifaceted social construction shaped by a tension between utopian narratives of benefit and deep-seated anxieties regarding control. Although the literature identifies various drivers, including media framing, cultural context, and individual literacy, it reveals a landscape of “polarized expectations” where public imagination often diverges from technical reality. Currently, a consolidated understanding of how these factors can be integrated into a holistic model remains absent. This section reviews these influences to highlight a critical research gap: the need for an empirical foundation that systematically maps public sentiment across a broad spectrum of societal domains, moving beyond the fragmented, application-specific insights that currently dominate the field.
Media plays a significant role in shaping public opinion about AI. Fast and Horvitz (2017) conducted an analysis of three decades of AI coverage in The New York Times, observing a rise in public interest after 2009. Coverage has generally been more positive than negative, balancing optimism with concerns over control and ethical issues. Recently, reporting has reflected heightened enthusiasm for AI’s potential, particularly for healthcare, mobility, and education. News coverage often emphasizes the benefits of AI while downplaying potential risks, contributing to a perception of AI as superior to human capabilities and fostering the anthropomorphization of technology (Puzanova et al. 2024). Cave et al. (2019) examined AI narratives in the UK and identified four optimistic and four pessimistic themes. These narratives frequently evoke anxiety, with only two highlighting benefits over risks, such as AI’s potential to make life easier.
Furthermore, sentiment analysis of WIRED articles reveals an increase in polarized views, with both positive and negative sentiments intensifying over time (Moriniello et al. 2024). Sanguinetti and Palomo (2024) investigated how news outlets portray AI and found that coverage often frames AI as something to be feared, depicting it as an autonomous and opaque entity beyond human control. Using an AI anxiety index, the study analyzed newspaper headlines before and after the launch of ChatGPT, reporting both increased coverage and heightened negative sentiment.
Perceived risks and benefits play a crucial role in shaping public attitudes toward AI. Surveys indicate that the public views AI as both a risk and an opportunity. Common concerns include privacy violations and cybersecurity threats (Brauner et al. 2023), while perceived benefits are often associated with applications in urban services and disaster management (Schwesig et al. 2023; Yigitcanlar et al. 2022). Both perceived risks and opportunities shape individuals’ behavioral intentions to use AI applications, and a higher perceived opportunity-risk ratio is associated with greater willingness to adopt AI, though with notable variation depending on the application context (Schwesig et al. 2023).
Neri and Cozman (2019) argue that experts significantly influence public perceptions of AI risks. Their public statements can amplify awareness of certain threats, such as existential risks, which may be rooted more in expert discourse than in actual incidents. Lee et al. (2024) found that individuals with higher education levels, greater political interest, and more knowledge about ChatGPT tend to perceive AI as more risky. This finding challenges the conventional “knowledge deficit” model, suggesting that negative perceptions may stem from a critical mindset that engages with AI technology more cautiously.
Trust in AI varies across contexts, demographic groups, and individual attitudes. For instance, people tend to trust AI more in personal lifestyle applications but remain more skeptical about its use by companies and governments (Yigitcanlar et al. 2022). Willingness to engage with AI is shaped by the perceived balance of risks and benefits, which differs across domains such as healthcare, transportation, and media (Schwesig et al. 2023). In addition to variations in trust, there remains a significant gap in public understanding of AI, which can contribute to irrational fears and misinformed beliefs about control. Promoting AI literacy is therefore essential to enable informed decision-making and support responsible innovation (Ng et al. 2021; Marx et al. 2022).
Public perception of AI also varies significantly based on local context, political ideology, and exposure to science news. For example, people in the United States generally expect more benefits than harms from AI, with a substantial portion supporting regulation to mitigate potential risks (Elsey and Moss 2023). However, existential risks are not a primary concern for most; instead, the public tends to worry more about tangible issues such as job displacement.
Sindermann et al. (2022) examined cross-cultural differences in AI attitudes among Chinese and German participants, linking fear of AI to neuroticism in both groups, while also highlighting cultural variations in AI acceptance and concern. Kelley et al. (2021) surveyed over 10,000 participants across eight countries (Australia, Canada, the United States, South Korea, France, Brazil, India, and Nigeria), finding that respondents in developed nations predominantly expressed worry and futuristic expectations, whereas those in developing countries showed greater enthusiasm for AI’s potential. In particular, respondents in South Korea emphasized AI’s practical usefulness and future applications, though widespread uncertainty about its broader societal impact persisted across all regions. In Taiwan, science news consumption and respect for scientific authority positively influence AI perceptions (Wen et al. 2024). Interestingly, a recent large cross-cultural study building on Hofstede’s cultural dimensions found that AI perception is shaped more by individual differences than by cultural context (Wang 2025).
Public discourse on AI often oscillates between fear and inflated expectations—particularly regarding artificial general intelligence (AGI), which remains largely speculative and fictional at present (Jungherr 2023). A survey by Ipsos (2022) found that the public frequently lacks a nuanced understanding of AI’s technical capabilities and limitations. Similarly, the Pew Research Center (2023) reported that only a small percentage of Americans could accurately identify AI in everyday scenarios, highlighting widespread confusion about its scope and functionality. The Alan Turing Institute (2023) likewise observed that public understanding of AI varies considerably depending on education level and context. Common concerns tend to focus on automation and robotics, especially in relation to employment and security. This limited awareness contributes to persistent misconceptions and overly simplistic views of AI’s societal impact, ultimately hindering informed public discourse.
In summary, the individual drivers of public sentiment are well documented: perceptions are multifaceted, shaped by media narratives, perceived risks and benefits, levels of trust, and cultural and contextual factors. Yet the resulting perceptions remain fragmented across specific use cases, underscoring the need for a systematic, cross-domain mapping to identify the stable mental models that underpin societal acceptance of AI.
2.2 Similarities and differences in risk perception between experts and the public
A robust finding across scientific disciplines is the systematic divergence between expert and lay perceptions of risk, a phenomenon that is particularly pronounced in the context of emerging technologies like AI. This “perception gap” is not merely a product of varying knowledge levels but reflects a difference in evaluative frameworks: while experts typically adopt a probabilistic and technical approach to risk, the public prioritizes qualitative dimensions such as trust, ethical implications, and dread. By comparing findings from general risk research with AI-specific studies, this section illustrates that experts often report higher trust and lower perceived risk than the general public. Identifying this perception gap is essential for governance, yet direct comparisons using identical psychometric frameworks remain remarkably scarce, a research gap that the present study seeks to fill.
First, research has shown that health experts and the public often differ in their assessments of health risks (Krewski et al. 2012). Experts typically perceive behavioral health risks (such as smoking and obesity) as more significant, while the public may prioritize other concerns. This discrepancy underscores the importance of effective risk communication strategies to better align public perception with expert evaluations.
Similarly, public perception of risks related to industrial production facilities tends to be more subjective and emotionally driven, in contrast to the more objective evaluations made by safety professionals (Botheju and Abeysinghe 2015). Such misalignments call for two-way communication approaches to address concerns proactively and prevent unnecessary escalation.
In their study of expert and lay perceptions of nanotechnology, Siegrist et al. (2007) found that while experts’ judgments are primarily driven by technical evidence and probabilistic risk, the public relies heavily on trust and the ‘affect heuristic’ (the inverse relationship between perceived risk and benefit). The findings revealed that differences in risk perception extended beyond knowledge gaps, reflecting underlying value-based judgments. We extend this inquiry to the domain of AI to determine whether this systematic perception gap persists across diverse AI applications, or whether the unique societal integration of AI alters the traditional expert–layperson divergence identified in other studies.
In the case of environmental hazards, such as nuclear waste, experts and laypeople also differ markedly in their perceived risks. Here, risk perception is shaped more by attitudes and moral values than by cognitive factors (Sjöberg 1998). Laypeople, who generally have less technical knowledge, tend to rely on intuitive and emotional reasoning, whereas experts base their assessments more on technical evidence and probabilistic analysis (Sjöberg 1998). This difference often leads to divergent views on policy and regulation, with members of the public perceiving higher levels of risk than experts (Siegrist et al. 2007). The public prioritizes ethical and societal implications, while experts focus primarily on scientific and technical risks.
In contrast, when it comes to natural hazards such as hurricanes and cyclones, there is often a considerable degree of agreement between expert risk assessments and public perceptions, particularly in high-risk areas (Peacock et al. 2005; Md. Abdus and Cheung 2019). However, public risk perception in these contexts can still be shaped by factors such as trust in authorities and prior experience with disasters. In the domain of autonomous vehicles (AVs), public risk perception is strongly influenced by trust in both the technology and the institutions that regulate it. Greater knowledge about AVs can enhance trust, which in turn reduces perceived risk, highlighting the importance of targeted trust-building initiatives (Robinson-Tay and Peng 2024). In aviation, by contrast, experts typically possess a more accurate understanding of relative risks, while novices’ perceptions may be distorted by overconfidence or limited experience (Thomson et al. 2004).
Elena and Johnson (2015) examined differences in expert and public perceptions of cloud computing services. The findings indicate that experts tend to have a more nuanced understanding of risks, particularly regarding data security and integrity, while members of the public are more likely to experience a generalized dread risk in response to unfamiliar or abstract technological threats. Perceptions were also influenced by factors such as trust in regulatory bodies and the perceived benefits of the technology.
A decade ago, Müller and Bostrom (2016) surveyed AI experts on their expectations for the future capabilities of artificial intelligence, finding that most anticipated the emergence of superintelligence between 2040 and 2050. Notably, one-third of these experts considered this development bad or extremely bad, highlighting substantial concerns even within the expert community.
Crockett et al. (2020) compared trust and risk perceptions of AI between the public and computer science students as individuals with—supposedly—above-average expertise in AI. The findings revealed clear differences, suggesting that education plays a crucial role in increasing trust and reducing perceived risks. Similarly, Novozhilova et al. (2024) found that greater technological competence and familiarity with AI are associated with higher levels of trust in AI systems.
Still, comprehensive comparisons between experts and the public remain relatively rare. Recently, Jensen et al. (2024) conducted interviews with 25 members of the public and 20 AI experts in the United States to examine their perceptions of AI. Both groups emphasized that AI systems reflect the values and biases of their creators, acknowledging inherent limitations in the technology. Ethical concerns cited by participants included AI’s lack of transparency, its profit-driven development, and the risk of exacerbating existing social inequalities. Human oversight was widely supported, particularly in high-stakes contexts such as healthcare. Although AI is perceived as efficient, its inability to replicate human empathy emerged as a central barrier to trust. Across both groups, reflections on humanness and ethics played a critical role in shaping attitudes toward AI.
A recent study surveyed 111 AI experts to assess their beliefs about catastrophic AI risk, familiarity with AI safety concepts, and responses to alignment arguments (Field 2025). The study finds that experts divide into two broad perspectives, viewing AI either as a controllable tool or as a potentially uncontrollable agent, and that lower concern about AI risk is closely linked to limited familiarity with core AI safety concepts.
Recently, in a survey of 2,778 AI researchers, Grace et al. (2025) report that experts have accelerated their forecasts for high-level machine intelligence, now predicting a 50% probability of its arrival by 2047 (thirteen years earlier than estimated in 2022). However, while a majority of experts foresee positive outcomes, nearly half assigned at least a 10% probability to extremely catastrophic scenarios, including human extinction.
In summary, while the literature highlights a consistent divergence between expert and public risk assessments, the underlying reasons for this “perception gap” remain under-explored. This study addresses this gap by employing a unified psychometric framework to compare how both groups weigh risks and benefits across a broad spectrum of AI scenarios.
2.3 Risk perception and the psychometric model
To address the fragmented nature of AI perception research, this study adopts the psychometric paradigm as its primary theoretical lens. Originally developed by Slovic et al. (1986), this framework posits that risk perception is not a technical calculation of probability but a subjective construct influenced by factors like “dread” and “knowability”. It focuses on individuals’ subjective evaluations of risk, often measured through rating scales (Fischhoff et al. 1978; Slovic et al. 1986). The psychometric paradigm has been successfully applied across diverse contexts, including nuclear energy (Slovic et al. 2000), gene technology (Connor and Siegrist 2010), genetically modified food (Verdurme and Viaene 2003), climate change (Pidgeon and Fischhoff 2011), and carbon capture (Arning et al. 2020). Consequently, it is well suited for studying risk perception in emerging technologies, as it offers a structured framework for understanding how individuals evaluate and navigate the complex balance between perceived risks and benefits.
Crucially, research within this paradigm has revealed a consistent inverse relationship between perceived risk and benefit, known as the affect heuristic (Alhakami and Slovic 1994; Slovic et al. 2007; Efendić et al. 2021). This heuristic suggests that evaluative judgments are often guided by an overall affective state: technologies perceived as highly beneficial are frequently judged as lower risk, while those viewed as risky are rated as less beneficial. By applying this framework to AI, we can determine whether the perception gap results from experts and the public utilizing different weighting schemes—that is, whether one group’s overall value judgments are driven more by perceived benefits while the other is more sensitive to perceived risks.
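For illustration, the affect heuristic can be stated as a simple empirical prediction within the psychometric framework. As a notational sketch (the symbols below are ours, not drawn from any of the cited studies), suppose each scenario $j$ receives mean perceived-risk and perceived-benefit ratings $\bar{R}_j$ and $\bar{B}_j$ on comparable rating scales; the heuristic then predicts a negative association across scenarios,
$$\operatorname{corr}\big(\bar{R}_j, \bar{B}_j\big) < 0,$$
and a difference in the strength of this inverse relationship between expert and lay samples would be consistent with the two groups applying different weighting schemes to risks and benefits.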
2.4 Research questions
The preceding review reveals two critical limitations in current AI perception research: (1) a lack of direct comparisons between academic experts and the general public using consistent evaluative criteria, and (2) a focus on narrow domains that obscures the broader mental models underlying perception. This study addresses these gaps by applying the psychometric paradigm to 71 diverse scenarios, providing a comprehensive cognitive map of the “AI perception gap”. Identifying these divergences is essential, as misaligned risk–benefit perceptions may impede technology design, individual and societal acceptance, and regulatory effectiveness. To this end, we investigate the following research questions: