

Towards calibrated trust in conversational generative AI
Theo Araujo, Principal Investigator
Hundreds of millions of individuals now interact with generative artificial intelligence (GenAI) in the form of conversational agents such as ChatGPT. Originally, GenAI attracted curiosity due to its ability to hold coherent conversations and generate credible text, images, and videos. Now, individuals integrate GenAI agents into their routines, asking them for recommendations, using them as productivity tools for work or school, or even relying on them as companions for emotional support.
While useful, GenAI is not without risks. Agents are often trained on biased data, prone to leaking private or sensitive information, and frequently provide inappropriate, inaccurate, or biased answers. The adoption of this potentially useful yet untested technology is a major societal challenge, one that becomes even more urgent as more digital platforms launch and promote GenAI agents for personal use.
We know little, however, about how individuals use GenAI agents in their personal lives, and even less about the circumstances under which they trust these agents. This is an urgent gap in our knowledge, as these agents' answers look credible yet are not always accurate, and usage of these agents can become deeply personalised.
cAlibrate addresses this gap with an innovative, multi-methodological approach. It will build an over-time, large-scale mapping of GenAI-agent interactions and test tailored interventions aimed at empowering individuals to calibrate their trust in this technology, i.e., ensuring they can reap its benefits while remaining resilient against overestimating its capabilities, critically assessing its answers, and actively managing their privacy and personal data.
Guiding Questions
I. Use
How do individuals use conversational Generative AI in their daily lives?
II. Trust-and-Use Dynamics
How is trust in conversational Generative AI formed, and how does it evolve over time and with use?
III. Trust Calibration
How can individuals be empowered to adequately calibrate their trust in conversational Generative AI?