Towards calibrated trust in conversational generative AI

Hundreds of millions of individuals now interact with generative artificial intelligence (GenAI) in the form of conversational agents such as ChatGPT. Originally, GenAI attracted curiosity due to its ability to hold coherent conversations and generate credible text, images, and videos. Now, individuals integrate GenAI-agents into their routines, asking them for recommendations, using them as productivity tools for work or school, or even turning to them as companions for emotional support.

While useful, GenAI is not without risks. Agents are often trained on biased data, are prone to leaking private or sensitive information, and often provide inappropriate, inaccurate, or biased answers. The adoption of this potentially useful yet untested technology is a major societal challenge, which becomes even more urgent as more digital platforms launch and promote GenAI-agents for personal use.
We know, however, little about how individuals use GenAI-agents in their personal lives, and even less about the circumstances under which they trust these agents. This is an urgent gap in our knowledge, as their answers look credible yet are not always accurate, and usage of these agents can become deeply personalised.

cAlibrate addresses this gap with an innovative, multi-methodological approach. It will build an over-time, large-scale mapping of GenAI-agent interactions and test tailored interventions aimed at empowering individuals to calibrate their trust in this technology, i.e., ensuring they can reap its benefits while remaining resilient against overestimating its capabilities, critically assessing its answers, and actively managing their privacy and personal data.

Our key objectives:

01

Understanding GenAI use in our daily lives

Adopting a citizen science approach, we investigate how individuals integrate this technology into their daily lives and how their use develops over time.

02

Unravelling trust-and-use dynamics

We explore how characteristics of individual use and of the system influence the development of trust and the over-time dynamics of trust-and-use of GenAI.

03

Fostering trust calibration

We develop and test tailored trust-calibration interventions deployed at the individual and system level to empower individuals in their GenAI use.