Understanding linguistic and visual factors that affect human trust perception of virtual agents

PI: Sarah Levitan
Co-PI: Tatiana Emmanouil

Abstract

Advances in machine learning and speech technologies are enabling human-like conversations with virtual agents, which are gaining traction across domains such as customer service, healthcare, and education [6, 5, 1]. A crucial factor that profoundly influences user engagement and satisfaction is trust [2, 3]. Users must feel confident in the trustworthiness of the information provided by these agents. Researchers from diverse disciplines have sought to identify specific signals of trust [4], but there is little interdisciplinary work that investigates how combinations of multimodal factors affect user trust. As systems become increasingly multimodal, it is critical to understand how multimodal factors interact to affect user perception of agent trustworthiness. The main objective of this work is to understand linguistic and visual factors that affect human trust perception of virtual agents. In the first year of this project, we conducted a crowdsourced perception study to collect judgments of trustworthiness of multimodal audio and visual stimulus pairs. We analyzed the trust ratings to understand how the two modalities interact to affect trust perception. We also analyzed visual and spoken cues to understand how they correlate with trust perception. Our findings have implications for creating multimodal virtual agents that maximize user perception of trustworthiness.

[1] T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka. “Social robots for education: A review”. In: Science Robotics 3.21 (2018), eaat5954.
[2] J. D. Lee and K. A. See. “Trust in automation: Designing for appropriate reliance”. In: Human Factors 46.1 (2004), pp. 50–80.
[3] E. Luger and A. Sellen. “‘Like Having a Really Bad PA’: The Gulf between User Expectation and Experience of Conversational Agents”. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2016, pp. 5286–5297.
[4] M. Rheu, J. Y. Shin, W. Peng, and J. Huh-Yoo. “Systematic review: Trust-building factors and implications for conversational agent design”. In: International Journal of Human–Computer Interaction 37.1 (2021), pp. 81–96.
[5] S. Tian, W. Yang, J. M. Le Grange, P. Wang, W. Huang, and Z. Ye. “Smart healthcare: making medical care more intelligent”. In: Global Health Journal 3.3 (2019), pp. 62–65.
[6] T. Verhagen, J. Van Nes, F. Feldberg, and W. Van Dolen. “Virtual customer service agents: Using social presence and personalization to shape online service encounters”. In: Journal of Computer-Mediated Communication 19.3 (2014), pp. 529–545.

Publications

Natalia Tyulina, Tatiana Aloi Emmanouil, and Sarah Ita Levitan. “Understanding Linguistic and Visual Factors that Affect Human Trust Perception of Virtual Agents”. Submitted to ACM Conversational User Interfaces 2024 (under review).

Resources

In progress