Embodied Virtual Agents (EVAs) are human-like computer agents that can serve as assistants and companions in a variety of tasks. They have numerous applications, including interfaces for social robots, educational tutors, game counterparts, medical assistants, and companions for the elderly and for individuals with psychological or behavioral conditions. Forming a reliable and trustworthy interaction is critical to the success and acceptability of this new type of user interface. This dissertation explores the interaction between humans and EVAs in cooperative and uncooperative conditions to better understand how trust operates in these interactions. It also investigates how interaction with one agent influences the perception of other agents. In addition to participants achieving significantly higher performance with, and reporting higher trust in, the cooperative agent, we found that participants' trust in the cooperative agent was significantly higher if they had interacted with an uncooperative agent in one of the sets, compared to working with cooperative agents in both sets. These results suggest that trust in an EVA is relative: it depends on the agent's behavior and on the user's history of interaction with other agents. We found that biases such as primacy bias can lead humans to trust one agent over another even when the agents look similar and serve the same purpose. Primacy bias may also explain the higher trust in the first agent when working with multiple cooperative agents that behave identically and perform the same task. We also observed that working with one agent has a significant effect on users' initial trust in other agents within the same system, even before collaborating with those agents on an actual task.
Based on lessons learned while conducting the experiments, particularly from users' personal reflections on their interactions with EVAs, we discuss ethical issues that arise in interactions with virtual worlds. Drawing on the experimental results of the user studies and on prior literature on trust between humans and virtual agents, we propose guidelines for trust-adaptive virtual agents. We provide a justification for each guideline to increase transparency, and we point researchers and developers interested in these suggestions to additional resources. The results of this dissertation offer insights into interaction between humans and virtual agents in scenarios that require humans and computers to collaborate under uncertainty in a timely and efficient way. They also suggest directions for future research on using EVAs as primary user interfaces, given the similarity of interaction with such agents to natural human-human interaction and the possibility of building high-level, resilient trust toward them.
Details
Title
Toward Trust-Adaptive Embodied Virtual Agents
Creators
Reza Moradinezhad
Contributors
Vasilis Gkatzelis (Advisor)
Erin Solovey (Advisor)
Awarding Institution
Drexel University
Degree Awarded
Doctor of Philosophy (Ph.D.)
Publisher
Drexel University; Philadelphia, Pennsylvania
Number of pages
ix, 26 pages
Resource Type
Dissertation
Language
English
Academic Unit
Computer Science (Computing) [Historical]; College of Computing and Informatics (2013-2026); Drexel University