Follow Me: A Study on the Dynamics of Alignment Between Humans and LLM-Based Social Robots
Abstract
While robots are perceived as reliable when delivering factual information, their ability to achieve meaningful alignment with humans during subjective interactions remains unclear. Understanding this alignment is vital for integrating robots more deeply into decision-making frameworks and enhancing their roles in social interactions. This study examines the impact of personality-prompted large language models (LLMs) on alignment in human-robot interactions. Participants interacted with a Furhat robot under two conditions: a baseline control condition and an experimental condition in which personality prompts led the LLM to simulate distinct personality traits. Alignment was assessed by measuring changes in the similarity between participants’ rankings and the robot’s rankings of factual (objective) and contestable (subjective) concepts before and after the interaction. The findings indicate that participants aligned more with the robot on objective, factual concepts than on subjective, contestable ones, regardless of personality prompts. These results suggest that the current personality-prompting method may be insufficient to meaningfully influence alignment in subjective interactions, either because the conveyed traits lack sufficient impact or because current system capabilities are not yet advanced enough to shift participants’ perceptions.
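To make the alignment measure concrete, the following is a minimal sketch of a pre/post rank-similarity comparison. It assumes Spearman rank correlation as the similarity metric and uses hypothetical rankings; the abstract does not name the exact metric or the concepts ranked.

```python
# Hedged sketch: quantify how much a participant's ranking moved toward
# the robot's ranking after interaction. Assumes Spearman's rho as the
# similarity measure; the actual metric in the study is not specified here.
from scipy.stats import spearmanr


def alignment_shift(participant_pre, participant_post, robot):
    """Change in rank similarity to the robot after interaction.

    Positive values mean the participant's ranking moved toward the
    robot's; negative values mean it moved away.
    """
    rho_pre, _ = spearmanr(participant_pre, robot)
    rho_post, _ = spearmanr(participant_post, robot)
    return rho_post - rho_pre


# Hypothetical rankings of five concepts (1 = ranked highest).
robot_ranking   = [1, 2, 3, 4, 5]
participant_pre = [3, 1, 5, 2, 4]   # before interacting with the robot
participant_post = [2, 1, 4, 3, 5]  # after interacting with the robot

print(alignment_shift(participant_pre, participant_post, robot_ranking))
```

Under this sketch, a larger positive shift on factual concepts than on contestable ones would correspond to the pattern the study reports.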
Keywords
- Alignment
- Human-Robot Interaction (HRI)
- LLM
- Personality Prompting (P)
ASJC subject areas
- Theoretical Computer Science
- General Computer Science
- Artificial Intelligence