In dialogue, interlocutors build similar representations of what they are talking about. That is, they align on representations that are relevant to the conversation. We define aligned representations simply as representations that are held by at least two interlocutors and that are similar to each other. Alignment at the level of representations leads to similarities in interlocutors’ behaviour: aligned interlocutors are likely to use the same words to refer to the same objects or events (i.e., they entrain on their lexical choices). We define entrainment as the repetition of words between interlocutors during dialogue, and we understand it as a consequence of the alignment of lexical representations. Alignment is pervasive: speakers entrain with their interlocutors in written and spoken interactions, online and in person, and when talking with humans as well as with conversational agents (such as bots and robots). At the same time, speakers can use entrainment strategically: to aid comprehension, or to regulate social distance and enhance rapport. In this sense, alignment must be a mechanism that speakers can deploy flexibly. Moreover, interlocutors keep track of alignment and communicate about whether they believe they are aligned, through commentaries on alignment. We define a commentary on alignment as any behaviour that indicates an interlocutor’s belief about whether the interlocutors are aligned. Interlocutors can comment on alignment in several ways: by using backchannels (e.g., mh-mm, yeah, okay), by requesting repairs when needed (e.g., what was that? or who are we talking about?), or by repeating and sometimes elaborating on what their interlocutor said.
This thesis is about the mechanisms that allow interlocutors to align, and to communicate whether they are aligned, in situated interactions with other people and with conversational agents. The first part of the thesis focuses on entrainment in highly controlled interactions, where participants believed they were playing a reference game with a remote human interlocutor or with a virtual agent, under normal or high cognitive load. In Experiments 1-2 we presented interlocutors as high or low in social status and asked participants to rate the interlocutors’ social status either before or after playing the reference game. We found that participants entrained more with interlocutors presented as high in social status than with those presented as low in social status, but only when they rated the social status before playing the game. In Experiments 3-4 we presented interlocutors as highly or poorly competent virtual agents, and we manipulated participants’ cognitive load using a dual-task paradigm. We found that participants entrained more with virtual agents presented as poorly competent than with those presented as highly competent, but only when participants could fully focus on the reference game. These results suggest that speakers can use alignment strategically when they can focus on the interaction and when salient properties of the interaction trigger communicative and social intentions, but that they can also rely on simpler, more automatic mechanisms when such intentions are less salient and when they are distracted.
The second part of the thesis (Chapters 5-6) focuses on alignment and the use of commentaries in more naturalistic interactions, and on whether they are affected by the topic of the conversation and by the interlocutors involved. In Experiment 5, participants ranked a set of items from the most to the least useful for people stranded in the desert and, after discussing the items with a partner, ranked the items once again. The correct order was presented as factual to one group (i.e., unknown to participants, but defined by the British Army) or as contestable to the other group (i.e., unknown to participants, who were told that there was no right or wrong order). In Experiment 6, participants performed the same task but discussed the items with a social robot whose appearance was either human-like or machine-like. Alignment was measured as the increase in similarity between the interlocutors’ rankings after the discussion, compared to the similarity of their rankings before the discussion. In both experiments, participants aligned with their interlocutor, but there was no effect of how the ranking was framed or of how the robot appeared. However, participants used more commentaries when discussing the ranking presented as contestable than the ranking presented as factual. Additionally, participants’ use of commentaries was similar in human-human and human-robot interactions.
Overall, these results confirm that alignment and its commentaries are pervasive in interactions with humans and with conversational agents, but that they can be adapted flexibly to context. Such ubiquity and flexibility may be supported by the existence of multiple mechanisms, with the choice of mechanism triggered by specific situational properties of the interaction. These properties include the communicative needs of the interlocutors, the nature of the interlocutors (human or artificial), the social dynamics embedded in the interaction, whether or not the speakers can dedicate their full attention to it, and the topic of conversation.