What's in an accent? The impact of accented synthetic speech on lexical choice in human-machine dialogue
The assumptions we make about a dialogue partner's knowledge and
communicative ability (i.e. our partner models) can influence our language
choices. Although similar processes may operate in human-machine dialogue, the
role of design in shaping these models, and their subsequent effects on
interaction are not clearly understood. Focusing on synthesis design, we
conduct a referential communication experiment to identify the impact of
accented speech on lexical choice. In particular, we focus on whether accented
speech may encourage the use of lexical alternatives that are relevant to a
partner's accent, and how this may vary when in dialogue with a human or
machine. We find that people are more likely to use American English terms when
speaking with a US accented partner than an Irish accented partner in both
human and machine conditions. This lends support to the proposal that synthesis
design can influence partner perception of lexical knowledge, which in turn
guides users' lexical choices. We discuss the findings in relation to the
nature and dynamics of partner models in human-machine dialogue.
Comment: In press, accepted at the 1st International Conference on Conversational User Interfaces (CUI 2019).
The Partner Modelling Questionnaire: A validated self-report measure of perceptions toward machines as dialogue partners
Recent work has looked to understand user perceptions of speech agent
capabilities as dialogue partners (termed partner models), and how this affects
user interaction. Yet, currently partner model effects are inferred from
language production as no metrics are available to quantify these subjective
perceptions more directly. Through three studies, we develop and validate the
Partner Modelling Questionnaire (PMQ): an 18-item self-report semantic
differential scale designed to reliably measure people's partner models of
non-embodied speech interfaces. Through principal component analysis and
confirmatory factor analysis, we show that the PMQ scale consists of three
factors: communicative competence and dependability, human-likeness in
communication, and communicative flexibility. Our studies show that the measure
consistently demonstrates good internal reliability, strong test-retest
reliability over 12 and 4-week intervals, and predictable convergent/divergent
validity. Based on our findings we discuss the multidimensional nature of
partner models, whilst identifying key future research avenues that the
development of the PMQ facilitates. Notably, this includes the need to identify
the activation, sensitivity, and dynamism of partner models in speech interface
interaction.
Comment: Submitted (TOCHI).
Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users
Advances in natural language processing and understanding have led to a rapid
growth in the popularity of conversational user interfaces (CUIs). While CUIs
introduce novel benefits, they also yield risks that may exploit people's
trust. Although research looking at unethical design deployed through graphical
user interfaces (GUIs) established a thorough understanding of so-called dark
patterns, there is a need to continue this discourse within the CUI community
to understand potentially problematic interactions. Addressing this gap, we
interviewed 27 participants from three cohorts: researchers, practitioners, and
frequent users of CUIs. Applying thematic analysis, we construct five themes
reflecting each cohort's insights about ethical design challenges and introduce
the CUI Expectation Cycle, bridging system capabilities and user expectations
while considering each theme's ethical caveats. This research aims to inform
future development of CUIs to consider ethical constraints while adopting a
human-centred approach.
Comment: 18 pages; 4 tables; and 1 figure. This is the author's version and pre-print of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record will be published in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11-16, 2024, Honolulu, HI, USA, https://doi.org/10.1145/3613904.364254
A Special Interest Group on Designed and Engineered Friction in Interaction
A lot of academic and industrial HCI work has focused on making interactions easier and less effortful. As the potential risks of optimising for effortlessness have crystallised in systems designed to take advantage of the way human attention and cognition work, academic researchers and industrial practitioners have wondered whether increasing the 'friction' in interactions, making them more effortful, might make sense in some contexts. The goal of this special interest group is to provide a forum for researchers and practitioners to discuss and advance the theoretical underpinnings of designed friction and its relation to other design paradigms, and to identify the domains and interaction flows that frictions might best suit. During the SIG, attendees will attempt to prioritise a set of research questions about frictions in HCI.
What Makes a Good Conversation? Challenges in Designing Truly Conversational Agents
Conversational agents promise conversational interaction but fail to deliver. Efforts often emulate functional rules from human speech without considering key characteristics that conversation must encapsulate. Given its potential in supporting long-term human-agent relationships, it is paramount that HCI focuses efforts on delivering this promise. We aim to understand what people value in conversation and how this should manifest in agents. Findings from a series of semi-structured interviews show people make a clear dichotomy between the social and functional roles of conversation, emphasising the long-term dynamics of bond and trust along with the importance of context and relationship stage in the types of conversations they have. People fundamentally questioned the need for bond and common ground in agent communication, shifting to more utilitarian definitions of conversational qualities. Drawing on these findings, we discuss key challenges for conversational agent design, most notably the need to redefine the design parameters for conversational agent interaction.
Targeting Human Model-Free Processing
Examining the effect of early rewards on model-free decision making
How Early Rewards Influence Choice: Targeting model-free processing through reward timing
While many people claim to have the intention to perform certain behaviours, it is commonly the case that these intentions do not come to fruition. This issue is particularly pronounced in cases where there is a long delay between the intention and the behaviour, or where there is a strong automatic impulse that acts against the intention. According to dual-process theories, this intention-behaviour gap is the result of a conflict between two types of systems: a habitual model-free system and a deliberate model-based system. Usually, interventions target the model-based system, providing the information necessary to convince individuals that the behaviour is desirable or beneficial. However, this approach largely ignores the model-free system, leaving a large part of the decision-making process outside the intervention. The early reward strategy is a method that targets the model-free system directly, building on the known mechanisms by which reward information is processed. In particular, it focuses on how reward timing affects decision making within a sequence of actions. Because temporal discounting and temporal difference learning reduce the value of a reward according to how far it is placed from the first action in the sequence, placing the reward as close to the start of the sequence as possible minimises this reduction. This early reward strategy was tested across four experiments and was found to successfully alter behaviour in the way predicted by the theory. Two of the experiments took a computational approach, using reinforcement learning algorithms to predict behaviour and compare it against participant responses. The other two experiments took a more applied approach, using tasks more representative of real-world action sequences to test the extent to which behaviour was affected by early rewards.
Whether the reward was monetary or gamified, placing a reward earlier in a sequence significantly increased how often that sequence was selected compared to other reward placements. The results have important implications for anyone attempting to incentivise new behaviours, providing a theory-driven approach to maximising the effectiveness of the reward, particularly for the model-free system. As a result, consideration of reward timing should be integral to any incentive system that involves sequences of actions, with a strong emphasis on providing rewards as early in the interaction as possible.
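The discounting mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the discount factor and reward magnitude are hypothetical values chosen for demonstration, and the sketch shows only why a reward placed nearer the start of an action sequence retains more of its value under exponential temporal discounting.

```python
# Hypothetical sketch of temporal discounting over an action sequence.
# GAMMA and the reward magnitude are assumed values, not taken from the
# experiments described in the abstract.

GAMMA = 0.9  # per-step discount factor (assumed)


def discounted_value(reward: float, steps_from_start: int,
                     gamma: float = GAMMA) -> float:
    """Value a model-free learner assigns to a reward, discounted
    exponentially by its distance from the first action in the sequence."""
    return reward * gamma ** steps_from_start


# Value of a reward of magnitude 10 at each position in a 5-step sequence:
values = [discounted_value(10.0, k) for k in range(5)]
# Earlier placements lose less value to discounting, so the sequence whose
# reward arrives first looks most attractive to the model-free system.
```

Under these assumptions, the reward at position 0 keeps its full value (10.0) while each later position is worth a factor of 0.9 less, which is the intuition behind placing rewards as early in the sequence as possible.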
Audience design and egocentrism in reference production during human-computer dialogue
Our current understanding of the mechanisms that underpin language production in human-computer dialogue (HCD) is sparse. What work there is in the field of human-computer interaction (HCI) supposes that people tend to adapt their language allocentrically, taking into account the perceived limitations of their partners, when talking to computers. Yet, debates in human-human dialogue (HHD) research suggest that people may also act egocentrically when producing language in dialogue. Our research aims to identify whether, similar to HHD, users also produce egocentric language within speech-based HCD interactions and how this behaviour compares to interaction with human dialogue partners. Such knowledge benefits the field of HCI by better understanding the mechanisms present in language production during HCD, which can be used to build more nuanced theories and models of user behaviour to inform research and design of speech interfaces. Through two controlled experiments using an adapted director-matcher task similar to those used in research on perspective-taking in psycholinguistics, we show that people do take the computer's perspective into account less (i.e. behave more egocentrically) during HCD than in HHD (Experiment 1). However, this egocentric effect is eliminated when computers are framed as separate interlocutors rather than computers integrated in the interactive system and where differences in perspective are made salient, leading to similar levels of perspective-taking as with human partners (Experiment 2). We discuss the findings, emphasising potential explanations for this effect, focusing on how egocentric and allocentric production processes may interact, along with the impact of partner roles and the division of labour in HCD as an underlying explanation for the effects seen.