The Partner Modelling Questionnaire: A validated self-report measure of perceptions toward machines as dialogue partners
Recent work has looked to understand user perceptions of speech agent
capabilities as dialogue partners (termed partner models), and how this affects
user interaction. Yet, currently partner model effects are inferred from
language production as no metrics are available to quantify these subjective
perceptions more directly. Through three studies, we develop and validate the
Partner Modelling Questionnaire (PMQ): an 18-item self-report semantic
differential scale designed to reliably measure people's partner models of
non-embodied speech interfaces. Through principal component analysis and
confirmatory factor analysis, we show that the PMQ scale consists of three
factors: communicative competence and dependability, human-likeness in
communication, and communicative flexibility. Our studies show that the measure
consistently demonstrates good internal reliability, strong test-retest
reliability over 12- and 4-week intervals, and predictable convergent/divergent
validity. Based on our findings we discuss the multidimensional nature of
partner models, whilst identifying key future research avenues that the
development of the PMQ facilitates. Notably, this includes the need to identify
the activation, sensitivity, and dynamism of partner models in speech interface
interaction.
Comment: Submitted (TOCHI).
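The factor-extraction step described in this abstract (principal component analysis over questionnaire ratings) can be illustrated with a minimal sketch. The data below are simulated with a hypothetical three-factor structure; the actual PMQ items and responses are not reproduced here, and the eigenvalue-greater-than-one rule shown is one common retention heuristic, not necessarily the exact procedure the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 18-item semantic-differential ratings from 356 respondents
# with a known 3-factor structure: each factor drives a block of 6 items.
n_respondents, n_items, n_factors = 356, 18, 3
latent = rng.normal(size=(n_respondents, n_factors))
loadings = np.zeros((n_factors, n_items))
for f in range(n_factors):
    loadings[f, f * 6:(f + 1) * 6] = 1.0  # 6 items load on factor f
ratings = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

def kaiser_components(data: np.ndarray) -> int:
    """Count principal components of the item correlation matrix
    with eigenvalue > 1 (the Kaiser retention criterion)."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1.0).sum())

print(kaiser_components(ratings))  # recovers the 3 simulated factors
```

With strongly loading items, as simulated here, each item block contributes one large eigenvalue, so the criterion recovers the three planted factors.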
Children and adults produce distinct technology- and human-directed speech.
This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person and a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation to be misunderstood by the system. In support of this, we see that after a staged recognition error by the device, children increased pitch even more. Furthermore, both adults and children displayed the same degree of variation in their responses for whether Alexa seems like a real person or not, further indicating that children's conceptualization of the system's competence shaped their register adjustments, rather than an increased anthropomorphism response. This work speaks to models of the mechanisms underlying speech production, and to human-computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.
What's in an accent? The impact of accented synthetic speech on lexical choice in human-machine dialogue
The assumptions we make about a dialogue partner's knowledge and
communicative ability (i.e. our partner models) can influence our language
choices. Although similar processes may operate in human-machine dialogue, the
role of design in shaping these models, and their subsequent effects on
interaction are not clearly understood. Focusing on synthesis design, we
conduct a referential communication experiment to identify the impact of
accented speech on lexical choice. In particular, we focus on whether accented
speech may encourage the use of lexical alternatives that are relevant to a
partner's accent, and how this may vary when in dialogue with a human or
machine. We find that people are more likely to use American English terms when
speaking with a US accented partner than an Irish accented partner in both
human and machine conditions. This lends support to the proposal that synthesis
design can influence partner perception of lexical knowledge, which in turn
guides users' lexical choices. We discuss the findings in relation to the
nature and dynamics of partner models in human-machine dialogue.
Comment: In press, accepted at 1st International Conference on Conversational User Interfaces (CUI 2019).
Mapping Theoretical and Methodological Perspectives for Understanding Speech Interface Interactions
CHI 2019: The ACM CHI Conference on Human Factors in Computing Systems - Weaving the Threads of CHI, Glasgow, United Kingdom, 4-9 May 2019
The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs, e.g. Siri, Google Assistant) into smartphones and voice-based devices (e.g. Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices, and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse and collaborative community of researchers from academia and industry with an interest in exploring theoretical and methodological issues in the field.
Irish Research Council
What Do We See in Them? Identifying Dimensions of Partner Models for Speech Interfaces Using a Psycholexical Approach
Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of resulting word pairs and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions towards partner competence and dependability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.
Credibility of Virtual Influencers: The Role of Design Stimuli, Knowledge Cues, and User Disposition
Virtual Influencers (VIs) are digital influencers that can look and behave like human beings but project themselves as "robots". They influence people's attitudes and behaviors through their presence and interaction. While human-like design can lead to acceptance, additional information about machine-like description (robot) can create conflict about the influencer's identity and lead to unfavorable social responses. Social perceptions are also subjective. In this study, we examine the influence of human-like design, knowledge cues, and user disposition on user perceptions of VI credibility. In doing so, we present a case for the substitution of human influencers by "lesser human" counterparts in the context of social media.
Can Google Translate Rewire Your L2 English Processing?
Abstract: In this article, we address the question of whether exposure to the translated output of MT systems could result in changes in the cognitive processing of English as a second language (L2 English). To answer this question, we first conducted a survey with 90 Brazilian Portuguese L2 English speakers with the aim of understanding how and for what purposes they use web-based MT systems. To investigate whether MT systems are capable of influencing L2 English cognitive processing, we carried out a syntactic priming experiment with 32 Brazilian Portuguese speakers. We wanted to test whether speakers re-use in their subsequent speech in English the same syntactic alternative previously seen in the MT output, when using the popular Google Translate system to translate sentences from Portuguese into English. The results of the survey show that Brazilian Portuguese L2 English speakers use Google Translate as a tool supporting their speech in English as well as a source of English vocabulary learning. The results of the syntactic priming experiment show that exposure to an English syntactic alternative through GT can lead to the re-use of the same syntactic alternative in subsequent speech even if it is not the speaker's preferred syntactic alternative in English. These findings suggest that GT is being used as a tool for language learning purposes and so is indeed capable of rewiring the processing of L2 English syntax.