Mapping Perceptions of Humanness in Intelligent Personal Assistant Interaction
Humanness is core to speech interface design. Yet little is known about how users conceptualise perceptions of humanness and how they use these perceptions to define their interactions with speech interfaces. To map these perceptions, n=21 participants held dialogues with a human and with two speech-interface-based intelligent personal assistants, and then reflected on and compared their experiences using the repertory grid technique. Analysis of the constructs shows that perceptions of humanness are multidimensional, focusing on eight key themes: partner knowledge set, interpersonal connection, linguistic content, partner performance and capabilities, conversational interaction, partner identity and role, vocal qualities, and behavioral affordances. Through these themes, it is clear that users define the capabilities of speech interfaces differently from those of humans, seeing them as more formal, fact-based, impersonal, and less authentic. Based on the findings, we discuss how the themes help to scaffold, categorise, and target research and design efforts, considering the appropriateness of emulating humanness.
Is Google Duplex too human? Exploring user perceptions of opaque conversational agents
Conversational Agents (CAs) are increasingly embedded in consumer products, such as smartphones, home devices, and industry devices. Advancements in machine-generated voice, such as the Google Duplex feature released in May 2018, aim to perfectly mimic the human voice while constructing a scenario in which users do not know whether they are talking to a human or a CA. Exactly how well users can distinguish between human and machine voices, how the degree of humanness impacts users' emotional perception, and what ethical concerns this raises remains an underexplored area. To answer these questions, I collected 405 surveys, including both an experimental design that exposed users to three different voices (human, advanced machine, and simple machine) and questions about the ethical implications of CAs. Results of the experiment revealed that users have difficulty distinguishing between human and advanced machine voices. Users do not experience the negative feeling referred to as the uncanny valley when listening to advanced synthetic audio, and they only narrowly prefer a real human voice over a synthetic voice. Results from the questions about ethical implications revealed the importance of context and transparency. Drawing on these findings, I discuss the implications of advanced CAs and suggest strategies for ethical design.
Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach
We present the first systematic analysis of personality dimensions developed specifically to describe the personality of speech-based conversational agents. Following the psycholexical approach from psychology, we first report on a new multi-method approach to collect potentially descriptive adjectives from 1) a free description task in an online survey (228 unique descriptors), 2) an interaction task in the lab (176 unique descriptors), and 3) a text analysis of 30,000 online reviews of conversational agents (Alexa, Google Assistant, Cortana) (383 unique descriptors). We aggregate the results into a set of 349 adjectives, which are then rated by 744 people in an online survey. A factor analysis reveals that the commonly used Big Five model for human personality does not adequately describe agent personality. As an initial step to developing a personality model, we propose alternative dimensions and discuss implications for the design of agent personalities, personality-aware personalisation, and future research.
Designing Personality-Adaptive Conversational Agents for Mental Health Care
Millions of people experience mental health issues each year, increasing the necessity for health-related services. One emerging technology with the potential to help address the resulting shortage of health care providers and other barriers to treatment access is conversational agents (CAs). CAs are software-based systems designed to interact with humans through natural language. However, CAs do not yet live up to their full potential because they are unable to capture dynamic human behavior to an adequate extent to provide responses tailored to users' personalities. To address this problem, we conducted a design science research (DSR) project to design personality-adaptive conversational agents (PACAs). Following an iterative and multi-step approach, we derive and formulate six design principles for PACAs for the domain of mental health care. The results of our evaluation with psychologists and psychiatrists suggest that PACAs can be a promising source of mental health support. With our design principles, we contribute to the body of design knowledge for CAs and provide guidance for practitioners who intend to design PACAs. Instantiating the principles may improve interaction with users who seek support for mental health issues.
Sending an Avatar to Do a Human’s Job: Compliance with Authority Persists Despite the Uncanny Valley
Just as physical appearance affects social influence in human communication, it may also affect the processing of advice conveyed through avatars, computer-animated characters, and other human-like interfaces. Although the most persuasive computer interfaces are often the most human-like, they have been predicted to incur the greatest risk of falling into the uncanny valley, the loss of empathy attributed to characters that appear eerily human. Previous studies compared interfaces on the left side of the uncanny valley, namely, those with low human likeness. To examine interfaces with higher human realism, a between-groups factorial experiment was conducted through the internet with 426 midwestern U.S. undergraduates. This experiment presented a hypothetical ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (digitally recorded or computer animated), motion quality (smooth or jerky), and advice (disclose or refrain from disclosing sensitive information). Of these, only the advice changed opinions about the ethical dilemma, even though the animated depiction was rated significantly eerier than the human depiction. These results indicate that compliance with an authority persists even when using an uncannily realistic computer-animated double.
A Systematic Review of Ethical Concerns with Voice Assistants
Siri's introduction in 2011 marked the beginning of a wave of domestic voice assistant releases, and this technology has since become commonplace in consumer devices such as smartphones and TVs. But as their presence expands, a range of ethical concerns has been identified around the use of voice assistants, such as the privacy implications of having devices that are always recording and the ways that these devices are integrated into the existing social order of the home. This has created a burgeoning area of research across a range of fields including computer science, social science, and psychology. This paper takes stock of the foundations and frontiers of this work through a systematic literature review of 117 papers on ethical concerns with voice assistants. In addition to analysis of nine specific areas of concern, the review measures the distribution of methods and participant demographics across the literature. We show how some concerns, such as privacy, are operationalized to a much greater extent than others, like accessibility, and how study participants are overwhelmingly drawn from a small handful of Western nations. In so doing we hope to provide an outline of the rich tapestry of work around these concerns and highlight areas where current research efforts are lacking.
Artificial Intelligence Service Agents: Role of Parasocial Relationship
Increased use of artificial intelligence service agents (AISA) has been associated with improvements in AISA service performance. Whilst there is consensus that unique forms of attachment develop between users and AISA that manifest as parasocial relationships (PSRs), the literature is less clear about the AISA service attributes and how they influence PSR and users' subjective well-being. Based on a dataset collected from 408 virtual assistant users in the US, this research develops and tests a model that can explain how AISA-enabled service influences subjective well-being through the mediating effect of PSR. Findings also indicate significant gender and AISA-experience differences in the effect of PSR on subjective well-being. This study advances current understanding of AISA in service encounters by investigating the mediating role of PSR in AISA's effect on users' subjective well-being. We also discuss managerial implications for practitioners who are increasingly using AISA for delivering customer service.