1,095 research outputs found

    Mapping Perceptions of Humanness in Intelligent Personal Assistant Interaction

    Humanness is core to speech interface design, yet little is known about how users conceptualise humanness and how they define their interactions with speech interfaces through it. To map these perceptions, n=21 participants held dialogues with a human and two speech interface-based intelligent personal assistants, then reflected on and compared their experiences using the repertory grid technique. Analysis of the constructs shows that perceptions of humanness are multidimensional, centring on eight key themes: partner knowledge set, interpersonal connection, linguistic content, partner performance and capabilities, conversational interaction, partner identity and role, vocal qualities, and behavioural affordances. Through these themes, it is clear that users define the capabilities of speech interfaces differently from those of humans, seeing the interfaces as more formal, fact-based, impersonal, and less authentic. Based on the findings, we discuss how the themes help to scaffold, categorise, and target research and design efforts, and consider the appropriateness of emulating humanness.
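
    A repertory grid pairs a set of elements (here, the dialogue partners) with participant-elicited bipolar constructs, plus a rating of every element on every construct. The sketch below illustrates the data structure only; the element names, construct poles, and ratings are hypothetical stand-ins echoing the paper's themes, not the study's data.

        import numpy as np

        # Elements: the three dialogue partners compared in the study design.
        elements = ["Human partner", "IPA A", "IPA B"]

        # Bipolar constructs elicited by a participant (left pole vs right pole).
        # Hypothetical examples, loosely echoing the paper's themes.
        constructs = [
            ("personal", "impersonal"),
            ("authentic", "formal"),
            ("flexible conversation", "fact-based answers"),
        ]

        # One participant's grid: rows = constructs, columns = elements,
        # rated 1 (left pole) to 5 (right pole).
        ratings = np.array([
            [1, 4, 5],
            [2, 4, 4],
            [1, 5, 4],
        ])

        # A simple comparison: how far each assistant's rating profile
        # sits from the human partner's profile.
        human = ratings[:, 0]
        for col, name in enumerate(elements[1:], start=1):
            dist = np.linalg.norm(ratings[:, col] - human)
            print(f"{name}: distance from human profile = {dist:.2f}")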

    Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach

    We present the first systematic analysis of personality dimensions developed specifically to describe the personality of speech-based conversational agents. Following the psycholexical approach from psychology, we first report on a new multi-method approach to collect potentially descriptive adjectives from 1) a free description task in an online survey (228 unique descriptors), 2) an interaction task in the lab (176 unique descriptors), and 3) a text analysis of 30,000 online reviews of conversational agents (Alexa, Google Assistant, Cortana) (383 unique descriptors). We aggregate the results into a set of 349 adjectives, which are then rated by 744 people in an online survey. A factor analysis reveals that the commonly used Big Five model for human personality does not adequately describe agent personality. As an initial step towards developing a personality model, we propose alternative dimensions and discuss implications for the design of agent personalities, personality-aware personalisation, and future research.
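
    The statistical core of this pipeline is an exploratory factor analysis over the adjective ratings. Below is a rough, hedged sketch of that step using scikit-learn and random stand-in data; the 744 x 349 shape mirrors the paper, but the ratings and the choice of ten factors are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        n_raters, n_adjectives = 744, 349

        # Stand-in ratings on a 1-5 scale; in the study, each person rated
        # how well each adjective describes a conversational agent.
        ratings = rng.integers(1, 6, size=(n_raters, n_adjectives)).astype(float)

        # Extract more factors than the Big Five to test whether five suffice.
        fa = FactorAnalysis(n_components=10, rotation="varimax")
        fa.fit(ratings)

        # loadings[i, j]: how strongly adjective i marks factor j. Naming each
        # factor from its top-loading adjectives is what yields the dimensions.
        loadings = fa.components_.T
        top_for_factor_0 = np.argsort(-np.abs(loadings[:, 0]))[:5]
        print("Adjective indices loading highest on factor 1:", top_for_factor_0)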

    Dishonesty and social presence in retail


    A Systematic Review of Ethical Concerns with Voice Assistants

    Siri's introduction in 2011 marked the beginning of a wave of domestic voice assistant releases, and this technology has since become commonplace in consumer devices such as smartphones and TVs. But as their presence expands, a range of ethical concerns has been identified around the use of voice assistants, such as the privacy implications of having devices that are always recording and the ways that these devices are integrated into the existing social order of the home. This has created a burgeoning area of research across a range of fields, including computer science, social science, and psychology. This paper takes stock of the foundations and frontiers of this work through a systematic literature review of 117 papers on ethical concerns with voice assistants. In addition to an analysis of nine specific areas of concern, the review measures the distribution of methods and participant demographics across the literature. We show how some concerns, such as privacy, are operationalized to a much greater extent than others, like accessibility, and how study participants are overwhelmingly drawn from a small handful of Western nations. In so doing, we hope to provide an outline of the rich tapestry of work around these concerns and highlight areas where current research efforts are lacking.

    See What I’m Saying? Comparing Intelligent Personal Assistant Use for Native and Non-Native Language Speakers

    Limited linguistic coverage for Intelligent Personal Assistants (IPAs) means that many users interact with them in a non-native language. Yet we know little about how IPAs currently support or hinder these users. To understand this more deeply, we studied native (L1) and non-native (L2) English speakers interacting with Google Assistant on a smartphone and a smart speaker. Interviews revealed that L2 speakers prioritised planning utterances around their perceived linguistic limitations, whereas L1 speakers prioritised succinctness because of system limitations. L2 speakers saw IPAs as insensitive to their linguistic needs, resulting in failed interactions. L2 speakers clearly preferred using smartphones, as visual feedback supported diagnosis of communication breakdowns whilst allowing time to process query results. Conversely, L1 speakers preferred smart speakers, with audio feedback being seen as sufficient. We discuss the need to tailor the IPA experience for L2 users, emphasising visual feedback whilst reducing the burden of language production.

    The State of Speech in HCI: Trends, Themes and Challenges


    Sending an Avatar to Do a Human’s Job: Compliance with Authority Persists Despite the Uncanny Valley

    Just as physical appearance affects social influence in human communication, it may also affect the processing of advice conveyed through avatars, computer-animated characters, and other human-like interfaces. Although the most persuasive computer interfaces are often the most human-like, they have been predicted to incur the greatest risk of falling into the uncanny valley, the loss of empathy attributed to characters that appear eerily human. Previous studies compared interfaces on the left side of the uncanny valley, namely, those with low human likeness. To examine interfaces with higher human realism, a between-groups factorial experiment was conducted through the internet with 426 Midwestern U.S. undergraduates. This experiment presented a hypothetical ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (digitally recorded or computer-animated), motion quality (smooth or jerky), and advice (disclose or refrain from disclosing sensitive information). Of these, only the advice changed opinions about the ethical dilemma, even though the animated depiction was rated significantly eerier than the human depiction. These results indicate that compliance with an authority persists even when the authority appears as an uncannily realistic computer-animated double.
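
    For a 2 x 2 x 2 between-groups design like this, the standard analysis is a factorial ANOVA on the opinion measure. The sketch below uses simulated placeholder data (the column names and the simulated advice effect are assumptions, not the study's dataset) to show how such a test is typically run:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 426  # matches the reported sample size; the data below do not

        df = pd.DataFrame({
            "depiction": rng.choice(["recorded", "animated"], n),
            "motion": rng.choice(["smooth", "jerky"], n),
            "advice": rng.choice(["disclose", "refrain"], n),
        })

        # Simulate an opinion score in which only the advice factor has an
        # effect, mirroring the pattern of results described above.
        df["opinion"] = rng.normal(0.0, 1.0, n) + (df["advice"] == "disclose") * 0.8

        # Three-way factorial ANOVA over all main effects and interactions.
        model = smf.ols("opinion ~ depiction * motion * advice", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))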

    Designing Personality-Adaptive Conversational Agents for Mental Health Care

    Millions of people experience mental health issues each year, increasing the need for health-related services. Conversational agents (CAs), software-based systems designed to interact with humans through natural language, are one emerging technology with the potential to help address the resulting shortage of health care providers and other barriers to treatment access. However, CAs do not yet live up to their full potential because they cannot capture dynamic human behavior well enough to provide responses tailored to users’ personalities. To address this problem, we conducted a design science research (DSR) project to design personality-adaptive conversational agents (PACAs). Following an iterative, multi-step approach, we derive and formulate six design principles for PACAs in the domain of mental health care. The results of our evaluation with psychologists and psychiatrists suggest that PACAs can be a promising source of mental health support. With our design principles, we contribute to the body of design knowledge for CAs and provide guidance for practitioners who intend to design PACAs. Instantiating the principles may improve interaction with users who seek support for mental health issues.
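
    The core loop of a personality-adaptive CA is to estimate a user trait from behaviour and then adapt the response style, not the content, to it. The toy sketch below is a hypothetical illustration of that idea, not the paper's artifact; the trait proxy and phrasings are invented for the example.

        from dataclasses import dataclass

        @dataclass
        class UserModel:
            extraversion: float = 0.5  # running estimate in [0, 1]

            def update(self, message: str) -> None:
                # Toy behavioural proxy: longer, exclamatory messages nudge
                # the extraversion estimate upward (illustrative only).
                signal = min(len(message) / 200.0, 1.0)
                if "!" in message:
                    signal = min(signal + 0.2, 1.0)
                self.extraversion = 0.8 * self.extraversion + 0.2 * signal

        def respond(user: UserModel, answer: str) -> str:
            # Adapt tone to the estimated trait; keep the content fixed.
            if user.extraversion > 0.5:
                return f"Great question! {answer} Want to explore this together?"
            return f"{answer} Take whatever time you need."

        user = UserModel()
        user.update("I've been feeling a bit overwhelmed lately.")
        print(respond(user, "Short breathing exercises help many people."))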