    CUI@CSCW: Collaborating through Conversational User Interfaces

    This virtual workshop seeks to bring together the burgeoning communities centred on the design, development, application and study of so-called Conversational User Interfaces (CUIs). CUIs are used in myriad contexts, from online support chatbots through to entertainment devices in the home. In this workshop, we will examine the challenges involved in transforming CUIs into everyday computing devices capable of supporting collaborative activities across space and time. Additionally, this workshop seeks to establish a cohesive CUI community and research agenda within CSCW. We will examine the ways in which CSCW research can contribute insights into understanding how CUIs are or can be used in a variety of settings, from public to private, and how they can be brought into a potentially unlimited number of tasks. This proposed workshop will bring together researchers from academia and practitioners from industry to survey the state of the art in CUI design, use, and understanding, and will map new areas for work, including addressing the technical, social, and ethical challenges that lie ahead. By bringing together existing researchers and new ideas in this space, we intend to foster a strong community and enable potential future collaborations.

    A Need for Trust in Conversational Interface Research

    Across several branches of conversational interaction research, including interactions with social robots, embodied agents, and conversational assistants, users have identified trust as a critical part of those interactions. Nevertheless, there is little agreement on what trust means within these sorts of interactions or how trust can be measured. In this paper, we explore some of the dimensions of trust as it has been understood in previous work, and we outline some of the ways trust has been measured, in the hope of furthering discussion of the concept across the field.

    Entertaining and Opinionated but Too Controlling: A Large-Scale User Study of an Open Domain Alexa Prize System

    Conversational systems typically focus on functional tasks such as scheduling appointments or creating to-do lists. Instead, we design and evaluate SlugBot (SB), one of 8 semifinalists in the 2018 Alexa Prize, whose goal is to support casual open-domain social interaction. This novel application requires both broad topic coverage and engaging interactive skills. We developed a new technical approach to meet this demanding situation by crowd-sourcing novel content and introducing playful conversational strategies based on storytelling and games. We collected over 10,000 conversations during August 2018 as part of the Alexa Prize competition. We also conducted an in-lab follow-up qualitative evaluation. Overall, users found SB moderately engaging; conversations averaged 3.6 minutes and involved 26 user turns. However, users reacted very differently to different conversation subtypes. Storytelling and games were evaluated positively; these were seen as entertaining with predictable interactive structure. They also led users to impute personality and intelligence to SB. In contrast, search and general chit-chat induced coverage problems; here users found it hard to infer what topics SB could understand, and these conversations were seen as too system-driven. Theoretical and design implications suggest a move away from conversational systems that simply provide factual information. Future systems should be designed to have their own opinions with personal stories to share, and SB provides an example of how we might achieve this. Comment: To appear in the 1st International Conference on Conversational User Interfaces (CUI 2019).

    Progressivity for Voice Interface Design

    Drawing from Conversation Analysis (CA), we examine how the orientation towards progressivity in talk---keeping things moving---might help us better understand and design for voice interactions. We introduce progressivity by surveying its explication in CA, and then look at how a strong preference for progressivity in conversation works out practically in sequences of voice interaction recorded in people's homes. Following Stivers and Robinson's work on progressivity, we find that our data show: how non-answer responses impede progress; how accounts offered for non-answer responses can lead to recovery; how participants work to receive answers; and how, ultimately, moving the interaction forwards does not necessarily involve a fitted answer, but other kinds of responses as well. We discuss the wider potential of applying progressivity to evaluate and understand voice interactions, and consider what designers of voice experiences might do to design for progressivity. Our contribution is a demonstration of the progressivity principle and its interactional features, which also points towards the need for specific kinds of future developments in speech technology.

    What's in an accent? The impact of accented synthetic speech on lexical choice in human-machine dialogue

    The assumptions we make about a dialogue partner's knowledge and communicative ability (i.e. our partner models) can influence our language choices. Although similar processes may operate in human-machine dialogue, the role of design in shaping these models, and their subsequent effects on interaction, are not clearly understood. Focusing on synthesis design, we conduct a referential communication experiment to identify the impact of accented speech on lexical choice. In particular, we focus on whether accented speech may encourage the use of lexical alternatives that are relevant to a partner's accent, and how this may vary when in dialogue with a human or machine. We find that people are more likely to use American English terms when speaking with a US-accented partner than an Irish-accented partner, in both human and machine conditions. This lends support to the proposal that synthesis design can influence partner perception of lexical knowledge, which in turn guides users' lexical choices. We discuss the findings in relation to the nature and dynamics of partner models in human-machine dialogue. Comment: In press, accepted at the 1st International Conference on Conversational User Interfaces (CUI 2019).

    How language works & What machines can do about it

    © 2019 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. The current skills shortage in dialog system development is being filled by very clever graduates who are, however, victims of historical forces. The current batch of engineers, at all levels, need to study history rather than repeat it. The claim is that techniques from the human and social sciences might provide the way forward.

    Inquisitive Mind: A conversational news companion

    With an ever-increasing amount of information and ever-more hectic lifestyles, many people rely on news briefs to stay up to date. However, the reliance on single-source media narratives can lead to a biased and narrow perception of the world. Conversational interfaces, as a medium for delivering news stories, can help to address this problem by encouraging users to explore information resources and news stories by formulating curiosity-driven comments and questions. We propose Inquisitive Mind (IM), a conversational companion that proactively points out different narratives of a story, refers users to source materials, and encourages deeper exploration of the topic. We argue that IM could foster curiosity, encourage critical thinking, and effectively lead to more conscious media consumption.

    The Partner Modelling Questionnaire: A validated self-report measure of perceptions toward machines as dialogue partners

    Recent work has looked to understand user perceptions of speech agent capabilities as dialogue partners (termed partner models), and how these affect user interaction. Yet partner model effects are currently inferred from language production, as no metrics are available to quantify these subjective perceptions more directly. Through three studies, we develop and validate the Partner Modelling Questionnaire (PMQ): an 18-item self-report semantic differential scale designed to reliably measure people's partner models of non-embodied speech interfaces. Through principal component analysis and confirmatory factor analysis, we show that the PMQ scale consists of three factors: communicative competence and dependability, human-likeness in communication, and communicative flexibility. Our studies show that the measure consistently demonstrates good internal reliability, strong test-retest reliability over 12- and 4-week intervals, and predictable convergent/divergent validity. Based on our findings, we discuss the multidimensional nature of partner models, whilst identifying key future research avenues that the development of the PMQ facilitates. Notably, this includes the need to identify the activation, sensitivity, and dynamism of partner models in speech interface interaction. Comment: Submitted (TOCHI).
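    A multi-factor semantic differential scale of this kind is typically scored by averaging each participant's item ratings within each factor. The sketch below illustrates that scoring step only; the item-to-factor assignments, factor sizes, and rating range are illustrative assumptions, not the published PMQ key.

    ```python
    # Hypothetical scoring sketch for an 18-item, three-factor
    # semantic-differential scale such as the PMQ. The item groupings
    # below are assumed for illustration, not taken from the paper.

    FACTORS = {
        "competence_dependability": range(0, 8),     # assumed items 1-8
        "human_likeness": range(8, 13),              # assumed items 9-13
        "communicative_flexibility": range(13, 18),  # assumed items 14-18
    }

    def score_pmq(responses):
        """Average one participant's 18 item ratings into per-factor scores.

        `responses` is a list of 18 numeric ratings (e.g. on a 1-7 scale).
        Returns a dict mapping factor name -> mean rating for that factor.
        """
        if len(responses) != 18:
            raise ValueError("expected 18 item ratings")
        return {
            name: sum(responses[i] for i in items) / len(items)
            for name, items in FACTORS.items()
        }
    ```

    Subscale means (rather than sums) keep the three factor scores on the same scale as the individual items, even though the assumed factors contain different numbers of items.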

    People's Perceptions Toward Bias and Related Concepts in Large Language Models: A Systematic Review

    Large language models (LLMs) have brought breakthroughs in tasks including translation, summarization, information retrieval, and language generation, gaining growing interest in the CHI community. Meanwhile, the literature shows researchers hold contested views about the efficacy, ethics, and intellectual abilities of LLMs. However, we do not know how lay people perceive the LLMs that are pervasive in everyday tools, specifically regarding their experiences with LLMs around bias, stereotypes, social norms, or safety. In this study, we conducted a systematic review to understand what empirical insights papers have gathered about people's perceptions of LLMs. From a total of 231 retrieved papers, we full-text reviewed 15 papers that recruited human evaluators to assess their experiences with LLMs. We report the biases and related concepts investigated by these studies, four broader LLM application areas, the evaluators' perceptions of LLM performance (including advantages, biases, and conflicting perceptions), the factors influencing these perceptions, and concerns about LLM applications.