    Transparency in Language Generation: Levels of Automation

    Language models and conversational systems are growing increasingly advanced, creating outputs that may be mistaken for those of humans. Consumers may thus be misled by advertising, media reports, or vagueness regarding the role of automation in the production of language. We propose a taxonomy of language automation, based on the SAE levels of driving automation, to establish a shared set of terms for describing automated language. It is our hope that the proposed taxonomy can increase transparency in this rapidly advancing field. Comment: Accepted for publication at CUI 202

    CUI@CSCW: Collaborating through Conversational User Interfaces

    This virtual workshop seeks to bring together the burgeoning communities centred on the design, development, application, and study of so-called Conversational User Interfaces (CUIs). CUIs are used in myriad contexts, from online support chatbots through to entertainment devices in the home. In this workshop, we will examine the challenges involved in transforming CUIs into everyday computing devices capable of supporting collaborative activities across space and time. Additionally, this workshop seeks to establish a cohesive CUI community and research agenda within CSCW. We will examine the ways in which CSCW research can contribute insights into how CUIs are or can be used in a variety of settings, from public to private, and how they can be brought into a potentially unlimited number of tasks. This proposed workshop will bring together researchers from academia and practitioners from industry to survey the state of the art in CUI design, use, and understanding, and will map new areas for work, including addressing the technical, social, and ethical challenges that lie ahead. By bringing together existing researchers and new ideas in this space, we intend to foster a strong community and enable potential future collaborations.

    What Do We See in Them? Identifying Dimensions of Partner Models for Speech Interfaces Using a Psycholexical Approach

    Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of the resulting word pairs, and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions of partner competence and dependability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.
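    The abstract does not spell out the statistical procedure behind the three dimensions, so the sketch below is only a hedged illustration of a common psycholexical workflow: reducing participants' ratings on semantic-differential word pairs to a small number of latent factors. The ratings matrix, item names, and the use of scikit-learn's FactorAnalysis are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: assumes an exploratory-factor-analysis style reduction
# of word-pair ratings (rows = participants, columns = semantic-differential items).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical ratings matrix: 356 respondents x 20 word-pair items (1-7 scale).
ratings = pd.DataFrame(
    rng.integers(1, 8, size=(356, 20)),
    columns=[f"item_{i}" for i in range(20)],
)

# Extract three latent dimensions, mirroring the three reported in the paper.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(ratings)

# Inspect which items load most strongly on each dimension.
loadings = pd.DataFrame(fa.components_.T, index=ratings.columns,
                        columns=["dim_1", "dim_2", "dim_3"])
print(loadings.round(2))
```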

    Mental Workload and Language Production in Non-Native Speaker IPA Interaction

    Through proliferation on smartphones and smart speakers, intelligent personal assistants (IPAs) have made speech a common interaction modality. Yet, due to linguistic coverage and varying levels of functionality, many speakers engage with IPAs using a non-native language. This may impact the mental workload and pattern of language production displayed by non-native speakers. We present a mixed-design experiment, wherein native (L1) and non-native (L2) English speakers completed tasks with IPAs through smartphones and smart speakers. We found significantly higher mental workload for L2 speakers during IPA interactions. Contrary to our hypotheses, we found no significant differences between L1 and L2 speakers in terms of number of turns, lexical complexity, diversity, or lexical adaptation when encountering errors. These findings are discussed in relation to language production and processing load increases for L2 speakers in IPA interaction.
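    The abstract compares L1 and L2 speakers on lexical complexity and diversity but does not name its exact metrics; the snippet below is a hedged sketch of one simple diversity measure (type-token ratio) plus mean turn length over hypothetical transcribed turns, purely to illustrate the kind of quantity being compared.

```python
# Hedged illustration: a simple type-token ratio (TTR) and mean turn length
# over a hypothetical list of transcribed user turns per speaker group.
from typing import List

def type_token_ratio(turns: List[str]) -> float:
    """Unique words divided by total words across all turns."""
    tokens = [w.lower() for turn in turns for w in turn.split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_turn_length(turns: List[str]) -> float:
    """Average number of words per turn."""
    return sum(len(t.split()) for t in turns) / len(turns) if turns else 0.0

l1_turns = ["set a timer for ten minutes", "what's the weather today"]
l2_turns = ["please set timer ten minutes", "weather today please"]

for label, turns in [("L1", l1_turns), ("L2", l2_turns)]:
    print(label, round(type_token_ratio(turns), 2), round(mean_turn_length(turns), 2))
```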

    See What I’m Saying? Comparing Intelligent Personal Assistant Use for Native and Non-Native Language Speakers

    Limited linguistic coverage for Intelligent Personal Assistants (IPAs) means that many users interact in a non-native language. Yet we know little about how IPAs currently support or hinder these users. Through native (L1) and non-native (L2) English speakers interacting with Google Assistant on a smartphone and smart speaker, we aim to understand this more deeply. Interviews revealed that L2 speakers prioritised utterance planning around perceived linguistic limitations, as opposed to L1 speakers prioritising succinctness because of system limitations. L2 speakers see IPAs as insensitive to linguistic needs, resulting in failed interactions. L2 speakers clearly preferred using smartphones, as visual feedback supported diagnosis of communication breakdowns whilst allowing time to process query results. Conversely, L1 speakers preferred smart speakers, with audio feedback being seen as sufficient. We discuss the need to tailor the IPA experience for L2 users, emphasising visual feedback whilst reducing the burden of language production.

    Shocks as predictors of survival in patients with implantable cardioverter-defibrillators

    Objectives: The objective of the study was to determine whether the occurrence of shocks for ventricular tachyarrhythmias during therapy with implantable cardioverter-defibrillators (ICD) is predictive of shortened survival.
    Background: Ventricular tachyarrhythmias eliciting shocks are often associated with depressed ventricular function, making assessment of shocks as an independent risk factor difficult.
    Methods: Consecutive patients (n = 421) with a mean follow-up of 756 ± 523 days were classified into those who had received no shock (n = 262) or one of two shock types, defined as single (n = 111) or multiple shocks (n = 48) per arrhythmia episode. Endpoints were all-cause and cardiac deaths. A survival analysis using a stepwise proportional hazards model evaluated the influence of two primary variables, shock type and left ventricular ejection fraction (LVEF <35% or ≥35%). Covariates analyzed were age, gender, NYHA class, coronary artery disease, myocardial infarction, coronary revascularization, defibrillation threshold, and tachyarrhythmia inducibility.
    Results: The most complete model retained LVEF (p = 0.005) and age (p = 0.023) for the comparison of any shock versus no shock (p = 0.031). The occurrence of any versus no shock, or of multiple versus single shocks, significantly decreased survival at four years, and these differences persisted after adjustment for LVEF. In the LVEF subgroups <35% and <25%, occurrence of multiple versus no shocks more than doubled the risk of death. Compared with the most favorable group (LVEF ≥35% and no shock), risk in the group with multiple shocks and LVEF <35% was increased 16-fold.
    Conclusions: In defibrillator recipients, shocks act as potent predictors of survival independent of several other risk factors, particularly ejection fraction.
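    The analysis described above is a stepwise proportional hazards model with shock type and LVEF as the primary variables. As a hedged illustration of that model class (not a reproduction of the study's stepwise procedure or data), the sketch below fits a Cox proportional hazards model with the Python lifelines package on invented patient records; all column names and values are hypothetical.

```python
# Hedged sketch of a Cox proportional hazards analysis of the kind the abstract
# describes; the data, column names, and coding are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient records: follow-up time (days), death indicator,
# shock exposure (0 = none, 1 = single, 2 = multiple per episode), dichotomised LVEF.
df = pd.DataFrame({
    "followup_days": [900, 300, 1200, 150, 700, 400, 1000, 250, 600, 800],
    "death":         [0,   1,   0,    1,   0,   1,   1,    0,   1,   0],
    "shock_type":    [0,   2,   1,    0,   2,   1,   2,    2,   1,   0],
    "lvef_below_35": [0,   1,   0,    1,   1,   0,   1,    0,   0,   1],
    "age":           [61,  70,  55,   68,  64,  72,  66,   59,  63,  58],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="death")
cph.print_summary()  # hazard ratios and p-values per covariate
```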

    Mapping the hot gas temperature in galaxy clusters using X-ray and Sunyaev-Zel'dovich imaging

    We propose a method to map the temperature distribution of the hot gas in galaxy clusters that uses resolved images of the thermal Sunyaev-Zel'dovich (tSZ) effect in combination with X-ray data. Application to images from the New IRAM KIDs Array (NIKA) and XMM-Newton allows us to measure the spatial distribution of the gas temperature in the merging cluster MACS J0717.5+3745, at z = 0.55. Despite the complexity of the target object, we find good morphological agreement between the temperature maps derived from X-ray spectroscopy alone – using XMM-Newton (TXMM) and Chandra (TCXO) – and the new gas-mass-weighted tSZ+X-ray imaging method (TSZX). We correlate the temperatures from tSZ+X-ray imaging with those from X-ray spectroscopy alone and find that TSZX is higher than TXMM and lower than TCXO by ~10% in both cases. Our results are limited by uncertainties in the geometry of the cluster gas, contamination from the kinetic SZ effect (~10%), and the absolute calibration of the tSZ map (7%). Investigation using a larger sample of clusters would help minimise these effects.
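    The gas-mass-weighted tSZ+X-ray temperature rests on the complementary line-of-sight dependencies of the two observables: the Compton parameter traces electron pressure, while X-ray surface brightness traces density squared. The relations below are the standard ones, given only as a hedged sketch of the idea rather than the exact estimator used in the paper.

```latex
% Hedged sketch of the standard relations behind a tSZ+X-ray temperature estimate.
\begin{align}
  y &= \frac{\sigma_\mathrm{T}}{m_\mathrm{e} c^{2}} \int P_\mathrm{e}\,\mathrm{d}l
     = \frac{\sigma_\mathrm{T}\, k_\mathrm{B}}{m_\mathrm{e} c^{2}} \int n_\mathrm{e} T_\mathrm{e}\,\mathrm{d}l
     && \text{(tSZ: Compton parameter traces electron pressure)} \\
  S_\mathrm{X} &\propto \int n_\mathrm{e}^{2}\,\Lambda(T_\mathrm{e})\,\mathrm{d}l
     && \text{(X-ray surface brightness traces density squared)} \\
  k_\mathrm{B} T_\mathrm{SZX} &\simeq \frac{P_\mathrm{e}}{n_\mathrm{e}}
     && \text{(temperature from the deprojected pressure-to-density ratio)}
\end{align}
```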

    CUI @ Auto-UI: Exploring the Fortunate and Unfortunate Futures of Conversational Automotive User Interfaces

    This work aims to connect the Automotive User Interfaces (Auto-UI) and Conversational User Interfaces (CUI) communities through discussion of their shared view of the future of automotive conversational user interfaces. The workshop aims to encourage creative consideration of optimistic and pessimistic futures, encouraging attendees to explore the opportunities and barriers that lie ahead through a game. Considerations of the future will be mapped out in greater detail through the drafting of research agendas, through which attendees will get to know each other's expertise and networks of resources. The two-day workshop, consisting of two 90-minute sessions, will facilitate greater communication and collaboration between these communities, connecting researchers to work together to influence the futures they imagine in the workshop. Comment: Workshop published and presented at Automotive User Interfaces 2021 (AutoUI 21

    What's in an accent? The impact of accented synthetic speech on lexical choice in human-machine dialogue

    The assumptions we make about a dialogue partner's knowledge and communicative ability (i.e. our partner models) can influence our language choices. Although similar processes may operate in human-machine dialogue, the role of design in shaping these models, and their subsequent effects on interaction, are not clearly understood. Focusing on synthesis design, we conduct a referential communication experiment to identify the impact of accented speech on lexical choice. In particular, we focus on whether accented speech may encourage the use of lexical alternatives that are relevant to a partner's accent, and how this may vary when in dialogue with a human or a machine. We find that people are more likely to use American English terms when speaking with a US-accented partner than with an Irish-accented partner, in both the human and machine conditions. This lends support to the proposal that synthesis design can influence perceptions of a partner's lexical knowledge, which in turn guide users' lexical choices. We discuss the findings in relation to the nature and dynamics of partner models in human-machine dialogue. Comment: In press, accepted at 1st International Conference on Conversational User Interfaces (CUI 2019
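    The experiment yields binary lexical choices (American vs. Irish variant) under a partner-accent by partner-type design. As a hedged sketch of one way such data could be analysed (not the authors' actual analysis), the snippet below fits a logistic regression with an accent x partner-type interaction using statsmodels; the data frame and its values are hypothetical.

```python
# Hedged sketch: logistic regression on hypothetical trial-level lexical-choice data.
# 1 = participant produced the American English term for the referent.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "partner_accent": ["US"] * 8 + ["IE"] * 8,
    "partner_type":   (["human"] * 4 + ["machine"] * 4) * 2,
    "us_term":        [1, 1, 1, 0, 1, 1, 0, 1,   # US-accented partner
                       0, 0, 1, 0, 0, 1, 0, 0],  # Irish-accented partner
})

model = smf.logit("us_term ~ C(partner_accent) * C(partner_type)", data=trials).fit()
print(model.summary())
```

    In practice, a mixed-effects logistic regression with random intercepts for participants and referents would be the more defensible choice for repeated-measures data of this kind; the simple model above is only meant to show the structure of the comparison.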