
    Better Driving and Recall When In-car Information Presentation Uses Situationally-Aware Incremental Speech Output Generation

    Kennington C, Kousidis S, Baumann T, Buschmeier H, Kopp S, Schlangen D. Better Driving and Recall When In-car Information Presentation Uses Situationally-Aware Incremental Speech Output Generation. In: AutomotiveUI 2014: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. Seattle, Washington, USA; 2014: 7:1-7:7.

    It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform as well as in control conditions (no concurrent task) when the system intelligently interrupts and resumes.
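
    The core behaviour described in this abstract, pausing an incremental speech presentation when the driving situation demands attention and resuming it afterwards, can be illustrated with a small sketch. The code below is not the authors' system; it is a minimal illustration assuming a hypothetical situation_is_critical() signal and a message already split into incremental chunks.

        import time

        # Minimal sketch (not the authors' system): deliver a message in small
        # increments and pause whenever an external "critical situation" signal
        # fires, resuming from the same position once conditions are normal again.

        def speak(chunk: str) -> None:
            # Stand-in for a text-to-speech call; here we just print the chunk.
            print(f"[TTS] {chunk}")

        def deliver_incrementally(chunks, situation_is_critical, poll_interval=0.1):
            """Deliver chunks one by one, pausing while the situation is critical."""
            i = 0
            while i < len(chunks):
                if situation_is_critical():
                    # Hold the remaining chunks; delivery resumes at index i
                    # once the driving situation returns to normal.
                    time.sleep(poll_interval)
                    continue
                speak(chunks[i])
                i += 1

        if __name__ == "__main__":
            calendar_entry = ["You have a meeting", "with the project team",
                              "at three pm", "in room two."]
            # Toy signal: pretend the situation is never critical in this demo.
            deliver_incrementally(calendar_entry, situation_is_critical=lambda: False)

    Because delivery is chunked, the interruption point is always a natural boundary in the output, which is what makes clean resumption possible.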

    A Multimodal In-Car Dialogue System That Tracks The Driver's Attention

    Kousidis S, Kennington C, Baumann T, Buschmeier H, Kopp S, Schlangen D. A Multimodal In-Car Dialogue System That Tracks The Driver's Attention. In: Proceedings of the 16th International Conference on Multimodal Interfaces. Istanbul, Turkey; 2014: 26-33.

    When a passenger speaks to a driver, he or she is co-located with the driver, is generally aware of the situation, and can stop speaking to allow the driver to focus on the driving task. In-car dialogue systems ignore these important aspects, making them more distracting than even cell-phone conversations. We developed and tested a "situationally-aware" dialogue system that can interrupt its speech when a situation which requires more attention from the driver is detected, and can resume when driving conditions return to normal. Furthermore, our system allows driver-controlled resumption of interrupted speech via verbal or visual cues (head nods). Over two experiments, we found that the situationally-aware spoken dialogue system improves driving performance and attention to the speech content, while driver-controlled speech resumption does not hinder performance in either of these two tasks.
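
    The driver-controlled resumption described above can be pictured as a small state machine: speech is interrupted by a critical situation, and once conditions are normal again the system waits for an explicit cue from the driver, verbal or a head nod, before resuming. The sketch below is only an illustration with hypothetical event names, not the authors' architecture.

        from enum import Enum, auto

        # Minimal sketch of driver-controlled resumption (hypothetical event names,
        # not the authors' architecture): after an interruption, delivery resumes
        # only once the driver signals readiness verbally or with a head nod.

        class State(Enum):
            SPEAKING = auto()
            INTERRUPTED = auto()
            AWAITING_RESUME = auto()

        class SpeechManager:
            def __init__(self):
                self.state = State.SPEAKING

            def on_critical_situation(self):
                # Dangerous driving situation detected: stop speaking immediately.
                if self.state == State.SPEAKING:
                    self.state = State.INTERRUPTED

            def on_situation_normal(self):
                # Conditions are safe again, but wait for the driver's go-ahead.
                if self.state == State.INTERRUPTED:
                    self.state = State.AWAITING_RESUME

            def on_driver_cue(self, cue: str):
                # Either modality ("verbal" or "head_nod") triggers resumption.
                if self.state == State.AWAITING_RESUME and cue in ("verbal", "head_nod"):
                    self.state = State.SPEAKING

        manager = SpeechManager()
        manager.on_critical_situation()
        manager.on_situation_normal()
        manager.on_driver_cue("head_nod")
        assert manager.state == State.SPEAKING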

    Silence, Please! Interrupting In-Car Phone Conversations

    Holding phone conversations while driving is dangerous not only because it occupies the hands, but also because it requires attention. Where driver and passenger can adapt their conversational behavior to the demands of the situation and, e.g., interrupt themselves when more attention is needed, an interlocutor on the phone cannot adjust as easily. We present a dialogue assistant which acts as 'bystander' in phone conversations between a driver and an interlocutor, interrupting them and temporarily cutting the line during potentially dangerous situations. The assistant also informs both conversation partners when the line has been cut, as well as when it has been reestablished. We show that this intervention improves drivers' performance in a standard driving task.
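
    The "bystander" behaviour can be sketched in a few lines: on a dangerous situation the assistant announces the interruption to both parties and mutes the line; when the situation is over it reopens the line and announces that as well. The following is a minimal illustration with hypothetical line and notifier interfaces, not the authors' implementation.

        # Minimal sketch of the "bystander" behaviour (hypothetical line and
        # notifier interfaces, not the authors' implementation): cut the line
        # during a dangerous situation, tell both parties, and reopen it later.

        class PhoneBystander:
            def __init__(self, line, notifier):
                self.line = line          # provides mute() / unmute()
                self.notifier = notifier  # provides announce(text) to both parties
                self.line_cut = False

            def on_dangerous_situation(self):
                if not self.line_cut:
                    self.notifier.announce("The call is paused for a moment, please hold.")
                    self.line.mute()
                    self.line_cut = True

            def on_situation_over(self):
                if self.line_cut:
                    self.line.unmute()
                    self.notifier.announce("The line is open again, you can continue.")
                    self.line_cut = False

        class DummyLine:
            def mute(self):
                print("[line] muted")

            def unmute(self):
                print("[line] reopened")

        class DummyNotifier:
            def announce(self, text):
                print(f"[announcement to both parties] {text}")

        assistant = PhoneBystander(DummyLine(), DummyNotifier())
        assistant.on_dangerous_situation()
        assistant.on_situation_over()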

    Silence, Please! Interrupting In-Car Phone Conversations

    Lopez Gambino MS, Kennington C, Schlangen D. Silence, Please! Interrupting In-Car Phone Conversations. In: Cafaro A, Coutinho E, Gebhard P, Potard B, eds. Proceedings of the First Workshop on Conversational Interruptions in Human-Agent Interactions (CIHAI 2017). CEUR Workshop Proceedings. Vol 1943. 2017: 9-18.

    Investigating Fluidity for Human-Robot Interaction with Real-Time, Real-World Grounding Strategies

    Hough J, Schlangen D. Investigating Fluidity for Human-Robot Interaction with Real-Time, Real-World Grounding Strategies. In: Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue. 2016.

    Interactive Hesitation Synthesis: Modelling and Evaluation

    Betz S, Carlmeyer B, Wagner P, Wrede B. Interactive Hesitation Synthesis: Modelling and Evaluation. Multimodal Technologies and Interaction. 2018;2(1): 9.

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

    Full text: https://deepblue.lib.umich.edu/bitstream/2027.42/153959/1/From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI.pdf

    Hesitations in Spoken Dialogue Systems

    Betz S. Hesitations in Spoken Dialogue Systems. Bielefeld: Universität Bielefeld; 2020.

    Incrementally resolving references in order to identify visually present objects in a situated dialogue setting

    Kennington C. Incrementally resolving references in order to identify visually present objects in a situated dialogue setting. Bielefeld: Universität Bielefeld; 2016.

    The primary concern of this thesis is to model the resolution of spoken referring expressions made in order to identify objects; in particular, everyday objects that can be perceived visually and distinctly from other objects. The practical goal of such a model is for it to be implemented as a component for use in a live, interactive, autonomous spoken dialogue system. The requirement of interaction imposes an added complication, one that has been ignored in previous models and approaches to automatic reference resolution: the model must attempt to resolve the reference incrementally as it unfolds, rather than waiting until the end of the referring expression to begin the resolution process. Beyond components in dialogue systems, reference has been a major player in the philosophy of meaning for more than a century. For example, Gottlob Frege (1892) distinguished between Sinn (sense) and Bedeutung (reference), and discussed how they are related and how they relate to the meaning of words and expressions. It has furthermore been argued (e.g., Dahlgren (1976)) that reference to entities in the actual world is not just a fundamental notion of semantic theory, but the fundamental notion; for an individual acquiring a language, understanding the meaning of many words and concepts is achieved via the task of reference, beginning in early childhood. In this thesis, we pursue an account of word meaning that is based on perception of objects; for example, the meaning of the word red is based on visual features that are selected as distinguishing red objects from non-red ones.

    This thesis proposes two statistical models of incremental reference resolution. Given examples of referring expressions and visual aspects of the objects to which those expressions referred, both model components learn a functional mapping between the words of the referring expressions and the visual aspects. A generative model, the simple incremental update model, presented in Chapter 5, uses a mediating variable to learn the mapping, whereas a discriminative model, the words-as-classifiers model, presented in Chapter 6, learns the mapping directly and improves over the generative model. Both models have been evaluated in various reference resolution tasks involving objects in virtual scenes as well as real, tangible objects. This thesis shows that both models work robustly and are able to resolve referring expressions made in reference to visually present objects despite realistic, noisy conditions of speech and object recognition. A theoretical and practical comparison is also provided. Special emphasis is given to the discriminative model in this thesis because of its simplicity and ability to represent word meanings. The learning and application of this model lend credence to the above claim that reference is the fundamental notion for semantic theory, and that the meanings of (visual) words are learned through experiencing referring expressions made to objects that are visually perceivable.
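
    The words-as-classifiers model described in this abstract lends itself to a compact illustration. The sketch below is not the thesis implementation; it is a toy version (hand-made RGB features and scikit-learn logistic regression as assumed stand-ins) in which each word gets its own binary classifier over an object's visual features, and a referring expression is resolved incrementally by combining, word by word, the classifier scores over all candidate objects.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy illustration of the words-as-classifiers idea (not the thesis code):
        # each word gets a binary classifier over simple visual features, and a
        # referring expression is resolved incrementally by combining the per-word
        # scores for every candidate object as the words come in.

        # Hand-made visual features: normalised (red, green, blue) values per object.
        train_features = np.array([
            [0.9, 0.1, 0.1],  # a red object
            [0.8, 0.2, 0.1],  # a red object
            [0.1, 0.9, 0.2],  # a green object
            [0.2, 0.8, 0.1],  # a green object
        ])

        # Positive/negative labels per word, as would be derived from observed
        # referring expressions paired with the objects they referred to.
        word_labels = {
            "red":   np.array([1, 1, 0, 0]),
            "green": np.array([0, 0, 1, 1]),
        }

        classifiers = {
            word: LogisticRegression().fit(train_features, labels)
            for word, labels in word_labels.items()
        }

        def resolve_incrementally(words, candidates):
            """Yield the current best candidate index after each incoming word."""
            scores = np.ones(len(candidates))
            for word in words:
                if word in classifiers:
                    # Probability that each candidate fits the word.
                    fit = classifiers[word].predict_proba(candidates)[:, 1]
                    scores *= fit  # accumulate evidence across words
                yield int(np.argmax(scores))

        scene = np.array([
            [0.85, 0.15, 0.10],  # object 0: reddish
            [0.15, 0.85, 0.10],  # object 1: greenish
        ])
        for step, best in enumerate(resolve_incrementally(["the", "green", "one"], scene), 1):
            print(f"after word {step}: best candidate is object {best}")

    In practice the classifiers would be trained on many observed referring expressions and on features produced by computer vision rather than hand-coded colour values, but the incremental, word-by-word combination step works the same way.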