58 research outputs found

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for in-car infotainment systems. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of infotainment. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. We consider the effects of different feedback modalities on eye-gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.

    Bimodal Feedback for In-car Mid-air Gesture Interaction

    This demonstration showcases novel multimodal feedback designs for in-car mid-air gesture interaction. It explores the potential of multimodal feedback types for mid-air gestures in cars and how these can reduce eyes-off-the-road time, thus making driving safer. We will show four different bimodal feedback combinations that provide effective information about interaction with in-car systems. These feedback techniques are visual-auditory, auditory-ambient (peripheral vision), ambient-tactile, and tactile-auditory. Users can interact with the system after a short introduction, creating an exciting opportunity to deploy these displays in cars in the future.

    What If Your Car Would Care? Exploring Use Cases For Affective Automotive User Interfaces

    In this paper we present use cases for affective user interfaces (UIs) in cars and how they are perceived by potential users in China and Germany. Emotion-aware interaction is enabled by improvements in ubiquitous sensing methods and provides potential benefits for both traffic safety and personal well-being. To promote the adoption of affective interaction at an international scale, we developed 20 mobile in-car use cases through an intercultural design approach and evaluated them with 65 drivers in Germany and China. Our data shows perceived benefits in specific areas of pragmatic quality as well as cultural differences, especially for socially interactive use cases. We also discuss general implications for future affective automotive UIs. Our results provide a perspective on cultural peculiarities and a concrete starting point for practitioners and researchers working on emotion-aware interfaces.

    Building BROOK: A multi-modal and facial video database for Human-Vehicle Interaction research

    With the growing popularity of Autonomous Vehicles, more opportunities have bloomed in the context of Human-Vehicle Interactions. However, the lack of comprehensive and concrete database support for such a specific use case limits relevant studies across the whole design space. In this paper, we present our work-in-progress BROOK, a public multi-modal database with facial video records, which could be used to characterise drivers' affective states and driving styles. We first explain in detail how we over-engineered the database and what we gained through a ten-month study. Then we showcase a Neural Network-based predictor, built on BROOK, which supports multi-modal prediction (including physiological data such as heart rate and skin conductance, and driving status data such as speed) from facial videos. Finally, we discuss issues encountered when building such a database and our future directions in the context of BROOK. We believe BROOK is an essential building block for future Human-Vehicle Interaction research. More details and updates about the BROOK project are available at https://unnc-idl-ucc.github.io/BROOK/
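
    The abstract describes a neural predictor that regresses heart rate, skin conductance and vehicle speed from facial video, but gives no architecture details. The following is a minimal sketch under the assumption of a standard convolutional encoder over facial frames with a three-output regression head; all layer sizes, names and the frame resolution are hypothetical, not taken from the BROOK paper.

```python
# Minimal sketch of a facial-video-to-multimodal-signal regressor (hypothetical).
# The BROOK abstract does not specify layers, input size or training details;
# everything below is illustrative only.
import torch
import torch.nn as nn

class FacialSignalPredictor(nn.Module):
    def __init__(self, n_outputs: int = 3):  # heart rate, skin conductance, speed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_outputs)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) facial crops from the in-car camera
        features = self.encoder(frames).flatten(1)
        return self.head(features)  # (batch, 3): [heart_rate, skin_conductance, speed]

# Usage with dummy data in place of real facial frames
model = FacialSignalPredictor()
dummy_frames = torch.randn(8, 3, 112, 112)
print(model(dummy_frames).shape)  # torch.Size([8, 3])
```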

    Designing for Collaborative Non-Driving-Related Activities in Future Cars: Fairness and Team Performance

    With the gradual transition towards assisted and automated driving, the car will transform into a more social environment where passengers and drivers engage in Non-Driving-Related Activities (NDRA). To support collaboration among occupants in future vehicles, research suggests interactive systems controlled by several users at once. In this paper, we explore five concepts for the collaborative performance of NDRA, using music playlist creation as the use case. While prior work investigated the effect on social connectedness, we expand these insights towards team performance and fairness. Results from a mixed-subject experiment (N=27) show that the concepts have major consequences for team performance and fairness. Certain concepts can promote or hinder coordination effectiveness and, in turn, impact intra-vehicular collaboration. Our observations also indicate that fairness is key to fostering social collaboration in AVs, while it does not in itself guarantee high team performance. Subsequently, we provide recommendations to guide future designs of collaborative NDRAs in vehicles.

    HapWheel: in-car infotainment system feedback using haptic and hovering techniques

    In-car devices are growing both in complexity and capacity, integrating functionalities that used to be divided among other controls in the vehicle. These systems increasingly appear in the form of touchscreens as a cost-saving measure. Screens lack the physicality of traditional buttons or switches, requiring drivers to look away from the road to operate them. This paper presents the design and implementation of HapWheel, a system that provides the driver with haptic feedback in the steering wheel while interacting with an infotainment system, along with two studies that evaluated it. Results show that the proposed system reduced both the duration and the number of the driver's glances away from the road. HapWheel was also successful at reducing the number of mistakes during the interaction.
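
    The abstract describes wheel-mounted haptic feedback delivered while the driver's hand interacts with (or hovers over) the touchscreen, but does not publish a mechanism or API. Purely as an illustration of that idea, here is a hypothetical hover-triggered haptic loop; the sensor and actuator classes below are invented placeholders, not the HapWheel implementation.

```python
# Hypothetical sketch of a hover-triggered steering-wheel haptic loop.
# TouchscreenSensor and SteeringWheelActuator are invented placeholders;
# the HapWheel paper does not expose an API.
import time

class TouchscreenSensor:
    """Placeholder: reports which UI element (if any) the hand hovers over."""
    def hovered_element(self):
        return None  # e.g. "volume_slider" when a hand hovers over that control

class SteeringWheelActuator:
    """Placeholder: drives a vibrotactile actuator embedded in the wheel rim."""
    def pulse(self, intensity: float, duration_s: float):
        print(f"pulse intensity={intensity} duration={duration_s}s")

def feedback_loop(sensor: TouchscreenSensor, actuator: SteeringWheelActuator, poll_hz: float = 30.0):
    """Pulse the wheel whenever the hovered element changes, so the driver can
    feel target boundaries without looking at the screen."""
    previous = None
    while True:
        current = sensor.hovered_element()
        if current is not None and current != previous:
            actuator.pulse(intensity=0.6, duration_s=0.05)
        previous = current
        time.sleep(1.0 / poll_hz)
```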

    A comparative study of speculative retrieval for multi-modal data trails: towards user-friendly Human-Vehicle interactions

    In the era of growing developments in Autonomous Vehicles, the importance of Human-Vehicle Interaction has become apparent. However, the requirements of retrieving in-vehicle drivers' multi-modal data trails by utilizing embedded sensors have been considered user-unfriendly and impractical. Hence, speculative designs for in-vehicle multi-modal data retrieval are demanded for future personalized and intelligent Human-Vehicle Interaction. In this paper, we explore the feasibility of utilizing facial recognition techniques to build in-vehicle multi-modal data retrieval. We first perform a comprehensive user study to collect relevant data and extra trails through sensors, cameras and a questionnaire. Then, we build the whole pipeline with Convolutional Neural Networks to predict multi-modal values for three particular categories of data, namely Heart Rate, Skin Conductance and Vehicle Speed, taking solely facial expressions as input. We further evaluate and validate its effectiveness within the data set, which suggests a promising future for Speculative Designs for Multi-modal Data Retrieval through this approach.
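
    The abstract states that the pipeline is evaluated and validated within the data set but does not name the metric. A minimal sketch of one plausible evaluation, assuming mean absolute error per predicted channel, follows; the metric choice, channel names and dummy data are illustrative assumptions, not results from the paper.

```python
# Hypothetical per-channel evaluation of a facial-expression-based predictor.
# Mean absolute error (MAE) is used here purely as an illustrative choice;
# the paper does not state its evaluation metric.
import torch

CHANNELS = ["heart_rate", "skin_conductance", "vehicle_speed"]

def per_channel_mae(predictions: torch.Tensor, targets: torch.Tensor) -> dict:
    """predictions, targets: (n_samples, 3) tensors aligned with CHANNELS."""
    errors = (predictions - targets).abs().mean(dim=0)
    return {name: errors[i].item() for i, name in enumerate(CHANNELS)}

# Usage with dummy data standing in for real recordings and model outputs
preds = torch.randn(100, 3)
truth = torch.randn(100, 3)
print(per_channel_mae(preds, truth))
```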

    May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars

    The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback to the gesturing hand on lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT compared to visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT while compromising neither driving performance nor mental demand, and can thus increase safety while driving.

    Move, hold and touch: A framework for Tangible gesture interactive systems

    Technology is spreading in our everyday world, and digital interaction beyond the screen, with real objects, allows us to take advantage of our natural manipulative and communicative skills. Tangible gesture interaction builds on these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines and discuss the descriptive, evaluative and generative power of TGIF.
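
    The move/hold/touch classification can be read as a small taxonomy of gesture-object relationships. As a minimal sketch of how one might encode it in code, the types below are illustrative inventions; the paper defines the framework conceptually rather than as software.

```python
# Illustrative encoding of the TGIF move/hold/touch gesture-object relations.
# Class and field names are hypothetical, not taken from the paper.
from dataclasses import dataclass
from enum import Enum, auto

class ObjectRelation(Enum):
    MOVE = auto()   # the object itself is moved to perform the gesture
    HOLD = auto()   # the object is held while the hand or arm gestures
    TOUCH = auto()  # the object is touched (tapped, stroked) as the gesture

@dataclass
class TangibleGesture:
    name: str
    relations: set[ObjectRelation]  # a gesture may combine several relations

# Usage example: rotating a token that is held in the hand
example = TangibleGesture(name="rotate_token",
                          relations={ObjectRelation.HOLD, ObjectRelation.MOVE})
print(example)
```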