32 research outputs found

    Autojammin' - Designing progression in traffic and music

    Since the early days of automotive entertainment, music has played a crucial role in establishing pleasurable driving experiences. Future autonomous driving technologies will relieve the driver of the responsibility of driving and will allow for more interactive types of non-driving activities. However, there is a lack of research on how liberation from the driving task will affect in-car music experiences. In this paper we present AutoJam, an interactive music application designed to explore the potential of (semi-)autonomous driving. We describe how the AutoJam prototype capitalizes on the context of the driving situation as structural features of the interactive music system. We report on a simulator pilot study and discuss participants' driving experience with AutoJam in traffic. By proposing design implications that help to reconnect music entertainment with the driving experience of the future, we contribute to the design space for autonomous driving experiences.
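    As a rough illustration of using the driving situation as structural features of an interactive music system, the Python sketch below maps a few context variables onto musical parameters. The variables, mapping rules, and parameter names are assumptions for illustration only and are not taken from the AutoJam paper.

```python
# Illustrative sketch: deriving musical structure from driving context.
# All variables, thresholds, and parameter names here are assumptions
# for illustration; they are not taken from the AutoJam paper.

from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_kmh: float         # current vehicle speed
    traffic_density: float   # 0.0 (empty road) .. 1.0 (congested)
    is_autonomous: bool      # True while the vehicle drives itself

def music_parameters(ctx: DrivingContext) -> dict:
    """Map the driving situation onto coarse musical parameters."""
    tempo = min(70 + ctx.speed_kmh * 0.5, 140)     # faster driving, faster tempo
    layers = 1 + round(ctx.traffic_density * 3)    # denser traffic, denser texture
    return {
        "tempo_bpm": tempo,
        "layers": layers,
        "interactive": ctx.is_autonomous,          # hands-on jamming only when autonomous
    }

print(music_parameters(DrivingContext(speed_kmh=100, traffic_density=0.4, is_autonomous=True)))
```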

    HapWheel: in-car infotainment system feedback using haptic and hovering techniques

    In-car devices are growing in both complexity and capacity, integrating functionalities that used to be divided among other controls in the vehicle. These systems increasingly appear in the form of touchscreens as a cost-saving measure. Screens lack the physicality of traditional buttons or switches, requiring drivers to look away from the road to operate them. This paper presents the design, implementation, and two studies that evaluated HapWheel, a system that provides the driver with haptic feedback in the steering wheel while interacting with an infotainment system. Results show that the proposed system reduced both the duration and the frequency of drivers' glances away from the road. HapWheel was also successful at reducing the number of mistakes during the interaction.
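    The following Python sketch illustrates the general hover-plus-haptics idea: hovering over an infotainment widget plays a distinctive vibration pattern through the steering wheel, so the target can be confirmed without looking down. The widget names, patterns, and actuator callback are assumptions for illustration, not the HapWheel implementation.

```python
# Illustrative sketch of hover-plus-haptics feedback. Widget names,
# patterns, and the actuator callback are assumptions, not HapWheel code.

import time

WIDGET_PATTERNS = {
    "volume": [0.1, 0.1],         # two short pulses (durations in seconds)
    "navigation": [0.3],          # one long pulse
    "phone": [0.1, 0.1, 0.1],     # three short pulses
}

def on_hover(widget: str, play_pulse) -> None:
    """Play the haptic pattern for the hovered widget on the wheel actuators."""
    for duration in WIDGET_PATTERNS.get(widget, []):
        play_pulse(duration)
        time.sleep(0.05)          # short gap so pulses stay distinguishable

# Usage with a stand-in actuator that just logs:
on_hover("phone", lambda d: print(f"vibrate {d:.1f}s"))
```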

    Building BROOK: A multi-modal and facial video database for Human-Vehicle Interaction research

    With the growing popularity of Autonomous Vehicles, more opportunities have bloomed in the context of Human-Vehicle Interactions. However, the lack of comprehensive and concrete database support for this specific use case limits relevant studies across the whole design space. In this paper, we present our work-in-progress BROOK, a public multi-modal database with facial video records, which can be used to characterise drivers' affective states and driving styles. We first explain in detail how we over-engineered the database and what we gained through a ten-month study. Then we showcase a Neural Network-based predictor, leveraging BROOK, which supports multi-modal prediction (including physiological data of heart rate and skin conductance, and driving status data of speed) through facial videos. Finally, we discuss related issues encountered when building such a database and our future directions in the context of BROOK. We believe BROOK is an essential building block for future Human-Vehicle Interaction research. More details and updates about the BROOK project are online at https://unnc-idl-ucc.github.io/BROOK/
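    A minimal PyTorch sketch of such a facial-video-to-signals predictor is shown below. The architecture, input resolution, and layer widths are illustrative assumptions; the abstract does not specify the BROOK model.

```python
# Minimal sketch of a facial-video-to-signals regressor in the spirit of
# the predictor described above; the architecture is an assumption.

import torch
import torch.nn as nn

class FacialSignalRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Three regression targets: heart rate, skin conductance, vehicle speed.
        self.head = nn.Linear(32, 3)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames).flatten(1))

model = FacialSignalRegressor()
batch = torch.randn(4, 3, 64, 64)   # four 64x64 RGB face crops
print(model(batch).shape)           # torch.Size([4, 3])
```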

    Evaluation of Haptic Patterns on a Steering Wheel

    Infotainment systems can increase mental workload and divert visual attention away from the road ahead. When these systems present information to the driver, providing it through the tactile channel on the steering wheel might improve driving behaviour and safety. This paper describes an investigation into the perceivability of haptic feedback patterns using an actuated surface on a steering wheel. Six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. At most four of the six solenoids were actuated simultaneously, resulting in 56 patterns to test. Participants were asked to keep to the middle of the road in the driving simulator as well as possible. Overall recognition accuracy of the haptic patterns was 81.3%, and the identification rate increased as the number of active solenoids decreased (up to 92.2% for a single solenoid). There was no significant increase in lane deviation or steering angle during haptic pattern presentation. These results suggest that drivers can reliably distinguish between cutaneous patterns presented on the steering wheel. Our findings can assist in delivering non-critical messages to the driver (e.g. driving performance, incoming text messages, etc.) without decreasing driving performance or increasing perceived mental workload.
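    The 56-pattern figure follows from simple combinatorics: with one to four of the six solenoids active at a time, there are C(6,1) + C(6,2) + C(6,3) + C(6,4) = 6 + 15 + 20 + 15 = 56 distinct patterns. A one-line check in Python:

```python
# Verify the 56-pattern count: 1 to 4 active solenoids out of 6.
from math import comb

print(sum(comb(6, k) for k in range(1, 5)))  # 6 + 15 + 20 + 15 = 56
```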

    A comparative study of speculative retrieval for multi-modal data trails: towards user-friendly Human-Vehicle interactions

    In the era of growing developments in Autonomous Vehicles, the importance of Human-Vehicle Interaction has become apparent. However, retrieving in-vehicle drivers' multi-modal data trails through embedded sensors has been considered user-unfriendly and impractical. Hence, speculative designs for in-vehicle multi-modal data retrieval are needed for future personalized and intelligent Human-Vehicle Interaction. In this paper, we explore the feasibility of utilizing facial recognition techniques to build in-vehicle multi-modal data retrieval. We first perform a comprehensive user study to collect relevant data trails through sensors, cameras and questionnaires. Then, we build the whole pipeline using Convolutional Neural Networks to predict multi-modal values of three particular categories of data (Heart Rate, Skin Conductance and Vehicle Speed), taking only facial expressions as input. We further evaluate and validate its effectiveness within the data set, which suggests a promising future for speculative designs for multi-modal data retrieval through this approach.
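    The "evaluate and validate its effectiveness within the data set" step could look like the Python sketch below, which scores a multi-signal regressor with a per-signal mean absolute error. The data, metric choice, and stand-in predictions are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative within-dataset evaluation for a multi-signal regressor:
# compare predicted heart rate, skin conductance, and vehicle speed
# against ground truth. The data and metric choice are assumptions.

import numpy as np

SIGNALS = ["heart_rate", "skin_conductance", "vehicle_speed"]

def mae_per_signal(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """y_true, y_pred: shape (n_samples, 3), one column per signal."""
    return dict(zip(SIGNALS, np.abs(y_true - y_pred).mean(axis=0).round(3)))

rng = np.random.default_rng(0)
y_true = rng.normal(size=(100, 3))
y_pred = y_true + rng.normal(scale=0.1, size=(100, 3))  # stand-in predictions
print(mae_per_signal(y_true, y_pred))
```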

    Origo Steering Wheel: Improving Tactile Feedback for Steering Wheel IVIS Interaction using Embedded Haptic Wave Guides and Constructive Wave Interference

    The automotive industry is evolving through "Electrification", "Autonomous Driving Systems", and "Ride Sharing", and all three vectors of change are taking place in the same timeframe. One of the key challenges during this transition will be presenting critical information, collected through additional onboard systems, to the driver and passengers, enhancing multimodal in-vehicle interaction. In this research, the authors suggest creating embedded tactile-feedback zones on the steering wheel itself, which can be used to relay haptic signals to the driver with little to no visual demand. Using "Haptic Mediation" techniques such as 3D-printed Embedded Haptic Waveguides (EHWs) and Constructive Wave Interference (CWI), the authors were able to provide reliable tactile feedback in normal driving environments. Signal analysis shows that EHWs and CWI can reduce haptic signal distortion and attenuation in noisy environments, and during user testing this technique yielded better driving performance and required lower cognitive load while completing common IVIS tasks.
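    Constructive wave interference itself is easy to demonstrate numerically: two in-phase waves of amplitude A sum to amplitude 2A, while a half-period phase shift cancels them. In the sketch below, the 250 Hz frequency is an assumed vibrotactile value, not a parameter from the Origo hardware.

```python
# Constructive vs. destructive interference of two identical sine waves.
# The 250 Hz frequency is an assumed vibrotactile value, not Origo's.

import numpy as np

t = np.linspace(0, 0.01, 1000)              # 10 ms window
wave = np.sin(2 * np.pi * 250 * t)          # 250 Hz component

in_phase = wave + wave                                       # constructive
out_of_phase = wave + np.sin(2 * np.pi * 250 * t + np.pi)    # destructive

print(f"peak in-phase:     {in_phase.max():.2f}")              # ~2.00
print(f"peak out-of-phase: {np.abs(out_of_phase).max():.2e}")  # ~0
```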

    Is Users’ Trust during Automated Driving Different When Using an Ambient Light HMI, Compared to an Auditory HMI?

    The aim of this study was to compare the success of two different Human-Machine Interfaces (HMIs) in attracting drivers' attention when they were engaged in a Non-Driving-Related Task (NDRT) during SAE Level 3 driving. We also assessed the effect of each on drivers' perceived safety and trust. A driving simulator experiment was used to investigate drivers' responses to a non-safety-critical transition of control and five cut-in events (one hard, with a deceleration of 2.4 m/s², and four subtle, with decelerations of ~1.16 m/s²) over the course of the automated drive. The experiment used two types of HMI to trigger a takeover request (TOR): a Light-band display that flashed whenever the driver needed to take over control, and an auditory warning. Results showed that drivers' levels of trust in automation were similar for both HMI conditions in all scenarios except during the hard cut-in event. Regarding the HMIs' capability to support the takeover process, the study found no differences in drivers' takeover performance or overall gaze distribution. However, with the Light-band HMI, drivers were more likely to focus their attention on the road centre first after a takeover request. Although a high proportion of glances towards the dashboard was seen for both HMIs during the takeover process, ambient lighting signals may be valuable for conveying automation status and takeover messages, helping drivers direct their visual attention to the most suitable area after a takeover, such as the forward roadway.

    Improving Connectedness between Drivers by Digital Augmentation
