19 research outputs found

    Autojammin' - Designing progression in traffic and music

    Since the early days of automotive entertainment, music has played a crucial role in establishing pleasurable driving experiences. Future autonomous driving technologies will relieve the driver of the responsibility of driving and allow for more interactive kinds of non-driving activities. However, there is a lack of research on how this liberation from the driving task will affect in-car music experiences. In this paper we present AutoJam, an interactive music application designed to explore the potential of (semi-)autonomous driving. We describe how the AutoJam prototype capitalizes on the context of the driving situation as structural features of the interactive music system. We report on a pilot study in a driving simulator and discuss participants' driving experience with AutoJam in traffic. By proposing design implications that help to reconnect music entertainment with the driving experience of the future, we contribute to the design space for autonomous driving experiences.
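
    As an illustration of how driving context might serve as structural features of an interactive music system, the following Python sketch maps a speed signal and a traffic-density signal to tempo and intensity parameters. The signal names, ranges, and linear mappings are hypothetical assumptions for illustration, not AutoJam's actual design.

    # Illustrative sketch only: maps assumed driving-context signals to
    # musical parameters. Names, ranges, and mappings are hypothetical and
    # do not reflect AutoJam's actual implementation.

    def clamp(value, low, high):
        """Restrict a value to the range [low, high]."""
        return max(low, min(high, value))

    def music_parameters(speed_kmh, traffic_density):
        """Derive tempo (BPM) and intensity (0..1) from driving context."""
        # Faster driving -> faster tempo, between 70 and 140 BPM.
        tempo = 70 + 70 * clamp(speed_kmh / 130.0, 0.0, 1.0)
        # Denser traffic -> more intense arrangement (more layers, percussion).
        intensity = clamp(traffic_density, 0.0, 1.0)
        return {"tempo_bpm": round(tempo), "intensity": intensity}

    # Example: moderate urban speed in dense traffic.
    print(music_parameters(speed_kmh=45, traffic_density=0.8))
    # -> {'tempo_bpm': 94, 'intensity': 0.8}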

    Building BROOK: A multi-modal and facial video database for Human-Vehicle Interaction research

    With the growing popularity of Autonomous Vehicles, more opportunities have emerged in the context of Human-Vehicle Interactions. However, the lack of comprehensive and concrete database support for this specific use case limits relevant studies across the whole design space. In this paper, we present our work-in-progress BROOK, a public multi-modal database with facial video records, which can be used to characterise drivers' affective states and driving styles. We first explain in detail how we engineered the database and what we gained through a ten-month study. We then showcase a Neural Network-based predictor, built on BROOK, which supports multi-modal prediction of physiological data (heart rate and skin conductance) and driving status data (speed) from facial videos alone. Finally, we discuss issues that arose while building the database and our future directions for BROOK. We believe BROOK is an essential building block for future Human-Vehicle Interaction research. More details and updates on the BROOK project are available at https://unnc-idl-ucc.github.io/BROOK/
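
    A minimal sketch of what such a facial-video predictor could look like, assuming single face frames as input and three regression targets (heart rate, skin conductance, vehicle speed). The PyTorch architecture, input resolution, and layer sizes below are illustrative assumptions, not the network described in the paper.

    # Hypothetical sketch of a multi-modal predictor in the spirit of the
    # paper: a small CNN that regresses heart rate, skin conductance, and
    # vehicle speed from a single face frame. Shapes are assumptions.
    import torch
    import torch.nn as nn

    class FacialStatePredictor(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared convolutional feature extractor over 64x64 RGB face crops.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
                nn.Flatten(),
            )
            # One linear head producing all three regression targets:
            # heart rate, skin conductance, vehicle speed.
            self.head = nn.Linear(32 * 16 * 16, 3)

        def forward(self, frames):
            return self.head(self.features(frames))

    model = FacialStatePredictor()
    batch = torch.randn(8, 3, 64, 64)     # a batch of 8 face crops
    hr_sc_speed = model(batch)            # shape (8, 3)
    print(hr_sc_speed.shape)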

    Origo Steering Wheel: Improving Tactile Feedback for Steering Wheel IVIS Interaction using Embedded Haptic Wave Guides and Constructive Wave Interference

    The automotive industry is evolving through “Electrification”, “Autonomous Driving Systems”, and “Ride Sharing”, and all three vectors of change are unfolding in the same timeframe. One of the key challenges during this transition will be presenting critical information, collected through additional onboard systems, to the driver and passengers, enhancing multimodal in-vehicle interaction. In this research, the authors propose creating embedded tactile-feedback zones on the steering wheel itself, which can be used to relay haptic signals to the driver with little to no visual demand. Using “haptic mediation” techniques, namely 3D-printed Embedded Haptic Waveguides (EHWs) and Constructive Wave Interference (CWI), the authors were able to provide reliable tactile feedback in normal driving environments. Signal analysis shows that EHWs and CWI reduce haptic signal distortion and attenuation in noisy environments, and in user testing this technique yielded better driving performance and lower cognitive load while completing common IVIS tasks.
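
    To illustrate the general principle behind Constructive Wave Interference, the NumPy sketch below delays one of two identical vibration waveforms so that both arrive in phase at a target point, roughly doubling the local amplitude. The wave speed, frequency, and actuator distances are invented values, not parameters of the Origo design.

    # Generic illustration of constructive wave interference (CWI): two
    # actuators emit the same sinusoid, and one is triggered early so both
    # waves arrive in phase at a target point. All physical values are
    # assumptions for demonstration only.
    import numpy as np

    wave_speed = 50.0          # assumed propagation speed in the rim (m/s)
    freq = 250.0               # assumed vibration frequency (Hz)
    d1, d2 = 0.10, 0.16        # assumed actuator-to-target distances (m)

    t = np.linspace(0, 0.02, 2000)          # 20 ms observation window
    delay1, delay2 = d1 / wave_speed, d2 / wave_speed

    # Trigger actuator 1 early so its wave arrives with actuator 2's wave.
    trigger_offset = delay2 - delay1

    wave1 = np.sin(2 * np.pi * freq * (t - delay1 - trigger_offset))
    wave2 = np.sin(2 * np.pi * freq * (t - delay2))

    combined = wave1 + wave2
    print(f"peak single-source amplitude: {np.max(np.abs(wave1)):.2f}")
    print(f"peak combined amplitude:      {np.max(np.abs(combined)):.2f}")  # ~2x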

    A comparative study of speculative retrieval for multi-modal data trails: towards user-friendly Human-Vehicle interactions

    In the era of growing developments in Autonomous Vehicles, the importance of Human-Vehicle Interaction has become apparent. However, retrieving in-vehicle drivers' multi-modal data trails through embedded sensors has been considered user-unfriendly and impractical. Hence, speculative designs for in-vehicle multi-modal data retrieval are in demand for future personalized and intelligent Human-Vehicle Interaction. In this paper, we explore the feasibility of using facial recognition techniques to build in-vehicle multi-modal data retrieval. We first perform a comprehensive user study to collect relevant data and extra trails through sensors, cameras and questionnaires. Then, we build the whole pipeline with Convolutional Neural Networks to predict multi-modal values of three particular categories of data, namely Heart Rate, Skin Conductance and Vehicle Speed, taking only facial expressions as input. We further evaluate and validate its effectiveness within the data set, which suggests a promising future for speculative designs for multi-modal data retrieval through this approach.
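
    As a hedged sketch of what such within-dataset validation could look like, the fragment below compares hypothetical per-sample predictions against ground truth and reports a per-target mean absolute error. The synthetic arrays and the use of scikit-learn are assumptions for illustration, not the paper's actual evaluation protocol.

    # Hypothetical within-dataset evaluation sketch: compare predictions
    # against ground-truth heart rate, skin conductance, and speed. The data
    # here is synthetic; the real study's protocol and metrics may differ.
    import numpy as np
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n = 500
    targets = {"heart_rate": 0, "skin_conductance": 1, "vehicle_speed": 2}

    # Stand-ins for ground truth and model output, shape (samples, 3 targets).
    y_true = rng.normal(loc=[75.0, 5.0, 60.0], scale=[8.0, 1.0, 15.0], size=(n, 3))
    y_pred = y_true + rng.normal(scale=[4.0, 0.5, 6.0], size=(n, 3))  # fake error

    for name, col in targets.items():
        mae = mean_absolute_error(y_true[:, col], y_pred[:, col])
        print(f"{name}: MAE = {mae:.2f}")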

    Improving Connectedness between Drivers by Digital Augmentation


    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced, and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when engaged in non-driving-related tasks. This research project expands on existing work regarding uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties under consideration of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety.

    In a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, in order to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework. Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces monitoring effort relative to the visual display, while driving performance and trust data show that the benefits of uncertainty communication are maintained.

    Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding on this approach, an augmented reality display concept was developed, and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty.

    This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed using a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.
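
    As a minimal sketch of one such ordered visual variable, the fragment below maps an uncertainty value normalised to [0, 1] onto a colour-hue ramp from green (low) to red (high). The endpoints and the HSV encoding are illustrative choices, not the display concept evaluated in this work.

    # Illustrative uncertainty-to-hue mapping: green for low uncertainty,
    # red for high. Endpoints and colour space are assumptions only.
    import colorsys

    def uncertainty_to_rgb(uncertainty):
        """Map uncertainty in [0, 1] to an RGB triple via a hue ramp.

        0.0 -> green (hue 120 deg), 1.0 -> red (hue 0 deg).
        """
        u = min(max(uncertainty, 0.0), 1.0)
        hue = (1.0 - u) * 120.0 / 360.0     # colorsys expects hue in [0, 1]
        return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

    for u in (0.0, 0.5, 1.0):
        r, g, b = uncertainty_to_rgb(u)
        print(f"uncertainty {u:.1f} -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")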

    AutoPlay – driving pleasure in a future of autonomous driving

    Automated driving technologies promise relief from stressful or frustrating driving situations. Fully autonomous cars of the future are expected to take over the responsibilities of driving and allow the now-inactive driver to perform far more engaging non-driving activities than ever before. However, the design space of the autonomous driving situation is uniquely different from traditional driving. For example, research on advanced driving automation systems has shown that the transfer of the driving task from the driver to the system can be experienced as a loss of autonomy and competency and may result in a feeling of being at the mercy of technology. Furthermore, our relationship with our cars is not only instrumental. The car is a personal artefact, an extension of the driver's body, connoted with feelings of independence and power. The car's emancipation into an autonomous agent requires a new basis for interacting with the inactive driver to facilitate a pleasurable and meaningful driving experience. On the other hand, the relief from the driving task provides a unique opportunity for new types of activities during the piloted journey, among them new forms of in-situ entertainment and games that are grounded in the contextual specificity of the automotive, mobile situation.

    This leads to the research objectives: What types of activities can make autonomous driving pleasurable and meaningful? How should they be implemented to compensate for the constraints and drawbacks of the autonomous driving situation, but also to take advantage of the unique affordances of this new technology? To answer these questions, I designed and developed three working prototypes with the goal of envisioning future autonomous driving as a pleasurable and meaningful activity. Based on a research-through-design approach, I explored the potential of the design space of autonomous driving by systematically aligning the core interactions of the prototypes with the contextual constraints of dense urban traffic. Furthermore, I studied the impact of the three prototypes on the driving experience in a simulator setup as well as in a series of in-car user studies. This exegesis introduces the three prototypes as design artefacts and reflects on the findings of the complementary user studies. In doing so, it articulates a novel frame for understanding autonomous driving as a future design challenge for contextual activities.

    This research contributes to the increasing importance of user experience and game design in the automotive domain. As such, the contribution is threefold: (1) as design artefacts, the prototypes articulate a desired future of driving experiences in autonomous cars; (2) as a contextual design practice, the research contributes intermediate knowledge in the form of novel ideation methods and implementation strategies for non-driving activities; (3) as a conceptual frame for understanding autonomous driving, I propose three motivational affordances of autonomous driving (made tangible through the prototypes) as targets for aligning non-driving activities. The three prototypes presented in this exegesis articulate a desired, pleasurable vision of the autonomous driving of the future. As an inspirational frame, they are studied to gain experiential insights into the challenge of designing pleasurable and meaningful non-driving interactions in a future autonomous driving context.

    Data-Driven Evaluation of In-Vehicle Information Systems

    Today's In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical, not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens.

    In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: first, results from qualitative or small-scale empirical studies are often not valued in the decision-making process; second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools that help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs.

    In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics.

    In Part III, we investigate drivers' multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers' tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers' interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and spend more time looking at the center stack touchscreen. These results emphasize the importance of context-dependent distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated, data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers' glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions, based on visual analytics and statistical and computational modeling, that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
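
    As a hedged illustration of such a predict-and-explain setup, the sketch below fits a gradient-boosted regressor to synthetic interaction features and derives local and global explanations from SHAP values. The feature names, synthetic data, and model choice are invented stand-ins, not the thesis's actual model or dataset.

    # Hypothetical predict-and-explain sketch: a gradient-boosted model
    # predicts glance duration from interaction features, and SHAP values
    # explain it. Feature names and data are invented for illustration.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    n = 1000
    features = ["n_taps", "vehicle_speed", "road_curvature", "automation_level"]

    X = np.column_stack([
        rng.integers(1, 10, n),            # taps per touchscreen sequence
        rng.uniform(0, 130, n),            # vehicle speed (km/h)
        rng.uniform(0, 0.02, n),           # road curvature (1/m)
        rng.integers(0, 3, n),             # manual / partial / automated
    ])
    # Synthetic target: glance duration grows with sequence length and speed.
    y = 0.3 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.2, n)

    model = GradientBoostingRegressor().fit(X, y)

    # Local explanations per interaction; global importance via mean |SHAP|.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])
    for name, importance in zip(features, np.abs(shap_values).mean(axis=0)):
        print(f"{name}: mean |SHAP| = {importance:.3f}")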