106 research outputs found

    Why drivers are frustrated: results from a diary study and focus groups.

    Designing emotion-aware systems has become a manageable aim through recent developments in computer vision and machine learning. In the context of driver behaviour, negative emotions such as frustration have moved into the focus of major car manufacturers: recognizing and mitigating frustration could lead to safer roads in manual driving and more comfort in automated driving. While frustration recognition and general mitigation methods have been researched before, knowledge of the reasons for frustration is necessary to offer targeted solutions for frustration mitigation. To date, however, systematic investigations of the reasons for frustration behind the wheel are lacking. Therefore, in this work a combination of a diary study and user focus groups was employed to shed light on why humans become frustrated while driving. In addition, focus-group participants were asked about their usual methods of coping with frustrating situations. The main reasons for frustration in driving turned out to be related to traffic, in-car causes, self-inflicted causes, and weather. Coping strategies that drivers use in everyday life include cursing, distraction by media, and thinking about something else, amongst others. This knowledge will help to design a frustration-aware system that monitors the driver’s environment according to the spectrum of frustration causes found in the research presented here.

    Involving users in Automotive HMI design: Design evaluation of an interactive simulation based on participatory design

    User-centered design (UCD) methods for human-machine interfaces (HMIs) have been key to developing safe and user-friendly interaction for years. Especially in safety-critical domains like transportation, humans need clear instructions and feedback loops to interact safely with the vehicle. With the shift towards more automation on the streets, human-machine interaction needs to be predictable to ensure safe road interaction. Understanding human behavior and prior user needs in crucial situations is significant for a multitude of complex interactions involving in-vehicle passengers, pedestrians, and other traffic participants.
While research has mostly focused on addressing user behavior and user needs, the inclusion of users has often been limited to study participants providing behavioral input or interviewees prompted for opinions. Although users lack the knowledge and experience of professional designers and experts to create a product for others on their own, unbiased insights into the future target groups’ mental models are a valuable and necessary asset. Hence, with stronger user participation and appropriate tools for users to design prototypes, the design process can involve all types of stakeholders more deeply, providing insights into their mental models and thus into user needs and expectations.
To extend current UCD practices in the development of automotive HMIs, our work introduces a user-interactive approach, based on the principles of participatory design (PD), that enables users to actively create and work within the design process. A within-subject study was conducted in which users evaluated their trust in an interaction with an automated vehicle (AV, SAE L4) and subsequently configured the corresponding HMI. The scenario focuses on the interaction between a pedestrian (the user’s point of view) deciding to cross paths with the AV, which signals its intention via a 360-degree light-band HMI on its roof.
The interactive simulation offered users hands-on options to iteratively experience, evaluate, and improve HMI elements under changeable environmental settings (i.e., weather, time of day) until they were satisfied with the result. Participation was enabled through an interface using common visual user-interface elements, i.e., sliders and buttons, giving users a range of options for configuring the HMI in real time.
A first prototype of this interactive simulation was tested for the safety-critical use case in a usability study (N = 29). Results from questionnaires and interviews show high acceptance of the interactive simulation among participants: overall usability was rated high (System Usability Scale) and frustration low (NASA-TLX raw). Moreover, the interactive simulation was rated as having above-average user experience (User Experience Questionnaire). Follow-up feedback interviews gave valuable insights for improving the simulation user interface, offering different design opportunities within the simulation, and widening the parameter space. The short design-session time shows the limit of customizability options within this study; further investigation is needed to determine an optimal range for longer evaluation and design sessions. Based on the study results, further requirements for PD simulation environments, including limits for parameter spaces in virtual environments, are derived.
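The questionnaire scores reported above come from standard instruments. As a hedged illustration (this is the standard System Usability Scale scoring procedure, not code from the study), a SUS score is computed from ten 1–5 Likert responses as follows:

```python
# Standard SUS scoring: odd-numbered items are positively worded
# (contribute response - 1), even-numbered items negatively worded
# (contribute 5 - response); the sum is scaled to 0-100.

def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive response pattern (made-up data)
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 87.5
```

Per-participant scores obtained this way are then typically averaged and compared against published SUS benchmarks.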

    Understanding the Multidimensional and Dynamic Nature of Facial Expressions based on Indicators for Appraisal Components as Basis for Measuring Drivers’ Fear

    Facial expressions are one of the commonly used implicit measurements for in-vehicle affective computing. However, the time courses and underlying mechanisms of facial expressions have so far received little attention. According to the Component Process Model of emotions, facial expressions are the result of an individual’s appraisals, which are assumed to happen in sequence. Therefore, a multidimensional and dynamic analysis of drivers’ fear using facial expression data could profit from a consideration of these appraisals. A driving simulator experiment with 37 participants was conducted, in which fear and relaxation were induced. The facial expression indicators of the high-novelty and low-power appraisals were significantly activated after a fear event (high novelty: Z = 2.80, p < .01, r = 0.46; low power: Z = 2.43, p < .05, r = 0.50). Furthermore, after the fear event, the activation of high novelty occurred earlier than that of low power. These results suggest that multidimensional analysis of facial expressions is a suitable approach for the in-vehicle measurement of drivers’ emotions. Furthermore, a dynamic analysis of drivers’ facial expressions that considers the effects of appraisal components can add valuable information for the in-vehicle assessment of emotions.
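The reported effect sizes follow the common convention r = Z / √N for rank-based tests such as the Wilcoxon signed-rank test. A minimal sketch (assuming this convention, which reproduces the reported high-novelty value for N = 37):

```python
import math

def wilcoxon_r(z, n):
    """Effect size for a Wilcoxon signed-rank test: r = Z / sqrt(N)."""
    return z / math.sqrt(n)

# High-novelty indicator from the study: Z = 2.80 with N = 37 participants
print(round(wilcoxon_r(2.80, 37), 2))  # 0.46
```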

    Enhanced response to music in pregnancy

    Given a possible effect of estrogen on the pleasure-mediating dopaminergic system, musical appreciation was investigated in participants whose estrogen levels are naturally elevated during the oral contraceptive cycle and pregnancy (n = 32; 15 pregnant, 17 nonpregnant; mean age 27.2). Results show more pronounced blood pressure responses to music in pregnant women. However, estrogen level differences during different phases of oral contraceptive intake did not have any effect, indicating that the observed changes were not related to estrogen. Effects of music on blood pressure were independent of valence, and dissonance elicited the greatest drop in blood pressure. Thus, the enhanced physiological response in pregnant women probably does not reflect a protective mechanism to avoid unpleasantness. Instead, this enhanced response is discussed in terms of a facilitation of prenatal conditioning to acoustical (musical) stimuli.

    Task modelling and model validation for car driving

    Task analysis is a powerful tool to model human behavior within sociotechnical systems. However, validation beyond expert judgement has received inadequate attention. The problem is exacerbated in dynamic environments, where discrete task execution stages are difficult to model. We argue that model validation should follow an iterative approach using the TASC conceptual framework presented here. Building on Gray and Boehm-Davis’ (2000) notion of interactive behavior, TASC splits behavior into the Task under consideration, Actions taken, the Situation, and the human embodied Cognition. The approach consists of a) conducting the task analysis to define a task model, b) data collection, and c) validating the task model on the levels of action, situation, and cognition. This framework is demonstrated using a lightweight task analysis of the driving task. A Cognitive Work Analysis (CWA) of the driving task was conducted, yielding five top-level goals. Subsequently, data were gathered in a driving simulator. Twenty-one participants drove on a two-lane motorway in two scenarios in random order. The "controlled" scenario consisted of vehicles showing very predictable behavior; the "realistic" scenario had medium-dense traffic behaving similarly to everyday traffic. The participants were instructed to drive according to traffic rules. Eye-tracking data were recorded. Nine participants drove the two scenarios again while being instructed to think aloud, focusing on perceptions and goals. Based on the data, we produced separate graphical representations for the TASC levels of action, situation, and cognition, representing the time course of the drive for each subject. The cognition level was split into perception (eye tracking) and goals (thinking aloud). Finally, on each level, each CWA goal was operationalized and statistically evaluated using linear mixed models. Behavior on the right lane differed markedly from behavior on the left lane, in line with the CWA goals.
Goals appeared clearly in driving actions, gaze behavior, and thinking-aloud utterances. Visual behavior shows a distinctive pattern depending on situational requirements in different phases of the drive. The TASC framework proved very useful for validating the CWA task analysis. The idea of task analysis has limitations in modelling driving because of a strong reliance on discrete states. Yet an important property of the driving task is its execution in the continuous world of time, space, and energy. Goals frequently act not as states to be achieved but as constraints on possible actions, and can be quickly altered depending on the dynamic situation. More effort should be directed towards the validation of task models. We recommend making operationalization of task models standard practice when conducting task analysis, to help the planning of evaluation studies and the assessment of generalizability of results beyond the task environment studied. To gain a better understanding of the cognition of task execution, more research into the setting of multiple goals, action selection, and situation representation in dynamic environments is highly desirable.
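Operationalizing a CWA goal on the action level could look like the following toy sketch (a hypothetical logging format and metric for illustration; the study's actual operationalizations are not reproduced here):

```python
# Hypothetical sketch: operationalize a lane-related goal as the share of
# samples a driver spends on the right lane, per scenario. Lane samples
# are assumed to be logged as 0 (right lane) or 1 (left lane).

def right_lane_share(lane_samples):
    """Fraction of logged samples spent on the right lane."""
    if not lane_samples:
        raise ValueError("no samples")
    return lane_samples.count(0) / len(lane_samples)

controlled = [0, 0, 0, 1, 0, 0, 0, 0]    # mostly right lane (made-up data)
realistic  = [0, 1, 1, 0, 1, 0, 1, 1]    # frequent overtaking (made-up data)

print(right_lane_share(controlled))  # 0.875
print(right_lane_share(realistic))   # 0.375
```

Per-participant metrics of this kind can then be entered into a linear mixed model with scenario as a fixed effect and participant as a random effect.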

    Evaluation of a User-adaptive Light-based Interior Concept for Supporting Mobile Office Work during Highly Automated Driving

    Automated driving promises that users can devote their travel time to activities like relaxing or mobile office (MO) work. We present an interior light concept for supporting MO work and evaluate it in a driving simulator study. A vehicle mock-up was equipped as an MO, including light elements for focus and ambient illumination. Based on these, an adaptive (i.e., adapting to user activities) and an adaptable (i.e., changeable by the user according to preference) light set-up were created and compared to a baseline version. Regarding user experience, the adaptive variant was rated best on hedonic aspects, while the adaptable variant scored highest on pragmatic facets. In addition, the adaptable set-up was ranked best on preference, before the adaptive and baseline versions. This suggests that adapting the interior light to non-driving-related activities improves user experience. Future studies should evaluate combinations of the adaptive and the adaptable variants tested here.

    Recognizing Frustration of Drivers From Face Video Recordings and Brain Activation Measurements With Functional Near-Infrared Spectroscopy

    Experiencing frustration while driving can harm cognitive processing, result in aggressive behavior, and hence negatively influence driving performance and traffic safety. Being able to automatically detect frustration would allow adaptive driver assistance and automation systems to react adequately to a driver’s frustration and mitigate potential negative consequences. To identify reliable and valid indicators of driver frustration, we conducted two driving simulator experiments. In the first experiment, we aimed to reveal facial expressions that indicate frustration in continuous video recordings of the driver’s face, taken while driving highly realistic simulator scenarios in which frustrated or non-frustrated emotional states were experienced. An automated analysis of facial expressions combined with multivariate logistic regression classification revealed that frustrated time intervals can be discriminated from non-frustrated ones with an accuracy of 62.0% (mean over 30 participants). A further analysis of the facial expressions revealed that frustrated drivers tend to activate muscles in the mouth region (chin raiser, lip pucker, lip pressor). In the second experiment, we measured cortical activation with almost whole-head functional near-infrared spectroscopy (fNIRS) while participants experienced frustrating and non-frustrating driving simulator scenarios. Multivariate logistic regression applied to the fNIRS measurements allowed us to discriminate between frustrated and non-frustrated driving intervals with a higher accuracy of 78.1% (mean over 12 participants). Frustrated driving intervals were indicated by increased activation in the inferior frontal, putative premotor, and occipito-temporal cortices. Our results show that facial and cortical markers of frustration can be informative for time-resolved driver state identification in complex, realistic driving situations.
The markers derived here can potentially be used as an input for future adaptive driver assistance and automation systems that detect driver frustration and adaptively react to mitigate it.
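The classification step above (multivariate logistic regression separating frustrated from non-frustrated intervals) can be sketched in plain NumPy. This is a toy illustration with synthetic one-dimensional features, not the study's data or pipeline:

```python
import numpy as np

# Toy sketch: binary classification with logistic regression trained by
# gradient descent, standing in for the frustrated / non-frustrated
# interval discrimination. Features and labels are synthetic.
X = np.array([[-3.0], [-2.0], [-1.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Append a bias column and initialize weights at zero
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.zeros(Xb.shape[1])

for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted probabilities
    w -= 0.5 * Xb.T @ (p - y) / len(y)    # gradient step on the log loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
accuracy = (pred == y).mean()
print(accuracy)  # 1.0 on this linearly separable toy data
```

On real interval data, accuracy would of course be estimated with cross-validation across held-out intervals or participants rather than on the training set.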

    Acceptance of automated Shuttles - Application and Extension of the UTAUT-2 Model to Wizard-of-Oz automated driving in real-life Traffic

    Automated shuttles can make public transport more attractive and sustainable. Still, their successful implementation requires a high level of acceptance among users. This study investigates the impact of the predictors performance expectancy, social influence, facilitating conditions, hedonic motivation, and perceived risk from the Unified Theory of Acceptance and Use of Technology 2 (UTAUT-2) on the behavioral intention to use automated shuttles. In earlier work, UTAUT-2 has already been successfully applied to study the acceptance of autonomous public transport. Here, we employed the UTAUT-2 to assess acceptance of a Wizard-of-Oz automated shuttle in real-life traffic, in a study with 35 participants, before and after a first ride and after a second ride during which two incidents occurred. The results show that behavioral intention to use automated shuttles is high even before the first ride and remains high after experiencing automated driving. Performance expectancy was the only significant predictor of behavioral intention at all measurement time points. The explanatory power of the model almost doubles from pre-ride to post-ride. The results indicate a crucial role of performance expectancy for the acceptance of automated shuttles at the current stage of implementation and provide guidance for a successful development and implementation of autonomous public transport.
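The "explanatory power" of such a predictor model is typically reported as R². A minimal sketch of computing R² for a single predictor (the scores below are made up for illustration; the study's data are not reproduced here):

```python
# Hedged sketch: R-squared for an ordinary-least-squares regression of
# behavioral intention on performance expectancy, with hypothetical scores.

def r_squared(x, y):
    """Coefficient of determination for simple linear regression y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

pe = [3, 4, 5, 2, 4]            # performance expectancy (hypothetical)
bi = [3.5, 4.5, 5.0, 2.5, 4.0]  # behavioral intention (hypothetical)
print(round(r_squared(pe, bi), 3))  # 0.961
```

Comparing this quantity between the pre-ride and post-ride models is what the "almost doubles" statement refers to.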

    An Integrated Model for User State Detection of Subjective Discomfort in Autonomous Vehicles

    The quickly rising development of autonomous vehicle technology and the increase of (semi-)autonomous vehicles on the road lead to an increased demand for more sophisticated human–machine cooperation approaches to improve trust in and acceptance of these new systems. In this work, we investigate the feeling of discomfort of human passengers during autonomous driving and the automatic detection of this discomfort with several model approaches, combining different data sources. Based on a driving simulator study, we analyzed the discomfort reports of 50 participants during autonomous inner-city driving. We found that perceived discomfort depends on the driving scenario (with discomfort generally peaking in complex situations) and on the passenger (resulting in interindividual differences in the extent and duration of reported discomfort). Further, we describe three different model approaches for predicting passenger discomfort using data from the vehicle’s sensors as well as physiological and behavioral data from the passenger. The models’ precision varies greatly across the approaches, with the best approach reaching a precision of up to 80%. All of our presented model approaches use combinations of linear models and are thus fast, transparent, and safe. Lastly, we analyzed these models using the SHAP method, which enables explaining the models’ discomfort predictions. These explanations are used to infer the importance of our collected features and to create a scenario-based discomfort analysis. Our work demonstrates a novel approach to passenger state modelling with simple, safe, and transparent models and with explainable model predictions, which can be used to adapt the vehicle’s actions to the needs of the passenger.
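For linear models, SHAP values have a closed form under feature independence: the contribution of feature i is w_i · (x_i − mean(x_i)), where the mean is taken over a background dataset. A toy sketch (hypothetical weights and data, not the study's discomfort model):

```python
import numpy as np

# Hedged sketch: SHAP values for a linear model f(x) = w @ x + b with
# independent features. The SHAP value of feature i is
# w[i] * (x[i] - mean of feature i over the background data).
w = np.array([0.8, -0.3, 0.5])            # hypothetical learned weights
b = 0.1
background = np.array([[0.2, 1.0, 0.0],
                       [0.4, 0.8, 0.2],
                       [0.0, 1.2, 0.4]])  # hypothetical reference samples
x = np.array([0.6, 0.5, 0.9])             # one passenger sample to explain

base_value = w @ background.mean(axis=0) + b
shap_values = w * (x - background.mean(axis=0))

# Efficiency property: base value plus SHAP values recover the prediction
prediction = w @ x + b
print(np.isclose(base_value + shap_values.sum(), prediction))  # True
```

This additivity is what makes the per-feature contributions directly interpretable for a scenario-based analysis of which signals drive a discomfort prediction.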