25 research outputs found

    Accessible Automated Automotive Workshop Series (A3WS): International Perspective on Inclusive External Human-Machine Interfaces

    The fact that automated vehicles will be part of road traffic raises the question of how human road users, such as bicyclists and pedestrians, can safely interact with them. Research has proposed external human-machine interfaces (eHMIs) for automated vehicles as a potential solution. Concept prototypes and evaluations so far have mainly focused on young, healthy adults without disabilities such as visual impairments. For a “one-for-all” holistic, inclusive solution, however, further target groups such as children, seniors, and people with (other) special needs will have to be considered. In this workshop, we bring together researchers, experts, and practitioners working on eHMIs to broaden our perspective on inclusiveness. We aim to identify aspects of inclusive eHMI design that are universal yet can be tailored to any culture, and will focus on discussing methods, tools, and scenarios for inclusive communication.

    The Role and Potentials of Field User Interaction Data in the Automotive UX Development Lifecycle: An Industry Perspective

    We are interested in the role of field user interaction data in the development of in-vehicle information systems (IVIS), the potentials practitioners see in analyzing this data, the concerns they share, and how this compares to companies with digital products. We conducted interviews with 14 UX professionals, 8 from automotive and 6 from digital companies, and analyzed the results using emergent thematic coding. Our key findings indicate that implicit feedback through field user interaction data is currently not evident in the automotive UX development process. Most decisions regarding the design of IVIS are made based on the personal preferences and intuitions of stakeholders. However, the interviewees also indicated that user interaction data has the potential to reduce the influence of guesswork and assumptions in the UX design process and can help make the UX development lifecycle more evidence-based and user-centered.

    Digitizing human-to-human interaction for automated vehicles


    A comparative study of speculative retrieval for multi-modal data trails: towards user-friendly Human-Vehicle interactions

    In the era of growing developments in Autonomous Vehicles, the importance of Human-Vehicle Interaction has become apparent. However, the requirements of retrieving in-vehicle drivers’ multi-modal data trails by utilizing embedded sensors have been considered user-unfriendly and impractical. Hence, speculative designs for in-vehicle multi-modal data retrieval have been called for to enable future personalized and intelligent Human-Vehicle Interaction. In this paper, we explore the feasibility of utilizing facial recognition techniques to build in-vehicle multi-modal data retrieval. We first perform a comprehensive user study to collect relevant data trails through sensors, cameras, and a questionnaire. Then, we build the whole pipeline through Convolutional Neural Networks to predict multi-modal values of three particular categories of data, namely heart rate, skin conductance, and vehicle speed, by solely taking facial expressions as input. We further evaluate and validate its effectiveness within the data set, which suggests a promising future for Speculative Designs for Multi-modal Data Retrieval through this approach.
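The pipeline described in this abstract, a network that regresses several physiological and driving signals from a face image, can be sketched as a multi-output CNN. The layer sizes, filter counts, and random weights below are illustrative assumptions for the sketch, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of a single-channel image with K kernels."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, kern in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(image[i:i+kh, j:j+kw] * kern)
    return out

def predict(face):
    """Map a face crop to three regression targets:
    heart rate, skin conductance, vehicle speed."""
    feat = np.maximum(conv2d(face, kernels), 0.0)  # conv + ReLU
    pooled = feat.mean(axis=(1, 2))                # global average pool
    return pooled @ W + b                          # dense regression head

kernels = rng.standard_normal((4, 3, 3)) * 0.1     # 4 (untrained) 3x3 filters
W = rng.standard_normal((4, 3)) * 0.1              # 4 features -> 3 targets
b = np.array([70.0, 5.0, 50.0])                    # rough target baselines

face = rng.standard_normal((32, 32))               # stand-in face crop
hr, sc, speed = predict(face)
```

In a trained version, `kernels`, `W`, and `b` would be fit by minimizing regression loss over labeled (face image, sensor reading) pairs collected in the user study; sharing one convolutional trunk across the three heads is a common design choice when the targets are correlated.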

    May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars

    The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback on the gesturing hand with respect to lane deviation, eyes-off-the-road time (EORT), and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT compared to visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT without compromising driving performance or increasing mental demand, and can thus increase safety while driving.

    A High-Fidelity VR Simulation Study: Do External Warnings Really Improve Pedestrian Safe Crossing Behavior?

    To better communicate with pedestrians, adding external displays to autonomous vehicles (AVs) has been proposed as a potential communication method to encourage safe crossing behavior by pedestrians. While most researchers have conducted intercept interviews, lab studies, or simulation studies to explore the efficacy of these displays, these approaches have studied only crossing intention, not crossing behavior. We developed a high-fidelity virtual reality scenario in which participants could demonstrate actual crossing behavior within an adequately replicated real-world street. We simulated a local street at real-world scale in a VR environment and conducted the experiment in an empty physical space large enough for participants to walk across the virtual road. A mixed-method approach assessed attitudinal and behavioral interactions with potential warning patterns. The results showed that the warning patterns contributed significantly to pedestrians’ perceptual vigilance, as in past studies, but safer crossing behavior was not observed. This suggests that crossing intention measures may not be an adequate substitute for behavioral measures of crossing.

    Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heartbeat combined with a numerical display, and users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. Additionally, eye tracking data indicate that operators can adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties using a visual display significantly increases operator workload and impedes users in the execution of non-driving related tasks.
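The display described in this abstract couples a numerical readout with a stylised heartbeat that tracks system uncertainty. A toy sketch of how a scalar uncertainty estimate might drive such a display is shown below; the mapping, pulse range, and threshold are illustrative assumptions, not the study's actual interface parameters:

```python
def uncertainty_display(uncertainty: float) -> dict:
    """Map an automation uncertainty estimate in [0, 1] to display state.

    Toy mapping (assumed, not from the study): the numerical readout shows
    a percentage, the stylised heartbeat pulses faster as uncertainty
    grows, and a takeover hint appears near the system's limits.
    """
    u = min(max(uncertainty, 0.0), 1.0)  # clamp to the valid range
    return {
        "percent": round(u * 100),       # numerical display
        "pulse_bpm": 50 + round(u * 70), # heartbeat rate: 50 bpm calm, 120 bpm critical
        "takeover_hint": u >= 0.8,       # prompt the operator near the limit
    }
```

For example, `uncertainty_display(0.0)` yields a calm 50 bpm pulse with no takeover hint, while values at or above the assumed 0.8 threshold raise the hint, matching the idea of helping operators gain situation awareness before a critical situation.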

    Interaction in Digital Ecologies with Connected and Non-Connected Cars


    Social Control Experience Design: A Cross-Domain Investigation on Media
