
    A comparison of artificial driving sounds for automated vehicles

    As automated vehicles currently do not provide sufficient feedback relating to the primary driving task, drivers have no assurance that an automated vehicle has understood and can cope with upcoming traffic situations [16]. To address this, we conducted two user evaluations to investigate auditory displays in automated vehicles using different types of sound cues related to the primary driving sounds: acceleration, deceleration/braking, gear changing and indicating. Our first study compared earcons, speech and auditory icons with existing vehicle sounds. Our findings suggested that earcons were an effective alternative to existing vehicle sounds for presenting information related to the primary driving task. Based on these findings, a second study was conducted to further investigate earcons modulated by different sonic parameters to present primary driving sounds. We discovered that earcons containing naturally mapped sonic parameters, such as pitch and timbre, were as effective as existing sounds in a simulated automated vehicle.
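
    To make the idea of a "naturally mapped" sonic parameter concrete, the sketch below (my own illustration, not the studies' implementation) synthesises an acceleration earcon whose pitch glides upward with the magnitude of acceleration; the base pitch, mapping factor and duration are assumed values.

    import numpy as np

    SAMPLE_RATE = 44_100  # Hz

    def acceleration_earcon(accel_ms2: float, duration_s: float = 0.4) -> np.ndarray:
        """Mono samples for a short earcon; pitch rises with acceleration (illustrative mapping)."""
        base_hz = 440.0                            # assumed resting pitch
        target_hz = base_hz + 60.0 * accel_ms2     # assumed pitch-per-acceleration factor
        n = int(SAMPLE_RATE * duration_s)
        freq = np.linspace(base_hz, target_hz, n)  # linear pitch glide over the cue
        phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
        envelope = np.hanning(n)                   # soft attack/decay so the cue is unobtrusive
        return 0.3 * envelope * np.sin(phase)

    samples = acceleration_earcon(accel_ms2=2.5)   # e.g. a moderate acceleration event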

    Ambient hues and audible cues: An approach to automotive user interface design using multi-modal feedback

    The use of touchscreen interfaces for in-vehicle information, entertainment, and the control of comfort settings is proliferating. However, using these interfaces requires the same visual and manual resources needed for safe driving. Guided by much of the prevalent research in the areas of the human visual system, attention, and multimodal redundancy, the Hues and Cues design paradigm was developed to make touchscreen automotive user interfaces more suitable for use while driving. This paradigm was applied to a prototype automotive user interface and evaluated with respect to driver performance using the dual-task Lane Change Test (LCT). Each level of the design paradigm was evaluated in light of possible gender differences. The results of the repeated-measures experiment suggest that, when compared to interfaces without both the Hues and the Cues applied, the Hues and Cues interface requires less mental effort to operate, is more usable, and is preferred. However, the results differ in the degradation of driver performance: interfaces with only visual feedback resulted in better task times, while interfaces with only auditory feedback showed significant gender differences in the driving task. Overall, the results show that the presentation of multimodal feedback can be useful in designing automotive interfaces, but must be flexible enough to account for individual differences.
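
    As a rough illustration of multimodal redundancy (my own sketch, not the prototype described above), the snippet below maps each interface event to both an ambient hue and an audible cue, so the same information reaches the driver through two channels; the event names, colours and cue files are assumed.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Feedback:
        hue_rgb: tuple          # ambient light colour, e.g. (0, 120, 255)
        cue: str                # identifier of a short audio cue

    # Hypothetical mapping from touchscreen events to redundant feedback.
    FEEDBACK_MAP = {
        "target_focused":  Feedback((0, 120, 255), "tick.wav"),
        "value_increased": Feedback((0, 200, 80),  "up_chirp.wav"),
        "value_decreased": Feedback((200, 120, 0), "down_chirp.wav"),
        "action_rejected": Feedback((255, 0, 0),   "buzz.wav"),
    }

    def on_touch_event(event: str, set_ambient_light, play_sound) -> None:
        """Present one event through both modalities: visual hue and audible cue."""
        fb = FEEDBACK_MAP.get(event)
        if fb is None:
            return
        set_ambient_light(fb.hue_rgb)   # redundant visual channel
        play_sound(fb.cue)              # redundant auditory channel

    # Usage: on_touch_event("value_increased", hardware.set_light, audio.play)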

    On credibility improvements for automotive navigation systems

    Automotive navigation systems are becoming ubiquitous as driver assistance systems. Vendors continuously aim to enhance route guidance by adding new features to their systems. However, we found in an analysis of current navigation systems that many share interaction weaknesses, which can damage the system’s credibility. Such issues are most prevalent when selecting a route, when deviating from the route intentionally, or when systems react to dynamic traffic warnings. In this work, we analyze the impact on credibility and propose improved interaction mechanisms to enhance the perceived credibility of navigation systems. We improve route selection and the integration of dynamic traffic warnings by optimizing route comparability with relevance-based information display. Further, we show how bidirectional communication between driver and device can be enhanced to achieve a better mapping between device behavior and driver intention. We evaluated the proposed mechanisms in a comparative user study and present results that confirm positive effects on perceived credibility.
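
    One way to read "relevance-based information display" is to show, for a set of alternative routes, only the attributes on which they differ enough to matter. The sketch below is my own interpretation of that idea; the attributes, values and thresholds are assumed, not taken from the paper.

    # Hypothetical route alternatives and relevance thresholds.
    ROUTES = [
        {"name": "A", "time_min": 42, "distance_km": 38.0, "toll_eur": 0.0, "traffic_delay_min": 2},
        {"name": "B", "time_min": 45, "distance_km": 31.5, "toll_eur": 4.5, "traffic_delay_min": 0},
    ]

    RELEVANCE_THRESHOLDS = {"time_min": 2, "distance_km": 1.0, "toll_eur": 0.5, "traffic_delay_min": 1}

    def relevant_differences(routes, thresholds):
        """Return only the attributes whose spread across routes exceeds its relevance threshold."""
        relevant = {}
        for attr, threshold in thresholds.items():
            values = [r[attr] for r in routes]
            if max(values) - min(values) >= threshold:
                relevant[attr] = values
        return relevant

    for attr, values in relevant_differences(ROUTES, RELEVANCE_THRESHOLDS).items():
        print(attr, values)   # only the attributes that make the routes comparable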

    May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars

    The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback to the gesturing hand on lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT compared to visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT whilst not compromising driving performance or mental demand, and can thus increase safety while driving.

    Enhanced Accessibility for People with Disabilities Living in Urban Areas

    [Excerpt] People with disabilities constitute a significant proportion of the poor in developing countries. If internationally agreed targets on reducing poverty are to be reached, it is critical that specific measures be taken to reduce the societal discrimination and isolation that people with disabilities continue to face. Transport is an important enabler of strategies to fight poverty through enhancing access to education, employment, and social services. This project aims to further the understanding of the mobility and access issues experienced by people with disabilities in developing countries, and to identify specific steps that can be taken to start addressing these problems. A major objective of the project is to compile a compendium of guidelines that can be used by government authorities, advocacy groups, and donor/loan agencies to improve the access of people with disabilities to transport and other services in urban areas.

    Access to Personal Transportation for People with Disabilities with Autonomous Vehicles

    The objective of this paper was to explore the potential of the emerging technology of autonomous vehicles in accessible transportation and to incorporate these findings into a standardized transportation solution that readily accommodates future travelers with disabilities, based on a careful study of current trends in accessible transportation and on interviews and surveys conducted as part of this effort. The suggested solution and the design principles associated with it took into account the opinions of people with disabilities as well as those of various experts in the field of accessible transportation. The presented solution is based on emerging technology that is being actively pursued by the automotive industry and research institutions, and that is being seriously considered, through current and pending state legislation, as a viable product in the near future. This paper explores the legal, technical and safety obstacles that lie in the path to making this a reality.

    Voice Commands to Control Recording Sessions

    In this report, the music recording workflow is described, with support for voice commands. Natural command grammars are proposed, allowing the user to name items and to issue commands on items identified by name. Recognition accuracy is examined within the contexts of single-phrase commands and of versatile command grammars that enable referring to items by name.
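
    As a loose illustration of such a grammar (my own sketch, not the report's grammar), the snippet below lets an item be given a spoken name and then addressed by that name in later commands; the phrasing and command set are assumed.

    import re

    # Hypothetical two-rule grammar: name an item, then command it by name.
    GRAMMAR = [
        (re.compile(r"^call this (?P<name>[\w ]+)$"), "name_item"),
        (re.compile(r"^(?P<verb>play|mute|delete) (?P<name>[\w ]+)$"), "item_command"),
    ]

    def interpret(utterance: str):
        """Match an utterance against the grammar; return (action, slots) or None."""
        for pattern, action in GRAMMAR:
            m = pattern.match(utterance.lower().strip())
            if m:
                return action, m.groupdict()
        return None

    print(interpret("call this lead vocal take two"))  # ('name_item', {'name': 'lead vocal take two'})
    print(interpret("mute lead vocal take two"))       # ('item_command', {'verb': 'mute', ...})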

    A user experience‐based toolset for automotive human‐machine interface technology development

    The development of new automotive Human-Machine Interface (HMI) technologies must consider the competing and often conflicting demands of commercial value, User Experience (UX) and safety. Technology innovation offers manufacturers the opportunity to gain commercial advantage in a competitive and crowded marketplace, leading to an increase in the features and functionality available to the driver. User response to technology influences the perception of the brand as a whole, so it is important that in-vehicle systems provide a high-quality user experience. However, introducing new technologies into the car can also increase accident risk. The demands of usability and UX must therefore be balanced against the requirement for driver safety. Adopting a technology-focused business strategy carries a degree of risk, as most innovations fail before they reach the market. Obtaining clear and relevant information on the UX and safety of new technologies early in their development can help to inform and support robust product development (PD) decision making, improving product outcomes. In order to achieve this, manufacturers need processes and tools to evaluate new technologies, providing customer-focused data to drive development. This work details the development of an Evaluation Toolset for automotive HMI technologies encompassing safety-related functional metrics and UX measures. The Toolset consists of four elements: an evaluation protocol, based on methods identified from the Human Factors, UX and Sensory Science literature; a fixed-base driving simulator providing a context-rich, configurable evaluation environment, supporting both hardware- and software-based technologies; a standardised simulation scenario providing a repeatable basis for technology evaluations, allowing comparisons across multiple technologies and studies; and a technology scorecard that collates and presents evaluation data to support PD decision making processes.
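
    To give a feel for what such a technology scorecard might collate (my own sketch with assumed metric names and scales, not the toolset's actual format), the snippet below bundles safety-related functional metrics and a UX measure for one technology into a single comparable summary.

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class TechnologyScorecard:
        technology: str
        lane_deviation_m: list = field(default_factory=list)   # safety-related functional metric
        glance_time_s: list = field(default_factory=list)      # safety-related functional metric
        sus_scores: list = field(default_factory=list)         # UX measure (0-100 usability scale)

        def summary(self) -> dict:
            """Collate per-trial measurements into mean values for comparison across technologies."""
            return {
                "mean_lane_deviation_m": mean(self.lane_deviation_m),
                "mean_glance_time_s": mean(self.glance_time_s),
                "mean_sus": mean(self.sus_scores),
            }

    card = TechnologyScorecard(
        technology="prototype gesture HMI",
        lane_deviation_m=[0.31, 0.28, 0.35],
        glance_time_s=[1.2, 0.9, 1.1],
        sus_scores=[72.5, 80.0, 67.5],
    )
    print(card.summary())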

    A Model to Predict Driver Task Performance When Interacting with In-Vehicle Speech Interfaces for Destination Entry and Music Selection.

    Motor vehicle crashes were estimated to be the eleventh leading cause of death in the United States in 2009. Using a speech interface to operate infotainment systems while driving can potentially reduce driver distraction. Unfortunately, evaluations of driver interfaces often come too late to make changes. An alternative approach is to model driver task performance when using speech interfaces and to use the model to predict system performance early in design, when changes are easier to make. The purposes of this research are to understand how drivers interact with speech interfaces and, based on that knowledge, to develop and validate a simulation model of how drivers interact with speech interfaces to aid speech-interface development. To develop the simulation model, a survey and a driving simulator experiment were conducted to identify how these tasks are carried out and the values for the process parameters. First, using a survey, frequency data for tasks and methods, and the content in user-generated databases, were collected to assure that real tasks and constraints are considered in the simulation model. Next, a driving simulator experiment was conducted to understand how drivers perform destination entry and music selection and to determine the time drivers need to construct utterances, the errors drivers make, and the probability that correction strategies are used for each type of error. Half of these data were used to create the simulation model structure and provide the model parameters for entering destinations and selecting music using speech. Finally, the simulation model was validated for these two tasks using the second half of the data from the previous experiment. This research provides a model to predict user task performance with speech interfaces in motor vehicles. Use of this model supports the design of safer and easier-to-use speech interfaces in vehicles that can minimize eyes-off-road time and should reduce crash risk, thereby protecting public health. This model can be exercised to examine alternative speech interface configurations months before a physical interface is available for user testing, when changes are easier to make, which saves time, reduces cost, and improves the quality of the interface produced.
    PhD, Industrial Health, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99777/1/loe_1.pd
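
    As a very rough, purely illustrative sketch of this kind of task-performance model (not the dissertation's model; all timing and error parameters below are assumed), the snippet estimates speech task completion time by simulating utterance construction, recognition errors, and a simple repeat-correction strategy.

    import random

    def simulate_task_time_s(construct_s=2.0, speak_s=1.8, confirm_s=1.0,
                             error_prob=0.15, max_attempts=3) -> float:
        """Return one simulated speech-task completion time in seconds."""
        total = 0.0
        for _ in range(max_attempts):
            total += construct_s + speak_s + confirm_s   # construct utterance, speak, confirm result
            if random.random() >= error_prob:            # recognition succeeded
                return total
            # otherwise the driver re-issues the command (a simple "repeat" correction strategy)
        return total   # fell back to another method after max_attempts

    times = [simulate_task_time_s() for _ in range(10_000)]
    print(sum(times) / len(times))   # estimated mean task time under the assumed parameters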