
    Investigating the effect of urgency and modality of pedestrian alert warnings on driver acceptance and performance

    Active safety systems have the potential to reduce the risk to pedestrians by warning the driver and/or taking evasive action to mitigate or avoid a collision. However, current systems are limited in the range of scenarios they can address using primary control interventions, which arguably places more emphasis in some situations on warning the driver so that they can take appropriate action in response to pedestrian hazards. In a counterbalanced experimental design, we varied urgency (‘when’) based on the time-to-collision (TTC) at which the warning was presented (with associated false-positive alarms, but no false negatives, or ‘misses’), and modality (‘how’) by presenting warnings using audio only and audio combined with visual alerts presented on a head-up display (HUD). Results from 24 experienced drivers, who negotiated an urban scenario during twelve 6.0-minute drives in a medium-fidelity driving simulator, showed that all warnings were generally rated ‘positively’ (using recognised subjective ‘acceptance’ scales), although acceptance was lower when warnings were delivered at the shortest (2.0 s) TTC. In addition, drivers indicated higher confidence in combined audio and visual warnings in all situations. Performance (based on safety margins associated with critical events) varied significantly between warning onset times, with drivers first fixating their gaze on the hazard, lifting their foot off the accelerator, applying the brake, and ultimately bringing the car to a stop further from the pedestrian when warnings were presented at the longest (5.0 s) TTC. In addition, drivers applied the brake further from the pedestrian when combined audio and HUD warnings were provided (compared with audio only), but only at the 5.0 s TTC. Overall, the study indicates a greater margin of safety associated with earlier warnings, with no apparent detriment to acceptance, despite relatively high false-alarm rates at longer TTCs. It also indicates that drivers feel more confident with a warning system present, especially one that incorporates auditory and visual elements, even though the visual cue does not necessarily improve hazard localisation or driving performance beyond the advantages offered by auditory alerts alone. Findings are discussed in the context of the design, evaluation and acceptance of active safety systems.
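    The TTC manipulation above lends itself to a small illustration. The abstract does not give the warning system's actual trigger logic, so the sketch below is a hypothetical one: it estimates TTC as the distance to the pedestrian divided by the closing speed, and fires a warning once TTC drops below a configurable onset threshold (the study tested onsets including 5.0 s and 2.0 s).

        # Hypothetical sketch of TTC-based warning onset; not the study's actual system.
        def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
            """TTC = distance to hazard / closing speed; infinite if not closing."""
            if closing_speed_mps <= 0.0:
                return float("inf")
            return distance_m / closing_speed_mps

        def should_warn(distance_m: float, closing_speed_mps: float,
                        onset_ttc_s: float) -> bool:
            """Fire the warning once TTC falls to or below the chosen onset."""
            return time_to_collision(distance_m, closing_speed_mps) <= onset_ttc_s

        # Pedestrian 40 m ahead, closing at 10 m/s -> TTC = 4.0 s:
        print(should_warn(40.0, 10.0, onset_ttc_s=5.0))  # True  (early, 5.0 s onset)
        print(should_warn(40.0, 10.0, onset_ttc_s=2.0))  # False (late, 2.0 s onset)

    Under this toy model, an earlier onset simply trades a longer safety margin against greater exposure to false alarms, which is the trade-off the study evaluates.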

    Isolating the Effect of Off-Road Glance Duration on Driving Performance: An Exemplar Study Comparing HDD and HUD in Different Driving Scenarios

    Objective: We controlled participants’ glance behavior while using head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to the use of different display types across different driving environments. Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers, in theory, to gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers’ visual attention and driving performance. Yet no studies have isolated glance behavior from driving behavior, which limits our ability to understand the cause of these differences and their impact on display design. Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple, straight, empty roadway environment and a more realistic driving environment that included traffic and turns. Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions, even though participants’ glance behavior was similar. Conclusion: Thus, the assumption that visual attention can be evaluated in the same way for different types of vehicle displays may be inaccurate. Differences between driving environments also call into question the validity of testing HUDs in simplistic driving environments. Application: As we move toward the integration of HUD user interfaces into vehicles, it is important that we develop new, sensitive assessment methods to ensure HUD interfaces are indeed safe for driving.

    A Perceptual Color-Matching Method for Examining Color Blending in Augmented Reality Head-Up Display Graphics

    Augmented reality (AR) offers new ways to visualize information on the go. As noted in related work, AR graphics presented via optical see-through AR displays are particularly prone to color blending, whereby intended graphic colors may be perceptually altered by real-world backgrounds, ultimately degrading usability. This work adds to that body of knowledge by presenting a methodology for assessing AR interface color robustness, measured quantitatively via shifts in the CIE color space and assessed qualitatively in terms of users’ perceived color names. We conducted a human factors study in which twelve participants examined eight AR colors atop three real-world backgrounds as viewed through an in-vehicle AR head-up display (HUD), a type of optical see-through display used to project driving-related information atop the forward-looking road scene. Participants completed visual search tasks, matched the perceived AR HUD color against the World Color Survey (WCS) color palette, and verbally named the perceived color. Our analysis suggests that blue, green, and yellow AR colors are relatively robust, while red and brown are not, and we discuss the impact of chromaticity shift and dispersion on outdoor AR interface design. While this work presents a case study in transportation, the methodology is applicable to a wide range of AR displays in many application domains and settings.
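    The chromaticity-shift measure can be made concrete with a short sketch. The abstract does not specify the exact CIE coordinates or distance metric used, so the code below assumes shifts are measured as Euclidean distance in the CIE 1976 u′v′ chromaticity plane computed from CIE XYZ tristimulus values; treat that choice as an illustrative assumption.

        # Minimal sketch; assumes shift = Euclidean distance in CIE 1976 u'v'.
        def xyz_to_uv_prime(X: float, Y: float, Z: float) -> tuple[float, float]:
            """Convert CIE XYZ tristimulus values to CIE 1976 u', v' chromaticity."""
            denom = X + 15.0 * Y + 3.0 * Z
            if denom == 0.0:
                return 0.0, 0.0
            return 4.0 * X / denom, 9.0 * Y / denom

        def chromaticity_shift(intended_xyz, perceived_xyz) -> float:
            """Distance in u'v' between the intended and perceived color."""
            u1, v1 = xyz_to_uv_prime(*intended_xyz)
            u2, v2 = xyz_to_uv_prime(*perceived_xyz)
            return ((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5

        # Example: an intended red (sRGB red under D65) blended toward the background.
        print(chromaticity_shift((41.24, 21.26, 1.93),   # intended XYZ
                                 (30.0, 25.0, 10.0)))    # hypothetical perceived XYZ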

    PATRIC, the bacterial bioinformatics database and analysis resource

    The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein-protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Data types are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10 000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, along with available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest, as well as a private workspace where they can store genomic and gene associations and their own private data. Both private and public data can be analyzed together using a suite of tools for comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All of the data and the integrated analysis and visualization tools are freely available. This manuscript describes updates to PATRIC since its initial report in the 2007 NAR Database Issue.
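    For programmatic access of the kind described above, a query against PATRIC's web services might look like the sketch below. The endpoint URL and the RQL-style filter syntax are assumptions about PATRIC's data API rather than details taken from this abstract; the authoritative interface is documented at http://www.patricbrc.org.

        # Hedged sketch: fetching genome records from PATRIC over HTTP.
        # The endpoint and query syntax below are assumptions, not confirmed
        # by the abstract; check the PATRIC documentation for the real API.
        import json
        import urllib.request

        BASE = "https://www.patricbrc.org/api/genome/"   # assumed data API endpoint
        query = "?eq(genome_name,*coli*)&limit(5)"       # assumed RQL-style filter

        req = urllib.request.Request(BASE + query,
                                     headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:
            genomes = json.load(resp)

        for g in genomes:
            print(g.get("genome_id"), g.get("genome_name"))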

    Search for gravitational-lensing signatures in the full third observing run of the LIGO-Virgo network

    Gravitational lensing by massive objects along the line of sight to the source causes distortions of gravitational-wave signals; such distortions may reveal information about fundamental physics, cosmology and astrophysics. In this work, we have extended the search for lensing signatures to all binary black hole events from the third observing run of the LIGO-Virgo network. We search for repeated signals from strong lensing by 1) performing targeted searches for subthreshold signals, 2) calculating the degree of overlap among the intrinsic parameters and sky locations of pairs of signals, 3) comparing the similarities of the spectrograms of pairs of signals, and 4) performing dual-signal Bayesian analysis that takes into account selection effects and astrophysical knowledge. We also search for distortions of the gravitational waveform caused by 1) frequency-independent phase shifts in strongly lensed images, and 2) frequency-dependent modulation of the amplitude and phase due to point masses. None of these searches yields significant evidence for lensing. Finally, we use the non-detection of gravitational-wave lensing to constrain the lensing rate based on the latest merger-rate estimates and the fraction of dark matter composed of compact objects.
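    The frequency-independent phase shifts searched for above correspond, in standard strong-lensing treatments, to the Morse phase each lensed image acquires. As a reference (this relation is standard in the lensing literature, not quoted from the abstract), the j-th image is a magnified, time-delayed, and phase-shifted copy of the unlensed frequency-domain waveform h(f):

        % Standard strong-lensing image relation (Morse index n_j = 0, 1/2, 1
        % for Type I, II, III images); sign conventions vary between references.
        h_j(f) = \sqrt{|\mu_j|}\, \exp\!\left( 2\pi i f \Delta t_j - i \pi n_j \,\mathrm{sgn}(f) \right) h(f)

    Here \mu_j is the image magnification and \Delta t_j its time delay; the phase-shift search targets the n_j term, which offsets the phase of every frequency by the same amount.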

    A taxonomy of usability characteristics in virtual environments

    Despite intense and widespread research in both virtual environments (VEs) and usability, the exciting new technology of VEs has not yet been closely coupled with the important characteristic of usability, a necessary coupling if VEs are to reach their full potential. Although numerous methods exist for usability evaluation of interactive computer applications, these methods have well-known limitations, especially for evaluating VEs. Thus, there is a great need to develop usability evaluation methods and criteria specifically for VEs. Our goal is to increase awareness of the need for usability engineering of VEs and to lay a scientific foundation for developing high-impact methods for usability engineering of VEs. The first step in our multi-year research plan has been accomplished, yielding a comprehensive multi-dimensional taxonomy of usability characteristics specifically for VEs. This taxonomy was developed by collecting and synthesizing information from the literature, conferences, World Wide Web (WWW) searches, investigative research visits to top VE facilities, and interviews with VE researchers and developers. The taxonomy consists of four main areas of usability issues: Users and User Tasks in VEs, The Virtual Model, VE User Interface Input Mechanisms, and VE User Interface Presentation.