527 research outputs found

    The Priming Function of In-car Audio Instruction

    Studies to date have focused on the priming power of visual road signs, but not on the priming potential of audio road-scene instruction. Here, the relative priming power of visual, audio, and multisensory road-scene instructions was assessed. In a lab-based study, participants responded to target road-scene turns following visual, audio, or multisensory road-turn primes that were congruent or incongruent with the targets in direction, or control primes. All types of instruction (visual, audio, multisensory) successfully primed responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not degrade performance in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road-instruction primes can be timed to co-occur.

    Human emotional response to steering wheel vibration in automobiles

    This is the post-print (final draft post-refereeing) version of the final published paper that is available from the link below. Copyright @ 2013 Inderscience Enterprises Ltd. This study investigates what form of correlation may exist between measures of the valence and the arousal dimensions of the human emotional response to steering wheel vibration and the vibration intensity metrics obtained by means of the unweighted and the frequency-weighted root mean square (rms). A laboratory experiment was performed with 30 participants who were presented with 17 acceleration time histories in random order and asked to rate their emotional feelings of valence and arousal using a self-assessment manikin (SAM) scale. The results suggest a strong linear correlation between the unweighted, Wh-weighted and Ws-weighted vibration intensity metrics and the arousal measures of the human emotional response. The results also suggest that while vibration intensity plays a significant role in eliciting emotional feelings, there are other factors which influence the human emotional response to steering wheel vibration, such as the presence of high peaks or high-frequency band amplitudes.
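    The unweighted rms metric mentioned in this abstract is straightforward to compute from an acceleration time history; the Wh and Ws weightings additionally require the standardized frequency-weighting filters, which are not reproduced here. A minimal sketch, assuming a sampled acceleration signal in m/s^2 (the function name is illustrative, not from the paper):

```python
import numpy as np

def unweighted_rms(accel):
    """Unweighted root-mean-square of an acceleration time history (m/s^2)."""
    a = np.asarray(accel, dtype=float)
    return float(np.sqrt(np.mean(a ** 2)))
```

    For example, a pure sinusoid of amplitude A has an rms of A divided by the square root of two, which is a quick sanity check for the implementation.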

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

    Dynamic Bayesian Collective Awareness Models for a Network of Ego-Things

    A novel approach is proposed for multimodal collective awareness (CA) of multiple networked intelligent agents. Each agent is here considered as an Internet-of-Things (IoT) node equipped with machine learning capabilities; CA aims to provide the network with updated causal knowledge of the state of execution of actions of each node performing a joint task, with particular attention to anomalies that can arise. Data-driven dynamic Bayesian models learned from multisensory data recorded during the normal realization of a joint task (agent network experience) are used for distributed state estimation of agents and detection of abnormalities. A set of switching dynamic Bayesian network (DBN) models collectively learned in a training phase, each related to a particular sensory modality, is used to allow each agent in the network to perform synchronous estimation of possible abnormalities occurring when a new task of the same type is jointly performed. Collective DBN (CDBN) learning is performed by unsupervised clustering of generalized errors (GEs) obtained from a starting generalized model. A growing neural gas (GNG) algorithm is used as a basis to learn the discrete switching variables at the semantic level. Conditional probabilities linking nodes in the CDBN models are estimated using the obtained clusters. CDBN models are associated with a Bayesian inference method, namely, the distributed Markov jump particle filter (D-MJPF), employed for joint state estimation and abnormality detection. The effects of networking protocols and of communications on the estimation of state and abnormalities are analyzed. Performance is evaluated using a small network of two autonomous vehicles performing joint navigation tasks in a controlled environment. In the proposed method, the sharing of observations is first considered under ideal conditions, and the effects of a wireless communication channel are then analyzed for the collective abnormality estimation of the agents. A Rician wireless channel and the use of two protocols (i.e., IEEE 802.11p and IEEE 802.15.4) under different channel conditions are considered as well.
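    The abstract's pipeline is elaborate (GNG clustering of generalized errors, switching DBNs, D-MJPF inference), but its core idea, flagging an abnormality when a model's prediction error exceeds what was seen during normal joint-task runs, can be illustrated with a toy threshold rule. The function names and the mean-plus-k-sigma rule below are assumptions for illustration only, not the paper's method:

```python
import numpy as np

def train_error_threshold(train_errors, k=3.0):
    # Learn a threshold from generalized errors observed during normal
    # (anomaly-free) training runs: mean + k standard deviations.
    e = np.asarray(train_errors, dtype=float)
    return float(e.mean() + k * e.std())

def flag_abnormal(errors, threshold):
    # Mark each new prediction error as abnormal if it exceeds the threshold.
    return [float(e) > threshold for e in errors]
```

    In the paper's setting the errors would come from each agent's DBN predictions per sensory modality, and the decision rule is the far richer D-MJPF inference rather than a fixed threshold.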

    Multisensory Cue Congruency in Lane Change Test

    Nowadays, a driver interacts with multiple systems while driving. Multimodal in-vehicle technologies (e.g., personal navigation devices) are intended to facilitate multitasking while driving. Multimodality can reduce cognitive effort in information processing, but not always. The present study aims to investigate how and when auditory cues can improve driver responses to a visual target. We manipulated three dimensions (spatial, semantic, and temporal) of verbal and nonverbal cues to interact with visual spatial instructions. Multimodal displays were compared with unimodal (visual-only) displays to see whether they would facilitate or degrade a vehicle control task. Twenty-six drivers participated in the Auditory-Spatial Stroop experiment using a lane change test (LCT). The preceding auditory cues improved response time over the visual-only condition. When cues conflicted, spatial congruency had a stronger impact than semantic congruency. The effect on accuracy was minimal, but there was a trend toward speed-accuracy trade-offs. Results are discussed with respect to theoretical issues and future work.
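    The congruency comparisons described in this abstract reduce to a standard response-time contrast: mean RT on incongruent trials minus mean RT on congruent trials. A hypothetical sketch (the trial format and names are assumed, not taken from the study):

```python
from statistics import mean

def congruency_effect(trials):
    # trials: list of (condition, rt_ms) pairs, where condition is
    # 'congruent' or 'incongruent'. Returns the RT cost of incongruence.
    by_condition = {}
    for condition, rt in trials:
        by_condition.setdefault(condition, []).append(rt)
    return mean(by_condition['incongruent']) - mean(by_condition['congruent'])
```

    The same contrast can be computed separately for spatial and semantic congruency to compare their relative impact, as the study does.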

    Alternative avenues in the assessment of driving capacities in older drivers and implications for training

    The aging of the population, combined with the overrepresentation of older drivers in car crashes, has engendered a whole body of research aimed at finding simple and efficient methods of assessing driving capacities. However, this quest is little more than a utopian dream, given that car crashes and unsafe driving behaviours can result from a plethora of interacting factors. This review highlights the main problems of the current assessment methods and training programs, and presents theoretical and empirical arguments justifying the need to reorient the research focus. Our discussion is elaborated in light of the fundamental principle of specificity in learning and practice. We also identify overlooked variables that are decisive when assessing, and training, a complex ability like driving. We especially focus on the role of the sensorimotor transformation process. Finally, we propose alternative methods that are in line with recent trends in educational programs that use virtual reality and simulation technologies.

    Path Following in Non-Visual Conditions

    Path-following tasks have been investigated mostly under visual conditions, that is, when subjects are able to see both the path and the tool, or limb, used for navigation. Moreover, only basic path shapes are usually adopted. In the present experiment, participants had to rely exclusively on audio and vibrotactile feedback to follow a path on a flat surface. Two different, asymmetric path shapes were tested. Participants navigated by moving their index finger over a surface that senses position and force. Results show that the different non-visual feedback modes did not affect the task's accuracy, but they did affect its speed, with vibrotactile feedback causing slower gestures than audio feedback. Conversely, audio and audio-tactile feedback yielded similar results. Vibrotactile feedback also caused participants to exert more force on the surface. Finally, the shape of the path affected accuracy, and participants tended to prefer audio over vibrotactile and audio-tactile feedback.
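    The abstract does not specify its accuracy metric. One common choice for path-following accuracy is the mean distance from each sampled finger position to the nearest point of a densely sampled path; a sketch under that assumption (names and interface are illustrative):

```python
import numpy as np

def mean_path_deviation(trajectory, path):
    # trajectory: (N, 2) array of sampled finger positions.
    # path: (M, 2) array of densely sampled path points.
    # Returns the mean distance from each finger sample to its nearest path point.
    traj = np.asarray(trajectory, dtype=float)
    pts = np.asarray(path, dtype=float)
    # Pairwise distances, shape (N, M), via broadcasting.
    d = np.linalg.norm(traj[:, None, :] - pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

    Lower values indicate tighter path following; the metric depends on the path being sampled finely enough that nearest-point distances approximate true perpendicular distances.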

    Age-Related Differences in Multimodal Information Processing and Their Implications for Adaptive Display Design.

    In many data-rich, safety-critical environments, such as driving and aviation, multimodal displays (i.e., displays that present information in visual, auditory, and tactile form) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, the limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, primarily in vision and hearing. Also, nearly all studies to date have involved young participants only. The goals of this dissertation were therefore to (1) determine the extent to which people can notice and process three unrelated concurrent signals in vision, hearing and touch, (2) examine how aging modulates this ability, and (3) develop countermeasures to overcome observed performance limitations. Adults aged 65+ years were of particular interest because they represent the fastest growing segment of the U.S. population, are known to suffer from various declines in sensory abilities, and experience difficulties with divided attention. Response times and incorrect response rates to singles, pairs, and triplets of visual, auditory, and tactile stimuli were significantly higher for older adults compared to younger participants. In particular, elderly participants often failed to notice the tactile signal when all three cues were combined. They also frequently falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. These performance breakdowns were observed both in the absence and in the presence of a concurrent visual/manual (driving) task. Also, performance on the driving task suffered most for older participants and with combined visual-auditory-tactile stimulation. Introducing a half-second delay between two stimuli significantly increased response accuracy for older adults.
    This work adds to the knowledge base in multimodal information processing, the perceptual and attentional abilities and limitations of the elderly, and adaptive display design. From an applied perspective, these results can inform the design of multimodal displays and enable aging drivers to cope with increasingly data-rich in-vehicle technologies. The findings are expected to generalize and thus contribute to improved overall public safety in a wide range of complex environments.
    PhD, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133203/1/bjpitts_1.pd

    A review of human sensory dynamics for application to models of driver steering and speed control.

    In comparison with the high level of knowledge about vehicle dynamics which exists nowadays, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies which may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds and integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
    This work was supported by the UK Engineering and Physical Sciences Research Council (EP/P505445/1) (studentship for Nash).
    This is the published version. It first appeared from Springer at http://dx.doi.org/10.1007/s00422-016-0682-x

    Enhancing Situational Awareness for Rotorcraft Pilots Using Virtual and Augmented Reality

    Rotorcraft pilots often face the challenge of processing a multitude of data, integrating it with prior experience and making informed decisions in complex, rapidly changing multisensory environments. Virtual Reality (VR), and more recently Augmented Reality (AR), technologies have been applied for providing users with immersive, interactive and navigable experiences. The research work described in this thesis demonstrates that VR/AR are particularly effective in providing real-time information without detracting from the pilot's mission in both civilian and military engagements. The immersion of the pilot inside of the VR model provides enhanced realism. Interaction with the VR environment allows pilots to practice appropriately responding to simulated threats. Navigation allows the VR environment to change with varying parameters. In this thesis, VR/AR environments are applied for the design and development of a head-up display (HUD) for helicopter pilots. The usability of the HUD that is developed as a part of this thesis is assessed using established frameworks for human systems engineering by incorporating best practices for user-centered design. The research work described in this thesis will demonstrate that VR/AR environments can provide flexible, ergonomic, and user-focused interfaces for real-time operations in complex, multisensory environments.