
    Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive, vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation. With its high ecological validity and the very limited physical constraints it imposes on subjects, fNIRS could be a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world, making VR an ideal setting for neuroergonomics studies. The aim of the present study was to investigate, with a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects performing a demanding VR hand-controlled task (HCT). Given its complexity, executing the HCT should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates interaction with a real, remotely driven system operating in a critical environment. Hand movements were captured by a high spatial and temporal resolution three-dimensional (3D) hand-sensing device, the LEAP Motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road that included several critical points. The subjects tried to travel as far as possible without letting the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times to keep the VB on the VROU. Nevertheless, a bilateral VLPFC activation in response to HCT execution was observed in all subjects. No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology for objectively evaluating cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance in both expert and non-expert operators during the simulation of different demanding/fatiguing activities.
    Carrieri, Marika; Petracca, Andrea; Lancia, Stefania; Basso Moro, Sara; Brigadoi, Sabrina; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
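    fNIRS analyses commonly convert the measured optical-density changes at two wavelengths into oxy- and deoxyhemoglobin concentration changes via the modified Beer-Lambert law. The sketch below illustrates that conversion for one channel; the extinction coefficients, source-detector distance, and differential pathlength factors are illustrative assumptions, not the calibration of the 20-channel system used in the study.

```python
import numpy as np

# Minimal sketch of the modified Beer-Lambert law used in fNIRS processing.
# Extinction coefficients (1/(mM*cm)) for HbO/HbR at 760 nm and 850 nm are
# illustrative round numbers, not calibrated values for any specific device.
EXT = np.array([[1.4866, 3.8437],   # 760 nm: [eps_HbO, eps_HbR]
                [2.5264, 1.7986]])  # 850 nm: [eps_HbO, eps_HbR]

def hemoglobin_changes(delta_od, source_detector_cm=3.0, dpf=(6.0, 5.0)):
    """Convert optical-density changes at two wavelengths into
    concentration changes (Delta HbO, Delta HbR) for one channel.

    delta_od: array of shape (2, n_samples), rows = wavelengths.
    dpf: differential pathlength factor per wavelength (assumed values).
    """
    # Effective pathlength per wavelength: distance * DPF.
    path = source_detector_cm * np.asarray(dpf)[:, None]
    # Solve EXT @ [dHbO, dHbR] = delta_od / path for every sample.
    return np.linalg.solve(EXT, delta_od / path)

# Example: a typical activation pattern (OD falls at 760 nm, rises at
# 850 nm) comes out as dHbO > 0 and dHbR < 0.
od = np.array([[-0.005], [0.010]])
d_hbo, d_hbr = hemoglobin_changes(od)
print(d_hbo, d_hbr)
```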

    Putting the feel in 'look and feel'

    Haptic devices are now commercially available, and touch has thus become a potentially realistic solution to a variety of interaction design challenges. We report on an investigation of the use of touch as a way of reducing visual overload in the conventional desktop. In a two-phase study, we investigated the use of the PHANToM haptic device as a means of interacting with a conventional graphical user interface. The first experiment compared the effects of four different haptic augmentations on usability in a simple targeting task. The second experiment involved a more ecologically oriented searching and scrolling task. Results indicated that the haptic effects did not improve users' performance in terms of task completion time; however, the number of errors made was significantly reduced. Subjective workload measures showed that participants rated many aspects of workload as significantly lower with haptics. The results are described and the implications for the use of haptics in user interface design are discussed.
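    The abstract does not specify how the four haptic augmentations were implemented. A common augmentation for targeting tasks of this kind is a "gravity well" that pulls the device's cursor toward a target once it enters a capture radius; the sketch below illustrates the idea, with the radius and stiffness values being illustrative assumptions rather than parameters from the study.

```python
import numpy as np

def gravity_well_force(cursor, target, radius=0.02, stiffness=200.0):
    """Spring-like force (N) pulling a haptic cursor toward a target.

    A common targeting augmentation: zero force outside the capture
    radius (m), a spring toward the target centre inside it. Gains are
    illustrative, not taken from the study.
    """
    offset = np.asarray(target) - np.asarray(cursor)
    distance = np.linalg.norm(offset)
    if distance > radius or distance == 0.0:
        return np.zeros(3)
    return stiffness * offset  # Hooke's law toward the centre

# Example: cursor 1 cm left of the target -> force pushes it right.
print(gravity_well_force([0.01, 0.0, 0.0], [0.02, 0.0, 0.0]))
```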

    Evaluation of Multi-sensory Feedback in Virtual and Real Remote Environments in a USAR Robot Teleoperation Scenario

    The area of Human-Robot Interaction deals not only with problems of robots interacting with humans, but also with problems of humans interacting with and controlling robots. This dissertation focuses on the latter and evaluates multi-sensory (vision, hearing, touch, smell) feedback interfaces as a means to improve robot operators' cognition and performance. A set of four empirical studies using both simulated and real robotic systems evaluated multi-sensory feedback interfaces with various levels of complexity. The task scenario for the robot in these studies involved the search for victims in a debris-filled environment after a fictitious catastrophic event (e.g., an earthquake) took place. The results show that, if well designed, multi-sensory feedback interfaces can indeed improve robot operators' data perception and performance. Improvements in operator performance were detected for navigation and search tasks despite minor increases in workload; in fact, some of the multi-sensory interfaces evaluated even led to a reduction in workload. The results also point out that redundant feedback is not always beneficial to the operator. While introducing the concept of operator omni-directional perception, that is, the operator's capability of perceiving data or events coming from all senses and in all directions, this work explains that feedback redundancy is only beneficial when it enhances the operator's omni-directional perception of data relevant to the task at hand. Lastly, the comprehensive methodology employed and refined over the course of the four studies is suggested as a starting point for the design of future HRI user studies. In summary, this work sheds some light on the benefits and challenges multi-sensory feedback interfaces bring, specifically to teleoperated robotics. It adds to our current understanding of these kinds of interfaces and provides a few insights to assist the continuation of research in the area.
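    As one way to picture the omni-directional perception concept, the hypothetical sketch below routes an event at a given bearing around the robot to two substitute cues, a stereo audio pan and a motor on a vibrotactile belt, so that events outside the camera's field of view still reach the operator. The routing function, belt layout, and cue choices are illustrative assumptions, not the interfaces evaluated in the dissertation.

```python
import math

def route_event(bearing_deg, n_motors=8):
    """Map an event bearing (0 = robot front, clockwise) to substitute cues.

    Hypothetical illustration of 'omni-directional perception': audio is
    panned toward the event and the nearest motor in a vibrotactile belt
    buzzes, so off-screen events still reach the operator.
    """
    theta = math.radians(bearing_deg % 360.0)
    pan = math.sin(theta)                    # -1 = full left, +1 = full right
    motor = round((bearing_deg % 360.0) / (360.0 / n_motors)) % n_motors
    return {"audio_pan": pan, "belt_motor": motor}

# A victim detected behind-left of the robot (225 degrees):
print(route_event(225))   # pan < 0 (left), rear-left belt motor
```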

    The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues), so substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality with alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic, and visual cues signalling critical events in the simulation were manipulated in a factorial design. Subjective ratings were recorded via questionnaires; the time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and users' overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.
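    As a concrete illustration of cue substitution, the hypothetical sketch below replaces a cue that is hard to render (the weight of a virtual object such as the wheel) with an easy-to-present auditory cue (tone pitch). The mapping and its parameters are assumptions in the spirit of the study, not the actual design used.

```python
def substitute_weight_cue(weight_kg, min_hz=220.0, max_hz=880.0, max_kg=20.0):
    """Substitute an unrenderable haptic cue (object weight) with audio pitch.

    Hypothetical mapping: heavier virtual objects play a lower tone,
    trading simulation fidelity for informational content.
    """
    w = min(max(weight_kg, 0.0), max_kg) / max_kg     # normalise to [0, 1]
    return max_hz - w * (max_hz - min_hz)             # heavy -> low pitch

print(substitute_weight_cue(15.0))   # wheel-sized weight -> low-ish tone
```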

    Design For Auditory Displays: Identifying Temporal And Spatial Information Conveyance Principles

    Designing auditory interfaces is a challenge for current human-systems developers, largely due to a lack of theoretical guidance on how best to use sounds in today's visually rich graphical user interfaces. This dissertation provides a framework for guiding the design of audio interfaces to enhance human-systems performance. The research involved reviewing the literature on conveying temporal and spatial information using audio, using this knowledge to build three theoretical models to aid the design of auditory interfaces, and empirically validating select components of the models. The three models include an audio integration model that outlines an end-to-end process for adding sounds to interactive interfaces, a temporal audio model that provides a framework for timing the integration of these sounds to meet human performance objectives, and a spatial audio model that provides a framework for adding spatialization cues to interface sounds. Each model is coupled with a set of design guidelines theorized from the literature; combined, the developed models put forward a structured process for integrating sounds into interactive interfaces. The models were subjected to a three-phase validation process comprising a review by Subject Matter Experts (SMEs), to assess the face validity of the models, and two empirical studies. For the SME review, which assessed the utility of the models and identified opportunities for improvement, a panel of three audio experts responded to a Strengths, Weaknesses, Opportunities, and Threats (SWOT) validation questionnaire. Based on the SWOT analysis, the main strengths of the models were that they provide a systematic approach to auditory display design and that they integrate a wide variety of knowledge sources in a concise manner. The main weaknesses were the lack of a structured process for amending the models with new principles, branches that were not parallel or completely distinct, and a lack of guidance on selecting interface sounds. The main opportunity identified by the experts was the models' ability to provide a seminal body of knowledge for building and validating auditory display designs. The main threats were that users may not know where to start and end with each model, that the models may not comprehensively cover all uses of auditory displays, and that the models may restrict designers or be used inappropriately. Based on the SWOT results, several changes were made to the models prior to the empirical studies.
    Two empirical studies were conducted to test the theorized design principles derived from the revised models. The first study assessed the utility of audio cues for training a temporal pacing task; the second combined temporal (pace) and spatial audio information, focusing on integration issues. The pace study used four auditory conditions for pace training: 1) a metronome, 2) non-spatial auditory earcons, 3) a spatialized auditory earcon, and 4) no audio cues. Sixty-eight people participated. A pre-/post-test between-subjects design was used, with eight training trials; pace performance was measured as the average deviation from a predetermined desired pace. The results demonstrated that the metronome was not effective in training participants to maintain a desired pace, while spatial and non-spatial earcons were effective strategies for pace training. Moreover, a comparison of post-training with pre-training performance suggested some transfer of learning. Design guidelines were extracted for integrating auditory cues for pace-training tasks in virtual environments. The second study presented combined temporal (pacing) and spatial (location of entities within the environment) information under three spatialization conditions: 1) high fidelity, using subjective selection of a best-fit head-related transfer function; 2) low fidelity, using a generalized head-related transfer function; and 3) no spatialization. A pre-/post-test between-subjects design with eight training trials was again used; performance was measured as average deviation from the desired pace and as the time and accuracy to complete the task. The results demonstrated that temporal, non-spatial auditory cues were effective in influencing pace while other cues were present, whereas spatialized auditory cues did not yield significantly faster task completion. Based on these results, a set of design guidelines was proposed for integrating spatial and temporal auditory cues to support training tasks in virtual environments. Taken together, the developed models and the associated guidelines provide a theoretical foundation from which to direct user-centered design of auditory interfaces.
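    The low-fidelity spatialization condition used a generalized head-related transfer function; a crude stand-in for that idea can be sketched with interaural time and level differences alone, as below. The head radius, attenuation factor, and earcon parameters are assumed, illustrative values, not the dissertation's actual renderer.

```python
import numpy as np

def spatialized_earcon(azimuth_deg, freq=660.0, dur=0.15, fs=44100):
    """Render a mono earcon to stereo with crude ITD/ILD cues.

    A low-fidelity stand-in for a generalized HRTF: Woodworth's formula
    for interaural time difference plus a simple level difference.
    Head radius and attenuation are assumed, illustrative values.
    """
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # mono earcon
    theta = np.radians(azimuth_deg)                 # + = source on the right
    itd = 0.0875 / 343.0 * (theta + np.sin(theta))  # Woodworth, 8.75 cm head
    shift = int(abs(itd) * fs)                      # delay in samples
    ild = 1.0 - 0.3 * abs(np.sin(theta))            # far ear is quieter
    near = np.pad(tone, (0, shift))                 # near ear leads
    far = np.pad(tone, (shift, 0)) * ild            # far ear lags, attenuated
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)          # (samples, 2)

stereo = spatialized_earcon(60.0)   # earcon lateralised to the right
```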

    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and a task-domain perspective is thus critical to achieving successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework extends, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing across multiple sensory and WM resources. These results provide initial empirical support for the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.
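    AMMO's actual mitigation strategies are not detailed here, but a minimal, hypothetical example of the kind of rule such a framework might derive is sketched below: alerts stay visual until estimated visual working-memory load crosses a threshold, then are offloaded to the less-loaded remaining channel. All names and thresholds are assumptions for illustration.

```python
def choose_modality(visual_load, auditory_load, tactile_load, threshold=0.7):
    """Pick a display modality for the next alert.

    A hypothetical AMMO-style mitigation rule: keep alerts visual until
    estimated visual working-memory load (0..1) crosses a threshold,
    then offload to whichever remaining channel is least loaded.
    """
    if visual_load < threshold:
        return "visual"
    return "auditory" if auditory_load <= tactile_load else "tactile"

print(choose_modality(0.9, 0.4, 0.2))   # visual overloaded -> "tactile"
```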

    SARSCEST (human factors)

    People interact with the processes and products of contemporary technology; individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. To expand the understanding of those to whom the term is relatively unfamiliar: its domain includes both an applied science and applications of knowledge. It means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart, both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these are discussed.

    The impact of multisensory integration and perceptual load in virtual reality settings on performance, workload and presence

    Real-world experience is typically multimodal. Evidence indicates that the facilitation of multisensory stimulus detection is modulated by perceptual load, the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) to investigate how multisensory signals impact target detection under two conditions, high and low perceptual load. Different multimodal stimuli (auditory and vibrotactile) were presented alone or in combination with the visual target. Results showed that only in the high load condition did multisensory stimuli significantly improve performance compared to visual stimulation alone. Multisensory stimulation also decreased EEG-based workload. In contrast, perceived workload, measured with the NASA Task Load Index questionnaire, was reduced only by the trimodal condition (i.e., visual, auditory, tactile). The trimodal stimulation was also more effective in enhancing the sense of presence, that is, the feeling of being in the virtual environment, compared to bimodal or unimodal stimulation. We also show that GSR components were higher in the high load task than in the low load condition. Finally, the multimodal stimulations (Visual-Audio-Tactile, VAT, and Visual-Audio, VA) induced a significant decrease in latency and a significant increase in amplitude of the P300 potentials with respect to the unimodal (visual) and bimodal (visual and tactile) stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
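    The study reports P300 latency and amplitude effects; a minimal sketch of how such peaks are commonly extracted from an averaged EEG epoch is shown below, assuming a typical 250-500 ms post-stimulus search window and baseline correction (the paper's exact analysis parameters are not given here).

```python
import numpy as np

def p300_peak(epoch, fs=250, window=(0.25, 0.50), baseline=(-0.2, 0.0), t0=0.2):
    """Estimate P300 amplitude (uV) and latency (s) from one averaged epoch.

    epoch: 1-D array starting 0.2 s before stimulus onset (t0). The
    search window and baseline interval are common choices, assumed here
    rather than taken from the paper.
    """
    times = np.arange(epoch.size) / fs - t0
    base = epoch[(times >= baseline[0]) & (times < baseline[1])].mean()
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(epoch[mask] - base)          # largest positive deflection
    return (epoch[mask] - base)[idx], times[mask][idx]

# Synthetic epoch: a positive bump near 350 ms after stimulus onset.
fs = 250
t = np.arange(-0.2, 0.8, 1 / fs)
epoch = np.exp(-((t - 0.35) ** 2) / 0.002) * 8.0
amp, lat = p300_peak(epoch, fs)
print(round(amp, 1), round(lat, 3))   # ~8.0 uV near 0.35 s
```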

    Space human factors discipline science plan

    The purpose of this Discipline Science Plan is to provide a conceptual strategy for NASA's Life Sciences Division research and development activities in the comprehensive areas of behavior, performance, and human factors. This document summarizes the current status of the program, outlines available knowledge, establishes goals and objectives, defines critical questions in the subdiscipline areas, and identifies technological priorities. It covers the significant research areas critical to NASA's programmatic requirements for the Extended Duration Orbiter, Space Station Freedom, and Exploration mission science activities. These science activities include ground-based and flight research; basic, applied, and operational research; and animal and human research and development. This document contains a general plan that will be used by both NASA Headquarters program offices and the field centers to review and plan basic, applied, and operational research and development activities, both intramural and extramural, in this area.