90 research outputs found

    Designing Tactile Interfaces for Abstract Interpersonal Communication, Pedestrian Navigation and Motorcyclists Navigation

    The tactile medium is well suited to displaying information in situations where the auditory and visual channels are saturated. In some situations, a subject's ability to receive information through either of these channels is severely restricted by the environment or by a physical impairment. In this project, we focused on two groups of users whose tasks demand sustained visual and auditory attention: Soldiers on the battlefield and motorcyclists. Soldiers on the battlefield rely on their visual and auditory capabilities to maintain awareness of their environment and guard against enemy assault. One of the major challenges to coordination in a hazardous environment is maintaining communication between team members while mitigating cognitive load. A breakdown in communication between team members can result in mistakes that adversely affect the outcome of a mission. We built two vibrotactile displays, Tactor I and Tactor II, each with nine actuators arranged in a three-by-three matrix with differing contact areas, which together can represent a total of 511 shapes. We used two dimensions of the tactile medium, shapes and waveforms, to represent verb phrases, and evaluated users' ability to perceive verb phrases from the tactile code. We also evaluated the effectiveness of communicating verb phrases while users performed two tasks simultaneously. The results showed that performing an additional visual task did not affect the accuracy or the time taken to perceive tactile codes. Another challenge in coordinating Soldiers on a battlefield is navigating them to their respective assembly areas. We developed HaptiGo, a lightweight haptic vest that provides pedestrians with both navigational intelligence and obstacle detection capabilities.
HaptiGo consists of optimally placed vibrotactile actuators that deliver natural, small-form-factor interaction cues, emulating the sensation of being passively guided toward the intended direction. Our evaluation found that HaptiGo successfully navigated users and issued timely alerts of incoming obstacles without increasing cognitive load, thereby increasing their environmental awareness. Additionally, we show that users are able to respond to directional information without training. The needs of motorcyclists are different from those of Soldiers. Motorcyclists must maintain visual and auditory situational awareness at all times because they are highly exposed on the road. Route guidance systems, such as Garmin devices, have been well tested with automobile drivers but remain much less safe for motorcyclists. Audio/visual routing systems decrease motorcyclists' situational awareness and vehicle control, and thus increase the chances of an accident. To enable motorcyclists to take advantage of route guidance while maintaining situational awareness, we created HaptiMoto, a wearable haptic route guidance system. HaptiMoto uses tactile signals to encode the distance and direction of approaching turns, avoiding interference with audio/visual awareness. Evaluations show that HaptiMoto is intuitive for motorcyclists and a safer alternative to existing solutions.
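    The 511-shape figure follows from treating each of the nine actuators as one bit: a three-by-three grid admits 2^9 - 1 = 511 non-empty activation patterns. A minimal sketch of such an encoding (all names are illustrative, not the Tactor firmware):

```python
# Sketch: encode tactile "shapes" on a 3x3 actuator grid as 9-bit masks.
# Each bit selects one actuator, and excluding the empty pattern leaves
# 2**9 - 1 = 511 distinct shapes. Names here are illustrative only.

def shape_to_mask(active_cells):
    """Pack a set of (row, col) actuator cells into a 9-bit mask."""
    mask = 0
    for row, col in active_cells:
        mask |= 1 << (row * 3 + col)
    return mask

def mask_to_cells(mask):
    """Unpack a 9-bit mask back into its (row, col) actuator cells."""
    return {(i // 3, i % 3) for i in range(9) if mask & (1 << i)}

# An "L" shape: the left column plus the bottom row.
l_shape = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
mask = shape_to_mask(l_shape)
assert 1 <= mask <= 511                 # every non-empty pattern fits
assert mask_to_cells(mask) == l_shape   # round-trips losslessly
```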

    LVC Interaction within a Mixed Reality Training System

    The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to enable LVC interaction in a reconfigurable, mixed reality environment. The system was developed and tested in an immersive, reconfigurable, mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, both to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology that meets future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and custom-developed game engines. Evaluation involving military-trained personnel found the system to be effective, immersive, and useful for developing the critical decision-making skills required on the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. The system achieves rare LVC interaction within multiple physical and virtual immersive environments, in real time, across many distributed systems.
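    The central-server pattern described above, which merges live and virtual entity updates into one world state and distributes snapshots to all simulators, can be sketched as follows (class and field names are hypothetical, not the Veldt's actual interfaces):

```python
# Sketch of a central entity-state relay for an LVC-style system.
# Each simulator posts updates for the entities it owns; the server
# merges them into one world state and returns snapshots to every
# client. Names and message shapes are illustrative, not the Veldt's API.

class EntityStateServer:
    def __init__(self):
        self.world = {}  # entity_id -> latest state dict

    def update(self, entity_id, position, orientation, source):
        """Accept a state update from a live tracker or a game engine."""
        self.world[entity_id] = {
            "position": position,
            "orientation": orientation,
            "source": source,  # "live" or "virtual"
        }

    def snapshot(self):
        """Cohesive world state pushed to all distributed simulators."""
        return dict(self.world)

server = EntityStateServer()
server.update("trainee-1", (10.0, 2.0, 0.0), 90.0, source="live")
server.update("opfor-3", (42.0, 7.5, 0.0), 270.0, source="virtual")
state = server.snapshot()
assert state["trainee-1"]["source"] == "live"
```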

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Multimodal Interaction for Enhancing Team Coordination on the Battlefield

    Team coordination is vital to the success of team missions. On the battlefield and in other hazardous environments, mission outcomes are often unpredictable because of unforeseen circumstances and complications that adversely affect team coordination. In addition, the battlefield is constantly evolving as new technology, such as context-aware systems and unmanned drones, becomes available to assist teams in coordinating their efforts. As a result, we must re-evaluate the dynamics of teams that operate in high-stress, hazardous environments in order to learn how to use technology to enhance team coordination within this new context. In dangerous environments where multi-tasking is critical to the safety and success of the team operation, it is important to know which forms of interaction are most conducive to team tasks. We explored interaction methods, including various types of user input and data feedback mediums, that can assist teams in performing unified tasks on the battlefield. We conducted an ethnographic analysis of Soldiers and researched technologies such as sketch recognition, physiological data classification, augmented reality, and haptics to arrive at a set of core principles for designing technological tools for these teams. This dissertation provides support for these principles and addresses outstanding problems of team connectivity, mobility, cognitive load, team awareness, and hands-free interaction in mobile military applications. This research has resulted in a multimodal solution that enhances team coordination by allowing users to synchronize their tasks while maintaining an overall awareness of team status and their environment.
    The set of solutions we developed utilizes optimal interaction techniques implemented and evaluated in related projects; the ultimate goal of this research is to learn how to use technology to provide total situational awareness and team connectivity on the battlefield. This information can be used to aid the research and development of technological solutions for teams that operate in hazardous environments as more advanced resources become available.

    Enhancing Situational Awareness Through Haptics Interaction In Virtual Environment Training Systems

    Virtual environment (VE) technology offers a viable training option for developing knowledge, skills and attitudes (KSAs) in domains with limited live training opportunities due to personnel safety and cost (e.g., live fire exercises). However, to ensure these VE training systems provide effective training and transfer, designers must clearly define training goals and objectives and design VEs to support development of the required KSAs. Perhaps the greatest benefit of VE training is its ability to provide a multimodal training experience, in which trainees can see, hear and feel their surrounding environment, engaging them in training scenarios to further their expertise. This work focused on enhancing situation awareness (SA) within a training VE through appropriate use of multimodal cues. The Multimodal Optimization of Situation Awareness (MOSA) model was developed to identify theoretical benefits of various environmental and individual multimodal cues on SA components. Specific focus was placed on the benefits of adding cues that activate the haptic system (i.e., the kinesthetic/cutaneous sensory systems) or the vestibular system in a VE. An empirical study evaluated the effectiveness of adding two independent spatialized tactile cues to a Military Operations on Urbanized Terrain (MOUT) VE training system, and how head tracking (i.e., the addition of rotational vestibular cues) impacted spatial awareness and performance when tactile cues were added during training. Results showed that tactile cues enhanced spatial awareness and performance during both repeated training and within a transfer environment, yet there were costs associated with including the two cues together during training, as each cue focused attention on a different aspect of the global task.
    In addition, the results suggest that the spatial awareness benefits of a single-point indicator (i.e., spatialized tactile cues) may be affected by interaction mode, as performance benefits were seen when tactile cues were paired with head tracking. Future research should further examine the theoretical benefits outlined in the MOSA model, and further validate that benefits can be realized through appropriate activation of multimodal cues for targeted training objectives during training, near transfer, and far transfer (i.e., real-world performance).

    Unmanned Aerial Vehicle (UAV) Operators’ Workload Reduction: The Effect of 3D Audio on Operators’ Workload and Performance during Multi-Aircraft Control

    The importance and number of Unmanned Aerial Vehicle (UAV) operations are rapidly growing in both military and civilian applications. This growth has produced significant manpower issues and a desire for a single operator to control multiple aircraft, as opposed to the current model in which one aircraft may require multiple operators. A potential issue is the need for an operator to monitor radio traffic for the call signs of multiple aircraft. We investigated whether 3D audio, driven by an automatic parser that preselects the spatial location of relevant versus irrelevant call signs, could help UAV operators increase performance with reduced workload. Furthermore, because a 3D audio system may not guarantee 100% reliability, human performance with the system was also measured when participants were informed that errors were possible and when the reliability level was less than 100%. The investigation included development of a human performance model, simulation of human performance and workload, and a human subject study. Promising effects of the 3D audio system on multi-aircraft control were found. This novel use of 3D sound is discussed, and significant improvements in response time and workload are demonstrated.
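    The parser's core idea, spatially separating relevant from irrelevant call signs, can be sketched as a rule that assigns each incoming transmission a rendering azimuth based on whether its call sign belongs to the operator's aircraft (a simplified illustration; names and angles are hypothetical, not the study's actual parameters):

```python
# Sketch: route incoming radio transmissions to spatial locations based on
# call-sign relevance. Relevant traffic is rendered ahead of the operator,
# irrelevant traffic behind, exploiting spatial release from masking.
# All names and angles are hypothetical.

RELEVANT_AZIMUTH = 0      # degrees: straight ahead of the operator
IRRELEVANT_AZIMUTH = 180  # degrees: behind the operator

def assign_azimuth(message_callsign, own_callsigns):
    """Return the azimuth at which to render a transmission.

    A less-than-perfectly-reliable parser would sometimes misclassify
    here, which is the condition the study varied.
    """
    relevant = message_callsign in own_callsigns
    return RELEVANT_AZIMUTH if relevant else IRRELEVANT_AZIMUTH

own = {"HAWK21", "HAWK22"}  # aircraft this operator controls
assert assign_azimuth("HAWK21", own) == RELEVANT_AZIMUTH
assert assign_azimuth("VIPER03", own) == IRRELEVANT_AZIMUTH
```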

    Structured evaluation of training in virtual environments

    Virtual Environments (VEs) created through Virtual Reality (VR) technologies have been suggested as potentially beneficial for a number of applications. However, a review of VEs and VR highlighted the main barriers to implementation as current technological limitations, usability issues with various systems, a lack of real applications, and therefore little proven value. These barriers suggest that industry would benefit from structured guidance for developing effective VEs. Training was chosen as the area to explore, as it has been suggested as a likely early use of VEs and is important to many sectors. A review of existing case studies of VE training applications (VETs) examined the types of training applications and VR systems being considered, the state of development of these applications, and the results of any evaluation studies. In light of these case studies, this work focused on the structured evaluation of training psycho-motor skills using VEs created with desktop VR. To support structured evaluation, existing theories of training and evaluation were also reviewed, and from them a framework for developing VETs was proposed. Applying this framework, two VETs were proposed, specified, developed and evaluated. The conclusions highlight the many areas of the development process for an effective VET that still need addressing. In the proposal stage, guidance is needed on the appropriateness of VET for particular tasks. In the specification and building stages, standard formats and techniques are required to guide VE developers in producing an effective VET. Finally, in the evaluation stage, tools are still required that highlight the benefits of VET, and many more evaluation studies are needed to feed information back into the development process.
    VEs are therefore still at an early stage; this work unifies existing work in the area, specifically on training, and highlights the gaps that must be addressed before widespread implementation.

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when drivers are engaged in non-driving-related tasks. This research project expands on existing work on uncertainty communication in the context of automated driving. Specifically, it investigates the implications of conveying uncertainties in the presence of non-driving-related tasks and, based on the outcomes, develops and evaluates an uncertainty display that enhances both user experience and driving safety. In a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties increases perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, reducing monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework.
    Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces monitoring effort relative to the visual display, while driving performance and trust data show that the benefits of uncertainty communication are maintained. Further, the project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail. Expanding on this approach, an augmented reality display concept was developed, and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed using a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.
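    The consistent low-to-high ordering of colour-hue changes suggests a simple monotone mapping from uncertainty to hue, for example green through red on the HSV wheel. A minimal sketch (colours and the mapping are illustrative, not the study's actual display design):

```python
# Sketch: map an automation-uncertainty estimate in [0, 1] to a display
# colour by interpolating hue from green (low uncertainty) to red (high).
# Illustrative values only; the real display design is described above.
import colorsys

def uncertainty_to_rgb(uncertainty):
    """Interpolate hue linearly: 120 deg (green) at u=0, 0 deg (red) at u=1."""
    u = min(max(uncertainty, 0.0), 1.0)        # clamp to [0, 1]
    hue = (1.0 - u) * 120.0 / 360.0            # fraction of the HSV wheel
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value

print(uncertainty_to_rgb(0.0))  # green for a confident system
print(uncertainty_to_rgb(1.0))  # red for maximum uncertainty
```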