139 research outputs found

    LVC Interaction within a Mixed Reality Training System

    The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. By combining the advantages of realistic physical training environments and virtual worlds, mixed reality LVC training systems can let live and virtual trainees interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking users in occluded environments, and building an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to enable LVC interaction in a reconfigurable, mixed reality environment. The system was developed and tested in an immersive, reconfigurable, mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, both to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology aimed at future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and custom-developed game engines. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from the system's components to create a cohesive virtual world across all distributed simulators and game engines in real time. An evaluation involving military-trained personnel found the system to be effective, immersive, and useful for developing the critical decision-making skills the battlefield demands. The result is rare real-time LVC interaction within multiple physical and virtual immersive environments across many distributed systems.
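    To make the central-server idea concrete, the following is a minimal sketch of an entity-state relay that fans live and virtual entity updates out to every connected simulator. It assumes a hypothetical JSON-over-UDP message format and port; the Veldt's actual wire protocol, entity schema, and distribution architecture are not described in the abstract.

```python
# Minimal sketch of a central entity-state relay for distributed simulators.
# The message format and port below are illustrative assumptions, not the
# Veldt's actual protocol.
import json
import socket

SERVER_ADDR = ("0.0.0.0", 9999)    # assumed port, for illustration only

def run_relay():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(SERVER_ADDR)
    subscribers = set()             # every simulator/game engine seen so far
    world = {}                      # latest known state per entity id

    while True:
        data, addr = sock.recvfrom(4096)
        subscribers.add(addr)
        # Hypothetical update shape: {"id": "trainee-1", "pos": [x, y, z], "live": true}
        update = json.loads(data)
        world[update["id"]] = update
        # Rebroadcast so all distributed systems share one cohesive world.
        for peer in subscribers:
            if peer != addr:
                sock.sendto(data, peer)

if __name__ == "__main__":
    run_relay()
```

    A real system of this kind would add time-stamping, dead reckoning, and loss handling; the sketch only shows the fan-out pattern that keeps many simulators synchronized on shared entity state.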

    Evaluating the Microsoft HoloLens through an augmented reality assembly application

    Industry and academia have repeatedly demonstrated the transformative potential of Augmented Reality (AR) guided assembly instructions. In the past, however, computational and hardware limitations often dictated that these systems be deployed on tablets or other cumbersome devices. Tablets frequently impede worker progress by diverting a user's hands and attention, forcing workers to alternate between the instructions and the assembly task. Head Mounted Displays (HMDs) overcome those diversions by letting users view the instructions hands-free while simultaneously performing an assembly operation. Thanks to rapid technological advances, wireless commodity AR HMDs are becoming commercially available. In particular, the pioneering Microsoft HoloLens provides an opportunity to explore how well a hands-free HMD can deliver AR assembly instructions and what a user interface for such an application should look like. This exploration is necessary because it is not certain how previous research on user interfaces will transfer to the HoloLens or other new commodity HMDs. In addition, while new HMD technology is promising, its ability to deliver a robust AR assembly experience remains unknown. To assess the HoloLens' potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof-of-concept application. The prototype focused on three features: user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, areas such as tracking accuracy still require improvement before the device is ready for deployment in a factory assembly setting.
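    As a platform-agnostic illustration of the "dynamic 3D assembly instructions" and "spatially registered content placement" features, the sketch below walks through assembly steps and places each step's content relative to a tracked spatial anchor. The actual application was built in Unity 3D (C#); the step names, fields, and poses here are purely illustrative assumptions.

```python
# Minimal sketch of step-based AR assembly instruction logic. All step data
# and the anchor pose are hypothetical; the real prototype used Unity 3D.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    text: str      # instruction shown in the UI
    model: str     # 3D part model rendered for this step
    offset: tuple  # position relative to the spatial anchor, in meters

STEPS = [
    AssemblyStep("Insert bolt A into bracket", "bolt_a.obj", (0.00, 0.05, 0.10)),
    AssemblyStep("Attach side panel",          "panel.obj",  (0.12, 0.00, 0.00)),
]

class InstructionSession:
    """Advances through steps and derives each hologram's world-space pose
    from a spatial anchor so content stays registered to the physical part."""
    def __init__(self, anchor_pose):
        self.anchor = anchor_pose  # (x, y, z) of the tracked assembly fixture
        self.index = 0

    def current_placement(self):
        step = STEPS[self.index]
        ax, ay, az = self.anchor
        ox, oy, oz = step.offset
        return step, (ax + ox, ay + oy, az + oz)

    def advance(self):
        if self.index < len(STEPS) - 1:
            self.index += 1

# Example usage with an assumed anchor pose from the headset's tracking system.
session = InstructionSession(anchor_pose=(1.0, 0.8, 2.0))
step, world_pos = session.current_placement()
print(step.text, "-> render", step.model, "at", world_pos)
```

    Keeping placements relative to one anchor rather than in raw world coordinates is what lets registered content survive head movement and re-localization; the abstract's tracking-accuracy concerns are about how stable that anchor pose is in practice.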