4 research outputs found

    Mixed Reality Flood Visualizations: Reflections on Development and Usability of Current Systems

    Interest in and use of 3D visualizations for analysis and communication of flooding risks have been increasing. At the same time, an ecosystem of 3D user interfaces has also been emerging. Together, they offer exciting potential opportunities for flood visualization. To understand how to turn that potential into real value, we need to develop a better understanding of technical workflows, the capabilities of the resulting systems, their usability, and the implications for practice. Starting with existing geospatial datasets, we develop single-user and collaborative visualization prototypes that leverage the capabilities of the state-of-the-art HoloLens 2 mixed reality system. By using the 3D displays, positional tracking, spatial mapping, and hand- and eye-tracking, we seek to unpack the capabilities of these tools for meaningful spatial data practice. We reflect on the user experience, hardware performance, and usability of these tools, and discuss the implications of these technologies for flood risk management and broader spatial planning practice.
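The abstract does not detail how the prototypes render flood data, but a core step in such visualizations is mapping flood depth onto a colour ramp. The sketch below is an illustrative assumption, not the paper's method; the function name, ramp endpoints, and the 3 m saturation depth are all made up for the example.

```python
def depth_to_color(depth_m, max_depth_m=3.0):
    """Map a flood depth in metres onto a simple blue ramp:
    shallow water renders light, deep water fully saturated.
    max_depth_m (an assumed clamp, not from the paper) is the
    depth at which the ramp saturates. Returns (r, g, b) in 0-255."""
    t = max(0.0, min(1.0, depth_m / max_depth_m))  # clamp to [0, 1]
    return (int(200 * (1 - t)), int(220 * (1 - t)), 255)
```

In a mixed-reality prototype, a ramp like this would typically feed per-vertex colours of a water-surface mesh anchored to the spatially mapped terrain.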

    EXPLORING THE ABILITY TO EMPLOY VIRTUAL 3D ENTITIES OUTDOORS AT RANGES BEYOND 20 METERS

    The Army is procuring the Integrated Visual Augmentation System (IVAS) to enable enhanced night vision, planning, and training capabilities. One known limitation of the IVAS is its limited ability to portray virtual entities at far ranges outdoors, owing to light washout and to challenges with accurate positioning and dynamic occlusion. The primary goal of this research was to evaluate fixed three-dimensional (3D) visualizations to support outdoor training for fire teams through squads, requiring target visualizations of 3D non-player characters or vehicles at ranges up to 300 m. Tools employed to achieve the outdoor visualizations included GPS locational data for virtual entity placement and sensors to adjust device light levels. The study was conducted with 20 military test subjects in three scenarios at the Naval Postgraduate School using a HoloLens 2. Outdoor location considerations included shadows, background clutter, cars blocking the field of view, and the sun's position. Users provided feedback on identifying the type of object and on the difficulty of finding it. The results indicate that GPS placement alone aided identification only for objects up to 100 m. Animation had a statistically insignificant effect on identification of objects. Software adjustment of the virtual objects' light levels aided identification of objects at 200 m. This research develops a clearer understanding of the requirements for employing mixed reality in outdoor training.
    Lieutenant Colonel, United States Army. Approved for public release; distribution is unlimited.
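The thesis abstract describes placing virtual entities from GPS locational data but includes no code. A minimal sketch of that step, converting a target's GPS fix into local east/north offsets from the wearer's reference position, is given below; the equirectangular approximation and function names are illustrative assumptions, reasonable only at the sub-kilometre ranges (up to 300 m) used in the study.

```python
import math

def gps_to_local_offset(ref_lat, ref_lon, target_lat, target_lon):
    """Convert a target GPS fix into (east, north) offsets in metres
    from a reference point, using an equirectangular approximation
    that is adequate at sub-kilometre ranges."""
    R = 6371000.0  # mean Earth radius in metres
    d_lat = math.radians(target_lat - ref_lat)
    d_lon = math.radians(target_lon - ref_lon)
    east = R * d_lon * math.cos(math.radians(ref_lat))
    north = R * d_lat
    return east, north

def placement_range(east, north):
    """Straight-line ground range to the placed virtual entity."""
    return math.hypot(east, north)
```

Offsets like these would then be handed to the rendering engine as a local anchor position relative to the headset's tracked origin.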

    Modelling and Visualizing Holographic 3D Geographical Scenes with Timely Data Based on the HoloLens

    No full text
    Commonly, a three-dimensional (3D) geographic information system (GIS) is based on a two-dimensional (2D) visualization platform, which hinders the understanding and expression of the real world in 3D space and further limits user cognition of 3D geographic information. Mixed reality (MR) adopts 3D display technology, enabling users to recognize and understand a computer-generated world through 3D glasses rather than being restricted to the perspective of a 2D screen, and it has broad application prospects. However, there is a gap, especially for dynamic data, in modelling and visualizing a holographic 3D geographical scene with GIS data/information under the development mechanism of a mixed reality system (e.g., the Microsoft HoloLens). This paper proposes a design architecture (HoloDym3DGeoSce) to model and visualize holographic 3D geographical scenes with timely data based on mixed reality technology and the Microsoft HoloLens. HoloDym3DGeoSce comprises two modules: 3D geographic scene modelling with timely data, and HoloDym3DGeoSce interaction design. The modelling module dynamically creates 3D geographic scenes based on Web services, providing materials and content for the HoloDym3DGeoSce system. The interaction module includes two methods: human–computer physical interaction and human–computer virtual–real interaction. The physical interaction method provides an interface for users to interact with virtual geographic scenes. The virtual–real interaction method maps virtual geographic scenes onto physical space to achieve virtual–real fusion.
    Following the proposed architecture, OpenStreetMap data and the BingMap Server are used as experimental data to apply mixed reality technology to the modelling, rendering, and interaction of 3D geographic scenes, providing users with a stronger, more realistic 3D geographic information experience and more natural human–computer GIS interactions. The experimental results show the feasibility and practicability of the scheme, which has good prospects for further development.
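The paper's web-service modelling pipeline is not reproduced in the abstract, but a central step in building 3D scenes from OpenStreetMap data is extruding 2D building footprints into simple prisms. The sketch below shows that step under stated assumptions: the footprint format (a ring of planar metre coordinates) and the function name are hypothetical, not taken from HoloDym3DGeoSce.

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D building footprint (a list of (x, y) vertices in
    metres, given as a closed ring without a repeated endpoint) into a
    simple prism. Returns (vertices, side_quads): vertices holds the
    base ring followed by the roof ring; each quad indexes four
    vertices forming one wall face."""
    base = [(x, y, 0.0) for x, y in footprint]
    roof = [(x, y, float(height)) for x, y in footprint]
    n = len(footprint)
    # each wall connects base edge i -> i+1 with the roof edge above it
    quads = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return base + roof, quads
```

In a pipeline like the one described, footprints and height tags would come from parsed OpenStreetMap responses, and the resulting meshes would be textured from an imagery service before being anchored into the holographic scene.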