
    The Effects of Reference Frames on 3D Menus in Virtual Environments

    The emergence of affordable Head-Mounted Displays (HMDs) means Virtual Reality (VR) systems are now available to wider audiences. Beyond the key target audience of gamers, groups as diverse as the oil and gas industries, medicine, the military, entertainment, and education have created demand for effective Virtual Environments (VEs). To be effective, certain VEs need to properly convey textual information, which is done using 3D menus. It is important that these menus are displayed in an ergonomic manner and do not obstruct important content. The study collected measures of user experience, comfort, and memory recall. It found that reference frames for 3D menus presenting textual information do not influence user experience or memory recall. However, there was a significant difference in user behavior between the reference frames, which has implications for repetitive stress injury.

    Emerging technologies for learning report (volume 3)


    Key functions in BIM-based AR platforms

    The integration of Augmented Reality and Building Information Modelling is a promising area of research; however, fragmentation in the literature hinders the development of mature BIM-based AR platforms. This paper aims to minimise that fragmentation by identifying the key functions that represent the essential capabilities of BIM-AR platforms. A systematic literature review is employed to identify, categorise, and discuss these key functions. The paper identifies six: positioning (P), interaction (I), visualisation (V), collaboration (C), automation (A), and integration (T). These key functions act as the foundation for an evaluation framework that can assist practitioners, developers, and researchers in assessing the requirements of a targeted application area, and hence in choosing appropriate devices, software, and techniques. Finally, the paper emphasises the importance of industrial-academic collaboration in BIM-AR research and suggests prospects for automation through the application of artificial intelligence.
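The six key functions named in the abstract could seed a simple requirements checklist. The sketch below is purely illustrative, assuming a minimal coverage metric that is not described in the paper itself; only the P/I/V/C/A/T labels come from the source.

```python
# The six key functions identified by the review (labels from the abstract).
KEY_FUNCTIONS = {
    "P": "positioning",
    "I": "interaction",
    "V": "visualisation",
    "C": "collaboration",
    "A": "automation",
    "T": "integration",
}

def coverage(platform_functions):
    """Fraction of the six key functions a BIM-AR platform supports.

    A hypothetical evaluation metric, not one defined in the paper.
    """
    supported = set(platform_functions) & set(KEY_FUNCTIONS)
    return len(supported) / len(KEY_FUNCTIONS)

# A platform offering positioning, visualisation, and interaction:
print(coverage(["P", "V", "I"]))  # → 0.5
```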

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany the different elements. Training organized on ranges for urban warfighting brings an additional level of complexity—the high level of occlusion typical for these environments multiplies the number of evaluators needed. While the units have great need for such training opportunities, they may not have the necessary human resources to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges, and the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that need to be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically-controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze, recognize events, evaluate performances, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources, while having a more quantitative account of their performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). The instructors will have immediate feedback on some elements of the unit’s performance. 
Having data sets for multiple units will enable historical trend analysis, thus providing new insights and benefits for the entire service. Office of Naval Research.
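One of the quantitative measures the abstract mentions, dispersion across the terrain, could be computed from recorded position data roughly as follows. This is a minimal sketch under stated assumptions: the data layout (one (x, y) coordinate per warfighter at a given time step) and the metric definition (mean distance from the unit centroid) are illustrative choices, not taken from the paper.

```python
import math

def terrain_dispersion(positions):
    """Mean distance of unit members from the unit centroid, in metres.

    `positions` is a list of (x, y) coordinates for one time step --
    a hypothetical simplification of the recorded pose data.
    """
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / n

# Four warfighters at the corners of a 10 m square:
squad = [(0, 0), (10, 0), (0, 10), (10, 10)]
print(round(terrain_dispersion(squad), 2))  # → 7.07
```

Tracking this value over an exercise, and across exercises, is the kind of signal that would feed the historical trend analysis described above.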

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, research focused on innovative solutions to real-world problems in a number of disciplines including geo-location, mobile spatial interaction, location-based services, rich media interfaces, and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filters data based on field of view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual, or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces, and, more notably, auditory user interfaces. 
It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
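The trilateration mentioned above can be sketched in its simplest 2-D, noise-free form: subtracting the first beacon's range equation from the other two linearises the problem into a 2x2 system. The beacon layout and ranges below are illustrative; the thesis's hybrid positioning system is certainly more involved (noise, more beacons, least squares).

```python
def trilaterate(beacons, ranges):
    """2-D trilateration from three beacon positions and measured ranges.

    Subtracting the first range equation (x-x1)^2 + (y-y1)^2 = r1^2
    from the other two cancels the quadratic terms, leaving a linear
    2x2 system A @ [x, y] = b solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

beacons = [(0, 0), (100, 0), (0, 100)]
# Ranges as measured from the true position (30, 40):
ranges = [50.0, 6500 ** 0.5, 4500 ** 0.5]
print(trilaterate(beacons, ranges))  # ≈ (30.0, 40.0)
```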

    View management for virtual and augmented reality

    Get PDF