    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with substantial functional overlap, yet little interoperability between them. The problem domain associated with a shared virtual environment is highly complex, raising difficult challenges throughout the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than only the parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
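
    As a loose illustration only (the paper's actual Analysis Domain Model is not reproduced here), a reference domain model for shared virtual environments might capture entities such as worlds and avatars and the relationships between them; all names and fields in the sketch below are hypothetical.

        # Hypothetical sketch of a few entities a shared-virtual-environment
        # domain model might capture; names and fields are illustrative only.
        from dataclasses import dataclass, field
        from typing import Dict, Tuple

        @dataclass
        class Avatar:
            """A user's embodiment inside a virtual world."""
            user_id: str
            position: Tuple[float, float, float]  # (x, y, z) world coordinates

        @dataclass
        class World:
            """A shared, persistent 3D space that avatars inhabit."""
            name: str
            avatars: Dict[str, Avatar] = field(default_factory=dict)

            def join(self, avatar: Avatar) -> None:
                # Joining a world is a natural interoperability point: two
                # systems agreeing on this operation could share worlds.
                self.avatars[avatar.user_id] = avatar

        plaza = World(name="plaza")
        plaza.join(Avatar(user_id="u1", position=(0.0, 0.0, 0.0)))
        print(sorted(plaza.avatars))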

    TechNews digests: Jan - Nov 2009

    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. This collection covers TechNews digests from September 2004 to May 2010; combined analysis pieces and news items were published every two to three months.

    Right-lateralised lane keeping in young and older British drivers

    Young adults demonstrate a small, but consistent, asymmetry of spatial attention favouring the left side of space ("pseudoneglect") in laboratory-based tests of perception. Conversely, in more naturalistic environments, behavioural errors towards the right side of space are often observed. In the older population, spatial attention asymmetries are generally diminished, or even reversed to favour the right side of space, but much of this evidence has been gained from lab-based and/or psychophysical testing. In this study we assessed, first, whether spatial biases can be elicited during a simulated driving task and, second, whether these biases shift with age, in line with standard lab-based measures. Data from 77 right-handed adults with full UK driving licences (i.e. prior experience of left-lane driving) were analysed: 38 young (mean age = 21.53) and 39 older adults (mean age = 70.38). Each participant undertook three tests of visuospatial attention: the landmark task, the line bisection task, and a simulated lane-keeping task. We found leftward biases in young adults for the landmark and line bisection tasks, indicative of pseudoneglect, and a mean lane position towards the right of centre. In young adults the leftward landmark task biases were negatively correlated with rightward lane-keeping biases, hinting that a common property of the spatial attention networks may have influenced both tasks. As predicted, older adults showed no group-level spatial asymmetry on either the landmark or the line bisection task, but they maintained a mean rightward lane position, similar to young adults. The three tasks were not inter-correlated in the older group. These results suggest that spatial biases in older adults may be elicited more effectively in experiments involving complex behaviour rather than abstract, lab-based measures. More broadly, these results confirm that lateral biases of spatial attention are linked to driving behaviour, which could prove informative in the development of future vehicle safety and driving technology.
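
    As an illustration of the kind of analysis reported (not the authors' code; all numbers below are invented), a lateral bias can be scored as a signed deviation from centre, with negative values leftward, and the relationship between tasks tested with a Pearson correlation:

        # Illustrative only: signed lateral-bias scores (negative = leftward)
        # for line bisection and lane keeping, plus their Pearson correlation.
        # All numbers are fabricated for demonstration, not study data.
        import numpy as np
        from scipy.stats import pearsonr

        # Bisection bias: (marked midpoint - true midpoint) / line length.
        marks = np.array([49.0, 48.5, 49.2, 50.1, 48.8])  # mm, hypothetical
        true_mid, line_len = 50.0, 100.0
        bisection_bias = (marks - true_mid) / line_len

        # Lane bias: mean lateral offset from lane centre (positive = right).
        lane_bias = np.array([0.12, 0.18, 0.10, 0.05, 0.15])  # m, hypothetical

        r, p = pearsonr(bisection_bias, lane_bias)
        print(f"r = {r:.2f}, p = {p:.3f}")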

    XR, music and neurodiversity: design and application of new mixed reality technologies that facilitate musical intervention for children with autism spectrum conditions

    This thesis, accompanied by the practice outputs, investigates sensory integration, social interaction and creativity through a newly developed VR musical interface designed exclusively for children with a high-functioning autism spectrum condition (ASC). The results aim to contribute to the limited body of literature and research surrounding Virtual Reality (VR) musical interventions and Immersive Virtual Environments (IVEs) designed to support individuals with neurodevelopmental conditions. The author has developed bespoke hardware, software and a new methodology to conduct field investigations. These outputs include a Virtual Immersive Musical Reality Intervention (ViMRI) protocol, a Supplemental Personalised immersive Musical Experience (SPiME) programme, the Assisted Real-time Three-dimensional Immersive Musical Intervention System (ARTIMIS) and a bespoke, fully configurable Creative immersive interactive Musical Software application (CiiMS). The outputs are each implemented within a series of institutional investigations with 18 autistic child participants. Four groups are evaluated using newly developed virtual assessment and scoring mechanisms devised exclusively from long-established rating scales. Key quantitative indicators from the datasets demonstrate consistent findings and significant improvements for individual preferences (likes), fear-reduction efficacy, and social interaction. Six individual case studies present positive qualitative results demonstrating improved decision-making and sensorimotor processing. The preliminary research trials further indicate that using this virtual-reality music technology system and the newly developed protocols produces notable improvements for participants with an ASC. More significantly, there is evidence that the supplemental technology facilitates a reduction in psychological anxiety and improvements in dexterity. The virtual music composition and improvisation system presented here requires further extensive testing in different spheres for proof of concept.

    Cubic-panorama image dataset analysis for storage and transmission

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction, particularly in situations where there is a need to visualize, explore and perform tasks in environments that are inaccessible, hazardous or distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfies requirements on speed, throughput and visual realism. Particularly when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the centre and falls off towards the periphery, where lower-resolution peripheral vision guides eye movements so that central vision visits the interesting, crucial parts of a scene. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering and VR rendering of 3D reconstructed scenes while reducing throughput requirements and latency. As a second contribution, the thesis investigates attentional mechanisms that select and draw user engagement to specific information in a dynamic spatio-temporal environment. It proposes a strategy that analyses the remote scene in terms of its 3D structure, its layout, and the spatial, functional and semantic relationships between objects, using models of human visual perception; the strategy allocates a greater proportion of computational resources to objects of interest and creates a more realistic visualization. As a supplementary contribution, a new volumetric, point-cloud-density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies and experiments demonstrate that the introduced methods are visually superior while significantly reducing latency and throughput.
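
    As a minimal sketch of how an acuity fall-off can reduce throughput (the inverse-linear acuity model and the E2 constant are assumptions for illustration, not the thesis's implementation), peripheral points can be subsampled more aggressively than foveal ones before transmission:

        # Minimal sketch: eccentricity-based point-cloud subsampling.
        # The acuity model (inverse-linear fall-off with constant E2_DEG) and
        # all parameters are illustrative assumptions, not the thesis's method.
        import numpy as np

        E2_DEG = 2.3  # hypothetical eccentricity at which acuity halves

        def keep_probability(eccentricity_deg: np.ndarray) -> np.ndarray:
            """Probability of keeping a point, proportional to modelled acuity."""
            return 1.0 / (1.0 + eccentricity_deg / E2_DEG)

        def foveated_subsample(points: np.ndarray, gaze_dir: np.ndarray,
                               rng: np.random.Generator) -> np.ndarray:
            """Keep foveal points densely and peripheral points sparsely."""
            dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
            gaze = gaze_dir / np.linalg.norm(gaze_dir)
            ecc = np.degrees(np.arccos(np.clip(dirs @ gaze, -1.0, 1.0)))
            return points[rng.random(len(points)) < keep_probability(ecc)]

        rng = np.random.default_rng(0)
        cloud = rng.normal(size=(10_000, 3)) + np.array([0.0, 0.0, 5.0])
        reduced = foveated_subsample(cloud, np.array([0.0, 0.0, 1.0]), rng)
        print(f"kept {len(reduced)} of {len(cloud)} points")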

    Digital Cognitive Companions for Marine Vessels: On the Path Towards Autonomous Ships

    As in the automotive industry, industry and academia are making extensive efforts to create autonomous ships. The solutions are very technology-intensive: many building blocks, often relying on AI technology, need to work together to create a complete system that is safe and reliable to use. Even when ships are fully unmanned, humans are still expected to guide them when unknown situations arise, through teleoperation systems. In this thesis, methods are presented to enhance two building blocks that are important for autonomous ships: a positioning system and a system for teleoperation. The positioning system is constructed not to rely on the Global Positioning System (GPS), as GPS can be jammed or spoofed. Instead, it uses Bayesian calculations to compare bottom-depth and magnetic-field measurements with known sea charts and magnetic-field maps in order to estimate the position. State-of-the-art techniques for this method typically use high-resolution maps, but hardly any high-resolution terrain maps are available in the world. Hence we present a method using standard sea charts, compensating for their lower accuracy by fusing other domains, such as magnetic-field intensity and bearings to landmarks. Using data from a field trial, we show that the fusion method using multiple domains is more robust than using only one domain. For the second building block, we first investigated how 3D and VR approaches could support the remote operation of unmanned ships over a data connection with low throughput, by comparing the respective graphical user interfaces (GUIs) with a baseline GUI following the interfaces currently applied in such contexts. Our findings show that both the 3D and VR approaches significantly outperform the traditional approach: 3D-GUI and VR-GUI users were better at reacting to potentially dangerous situations than baseline-GUI users, and they could keep track of the surroundings more accurately. Building on this, we conducted a teleoperation user study using real-world data from a field trial in the archipelago, in which users were asked to assist the positioning system with bearings to landmarks. The users reported that the tool gave a good overview and, despite the low-throughput connection, they managed through the GUI to significantly improve the positioning accuracy.
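
    As a hedged sketch of the kind of Bayesian map-matching described (a generic particle-filter measurement update over stand-in depth and magnetic-field maps, not the thesis's implementation; all numbers are hypothetical):

        # Illustrative particle-filter measurement update fusing two map
        # domains (bottom depth and magnetic-field intensity). Map lookups,
        # noise levels and all numbers are hypothetical assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        particles = rng.uniform(0.0, 1000.0, size=(500, 2))  # candidate positions (m)
        weights = np.full(len(particles), 1.0 / len(particles))

        def depth_map(xy):   # stand-in for a sea-chart depth lookup
            return 20.0 + 0.01 * xy[:, 0]

        def mag_map(xy):     # stand-in for a magnetic-anomaly-map lookup
            return 50_000.0 + 0.5 * xy[:, 1]

        def gaussian_likelihood(predicted, measured, sigma):
            return np.exp(-0.5 * ((predicted - measured) / sigma) ** 2)

        measured_depth, measured_mag = 23.0, 50_200.0  # hypothetical readings
        weights *= gaussian_likelihood(depth_map(particles), measured_depth, sigma=1.0)
        weights *= gaussian_likelihood(mag_map(particles), measured_mag, sigma=50.0)
        weights /= weights.sum()

        estimate = weights @ particles  # weighted-mean position estimate
        print(f"estimated position: {estimate}")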

    System Design and Analysis for Creating a 3D Virtual Street Scene for Autonomous Vehicles using Geometric Proxies from a Single Video Camera

    Self-driving vehicles use a variety of sensors to understand the environment they are in. To do so, they must accurately measure the distances and positions of the objects around them. A common representation of the environment around the vehicle is a 3D point cloud: a set of 3D data points representing the positions of real-world objects relative to the car. While accurate and useful, point clouds require large amounts of memory compared to other representations such as lightweight polygonal meshes, and they can be difficult for a human to understand visually, as the data points do not always form a naturally coherent object. This paper introduces a system that lowers the memory consumption needed for the graphical representation of a virtual street environment. At this time, the proposed system takes a single front-facing video as input. The system uses the video to retrieve still images of a scene, which are then segmented to distinguish the relevant objects, such as cars and stop signs. The system generates a corresponding virtual street scene in which these key objects are visualized as low-poly, or low-resolution, models of the respective objects. This virtual 3D street environment is created to allow a remote operator to visualize the world the car is traveling through. At this time, the virtual street includes geometric proxies for parallel-parked cars in the form of lightweight polygonal meshes. These meshes are predefined, taking up less memory than a point cloud, which can be costly to transmit from the remote vehicle and potentially difficult for a remote human operator to understand. This paper contributes a design and analysis of an initial system for generating and placing these geometric proxies of parked cars in a virtual street environment from one input video. We discuss the limitations and measure the error of this system, as well as reflect on future improvements.
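
    As an illustrative sketch of the proxy-placement step (the detection input, flat-ground camera geometry and mesh name are hypothetical stand-ins, not the paper's pipeline), a detected car's ground-contact point in the image can be back-projected onto the road plane and a predefined low-poly mesh placed there:

        # Illustrative sketch: place a predefined low-poly proxy mesh for
        # each detected parked car. Detection results, the flat-ground
        # projection and the mesh catalogue are hypothetical stand-ins.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Detection:
            label: str
            u: float         # horizontal image coordinate of box centre (px)
            bottom_v: float  # image row of the box's bottom edge (px)

        @dataclass
        class ProxyPlacement:
            mesh: str  # name of a predefined low-poly mesh
            x: float   # lateral offset from camera (m)
            z: float   # forward distance from camera (m)

        def place_proxies(dets: List[Detection], fx: float = 1000.0,
                          cam_height: float = 1.5, cy: float = 360.0,
                          cx: float = 640.0) -> List[ProxyPlacement]:
            """Back-project each box's ground contact point onto a flat road."""
            placements = []
            for d in dets:
                if d.label != "car" or d.bottom_v <= cy:
                    continue  # skip non-cars and boxes above the horizon
                z = fx * cam_height / (d.bottom_v - cy)  # flat-ground depth
                x = (d.u - cx) * z / fx
                placements.append(ProxyPlacement(mesh="lowpoly_car", x=x, z=z))
            return placements

        print(place_proxies([Detection("car", u=900.0, bottom_v=500.0)]))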