
    I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR

    This paper explores the use of VR Head-Mounted Displays (HMDs) in-car and in-motion for the first time. Immersive HMDs are becoming everyday consumer items and, as they offer new possibilities for entertainment and productivity, people will want to use them during travel in, for example, autonomous cars. However, their use is confounded by motion sickness, caused in part by the restricted visual perception of motion conflicting with physically perceived vehicle motion (accelerations/rotations detected by the vestibular system). Whilst VR HMDs restrict visual perception of motion, they could also render it virtually, potentially alleviating sensory conflict. To study this problem, we conducted the first on-road, in-motion study to systematically investigate the effects of various visual presentations of the real-world motion of a car on the sickness and immersion of VR HMD-wearing passengers. We established new baselines for in-car VR motion sickness, and found that there is no single best presentation with respect to balancing sickness and immersion. Instead, user preferences suggest that different solutions are required for differently susceptible users to make in-car VR usable. This work provides formative insights for VR designers and an entry point for further research into enabling the use of VR HMDs, and the rich experiences they offer, when travelling.
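
    As a rough illustration of the idea of rendering the car's physically perceived motion inside the headset, the sketch below feeds vehicle IMU samples into values a renderer could use. It is a minimal, assumption-laden sketch (the class and field names are invented), not the authors' system.

```python
# Hypothetical sketch: turning a car's IMU readings into virtual-motion cues
# so that rendered motion matches the vehicle motion sensed vestibularly.
from dataclasses import dataclass

@dataclass
class ImuSample:
    yaw_rate: float   # rad/s from the vehicle gyroscope
    accel_x: float    # m/s^2 longitudinal acceleration
    dt: float         # seconds since the previous sample

class VirtualMotionCue:
    """Accumulates vehicle rotation and exposes values a renderer could use
    to rotate the virtual horizon or drive a peripheral optic-flow cue."""

    def __init__(self) -> None:
        self.virtual_yaw = 0.0   # radians
        self.flow_speed = 0.0    # arbitrary optic-flow gain

    def update(self, s: ImuSample) -> None:
        # Rotate the virtual world by the same amount the car rotated.
        self.virtual_yaw += s.yaw_rate * s.dt
        # Scale a peripheral optic-flow cue with longitudinal acceleration.
        self.flow_speed = max(0.0, self.flow_speed + s.accel_x * s.dt)

cue = VirtualMotionCue()
cue.update(ImuSample(yaw_rate=0.1, accel_x=0.5, dt=0.02))
print(cue.virtual_yaw, cue.flow_speed)
```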

    EVEN-VE: Eyes Visibility Based Egocentric Navigation for Virtual Environments

    Navigation is one of the 3D interactions often needed to interact with a synthetic world. The latest advancements in image processing have made gesture-based interaction with a virtual world possible. However, a 3D virtual world can respond to a user's gesture far faster than the gesture itself can be posed. To incorporate faster and more natural postures in the realm of the Virtual Environment (VE), this paper presents a novel eyes-based interaction technique for navigation and panning. Dynamic wavering and positioning of the eyes are interpreted by the system as interaction instructions. Opening the eyes after they have been closed for a distinct time threshold activates forward or backward navigation. Panning over the xy-plane is performed using two-degree-of-freedom head gestures (rolling and pitching). The proposed technique was implemented in a case-study project, EWI (Eyes Wavering based Interaction). In EWI, real-time detection and tracking of the eyes are performed with OpenCV libraries at the backend, and dynamic mapping in OpenGL is used to interactively follow the trajectory of both eyes. The technique was evaluated in two separate sessions by a total of 28 users to assess the accuracy, speed, and suitability of the system in Virtual Reality (VR). Using an ordinary camera, an average accuracy of 91% was achieved, while an assessment with a high-quality camera showed that the accuracy of the system could be raised further, alongside an increase in navigation speed. Results of the unbiased statistical evaluations demonstrate the applicability of the system in the emerging domains of virtual and augmented reality.
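
    The blink-to-navigate rule described above can be pictured as a small state machine. The sketch below is an assumed reconstruction, not the EWI source: a per-frame eyes_open flag (which could come from an OpenCV Haar-cascade eye detector) produces a navigation command once the eyes reopen after a closure longer than a threshold.

```python
# Assumed sketch of the time-thresholded eye-closure rule from the abstract.
from typing import Optional

class EyeNavigation:
    def __init__(self, hold_threshold_s: float = 0.8) -> None:
        self.hold_threshold_s = hold_threshold_s
        self.closed_since: Optional[float] = None  # when the eyes closed

    def update(self, eyes_open: bool, now: float) -> Optional[str]:
        """Return 'navigate' when the eyes reopen after being closed longer
        than the threshold; otherwise return None."""
        if not eyes_open:
            if self.closed_since is None:
                self.closed_since = now
            return None
        command = None
        if self.closed_since is not None:
            if now - self.closed_since >= self.hold_threshold_s:
                command = "navigate"  # forward vs. backward decided elsewhere
            self.closed_since = None
        return command

nav = EyeNavigation()
nav.update(eyes_open=False, now=0.0)
print(nav.update(eyes_open=True, now=1.0))  # -> navigate
```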

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant actions in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial-expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
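
    The mapping of real-world motion onto the avatar can be pictured as a per-frame copy of tracked Kinect joints onto avatar bones. The snippet below is purely illustrative; the joint names follow the Kinect convention, while the Avatar class and its set_bone_position method are stand-ins, not the study's actual pipeline.

```python
# Illustrative sketch only: mirroring a few tracked Kinect joints onto an
# avatar each frame so it reproduces the participant's real-world movement.
class Avatar:
    """Stand-in for whatever bone-control API the virtual world exposes."""
    def set_bone_position(self, bone: str, position: tuple) -> None:
        print(f"move {bone} to {position}")

KINECT_TO_AVATAR = {
    "HandRight": "right_hand",
    "HandLeft": "left_hand",
    "Head": "head",
}

def apply_skeleton(avatar: Avatar, kinect_joints: dict) -> None:
    """Copy each tracked Kinect joint position onto the matching avatar bone."""
    for joint, bone in KINECT_TO_AVATAR.items():
        position = kinect_joints.get(joint)
        if position is not None:
            avatar.set_bone_position(bone, position)

apply_skeleton(Avatar(), {"Head": (0.0, 1.7, 0.2), "HandRight": (0.4, 1.1, 0.3)})
```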

    A new approach to study gait impairments in Parkinson’s disease based on mixed reality

    Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Parkinson's disease (PD) is the second most common neurodegenerative disorder after Alzheimer's disease. PD onset occurs at 55 years of age on average, and its incidence increases with age. The disease results from the degeneration of dopamine-producing neurons in the basal ganglia and is characterized by various motor symptoms such as freezing of gait, bradykinesia, hypokinesia, akinesia, and rigidity, which negatively impact patients' quality of life. To monitor and improve these PD-related gait disabilities, several technology-based methods have emerged in the last decades. However, these solutions still require more customization to patients' daily living tasks in order to provide more objective, reliable, and long-term data about patients' motor condition in home-related contexts. Providing this quantitative data to physicians will ensure more personalized and better treatments. Also, motor rehabilitation sessions fostered by assistive devices require the inclusion of quotidian tasks to train patients for their daily motor challenges. One of the most promising technology-based methods is virtual, augmented, and mixed reality (VR/AR/MR), which immerses patients in virtual environments and provides sensory stimuli (cues) to assist with these disabilities. However, further research is needed to improve and conceptualize efficient and patient-centred VR/AR/MR approaches and to increase their clinical evidence. Bearing this in mind, the main goal of this dissertation was to design, develop, test, and validate virtual environments to assess and train PD-related gait impairments using mixed-reality smart glasses integrated with another high-technology motion-tracking device. Using specific virtual environments that trigger PD-related gait impairments (turning, doorways, and narrow spaces), it is hypothesized that patients can be assessed and trained on their daily challenges related to walking. The tool also integrates on-demand visual cues to provide visual biofeedback and foster motor training. The solution was validated with end-users to test the identified hypothesis. The results showed that mixed reality does have the potential to recreate real-life environments that often provoke PD-related gait disabilities, by placing virtual objects on top of the real world. In contrast, the biofeedback strategies did not significantly improve the patients' motor performance. The user-experience evaluation showed that participants enjoyed participating in the activity and felt that this tool can help their motor performance.
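
    One way to picture the on-demand visual cueing described above is a simple rule that shows a cue when gait parameters from the motion-tracking device suggest a freezing risk. The sketch below is a hedged illustration only; the thresholds and parameter names are assumptions, not values from the dissertation.

```python
# Assumed sketch: decide when to display a visual cue in the smart glasses
# based on gait parameters streamed from a motion-tracking device.
def should_show_cue(step_length_m: float, cadence_steps_min: float) -> bool:
    """Show a visual cue when gait slows enough to suggest freezing risk."""
    STEP_LENGTH_THRESHOLD = 0.35   # metres, assumed value
    CADENCE_THRESHOLD = 80.0       # steps per minute, assumed value
    return step_length_m < STEP_LENGTH_THRESHOLD or cadence_steps_min < CADENCE_THRESHOLD

print(should_show_cue(step_length_m=0.30, cadence_steps_min=95.0))  # True
```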

    Comparative Analysis of Change Blindness in Virtual Reality and Augmented Reality Environments

    Change blindness is a phenomenon in which an individual fails to notice alterations in a visual scene when a change occurs during a brief interruption or distraction. Understanding this phenomenon is particularly important for techniques that rely on visual stimuli, such as Virtual Reality (VR) or Augmented Reality (AR). Previous research has primarily focused on 2D environments or conducted limited controlled experiments in 3D immersive environments. In this paper, we design and conduct two formal user experiments to investigate the effects of different visual attention-disrupting conditions (Flickering and Head-Turning) and object alteration conditions (Removal, Color Alteration, and Size Alteration) on change blindness detection in VR and AR environments. Our results reveal that participants detected changes more quickly and had a higher detection rate with Flickering compared to Head-Turning. Furthermore, they spent less time detecting changes when an object disappeared compared to changes in color or size. Additionally, we provide a comparison of the results between the VR and AR environments.
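
    The Flickering condition follows the classic flicker paradigm, in which the change is applied during a blank interval so it cannot be spotted from motion transients. The sketch below shows the general scheduling idea; the timings are assumptions rather than the paper's parameters.

```python
# Conceptual sketch of a flicker schedule: original scene, blank, modified
# scene, blank, repeated until the participant reports the change.
import itertools

def flicker_schedule(display_ms: int = 240, blank_ms: int = 80):
    """Yield (frame_name, duration_ms) pairs for the flicker cycle, forever."""
    cycle = [("original", display_ms), ("blank", blank_ms),
             ("modified", display_ms), ("blank", blank_ms)]
    return itertools.cycle(cycle)

for frame, duration in itertools.islice(flicker_schedule(), 4):
    print(frame, duration)
```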

    Phenomenal regression to the real object in physical and virtual worlds

    © 2014, Springer-Verlag London. In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is individual in its extent and comparable across the particular physical and virtual settings we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants know it is likely to occur. In fact, in the real world, the error only reduces as the available cues to depth are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth-compression phenomena.
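
    For context, the 1930s effect referred to here is Thouless's phenomenal regression to the real object, commonly quantified with the Thouless ratio. A common formulation (given for orientation only, not taken from this paper) is:

```latex
% Thouless ratio: values near 0 indicate a purely projective (perspective)
% match, values near 1 a match to the real object.
\[
  TR = \frac{\log P - \log S}{\log R - \log S}
\]
% P = size selected by the participant, S = size predicted by perfect
% perspective projection, R = the real (physical) size of the object.
```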

    Three levels of metric for evaluating wayfinding

    Three levels of virtual environment (VE) metric are proposed, based on: (1) users’ task performance (time taken, distance traveled, and number of errors made), (2) physical behavior (locomotion, looking around, and time and error classification), and (3) decision-making (i.e., cognitive) rationale (think-aloud, interview, and questionnaire). Examples of the use of these metrics are drawn from a detailed review of research into VE wayfinding. A case study from research into the fidelity required for efficient VE wayfinding is presented, showing the unsuitability in some circumstances of common task-performance metrics such as time and distance, and the benefits to be gained by making fine-grained analyses of users’ behavior. Taken as a whole, the article highlights the range of techniques that have been successfully used to evaluate wayfinding and explains in detail how some of these techniques may be applied.
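
    For concreteness, the first level of metric can be captured with a record like the one below; the field names are assumptions chosen to mirror the measures listed in the abstract.

```python
# Illustrative record for level-1 (task performance) wayfinding metrics.
from dataclasses import dataclass

@dataclass
class TaskPerformance:
    time_taken_s: float          # total time to reach the target
    distance_traveled_m: float   # path length actually walked
    errors_made: int             # e.g. wrong turns

trial = TaskPerformance(time_taken_s=182.0, distance_traveled_m=240.5, errors_made=3)
print(trial)
```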

    Eye tracking in maritime immersive safe oceans technology

    This paper presents the integration of eye tracking into the MarSEVR (Maritime Safety Education with VR) technology, to increase the precision with which trainee focus is measured and to deliver the technology's learning episodes with greater impact and user engagement. MarSEVR is part of the Safe Oceans concept, a green ocean technology that integrates several VR safety-training applications to reduce maritime accidents that result in human casualties, sea pollution, and other environmental damage. The paper describes the research delivery architecture, driven by Hevner's design science in information systems research, covering usability, user experience (UX), and effectiveness. Furthermore, this technology integration is approached from a game-design perspective for user engagement, as well as from cognitive and neuroscience perspectives for pedagogical use and purposes. The paper addresses the impact of eye-tracking technology on maritime sector operations, the training market, and competitive research. Lastly, areas of further research are presented, along with efforts to link and align finger-tracking and hand-recognition technologies with eye tracking for a more complete VR training environment.
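
    One simple way eye tracking can quantify trainee focus during a learning episode is to aggregate gaze samples into per-object dwell times. The sketch below is an assumed illustration, not MarSEVR code; the sample format is invented.

```python
# Assumed sketch: accumulate gaze dwell time per scene object of interest.
from collections import defaultdict

def dwell_times(gaze_samples):
    """gaze_samples: iterable of (object_id, duration_s) pairs, where
    object_id is the scene object the gaze ray hit for that sample."""
    totals = defaultdict(float)
    for object_id, duration_s in gaze_samples:
        if object_id is not None:        # None = gaze hit nothing of interest
            totals[object_id] += duration_s
    return dict(totals)

print(dwell_times([("fire_extinguisher", 0.4), ("exit_sign", 0.2),
                   ("fire_extinguisher", 0.6), (None, 0.3)]))
```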

    Defining Interaction within Immersive Virtual Environments

    PhD thesis. This thesis is concerned with the design of Virtual Environments (VEs), in particular with the tools and techniques used to describe interesting and useful environments. This concern is not only with respect to the appearance of objects in the VE but also with their behaviours and their reactions to the actions of participants. The main research hypothesis is that there are several advantages to constructing these interactions and behaviours whilst remaining immersed within the VE they describe. These advantages include the fact that editing is done interactively, with immediate effect, and without having to resort to the usual edit-compile-test cycle. This means that the participant does not have to leave the VE and lose their sense of presence within it, and editing tasks can take advantage of the enhanced spatial cognition and naturalistic interaction metaphors a VE provides. To this end, a data-flow dialogue architecture with an immersive virtual environment presentation system was designed and built. The data flow consists of streams of data that originate at sensors registering the body state of the participant and flow through filters that modify the streams and affect the VE. The requirements for such a system, and the filters it should contain, are derived from two pieces of work on interaction metaphors: one based on a desktop system using a novel input device, and the second a navigation technique for an immersive system. The analysis of these metaphors highlighted particular tasks that such a virtual environment dialogue architecture (VEDA) system might be used to solve, and illustrated the scope of interactions that should be accommodated. Initial evaluation of the VEDA system is provided by moderately sized demonstration environments and tools constructed by the author. Further evaluation is provided by an in-depth study in which three novice VE designers were invited to construct VEs with the VEDA system. This highlighted the flexibility of the VEDA approach and the utility of the immersive presentation over traditional techniques, in that it allows the participant to use more natural and expressive techniques in the construction process. In other words, the evaluation shows how the immersive facilities of VEs can be exploited in the process of constructing further VEs.
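
    The data-flow idea at the heart of VEDA, sensor streams passing through filters before affecting the VE, can be pictured as a small pipeline. The code below is an illustrative reconstruction under that description only; the filters shown are invented.

```python
# Illustrative data-flow pipeline: sensor samples pass through a chain of
# filters, and the filtered value is what would be applied to the VE.
from typing import Callable, Iterable

Filter = Callable[[float], float]

def run_pipeline(sensor_stream: Iterable[float], filters: list) -> Iterable[float]:
    """Push each sensor sample through every filter in order and yield the
    value that would drive the virtual environment."""
    for sample in sensor_stream:
        for f in filters:
            sample = f(sample)
        yield sample

scale = lambda x: x * 2.0                     # e.g. amplify a head-rotation reading
clamp = lambda x: max(-1.0, min(1.0, x))      # keep the value in a safe range
print(list(run_pipeline([0.2, 0.9, -1.5], [scale, clamp])))
```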