
    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, with a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use: we systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.

    A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories

    There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rise in popularity and market expansion of VR technology in the past few years, a variety of consumer VR electronics have boosted educators' and researchers' interest in using these devices for practicing engineering and science laboratory experiments. However, little is known about how well such devices are suited to active learning in a laboratory environment. This research aims to address this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches their design and capability requirements for their virtual laboratory blueprint. Furthermore, a use case of the framework is demonstrated by surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), and by considering the interaction techniques each VR device supports for the desired laboratory task. To validate the framework, a research study is carried out to compare these five VR devices and investigate which provides the overall best fit for a 3D virtual laboratory we implemented, based on interaction level, usability, and performance effectiveness.
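The selection step in such a utility framework can be illustrated with a simple weighted-sum score over capability criteria. A minimal sketch; the criteria, weights, and device scores below are invented for illustration, not taken from the study:

```python
# Hypothetical weighted-sum utility for ranking VR devices against
# a virtual-laboratory requirement profile. All numbers are illustrative.

def utility(scores, weights):
    """Weighted-sum utility: each criterion score in [0, 1] times its weight."""
    assert set(scores) == set(weights), "criteria must match"
    return sum(scores[c] * weights[c] for c in scores)

# Weights encode the lab designer's priorities (assumed values).
weights = {"immersion": 0.4, "interaction": 0.3, "cost": 0.2, "availability": 0.1}

devices = {
    "low-immersive (desktop)": {"immersion": 0.2, "interaction": 0.4,
                                "cost": 0.9, "availability": 0.9},
    "full-immersive (HMD)":    {"immersion": 0.9, "interaction": 0.8,
                                "cost": 0.4, "availability": 0.6},
}

# Rank devices by utility, best fit first.
ranked = sorted(devices, key=lambda d: utility(devices[d], weights), reverse=True)
print(ranked[0])
```

With these assumed weights the fully immersive headset wins; shifting weight toward cost and availability would favor the desktop setup, which is the trade-off the framework is meant to make explicit.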

    Merging the Real and the Virtual: An Exploration of Interaction Methods to Blend Realities

    We investigate, build, and design interaction methods to merge the real with the virtual. An initial investigation looks at spatial augmented reality (SAR) and its effects on pointing with a real mobile phone. A study reveals a set of trade-offs between the raycast, viewport, and direct pointing techniques. To further investigate the manipulation of virtual content within a SAR environment, we design an interaction technique that utilizes the distance at which a user holds a mobile phone away from their body. Our technique enables pushing virtual content from a mobile phone to an external SAR environment, interacting with that content, rotating, scaling, and translating it, and pulling the content back into the mobile phone, all in a way that ensures seamless transitions between the real environment of the mobile phone and the virtual SAR environment. To investigate the issues that occur when the physical environment is hidden by a fully immersive virtual reality (VR) HMD, we design and investigate a system that merges a real-time 3D reconstruction of the real world with a virtual environment. This allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical reality without losing their sense of immersion or presence inside the virtual world. A study with VR users demonstrates the affordances provided by the system and how it can be used to enhance current VR experiences. We then move to AR, to investigate the limitations of optical see-through HMDs and the problem of communicating the internal state of the virtual world to unaugmented users. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we propose a system that combines an optical see-through HMD with a wearable SAR projector. Demonstrations showcase ways to utilize the projected and head-mounted displays together, such as expanding the field of view, distributing content across depth surfaces, and enabling bystander collaboration.
We then turn to videogames to investigate how spectatorship of these virtual environments can be enhanced through expanded video rendering techniques. We extract and combine additional data to form a cumulative 3D representation of the live game environment for spectators, which enables each spectator to individually control a personal view into the stream while in VR. A study shows that users prefer spectating in VR over a comparable desktop rendering.
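The raycast pointing technique compared above reduces to intersecting a ray cast from the phone's pose with the projection surface. A minimal geometric sketch under that reading, not the thesis implementation (the poses and plane below are made up):

```python
import numpy as np

def raycast_to_plane(origin, direction, plane_point, plane_normal):
    """Intersect a pointing ray with a planar SAR surface.

    Returns the hit point, or None if the ray is parallel to the
    surface or the surface lies behind the device.
    """
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the surface
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # surface is behind the device
    return origin + t * direction

# Phone held at 1.5 m height, pointing straight at a wall 2 m away
# (the wall is the plane z = 2, facing the user). Illustrative values.
hit = raycast_to_plane(np.array([0.0, 1.5, 0.0]),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 2.0]),
                       np.array([0.0, 0.0, -1.0]))
```

The trade-off the study reports follows from this geometry: small angular jitter at the phone is amplified by the distance `t` to the surface, which is what distinguishes raycast from viewport and direct pointing.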

    The development of a hybrid virtual reality/video view-morphing display system for teleoperation and teleconferencing

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2000. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 84-89). The goal of this study is to extend the desktop panoramic static-image viewer concept (e.g., Apple QuickTime VR; IPIX) to support immersive real-time viewing, so that an observer wearing a head-mounted display can make free head movements while viewing dynamic scenes rendered in real-time stereo using video data obtained from a set of fixed cameras. Computational experiments by Seitz and others have demonstrated the feasibility of morphing image pairs to render stereo scenes from novel, virtual viewpoints. The user can interact both with morphed real-world video images and with supplementary artificial virtual objects ("Augmented Reality"). The inherent congruence of the real and artificial coordinate frames of this system reduces registration errors commonly found in Augmented Reality applications. In addition, the user's eyepoint is computed locally, so any scene lag resulting from head movement will be less than that of alternative technologies using remotely controlled ground cameras. For space applications, this can significantly reduce the apparent lag due to satellite communication delay. This hybrid VR/view-morphing display ("Virtual Video") has many important NASA applications, including remote teleoperation, crew onboard training, private family and medical teleconferencing, and telemedicine. The technical objective of this study was to develop a proof-of-concept system, on a 3D-graphics PC workstation, of one of the component technologies of Virtual Video: Immersive Omnidirectional Video.
The management goal was to identify a system process for planning, managing, and tracking the integration, test, and validation of this phased, 3-year, multi-university research and development program. By William E. Hutchison. S.M.

    Eye-Tracking in Interactive Virtual Environments: Implementation and Evaluation

    Not all eye-tracking methodology and data processing are equal. While the use of eye-tracking is intricate because of its grounding in visual physiology, traditional 2D eye-tracking methods are supported by software, tools, and reference studies. This is far less true for eye-tracking methods applied in virtual reality (imaginary 3D environments). Previous research has regarded the domain of eye-tracking in 3D virtual reality as an untamed realm with unaddressed issues. The present paper explores these issues, discusses possible solutions at a theoretical level, and offers example implementations. The paper also proposes a workflow and software architecture that encompass an entire experimental scenario, including virtual-scene preparation and operationalization of visual stimuli, experimental data collection and considerations for ambiguous visual stimuli, post-hoc data correction, data aggregation, and visualization. The paper is accompanied by examples of eye-tracking data collection and evaluation based on ongoing research into indoor evacuation behavior.
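One common way to operationalize visual stimuli in 3D eye-tracking is ray-based hit testing of the gaze direction against object bounds. A minimal sketch under that assumption; the object names and bounding spheres are illustrative, not from the paper:

```python
import numpy as np

def gazed_object(gaze_origin, gaze_dir, objects):
    """Return the name of the nearest object whose bounding sphere
    the gaze ray hits, or None if the ray misses everything.

    objects: mapping name -> (center, radius).
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_t = None, float("inf")
    for name, (center, radius) in objects.items():
        oc = center - gaze_origin
        t = np.dot(oc, gaze_dir)            # closest approach along the ray
        if t < 0:
            continue                        # object is behind the viewer
        d2 = np.dot(oc, oc) - t * t         # squared ray-to-center distance
        if d2 <= radius * radius and t < best_t:
            best, best_t = name, t
    return best

# Illustrative evacuation scene: viewer at eye height looking down a corridor.
objects = {"door": (np.array([0.0, 1.0, 5.0]), 0.5),
           "sign": (np.array([2.0, 2.0, 5.0]), 0.3)}
print(gazed_object(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]), objects))
```

Per-frame hits from a function like this can then feed the aggregation and visualization stages of the proposed workflow, e.g. as dwell times per object.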

    Creating architecture for a digital information system leveraging virtual environments

    Abstract. The topic of the thesis was the creation of a proof-of-concept digital information system that utilizes virtual environments. The focus was finding a working design that can then be expanded upon. The research was conducted using design science research, by creating the information system as the artifact. The research was conducted for Nokia Networks in Oulu, Finland, referred to in this document as "the target organization". An information system is a collection of distributed computing components that come together to create value for an organization. Information system architecture is generally derived from enterprise architecture and consists of data, technical, and application architectures. Data architecture outlines the data that the system uses and the policies related to its usage, manipulation, and storage. Technical architecture relates to various technological areas, such as networking and protocols, as well as any environmental factors. The application architecture consists of deconstructing the applications that are used in the operations of the information system. Virtual reality is an experience where the concepts of presence, autonomy, and interaction come together to create an immersive alternative to a regular display-based computer environment. The most typical form of virtual reality consists of a head-mounted device, controllers, and movement-tracking base stations. The user's head and body movements can be tracked, which changes their position in the virtual environment. The proof-of-concept information system architecture used a multi-server solution, where one central physical server hosted multiple virtual servers. The system consisted of a website, which was the knowledge center and where a client software could be downloaded. The client software was the authorization portal, which determined the virtual environments that were available to the user.
The virtual reality application included functionalities that enable co-operative, virtualized use of various Nokia products in immersive environments. The system was tested in working situations, such as during exhibitions with customers. The proof-of-concept system fulfilled many of the functional requirements set for it, allowing for co-operation in virtual reality. Additionally, a rudimentary model for access control was available in the designed system. The shortcomings of the system were related to areas such as security and scaling, which can be further developed by introducing a cloud-hosted environment to the architecture.

    Serious game augmented reality 3D for physical rehabilitation

    This research consists of the development of the PhysioAR framework (Augmented Reality Physiotherapy), which combines a set of two wearable sensors (left and right controllers) and a Meta/Oculus Quest headset for natural interaction with a set of AR therapeutic serious games developed in Unity 3D. The system allows training sessions for hand and finger, knee, and leg motor rehabilitation, bearing in mind that the games are intended for people who have suffered a stroke: as part of the special care this condition requires, the games are properly adapted to be a source of motivation and easy to play. The FisioAR project includes two different mobile apps, built on the OutSystems platform: one, intended for physiotherapists, provides a calendar and background data with all the needed information; the other lets stroke patients log in to the main app and interact with three types of serious games specifically designed, developed, and implemented for the Oculus Quest. The three serious games were developed on the Unity engine, and each has specific content to be played according to motor and cognitive rehabilitation objectives. The first game, the Boxes Game, displays six cubes of different colors and six spheres, also in six different colors. The main goal of this game is to put the maximum number of spheres into the box of the same color. This game involves the use of legs, knees, and arms, and can easily be adapted to each patient's condition, making it more or less demanding. The second game, the Garden Care Game, has a scenario built from prefabs (assets) and materials from the Unity Asset Store to simulate a realistic garden, with a watering can, fences, and a set of flowers. The main goal of this game is to water the flowers.
This simple goal is tied to the measurement, through the wearable sensors, of the wrist rotation the patient performs while watering each flower, and the game keeps a score for each flower watered. In the third game, the Puzzle Game, there is a white screen with the same number of divisions as the image blocks in the project.
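The Boxes Game scoring rule described above (a point for each sphere placed in the box of its own color) can be sketched as follows; the data layout is an assumption for illustration, not taken from the FisioAR code:

```python
# Illustrative scoring for the Boxes Game: one point per sphere that
# ends up in the box matching its color. The session data is made up.

def score(placements):
    """placements: mapping sphere_color -> box_color the sphere was dropped into."""
    return sum(1 for sphere_color, box_color in placements.items()
               if sphere_color == box_color)

# A hypothetical session: two of three spheres placed correctly.
session = {"red": "red", "blue": "green", "yellow": "yellow"}
print(score(session))
```

Adapting the game's demands to a patient, as the abstract describes, would then amount to varying the number of spheres and the distances involved while keeping this rule fixed.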

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to let the embedded system intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of the end-effector position and orientation, with the corresponding maneuvering velocity readily adjustable. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance to effectively align the welding torch and constrain the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
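A velocity-based imitative motion mapping of the kind described can be sketched as scaling the operator's per-frame hand displacement into a clamped TCP velocity command. The gain and speed limit below are invented for illustration and are not the thesis's values:

```python
import numpy as np

# Minimal sketch of velocity-centric motion mapping: the operator's hand
# velocity, estimated from successive tracked positions, is scaled into a
# commanded TCP velocity and clamped for safety. Gain/limit are assumptions.

def tcp_velocity(hand_prev, hand_curr, dt, gain=1.5, v_max=0.25):
    """Map hand motion over one frame to a TCP velocity command (m/s)."""
    v = gain * (hand_curr - hand_prev) / dt   # scaled hand velocity
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)               # clamp commanded speed
    return v

# Hand moved 1 cm along x during a 100 ms frame (illustrative values).
v = tcp_velocity(np.array([0.0, 0.0, 0.0]),
                 np.array([0.01, 0.0, 0.0]), dt=0.1)
```

Commanding velocity rather than absolute pose is what lets a seated operator cover a large robot workspace with small, repeated hand motions, which matches the "velocity of maneuvering" adjustment the abstract mentions.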

    Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better-informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life in many images, including the places one goes, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

    Augmented reality system with application in physical rehabilitation

    The aging phenomenon increases the demand for physiotherapy services, with increased costs associated with long rehabilitation periods. Traditional rehabilitation methods rely on the subjective assessment of physiotherapists, without supporting training data. To overcome the shortcomings of traditional rehabilitation methods and improve the efficiency of rehabilitation, AR (Augmented Reality), a promising technology that provides immersive interaction with real and virtual objects, is used. AR devices can capture body posture and scan the real environment, which has led to a growing number of AR applications focused on physical rehabilitation. In this MSc thesis, an AR platform used to materialize a physical rehabilitation plan for stroke patients is presented. Gait training is a significant part of physical rehabilitation for stroke patients, and AR represents a promising solution for training assessment, providing information to patients and physiotherapists about the exercises to be done and the results reached. As part of the MSc work, an iOS application was developed on the Unity 3D platform. This application immerses patients in a mixed environment that combines real-world and virtual objects. The human-computer interface is materialized by an iPhone used as a head-mounted 3D display, together with a set of wireless sensors for measuring physiological and motion parameters. The position and velocity of the patient are recorded by a smart carpet that includes capacitive sensors connected to a computation unit with Wi-Fi communication capabilities. The AR training scenario and the corresponding experimental results are part of the thesis.
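Estimating the patient's walking velocity from the smart carpet's timestamped position samples can be done with finite differences. A minimal sketch; the sample values are illustrative, not measured data:

```python
# Central-difference velocity estimation from timestamped 2D position
# samples, as a carpet of capacitive sensors might produce. Data is made up.

def velocities(samples):
    """samples: list of (t, x, y) tuples, time-ordered.

    Returns a central-difference (vx, vy) estimate for each interior sample.
    """
    out = []
    for i in range(1, len(samples) - 1):
        t0, x0, y0 = samples[i - 1]
        t1, x1, y1 = samples[i + 1]
        dt = t1 - t0
        out.append(((x1 - x0) / dt, (y1 - y0) / dt))
    return out

# Hypothetical gait samples: steady 0.8 m/s walk along x, sampled at 2 Hz.
gait = [(0.0, 0.0, 0.0), (0.5, 0.4, 0.0), (1.0, 0.8, 0.0)]
print(velocities(gait))
```

Central differences are used here because they halve the noise sensitivity of one-sided differences; real capacitive-carpet data would typically also need smoothing before differentiation.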