
    LoCoMoTe – a framework for classification of natural locomotion in VR by task, technique and modality

    Virtual reality (VR) research has provided overviews of locomotion techniques: how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, in an ever-growing research field it can be difficult to identify, select and compare relevant research without a pre-existing framework. This work therefore aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to the methodology descriptions of 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge and to aid in identifying research gaps and discussion points.

    Locomotion in virtual reality in full space environments

    Virtual Reality is a technology that allows the user to explore and interact with a virtual environment in real time as if they were present there. It is used in various fields such as entertainment, education, and medicine owing to its immersiveness and ability to represent reality. Still, problems such as virtual simulation sickness and a lack of realism make this technology less appealing. Locomotion in virtual environments is one of the main factors responsible for an immersive and enjoyable virtual reality experience. Several locomotion methods have been proposed; however, they have flaws that end up negatively influencing the experience. This study compares natural locomotion in complete spaces with joystick locomotion and with natural locomotion in impossible spaces through three tests, in order to identify the best locomotion method in terms of immersion, realism, usability, spatial knowledge acquisition, and level of virtual simulation sickness. The results show that natural locomotion is the method that most positively influences the experience when compared to the other locomotion methods.

    Future developments in brain-machine interface research

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interfaces that enact upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.

    Natural locomotion based on a reduced set of inertial sensors: decoupling body and head directions indoors

    Inertial sensors offer the potential for integration into wireless virtual reality systems that allow users to walk freely through virtual environments. However, owing to drift errors, inertial sensors cannot accurately estimate head and body orientations in the long run, and when walking indoors this error cannot be corrected by magnetometers, due to the magnetic field distortion created by ferromagnetic materials present in buildings. This paper proposes a technique, called EHBD (Equalization of Head and Body Directions), to address this problem using two head- and shoulder-located magnetometers. Because of their proximity, their distortions are assumed to be similar, and the magnetometer measurements are used to detect when the user is looking straight ahead. The system then corrects the discrepancies between the estimated directions of the head and the shoulder, which are provided by gyroscopes and are consequently affected by drift errors. An experiment is conducted to evaluate the performance of this technique in two tasks (navigation, and navigation plus exploration) and using two locomotion techniques: (1) gaze-directed mode (GD), in which the walking direction is forced to be the same as the head direction, and (2) decoupled direction mode (DD), in which the walking direction can differ from the viewing direction. The results show that both locomotion modes match the target path similarly during the navigation task, while DD's path matches the target path more closely than GD's in the navigation-plus-exploration task. These results validate the EHBD technique, especially when walking and viewing directions are allowed to differ, as expected. While the proposed method does not reach the accuracy of optical tracking (the ideal case), it is an acceptable and satisfactory solution for users and is much more compact, portable and economical.
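    The equalization idea described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the yaw-angle representation, the straight-ahead threshold, and the symmetric split of the head/body discrepancy are all assumptions made for the sketch.

    ```python
    import math

    def angle_diff(a, b):
        """Smallest signed difference between two angles, in radians."""
        return math.atan2(math.sin(a - b), math.cos(a - b))

    class EHBDCorrector:
        """Sketch of Equalization of Head and Body Directions (EHBD).

        Two magnetometers (head and shoulder) see a similar local magnetic
        distortion; when their readings agree, the user is assumed to be
        looking straight ahead, and the drift-affected gyroscope estimates
        of head and body yaw are equalized. Threshold value is hypothetical.
        """

        def __init__(self, straight_ahead_threshold=math.radians(5)):
            self.threshold = straight_ahead_threshold
            self.head_offset = 0.0  # accumulated correction for head gyro yaw
            self.body_offset = 0.0  # accumulated correction for body gyro yaw

        def update(self, head_mag_yaw, shoulder_mag_yaw,
                   head_gyro_yaw, body_gyro_yaw):
            # Magnetometer agreement -> user looks straight ahead.
            if abs(angle_diff(head_mag_yaw, shoulder_mag_yaw)) < self.threshold:
                # Split the gyro discrepancy evenly between head and body.
                d = angle_diff(head_gyro_yaw + self.head_offset,
                               body_gyro_yaw + self.body_offset)
                self.head_offset -= d / 2
                self.body_offset += d / 2
            return (head_gyro_yaw + self.head_offset,
                    body_gyro_yaw + self.body_offset)
    ```

    In this sketch, drift correction only ever happens at straight-ahead moments; between those moments the (possibly drifting) gyro estimates pass through with the last-known offsets applied.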

    Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems

    As robotic systems move out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted by human beings, within context, style of movement, and form factor, as an animate element of their environment. The interpretation by this human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article provides approaches employed by one research lab, specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community-building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited.
    Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)" http://www.mdpi.com/journal/arts/special_issues/Machine_Artis

    Around-Body Interaction: Leveraging Limb Movements for Interacting in a Digitally Augmented Physical World

    Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction with information in a digitally augmented physical world. For interacting with such devices, three main types of input, besides not very intuitive finger gestures, have emerged so far: 1) touch input on the frame of the device, 2) touch input on accessories (controllers), and 3) voice input. While these techniques have both advantages and disadvantages depending on the current situation of the user, they largely ignore the skills and dexterity that we show when interacting with the real world: throughout our lives, we have trained extensively to use our limbs to interact with and manipulate the physical world around us. This thesis explores how the skills and dexterity of our upper and lower limbs, acquired and trained in interacting with the real world, can be transferred to the interaction with HMDs. Thus, this thesis develops the vision of around-body interaction, in which we use the space around our body, defined by the reach of our limbs, for fast, accurate, and enjoyable interaction with such devices. This work contributes four interaction techniques, two for the upper limbs and two for the lower limbs: the first contribution shows how the proximity between our head and hand can be used to interact with HMDs. The second contribution extends the interaction with the upper limbs to multiple users and illustrates how the registration of augmented information in the real world can support cooperative use cases. The third contribution shifts the focus to the lower limbs and discusses how foot taps can be leveraged as an input modality for HMDs. The fourth contribution presents how lateral shifts of the walking path can be exploited for mobile and hands-free interaction with HMDs while walking.
    Comment: thesis

    NaviFields: relevance fields for adaptive VR navigation

    Virtual Reality allows users to explore virtual environments naturally, by moving their head and body. However, the size of the environments they can explore is limited by real-world constraints, such as the tracking technology or the physical space available. Existing techniques that remove these limitations often break the metaphor of natural navigation in VR (e.g., steering techniques), involve control commands (e.g., teleporting) or hinder precise navigation (e.g., scaling users' displacements). This paper proposes NaviFields, which quantify the requirements for precise navigation at each point of the environment, allowing natural navigation within relevant areas while scaling users' displacements when travelling across non-relevant spaces. This expands the size of the navigable space and retains the natural navigation metaphor, while still allowing areas with precise control of the virtual head. We present a formal description of our NaviFields technique, which we compared against two alternative solutions (i.e., homogeneous scaling and natural navigation). Our results demonstrate the ability to cover larger spaces, introduce minimal disruption when travelling across bigger distances, and significantly improve precise control of the viewpoint inside relevant areas.
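    The core idea of scaling displacement by a relevance field can be sketched as follows. This is an illustrative reading of the abstract, not the paper's formal definition: the Gaussian relevance bumps, the hypothetical `hotspots` list, and the `max_gain` amplification cap are all assumptions made for the sketch.

    ```python
    import math

    def relevance(pos, hotspots, sigma=1.0):
        """Hypothetical relevance field: Gaussian bumps around points of
        interest, 1.0 at a hotspot (precise 1:1 navigation) and falling
        towards 0.0 in non-relevant space."""
        return max((math.exp(-((pos[0] - hx) ** 2 + (pos[1] - hy) ** 2)
                             / (2 * sigma ** 2))
                    for hx, hy in hotspots), default=0.0)

    def virtual_step(real_pos, real_delta, hotspots, max_gain=8.0):
        """Map one real displacement to a virtual one: unscaled inside
        relevant areas, amplified up to max_gain far from any hotspot
        (the gain formula is an assumption for illustration)."""
        r = relevance(real_pos, hotspots)
        gain = 1.0 + (max_gain - 1.0) * (1.0 - r)
        return (real_delta[0] * gain, real_delta[1] * gain)
    ```

    With this kind of mapping, a user standing at a hotspot moves 1:1 and can control the viewpoint precisely, while the same physical step taken in empty space covers several times the distance, expanding the navigable area without a teleport command.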

    Walking with virtual humans: understanding human response to virtual humanoids' appearance and behaviour while navigating in immersive VR

    In this thesis, we present a set of studies whose results have allowed us to analyse how to improve the realism, navigation, and behaviour of avatars in an immersive virtual reality environment. In our simulations, participants must perform a series of tasks while we analyse perceptual and behavioural data. The results of the studies have allowed us to deduce which improvements need to be incorporated into the original simulations in order to enhance the perception of realism, the navigation technique, the rendering of the avatars, their behaviour, or their animations. The most reliable technique for simulating avatars' behaviour in a virtual reality environment should be based on the study of how humans behave within the environment. For this purpose, it is necessary to build virtual environments where participants can navigate safely and comfortably with a proper metaphor and, if the environment is populated with avatars, to simulate their behaviour accurately. Together, these aspects make participants behave in a way that is closer to how they would behave in the real world. Moreover, the integration of these concepts could provide an ideal platform for developing different types of applications, with or without collaborative virtual reality, such as emergency simulations, teaching, architecture, or design. In the first contribution of this thesis, we carried out an experiment to study human decision making during an evacuation. We were interested in evaluating to what extent the behaviour of a virtual crowd can affect individuals' decisions. From the second contribution, in which we studied the perception of realism with bots and humans performing either just locomotion or varied animations, we conclude that combining human-like avatars with animation variety can increase the overall realism of a crowd simulation, its trajectories, and its animations.
    The preliminary study presented in the third contribution of this thesis showed that realistic rendering of the environment and the avatars does not appear to increase participants' perception of realism, which is consistent with previously presented work. The preliminary results of our walk-in-place contribution showed a seamless and natural transition between walk-in-place and normal walking. Our system provided a velocity-mapping function that closely resembles natural walking, and we observed through a pilot study that it successfully reduces motion sickness and enhances immersion. Finally, the results of the contribution on locomotion in collaborative virtual reality showed that animation synchronism and the footstep sound of the avatars representing the participants do not seem to have a strong impact in terms of presence and feeling of avatar control. However, in our experiment, incorporating natural animations and footstep sound resulted in smaller clearance values in VR than previous work in the literature. The main objective of this thesis was to improve different factors of virtual reality experiences so that participants feel more comfortable in the virtual environment. These factors include the behaviour and appearance of the virtual avatars and the navigation through the simulated space. By increasing the realism of the avatars and facilitating navigation, high presence scores are achieved during the simulations. This provides an ideal framework for developing collaborative virtual reality applications or emergency simulations that require participants to feel as if they were in real life.