336 research outputs found

    Haptic ankle platform for interactive walking in virtual reality.

    This paper presents an impedance-type ankle haptic interface for providing users with an immersive navigation experience in virtual reality (VR). The ankle platform, actuated by an electric motor under feedback control, enables the use of foot-tapping gestures to create a walking experience similar to a real one and to haptically render different types of walking terrain. Experimental studies demonstrated that the interface can easily be used to generate virtual walking and that it is capable of rendering terrains such as hard and soft surfaces, as well as multi-layer complex dynamic terrains. The designed system is a seated VR locomotion interface, allowing its user to maintain a stable seated posture and comfortably navigate a virtual scene.

    I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR

    This paper explores the use of VR Head Mounted Displays (HMDs) in-car and in-motion for the first time. Immersive HMDs are becoming everyday consumer items and, as they offer new possibilities for entertainment and productivity, people will want to use them during travel in, for example, autonomous cars. However, their use is confounded by motion sickness, caused in part by the restricted visual perception of motion conflicting with physically perceived vehicle motion (accelerations/rotations detected by the vestibular system). Whilst VR HMDs restrict visual perception of motion, they could also render it virtually, potentially alleviating sensory conflict. To study this problem, we conducted the first on-road and in-motion study to systematically investigate the effects of various visual presentations of the real-world motion of a car on the sickness and immersion of VR HMD wearing passengers. We established new baselines for VR in-car motion sickness, and found that there is no single best presentation with respect to balancing sickness and immersion. Instead, user preferences suggest different solutions are required for differently susceptible users to provide usable VR in-car. This work provides formative insights for VR designers and an entry point for further research into enabling the use of VR HMDs, and the rich experiences they offer, when travelling.

    Ankle-Actuated Human-Machine Interface for Walking in Virtual Reality

    This thesis presents the design, implementation, and experimental study of an impedance-type ankle haptic interface for providing users with an immersive navigation experience in virtual reality (VR). The ankle platform enables the use of foot-tapping gestures to reproduce a realistic walking experience in VR and to haptically render different types of walking terrain. The system is designed to be used by seated users, allowing more comfort and causing less fatigue and motion sickness. The custom-designed ankle interface is composed of a single actuator and sensor system, making it a cost-efficient solution for VR applications. The interface consists of a single-degree-of-freedom actuated platform which can rotate around the ankle joint of the user. The platform is impedance controlled around the horizontal position by an electric motor and a capstan transmission system. To walk in a virtual scene, a seated user performs walking gestures in the form of ankle plantar-flexion and dorsiflexion movements, causing the platform to tilt forward and backward. We present three algorithms for simulating the immersive locomotion of a VR avatar using the platform movement information. We also designed multiple impedance controllers to render haptic feedback for different virtual terrains during walking. We carried out experiments to understand how quickly users adapt to the interface, how well they can control their locomotion speed in VR, and how well they can distinguish different types of terrain presented through haptic feedback. We administered qualitative questionnaires on the usability of the device and the task load of the experimental procedures. The experimental studies demonstrated that the interface can be easily used to navigate in VR and that it is capable of rendering dynamic multi-layer complex terrains containing structures with different stiffness and brittleness properties.
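The impedance controller described in this abstract can be illustrated with a minimal sketch. The control law, gains, and function names below are assumptions for illustration only, not taken from the thesis; they show the generic idea of rendering terrain stiffness and damping around a neutral platform angle.

```python
# Minimal sketch of a 1-DOF impedance controller for an ankle platform.
# Gains and names are illustrative assumptions, not values from the thesis.

def impedance_torque(theta, theta_dot, theta_ref=0.0,
                     stiffness=40.0, damping=2.0):
    """Motor torque (N*m) pulling the platform toward theta_ref (rad).

    Rendering a stiffer virtual terrain means raising `stiffness`;
    a soft or compliant surface uses low stiffness and more damping.
    """
    return stiffness * (theta_ref - theta) - damping * theta_dot

# Tilting the platform 0.1 rad forward at rest yields a restoring torque:
tau = impedance_torque(0.1, 0.0)  # negative, i.e. pushing back toward neutral
```

Different virtual terrains would then correspond to different (stiffness, damping) pairs selected per footstep.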

    Improving spatial orientation in virtual reality with leaning-based interfaces

    Advancements in technology have made Virtual Reality (VR) increasingly portable, affordable, and accessible to a broad audience. However, large-scale VR locomotion still faces major challenges in the form of spatial disorientation and motion sickness. While spatial updating is automatic and even obligatory in real-world walking, using VR controllers to travel can cause disorientation. This dissertation presents two experiments that explore ways of improving spatial updating and spatial orientation in VR locomotion while minimizing cybersickness. In the first study, we compared a hand-held controller with HeadJoystick, a leaning-based interface, in a 3D navigational search task. The results showed that the leaning-based interface helped participants spatially update more effectively than the controller. In the second study, we designed a "HyperJump" locomotion paradigm which allows users to travel faster while limiting optical flow. Having no optical flow at all (as in traditional teleport paradigms) has been shown to reduce cybersickness, but can also cause disorientation. By interlacing continuous locomotion with teleportation, we showed that users can travel faster without compromising spatial updating.
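The interlacing idea behind "HyperJump" can be sketched in a few lines. The speeds, step sizes, and function names below are illustrative assumptions, not the dissertation's actual implementation: continuous motion is capped to limit optical flow, and periodic teleport jumps make up the remaining distance so the average travel speed is preserved.

```python
# Illustrative sketch of interlacing continuous locomotion with teleportation.
# All parameters are assumptions chosen for clarity, not from the dissertation.

def simulate_hyperjump(target_speed, flow_cap=3.0, dt=0.1,
                       steps_per_jump=5, n_steps=50):
    """Return the position reached after n_steps of interlaced locomotion.

    Continuous motion is capped at flow_cap (m/s) to limit optical flow;
    every steps_per_jump frames a teleport covers the remaining distance.
    """
    pos = 0.0
    for step in range(1, n_steps + 1):
        pos += min(target_speed, flow_cap) * dt        # flow-capped motion
        if step % steps_per_jump == 0:                 # periodic teleport
            pos += max(0.0, target_speed - flow_cap) * dt * steps_per_jump
    return pos
```

Over 5 simulated seconds, a 10 m/s target speed is reached on average even though rendered continuous motion never exceeds 3 m/s.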

    Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis

    In the past five years, the advent of virtual reality devices has significantly influenced research in the field of immersion in a virtual world. In addition to the visual input, motion cues play a vital role in the sense of presence and the factor of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator. SP7 is a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must be able to produce accurate motion data that match the visual and audio signals. In this research, two different system workflows have been developed: the first for creating custom visual, audio, and motion cues, and the second for extracting the required motion data from an existing game or simulation. Motion data from the motion generation system are unbounded, while the motion simulator's movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues can be achieved through a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm has been developed using models of the semi-circular canals and otoliths. A procedural implementation of the motion cueing algorithm is described in this thesis. We integrated all components to turn this robotic mechanism into a VR motion simulator. In general, the performance of the motion simulator is measured by the quality of the motion perceived on the platform by the user. As a result, a novel methodology for the systematic subjective evaluation of the SP7 with a pool of juries was developed to check the quality of motion perception. Based on the results of the evaluation, key issues related to the current configuration of the SP7 were identified.
    Minor issues were rectified on the fly, so they are not extensively reported in this thesis. Two major issues were addressed extensively: the parameter tuning of the motion cueing algorithm and the motion compensation of the visual signal in virtual reality devices. The first issue was resolved by developing a tuning strategy with an abstraction-layer concept derived from the outcome of a novel technique for the objective assessment of the motion cueing algorithm. The origin of the second problem was found to be a calibration problem of the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position-tracking performance of the Vive lighthouse tracking system against an industrial serial robot as a ground-truth system. With the resolution of the identified issues, a general-purpose virtual reality motion simulator has been developed that is capable of creating custom visual, audio, and motion cues and of executing motion planning for a robotic manipulator under human motion perception constraints.
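The core of a classical motion cueing algorithm of the kind this abstract describes can be sketched as a frequency split of the vehicle's acceleration: the sustained (low-frequency) component is rendered by tilting the platform so that gravity supplies the specific force ("tilt coordination"), while the transient (high-frequency) remainder is sent to the translational actuators and washed out. The filter constants, clamp limits, and names below are illustrative assumptions, not the SP7's actual parameters.

```python
# Sketch of the classical washout / tilt-coordination split used in
# classical motion cueing. Constants are illustrative assumptions.
import math

G = 9.81  # gravitational acceleration, m/s^2

def classical_cueing_step(a_x, state, dt=0.01, tau=1.0, max_tilt=0.3):
    """One step of a first-order split of longitudinal acceleration a_x.

    `state["low"]` holds the low-pass (sustained) component between calls.
    Returns (platform_accel, tilt_angle_rad).
    """
    # Low-pass: sustained acceleration, rendered as a pitch tilt so that
    # gravity provides the specific force (theta = asin(a_low / g)).
    alpha = dt / (tau + dt)
    state["low"] += alpha * (a_x - state["low"])
    ratio = max(-1.0, min(1.0, state["low"] / G))
    tilt = max(-max_tilt, min(max_tilt, math.asin(ratio)))
    # High-pass (washout): the transient remainder drives platform
    # translation and decays back toward the neutral position.
    platform_accel = a_x - state["low"]
    return platform_accel, tilt
```

Under a sustained 2 m/s² input the translational command washes out to zero while the tilt settles at roughly asin(2/9.81) ≈ 0.205 rad, within the assumed tilt limit.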

    Rehabilitation Engineering

    Population ageing has major consequences and implications in all areas of our daily life, as well as for other important aspects such as economic growth, savings, investment and consumption, labour markets, pensions, property, and care from one generation to another. Additionally, health and related care, family composition and lifestyle, housing, and migration are also affected. Given the rapid increase in the ageing of the population and the further increase expected in the coming years, an important problem that has to be faced is the corresponding increase in chronic illness, disabilities, and loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs, and the visually impaired. Beyond the upper limbs, lower-limb research works are also discussed, such as a motorized footrest for electric powered wheelchairs and a standing assistance device.

    Natural locomotion based on a reduced set of inertial sensors: decoupling body and head directions indoors

    Inertial sensors offer the potential for integration into wireless virtual reality systems that allow users to walk freely through virtual environments. However, owing to drift errors, inertial sensors cannot accurately estimate head and body orientations in the long run, and when walking indoors this error cannot be corrected by magnetometers, due to the magnetic field distortion created by ferromagnetic materials present in buildings. This paper proposes a technique, called EHBD (Equalization of Head and Body Directions), to address this problem using two head- and shoulder-located magnetometers. Due to their proximity, their distortions are assumed to be similar, and the magnetometer measurements are used to detect when the user is looking straight ahead. The system then corrects the discrepancies between the estimated directions of the head and the shoulder, which are provided by gyroscopes and are consequently affected by drift errors. An experiment was conducted to evaluate the performance of this technique in two tasks (navigation, and navigation plus exploration) using two different locomotion techniques: (1) gaze-directed mode (GD), in which the walking direction is forced to be the same as the head direction, and (2) decoupled direction mode (DD), in which the walking direction can differ from the viewing direction. The results show that both locomotion modes match the target path similarly in the navigation task, while DD's path matches the target path more closely than GD's in the navigation plus exploration task. These results validate the EHBD technique, especially when allowing different walking and viewing directions in navigation plus exploration tasks, as expected. While the proposed method does not reach the accuracy of optical tracking (the ideal case), it is an acceptable and satisfactory solution for users and is much more compact, portable, and economical.
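The correction step described in this abstract can be sketched compactly. The threshold, yaw representation, and function names below are illustrative assumptions, not the paper's implementation: because the two magnetometers are close together, their (equally distorted) readings agree when the head is aligned with the torso, and at that moment the drifting gyro-based head yaw can be equalized with the body yaw.

```python
# Illustrative sketch of the EHBD (Equalization of Head and Body Directions)
# idea. Threshold and names are assumptions chosen for illustration.

def ehbd_correct(head_yaw_gyro, body_yaw_gyro,
                 head_mag_deg, shoulder_mag_deg,
                 agree_threshold_deg=3.0):
    """Return a corrected head-yaw estimate in degrees.

    When the head- and shoulder-mounted magnetometers report nearly the
    same field direction, the user is assumed to be looking straight
    ahead, so the gyro-integrated head yaw is snapped to the body yaw,
    cancelling the relative drift accumulated between the two gyros.
    """
    if abs(head_mag_deg - shoulder_mag_deg) < agree_threshold_deg:
        return body_yaw_gyro
    return head_yaw_gyro
```

When the magnetometers disagree (head turned away), the estimate is left untouched and drift is only cancelled at the next straight-ahead event.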

    Development and application of smart actuation methods for vehicle simulators

    Driving simulators are complex virtual reality systems that integrate visual displays, sound rendering systems, motion platforms, and human-machine-interface devices. They are used in different research areas, and consequently different studies are conducted using these systems under conditions that would not be safe to reproduce in the real world. However, driving simulators are very expensive research tools, and when building such a system a compromise usually has to be made. Although a driving simulator cannot reproduce real-life situations or sensations 1:1 because of its limitations, researchers still need such a device for training and research purposes, due to the realistic driving experience it offers its driver. This work focuses on developing a three-degrees-of-freedom Essential Function Driving Simulator that integrates cost and design constraints, the human perception of motion, real vehicle motion achieved through simulated vehicle models, and the classical motion cueing strategy. The goal is, on the one hand, to immerse the driver to a certain extent in the simulation environment by using this virtual reality device and, on the other hand, to investigate the degree of realism of such a solution. Different actuation solutions are modelled and discussed in this research with respect to the available workspace, singularity configurations, the system's behaviour, and the maximum forces needed within the overall cost constraints. A solution was chosen following kinematic and dynamic analyses as a trade-off among the above-mentioned constraints. The human body is in continuous movement and interaction with the environment. Motion is sensed by the human being through the vestibular system and the skin. The human motion perception mechanisms are mathematically modelled and studied in order to apply their characteristics to the three-degrees-of-freedom driving simulator.
Due to the limited workspace and degrees of freedom of the discussed simulator, the motion of the simulated vehicle cannot be identically reproduced by the motion system. Thus, special algorithms are designed to transform the motion of the vehicle model into achievable positions for the three actuators and, additionally, to render correct motion cues. The influence of three variable parameters on the overall subjective degree of realism is investigated using an optimisation method. The studied parameters are motion, optical flow, and haptic response, the latter introduced by using a lane departure warning assistance system. This research shows that the influence of motion cues on the subjective degree of realism rated by the drivers is 84%. Vibrations in the steering wheel improve the realism of the simulation and have a 15% impact. The participants in these experiments could easily adapt to the provided assistance system, and their immersion in the simulated environment was significantly influenced by the activation of the lane departure warning option. It has also been shown that drivers rated the motion and the accelerations felt in the simulator at 70.41% compared to the experience of driving a real vehicle. These results are interpreted by emphasizing that, irrespective of the DOF of the actuation mechanism, a motion driving simulator should provide correct motion cues. The development of the vehicle models and of the motion cueing algorithms should be approached so that the system provides motion as similar as possible to that of the real vehicle, as further discussed here.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet its upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. In amplifying the human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. 
Our prototype tracked the user’s hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality, and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user’s smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcased its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.