15 research outputs found

    Erg-O: Ergonomic Optimization of Immersive Virtual Environments

    Interaction in VR involves large body movements, easily inducing fatigue and discomfort. We propose Erg-O, a manipulation technique that leverages visual dominance to maintain the visual location of the elements in VR while making them accessible from more comfortable locations. Our solution works in an open-ended fashion (no prior knowledge of the object the user wants to touch), can be used with multiple objects, and still allows interaction with any other point within the user's reach. We use optimization approaches to compute the best physical location to interact with each visual element, and space partitioning techniques to distort the visual and physical spaces based on those mappings and allow multi-object retargeting. In this paper we describe the Erg-O technique, propose two retargeting strategies and report the results of a user study on 3D selection under different conditions, elaborating on their potential and application to specific usage scenarios.
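
    A minimal sketch of the retargeting idea, assuming a made-up quadratic comfort cost (the paper's actual optimization and space-partitioning mappings are more involved): each visual element is assigned a physical interaction point that trades off staying near its visual location against staying near a comfortable rest pose.

```python
import numpy as np

def retarget(visual_points, rest_pos, comfort_weight=0.6):
    """Toy ergonomic retargeting: blend each visual location toward a
    comfortable rest pose. The weight stands in for minimizing the
    quadratic cost w*|p - rest|^2 + (1-w)*|p - visual|^2, whose
    closed-form minimizer is exactly this convex combination."""
    visual_points = np.asarray(visual_points, dtype=float)
    return (1.0 - comfort_weight) * visual_points + comfort_weight * rest_pos

# Example: three targets at arm's length are pulled toward a rest pose
physical = retarget([[0.0, 1.6, 0.7], [0.4, 1.8, 0.6], [-0.3, 1.2, 0.8]],
                    rest_pos=np.array([0.0, 1.1, 0.35]))
print(physical)
```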

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still prevents it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by proving that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.
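
    Body Warping-based HR is commonly formulated as a hand offset that ramps up with reach progress; the sketch below illustrates that generic formulation only (not the thesis's perception-inspired variant), with the linear ramp and all names being assumptions.

```python
import numpy as np

def body_warp(real_hand, start, real_prop, virtual_target):
    """Minimal body-warping hand redirection sketch: the virtual hand is
    shifted toward the virtual target by a fraction of the prop-to-target
    offset, ramped by reach progress so the warp is zero at the start of
    the reach and full on contact with the prop."""
    real_hand, start = np.asarray(real_hand, float), np.asarray(start, float)
    real_prop = np.asarray(real_prop, float)
    virtual_target = np.asarray(virtual_target, float)
    total = np.linalg.norm(real_prop - start)        # full reach distance
    travelled = np.linalg.norm(real_hand - start)    # distance covered so far
    progress = np.clip(travelled / max(total, 1e-9), 0.0, 1.0)
    offset = virtual_target - real_prop              # prop-to-target discrepancy
    return real_hand + progress * offset             # rendered virtual hand

# Halfway through the reach, half of the offset is applied
print(body_warp([0.0, 0.0, 0.25], [0, 0, 0], [0, 0, 0.5], [0.1, 0.0, 0.5]))
```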

    Redirected Touching

    In immersive virtual environments, virtual objects cannot be touched. One solution is to use passive haptics: physical props to which virtual objects are registered. The result is compelling; when a user reaches out with a virtual hand to touch a virtual object, her real hand touches and feels a real object. However, for every virtual object to be touched, there must be an analogous physical prop. In the limit, an entire real-world infrastructure would need to be built and changed whenever a virtual scene is changed. Virtual objects and passive haptics have historically been mapped one-to-one. I demonstrate that the mapping need not be one-to-one. One can make a single passive real object provide useful haptic feedback for many virtual objects by exploiting human perception. I developed and investigated three categories of such techniques: (1) move the virtual world to align different virtual objects in turn with the same real object; (2) move a virtual object into alignment with a real object; (3) map real hand motion to different virtual hand motion, e.g., when the real hand traces a real object, the virtual hand traces a differently shaped virtual object. The first two techniques were investigated for feasibility, and the third was explored more deeply. The first technique (Redirected Passive Haptics) enables users to touch multiple instances of a virtual object, with haptic feedback provided by a single real object. The second technique (The Haptic Hand) attaches a larger-than-hand virtual user interface to the non-dominant hand, mapping the currently relevant part of the interface onto the palm. The third technique (Redirected Touching) warps virtual space to map many differently shaped virtual objects onto a single real object, introducing a discrepancy between real and virtual hand motions. Two studies investigated the technique's effect on task performance and its potential for use in aircraft cockpit procedures training. Users adapt rather quickly to real-virtual discrepancy, and after adaptation, they perform no worse with discrepant virtual objects than with one-to-one virtual objects. Redirected Touching shows promise for training and entertainment applications.
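
    A toy version of the space warp behind Redirected Touching, under the assumption of a flat real prop and an invented curved virtual surface (the dissertation derives the warp from the actual real and virtual geometry rather than a fixed function):

```python
import numpy as np

def warp_to_surface(real_hand, plane_z=0.0, amplitude=0.05, falloff=0.3):
    """While the real hand traces a flat prop at z = plane_z, the virtual
    hand traces a gently curved virtual surface. Shape and falloff are
    illustrative assumptions."""
    x, y, z = np.asarray(real_hand, dtype=float)
    # Height of the curved virtual surface above the flat prop at (x, y)
    bump = amplitude * np.cos(3.0 * x) * np.cos(3.0 * y)
    # Fade the warp out with distance from the prop so free-space motion
    # far from the surface stays one-to-one
    weight = max(0.0, 1.0 - abs(z - plane_z) / falloff)
    return np.array([x, y, z + weight * bump])

# On the prop (z = 0) the full bump is applied; 30 cm away, none is
print(warp_to_surface([0.2, 0.1, 0.0]))
print(warp_to_surface([0.2, 0.1, 0.3]))
```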

    HUMAN-ROBOT COLLABORATION IN ROBOTIC-ASSISTED SURGICAL TRAINING


    On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.

    Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that a human has ever invented. Camera tracking is the enabling technology for AR and has been well studied for the last few decades. Apart from the tracking problems, sensing and perception of the surrounding environment are also very important and challenging problems. Although there are existing hardware solutions, such as Microsoft Kinect and HoloLens, that can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to advance basic position-aware AR towards geometry-aware AR, with an outlook to context-aware AR. We initially propose to reconstruct the dense environmental surface using the sparse points from Simultaneous Localisation and Mapping (SLAM), but this approach is prone to fail in challenging Minimally Invasive Surgery (MIS) scenes, such as in the presence of deformation and surgical smoke. We subsequently adopt stereo vision with SLAM for more accurate and robust results. With the success of deep learning technology in recent years, we present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step further from purely geometry-aware AR towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
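
    For context, dense surface reconstruction from depth or stereo data is often built on truncated signed distance function (TSDF) fusion; the sketch below shows that standard building block (KinectFusion-style, with invented camera parameters), not the thesis's exact pipeline.

```python
import numpy as np

def tsdf_update(tsdf, weights, depth, fx, fy, cx, cy, voxel_size, trunc=0.03):
    """One TSDF fusion step: project every voxel into the current depth
    image and fold the observed signed distance into a weighted running
    average. Assumes the camera sits at the origin looking down +z."""
    nx, ny, nz = tsdf.shape
    for ix in range(nx):
        for iy in range(ny):
            for iz in range(1, nz):                    # skip z = 0 (camera plane)
                # Voxel centre in camera coordinates
                p = (np.array([ix, iy, iz]) - np.array([nx, ny, 0]) / 2) * voxel_size
                u = int(fx * p[0] / p[2] + cx)         # project into depth image
                v = int(fy * p[1] / p[2] + cy)
                if not (0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]):
                    continue
                d = depth[v, u]
                if d <= 0:
                    continue                           # no measurement on this ray
                sdf = d - p[2]                         # signed distance along ray
                if sdf < -trunc:
                    continue                           # voxel far behind surface
                s = min(1.0, sdf / trunc)              # truncate to [-1, 1]
                tsdf[ix, iy, iz] = (tsdf[ix, iy, iz] * weights[ix, iy, iz] + s) \
                                   / (weights[ix, iy, iz] + 1)
                weights[ix, iy, iz] += 1
    return tsdf, weights

grid = np.zeros((32, 32, 32)); w = np.zeros_like(grid)
depth_img = np.full((48, 64), 0.5)                     # flat wall 0.5 m away
tsdf_update(grid, w, depth_img, fx=50, fy=50, cx=32, cy=24, voxel_size=0.02)
```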

    Embodiment Sensitivity to Movement Distortion and Perspective Taking in Virtual Reality

    Despite recent technological improvements of immersive technologies, Virtual Reality suffers from severe intrinsic limitations, in particular the immateriality of the visible 3D environment. Typically, any simulation and manipulation in a cluttered environment would ideally require providing feedback of collisions to every body part (arms, legs, trunk, etc.) and not only to the hands, as originally explored with haptic feedback. This thesis addresses these limitations by relying on a cross-modal perception and cognitive approach instead of haptic or force feedback. We base our design on scientific knowledge of bodily self-consciousness and embodiment. It is known that the instantaneous experience of embodiment emerges from the coherent multisensory integration of bodily signals taking place in the brain, and that altering this mechanism can temporarily change how one perceives properties of one's own body. This mechanism is at stake during a VR simulation, and this thesis explores new avenues of interaction design based on these fundamental scientific findings about the embodied self. In particular, we explore the use of third-person perspective (3PP) instead of permanently offering the traditional first-person perspective (1PP), and we manipulate the user-avatar motor mapping to achieve a broader range of interactions while maintaining embodiment. We are guided by two principles: to explore the extent to which we can enhance VR interaction through the manipulation of bodily aspects, and to identify the extent to which a given manipulation affects the embodiment of a virtual body. Our results provide new evidence supporting strong embodiment of a virtual body even when viewed from 3PP, and in particular that voluntarily alternating the point of view between 1PP and 3PP is not detrimental to the experience of ownership over the virtual body. Moreover, detailed analysis of movement quality shows highly similar reaching behavior in both perspective conditions, with obvious advantages or disadvantages of each perspective only in particular situations (e.g. occlusion of the target by the body in 3PP, limited field of view in 1PP). We also show that subjects are insensitive to visuo-proprioceptive movement distortions when the nature of the distortion is not made explicit, and that subjects are biased toward self-attributing distorted movements that make the task easier.
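
    A minimal sketch of a user-avatar motor-mapping distortion of the kind studied here, assuming a simple displacement gain about a reference origin (the gain value and function names are illustrative):

```python
import numpy as np

def distort_motion(real_pos, origin, gain=1.2):
    """Render the avatar's hand with an amplified (or attenuated) gain
    relative to the real hand. The studies probe which such distortions
    go unnoticed and how they bias self-attribution."""
    real_pos, origin = np.asarray(real_pos, float), np.asarray(origin, float)
    return origin + gain * (real_pos - origin)   # scale displacement about origin

# A 10 cm real reach is rendered as 12 cm, easing the task for the avatar
print(distort_motion([0.10, 0.0, 0.0], origin=[0.0, 0.0, 0.0]))
```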

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has become established in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. For focusing of the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform is realised for manipulation of the focusing optics. It is based on a variable-length continuum manipulator which, together with a mechatronic actuation unit, enables movements of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally. The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator during teleoperation in setting an optimal working distance. Advantages of the visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot, achieving a mean spot-positioning accuracy of 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
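
    For a single section of such a manipulator, forward kinematics is commonly modelled under the constant-curvature assumption; the sketch below implements that textbook model (Webster and Jones), which is an assumption here and not the dissertation's full variable-length, five-degree-of-freedom model.

```python
import numpy as np

def cc_tip_position(kappa, phi, length):
    """Constant-curvature forward kinematics for one continuum section.
    kappa: curvature [1/m], phi: bending-plane angle [rad], length: arc [m]."""
    if abs(kappa) < 1e-9:
        return np.array([0.0, 0.0, length])       # straight section
    theta = kappa * length                        # total bending angle
    r = 1.0 / kappa                               # bending radius
    # Arc in the x-z plane, then rotate the bending plane by phi about z
    x_plane = r * (1.0 - np.cos(theta))
    z = r * np.sin(theta)
    return np.array([np.cos(phi) * x_plane, np.sin(phi) * x_plane, z])

# A 10 cm section bent to a quarter circle (theta = pi/2)
print(cc_tip_position(kappa=np.pi / 0.2, phi=0.0, length=0.1))
```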

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because, while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focusses on the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focusses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines for designing more natural 3DUIs.
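
    As an illustration of 3D mid-air selection in dense scenes, the sketch below scores candidates inside a pointing cone, a generic technique; a hybrid 3D/2D method like the thesis's would then refine such a candidate set in a 2D view (the cone test itself and its parameters are assumptions, not the thesis's algorithm).

```python
import numpy as np

def cone_select(points, origin, direction, half_angle=np.radians(5)):
    """Score candidate points inside a selection cone by angular offset
    from the pointing ray; return candidate indices, best-aligned first."""
    points = np.asarray(points, dtype=float)
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    rel = points - np.asarray(origin, float)
    dist = np.linalg.norm(rel, axis=1)
    cosang = rel @ d / np.maximum(dist, 1e-9)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))
    inside = np.where(angles < half_angle)[0]
    return inside[np.argsort(angles[inside])]

pts = [[0.0, 0.0, 1.0], [0.05, 0.0, 1.0], [0.5, 0.0, 1.0]]
print(cone_select(pts, origin=[0, 0, 0], direction=[0, 0, 1]))  # -> [0 1]
```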

    Medical robots for MRI guided diagnosis and therapy

    Magnetic Resonance Imaging (MRI) provides the capability of imaging tissue with fine resolution and superior soft-tissue contrast compared with conventional ultrasound and CT imaging, which makes it an important tool for clinicians to perform more accurate diagnosis and image-guided therapy. Medical robotic devices combining high-resolution anatomical images with real-time navigation are ideal for precise and repeatable interventions. Despite these advantages, the MR environment imposes constraints on mechatronic devices operating within it. This thesis presents a study on the design and development of robotic systems for particular MR interventions, investigating the MR compatibility of mechatronic components, actuation control, kinematics and workspace analysis, and the mechanical and electrical design of the robots. Two types of robotic systems have been developed and evaluated along these aspects. (i) A device for MR-guided transrectal prostate biopsy: the system was designed from components proven to be MR compatible, actuated by pneumatic and ultrasonic motors, and tracked by optical position sensors and fiducial markers. Clinical trials were performed with the device on three patients, and the reported results demonstrated its capability to perform needle positioning under MR guidance, with a procedure time of around 40 minutes and no compromised image quality, meeting our system specifications. (ii) Limb-positioning devices to facilitate the magic angle effect for diagnosis of tendinous injuries: two systems were designed for lower- and upper-limb positioning, actuated and tracked by methods similar to those of the first device. A group of volunteers was recruited to verify the functionality of the systems. The results demonstrate a clear enhancement of image quality, with an increase in signal intensity of up to 24 times in the tendon tissue caused by the magic angle effect, showing the feasibility of applying the proposed devices in clinical diagnosis.
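
    The magic angle effect itself follows a textbook relation: the residual dipolar coupling in ordered collagen scales with 3cos²θ − 1, which vanishes when the tendon is tilted at about 54.7° to B0, lengthening T2 and raising tendon signal. A worked check of that relation:

```python
import numpy as np

# The dipolar term 3*cos^2(theta) - 1 is suppressed at the magic angle,
# which is arccos(sqrt(1/3)) = 54.7356... degrees.
theta = np.radians(54.7356)
print(3 * np.cos(theta) ** 2 - 1)           # ~0: dipolar coupling suppressed
print(np.degrees(np.arccos(np.sqrt(1/3))))  # magic angle in degrees
```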