12 research outputs found

    Expanding the User Interactions and Design Process of Haptic Experiences in Virtual Reality

    Virtual reality can be a highly immersive experience due to its realistic visual presentation. This immersive state is useful for applications including education, training, and entertainment. To further enhance the immersion provided by virtual reality, devices capable of simulating touch and force have been researched so that virtual reality offers not only a visual and audio experience but a haptic experience as well. Such research has investigated many approaches to generating haptics for virtual reality but often does not explore how to use them to create an immersive haptic experience. In this thesis, we present a discussion of four proposed areas of the virtual reality haptic experience design process using a demonstration methodology. To investigate the application of haptic devices, we designed a modular ungrounded haptic system, used it to create a general-purpose device capable of force-based feedback, and applied that device in three areas of exploration. The first area explored is the application of existing haptic theory for aircraft control to the field of virtual reality drone control. The second is the presence of the size-weight sensory illusion within virtual reality when using a simulated haptic force. The third is how authoring within a virtual reality medium can be used by a designer to create VR haptic experiences. From these explorations, we begin a higher-level discussion of the broader process of creating a virtual reality haptic experience. Using the results of each project as a representation of our proposed design steps, we discuss not only the broader concepts the steps contribute to the process and their importance, but also draw connections between them. In doing so, we present a more holistic approach to the large-scale design of virtual reality haptic experiences and the benefits we believe it provides.

    Haptic Interaction with 3D oriented point clouds on the GPU

    Real-time point-based rendering and interaction with virtual objects is gaining popularity and importance as different haptic devices and technologies increasingly provide the basis for realistic interaction. Haptic interaction is used in a wide range of applications such as medical training, remote robot operation, tactile displays, and video games. Virtual object visualization and interaction using haptic devices is the main focus; this process involves several steps: data acquisition, graphic rendering, haptic interaction, and data modification. This work presents a framework for haptic interaction using the GPU as a hardware accelerator and includes an approach for enabling the modification of data during interaction. The results demonstrate the limits and capabilities of these techniques in the context of volume rendering for haptic applications. The use of dynamic parallelism as a technique to scale the number of accelerator threads according to the interaction requirements is also studied, allowing the editing of data sets of up to one million points at interactive haptic frame rates.
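    The core of haptic rendering against an oriented point cloud can be sketched as a penalty-based force: find the point nearest the haptic probe and, if the probe has penetrated below that point's tangent plane, push back along the surface normal. The sketch below is a minimal CPU version with hypothetical names; the thesis's actual GPU kernels and dynamic-parallelism scheduling are not detailed in the abstract.

```python
import numpy as np

def haptic_force(probe, points, normals, stiffness=200.0):
    """Penalty-based force from the nearest oriented point.

    probe:   (3,) haptic probe position
    points:  (N, 3) surface point positions
    normals: (N, 3) unit surface normals
    """
    d = points - probe                           # vectors from probe to points
    i = np.argmin(np.einsum('ij,ij->i', d, d))   # index of nearest point
    # Signed distance along the local surface normal; negative means the
    # probe has penetrated below the tangent plane of the nearest point.
    depth = np.dot(probe - points[i], normals[i])
    if depth >= 0.0:
        return np.zeros(3)                       # free space: no force
    return -stiffness * depth * normals[i]       # push out along the normal

# Single surface point at the origin with normal +z; probe 1 mm below it.
pts = np.array([[0.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0]])
f = haptic_force(np.array([0.0, 0.0, -0.001]), pts, nrm)
```

    In a real haptic loop this evaluation must complete at roughly 1 kHz, which is why offloading the nearest-neighbour search to the GPU, and scaling the thread count with dynamic parallelism, matters for large point sets.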

    Increasing Transparency and Presence of Teleoperation Systems Through Human-Centered Design

    Teleoperation allows a human to control a robot to perform dexterous tasks in remote, dangerous, or unreachable environments. A perfect teleoperation system would enable the operator to complete such tasks at least as easily as if he or she were to complete them by hand. This ideal teleoperator must be perceptually transparent, meaning that the interface appears nearly nonexistent to the operator, allowing him or her to focus solely on the task environment rather than on the teleoperation system itself. Furthermore, the ideal teleoperation system must give the operator a high sense of presence, meaning that the operator feels as though he or she is physically immersed in the remote task environment. This dissertation seeks to improve the transparency and presence of robot-arm-based teleoperation systems through a human-centered design approach, specifically by leveraging scientific knowledge about the human motor and sensory systems. First, this dissertation aims to improve the forward (efferent) teleoperation control channel, which carries information from the human operator to the robot. The traditional method of calculating the desired position of the robot's hand simply scales the measured position of the human's hand. This commonly used motion mapping erroneously assumes that the human's produced motion identically matches his or her intended movement. Given that humans make systematic directional errors when moving the hand under conditions similar to those imposed by teleoperation, I propose a new paradigm of data-driven human-robot motion mappings for teleoperation. The mappings are determined by having the human operator mimic the target robot as it autonomously moves its arm through a variety of trajectories in the horizontal plane. Three data-driven motion mapping models are described and evaluated for their ability to correct for the systematic motion errors made in the mimicking task.
    Individually-fit and population-fit versions of the most promising motion mapping model are then tested in a teleoperation system that allows the operator to control a virtual robot. Results of a user study involving nine subjects indicate that the newly developed motion mapping model significantly increases the transparency of the teleoperation system. Second, this dissertation seeks to improve the feedback (afferent) teleoperation control channel, which carries information from the robot to the human operator. We aim to improve the teleoperation system by providing the operator with multiple novel modalities of haptic (touch-based) feedback. We describe the design and control of a wearable haptic device that provides kinesthetic grip-force feedback through a geared DC motor, and tactile fingertip-contact-and-pressure and high-frequency acceleration feedback through a pair of voice-coil actuators mounted at the tips of the thumb and index finger. Each included haptic feedback modality is known to be fundamental to direct task completion and can be implemented without great cost or complexity. A user study involving thirty subjects investigated how these three modalities of haptic feedback affect an operator's ability to control a real remote robot in a teleoperated pick-and-place task. This study's results strongly support the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and show more mixed effects of fingertip-contact-and-pressure feedback.
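    A data-driven motion mapping of the kind described above can be illustrated with a simple least-squares fit: record matched human and robot positions during the mimicking task, then fit an affine map that sends produced human motion to the intended robot motion. This is a hedged sketch under assumed names and a plausible model class (affine); the dissertation's three actual models are not specified in the abstract.

```python
import numpy as np

def fit_motion_mapping(human_pos, robot_pos):
    """Fit an affine map robot ~= human @ A + b from mimicking data.

    human_pos, robot_pos: (N, 2) matched hand/end-effector positions in
    the horizontal plane, recorded while the operator mimics the robot.
    """
    X = np.hstack([human_pos, np.ones((len(human_pos), 1))])  # bias column
    # Least-squares solve for the (3, 2) stacked parameter matrix [A; b].
    W, *_ = np.linalg.lstsq(X, robot_pos, rcond=None)
    return W

def apply_mapping(W, p):
    """Map one measured human hand position to a robot command."""
    return np.hstack([p, 1.0]) @ W

# Synthetic mimicking data: the "human" makes a systematic 10-degree
# rotational error relative to the intended (robot) trajectory.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
robot = np.random.default_rng(0).uniform(-0.2, 0.2, (50, 2))  # intended poses
human = robot @ R.T                                           # produced poses
W = fit_motion_mapping(human, robot)
p = apply_mapping(W, human[0])   # mapped command recovers the intended pose
```

    With the fitted map in place, the measured (error-containing) hand motion is corrected back toward the operator's intended motion before being sent to the robot, which is the sense in which such mappings can increase transparency.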

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They were organized into topical sections as follows: haptic science, haptic technology, and haptic applications.

    Optical Fibre-based Force Sensing Needle Driver for Minimally Invasive Surgery

    Minimally invasive surgery has been limited from its inception by insufficient haptic feedback to surgeons. The loss of haptic information threatens patient safety and results in longer operation times. To address this problem, various force-sensing systems have been developed to provide information about tool–tissue interaction forces. However, the reported results for axial and grasping forces have been inaccurate in most of these studies due to a considerable amount of error and uncertainty in their force acquisition methods. Furthermore, the sterilizability of sensorized instruments plays a pivotal role in the accurate measurement of forces inside a patient's body. Therefore, the objective of this thesis was to develop a sterilizable needle-driver-type grasper using fibre Bragg gratings. To measure more accurate and reliable tool–tissue interaction forces, optical force sensors were integrated into the grasper jaw to measure axial and grasping forces directly at their point of exertion on the tool tip. Two sets of sensor prototypes were developed to prove the feasibility of the proposed concept. Implementation of this concept in a needle-driver instrument resulted in the final proposed model of the sensorized laparoscopic instrument. Fibre Bragg gratings were used for measuring forces due to their many advantages for this application, such as small size, sterilizability, and high sensitivity. Visual force feedback was provided to users based on the acquired real-time force data. Improvements and considerations related to the current work were identified, and potential areas for continuing this project in the future are discussed.
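    The measurement principle behind a fibre Bragg grating force sensor is that the reflected Bragg wavelength shifts linearly with strain, so after calibration a wavelength shift converts directly into a force. The sketch below illustrates that conversion with a hypothetical calibration constant; the thesis's actual calibration values and temperature-compensation scheme are not given in the abstract.

```python
def fbg_force(wavelength_nm, reference_nm, sensitivity_nm_per_N,
              temp_shift_nm=0.0):
    """Convert a fibre Bragg grating wavelength shift into force.

    The Bragg wavelength shifts linearly with strain, so a calibration
    yielding a sensitivity S (nm/N) gives:
        F = (lambda_measured - lambda_reference - thermal_shift) / S
    A second, strain-isolated grating is commonly used to estimate the
    thermal shift; here that estimate is simply passed in.
    """
    shift = wavelength_nm - reference_nm - temp_shift_nm
    return shift / sensitivity_nm_per_N

# Hypothetical calibration: 0.05 nm of Bragg shift per newton of load.
force = fbg_force(1550.02, 1550.00, 0.05)  # 0.02 nm shift
```

    Because the conversion is a simple linear relation, the dominant engineering effort lies in the calibration and in isolating thermal from mechanical wavelength shifts, not in the readout arithmetic itself.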

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. We summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is provided in the attachment.

    Force-Sensing-Based Multi-Platform Robotic Assistance for Vitreoretinal Surgery

    Vitreoretinal surgery aims to treat disorders of the retina, vitreous body, and macula, such as retinal detachment, diabetic retinopathy, macular hole, epiretinal membrane, and retinal vein occlusion. Challenged by several technical and human limitations, vitreoretinal practice currently ranks amongst the most demanding fields in ophthalmic surgery. Of vitreoretinal procedures, membrane peeling is the most commonly performed, over 0.5 million times annually, and among the most prone to complications. It requires extremely delicate tissue manipulation through various micron-scale maneuvers near the retina despite the physiological hand tremor of the operator. In addition, to avoid injuries, the forces applied to the retina must be kept at a very fine level, often well below the tactile sensory threshold of the surgeon. Retinal vein cannulation is another demanding procedure, in which therapeutic agents are injected into occluded retinal veins. The feasibility of this treatment is limited by the challenges of identifying the moment of venous puncture, achieving cannulation, and maintaining it throughout the drug delivery period. Recent advancements in medical robotics have significant potential to address most of the challenges in vitreoretinal practice and therefore to prevent trauma, lessen complications, minimize intra-operative surgeon effort, maximize surgeon comfort, and promote patient safety. This dissertation presents the development of novel force-sensing tools that can easily be used on various robotic platforms, together with robot control methods, to produce integrated assistive surgical systems that work in partnership with surgeons against the current limitations of vitreoretinal surgery, focusing specifically on membrane peeling and vein cannulation procedures. Integrating high-sensitivity force sensing into ophthalmic instruments enables precise quantitative monitoring of applied forces.
    Auditory feedback based upon the measured forces can inform (and warn) the surgeon quickly during surgery and help prevent injury due to excessive forces. Using these tools on a robotic platform can attenuate the surgeon's hand tremor, which effectively promotes tool manipulation accuracy. In addition, based upon certain force signatures, the robotic system can precisely identify critical instants, such as the venous puncture in retinal vein cannulation, and actively guide the tool towards clinical targets, compensate for any involuntary motion of the surgeon, or generate additional motion that makes the surgical task easier. Experimental results using two distinct robotic platforms, the Steady-Hand Eye Robot and Micron, in combination with the force-sensing ophthalmic instruments, show significant performance improvement in artificial dry phantoms and ex vivo biological tissues.
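    The force-based auditory warning described above amounts to mapping the measured tool-tip force onto discrete cue levels. The sketch below illustrates the idea with hypothetical millinewton thresholds; the actual thresholds and cue design used with the Steady-Hand Eye Robot and Micron are not given in the abstract.

```python
def audio_feedback_level(force_mN, caution_mN=5.0, danger_mN=7.5):
    """Map a measured tool-tip force to a discrete auditory cue level.

    The thresholds are illustrative placeholders: retinal interaction
    forces are on the order of millinewtons, often below the surgeon's
    tactile sensory threshold, which is why an auditory channel helps.
    """
    if force_mN < caution_mN:
        return "silent"          # safe range: no cue
    if force_mN < danger_mN:
        return "caution"         # e.g. slow intermittent beeping
    return "danger"              # e.g. fast or continuous tone

levels = [audio_feedback_level(f) for f in (2.0, 6.0, 9.0)]
```

    The same thresholded force signal can also serve as an input to the robot controller, e.g. to slow or stop motion when the danger level is reached, rather than only warning the surgeon.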

    A novel parametric scale for determining rehabilitation progress in the upper limb

    PhD Thesis. The process of sensorimotor rehabilitation depends upon the clinical condition, age, and circumstances of the patient and is unlikely to be continuous or predictable in nature. Mapping progress in conditions such as stroke, cerebral palsy, and traumatic brain injury relies upon a variety of qualitative assessments, each resulting in a different scale of measurement. Most current assessments are elaborate, specific to certain participants and/or stages of a condition, and subject to inter-rater and intra-rater variability. A simple and reliable measuring system is required that can capture rehabilitation progress from an initial state through to complete rehabilitation. It must be believable, flexible, understandable, and accessible if the patient is to benefit from its use. Two-dimensional reaching tasks reflect movements made in typical therapies and activities of daily living. This thesis hypothesises that valuable parameters exist within positional and temporal data gathered from simple reaching tasks. Such parameters should be able to identify movement quality and hence measure state and progress during rehabilitation. They should correlate well with a variety of clinical scales to be meaningful and, as quantifiable measurands, they should be extendable beyond the range of established clinical scales. This thesis proposes a novel solution for the assessment of upper limb rehabilitation. An affordable desktop computer assessment system was developed and used with juvenile patient participants (N=11) to compare simple desktop reaching parameters with a clinical scale. A control group of typically developing juvenile participants (N=10) provided baseline data. The results indicated good correlation with the clinical scale based upon a weighted combination of pre-selected movement parameters. The methodology developed permits assessment against further clinical scales and additional participant groups, allowing rapid, accurate, reliable, and extendable assessments.
    The potential for mass data acquisition from clinical and domestic settings is identified, supporting the development of further assessments and, potentially, innovative new therapies that address limited therapist availability.
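    The "weighted combination of pre-selected movement parameters" can be sketched as a simple weighted sum over per-trial measurands extracted from the reaching data. The parameter names and weights below are purely illustrative; the thesis's actual parameter set and fitted weights are not reported in the abstract.

```python
def movement_quality_score(params, weights):
    """Weighted combination of reaching-task movement parameters.

    params:  dict of per-trial measurands, e.g. path-length ratio,
             movement time, and a smoothness measure (hypothetical names).
    weights: dict of weights, in practice chosen to maximise correlation
             with a clinical scale; the values here are illustrative only.
    """
    return sum(weights[k] * params[k] for k in weights)

# One synthetic trial: longer paths, slower movement, and higher jerk
# all penalise the score, so higher (less negative) scores mean
# better-quality movement.
trial = {"path_ratio": 1.2, "time_s": 1.8, "jerk": 0.4}
w = {"path_ratio": -2.0, "time_s": -0.5, "jerk": -5.0}
score = movement_quality_score(trial, w)
```

    Because the score is a plain linear combination, it remains defined beyond the ceiling of any single clinical scale, which is the sense in which such quantifiable measurands can extend established assessments.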