13 research outputs found

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing a palpation action, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper presents a first review of state-of-the-art medical simulators for tissue examination training, with a focus on providing multimodal feedback so that as many manual examination techniques as possible can be learned. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different feedback modalities. Opportunities arising from developments in pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    A perspective review on integrating VR/AR with haptics into STEM education for multi-sensory learning

    As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world’s students were not in school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of this novel viewpoint on established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality. This work was supported by the European Union through the Erasmus+ Program under Grant 2020-1-NO01-KA203-076540, project title Integrating virtual and AUGMENTED reality with WEARable technology into engineering EDUcation (AugmentedWearEdu), https://augmentedwearedu.uia.no/ [34] (accessed on 27 March 2022). This work was also supported by the Top Research Centre Mechatronics (TRCM), University of Agder (UiA), Norway.

    Haptic technology for micro-robotic cell injection training systems — a review

    Currently, the micro-robotic cell injection procedure is performed manually by expert human bio-operators. In order to be proficient at the task, lengthy and expensive dedicated training is required. As such, effective specialized training systems for this procedure can prove highly beneficial. This paper presents a comprehensive review of haptic technology relevant to cell injection training and discusses the feasibility of developing such training systems, providing researchers with an inclusive resource enabling the application of the presented approaches, or the extension and advancement of the work. A brief explanation of cell injection and the challenges associated with the procedure is first presented. Important skills, such as accuracy, trajectory, speed, and applied force, which need to be mastered by the bio-operator in order to achieve successful injection, are then discussed. An overview of various types of haptic feedback, devices, and approaches follows, along with a discussion of approaches to cell modeling. The application of haptics to skills training across various fields and the evaluation of haptically-enabled virtual training systems are then discussed. Finally, given the findings of the review, this paper concludes that a haptically-enabled virtual cell injection training system is feasible, and recommendations are made to developers of such systems.

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still hinders it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by proving that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback.

    A Novel Haptic Simulator for Evaluating and Training Salient Force-Based Skills for Laparoscopic Surgery

    Laparoscopic surgery has evolved from an 'alternative' surgical technique to currently being considered a mainstream surgical technique. However, learning this complex technique holds unique challenges for novice surgeons due to their 'distance' from the surgical site. One of the main challenges in acquiring laparoscopic skills is the acquisition of force-based, or haptic, skills. The neglect of this aspect of skills training by popular training methods (e.g., the Fundamentals of Laparoscopic Surgery, or FLS, curriculum) has led many medical skills professionals to research new, efficient methods for haptic skills training. The overarching goal of this research was to demonstrate that a set of simple, simulator-based haptic exercises can be developed and used to train users in the skilled application of forces with surgical tools. A set of salient or core haptic skills that underlie proficient laparoscopic surgery was identified, based on published time-motion studies. Low-cost, computer-based haptic training simulators were prototyped to simulate each of the identified salient haptic skills. All simulators were tested for construct validity by comparing surgeons' performance on the simulators with the performance of novices with no previous laparoscopic experience. An integrated 'core haptic skills' simulator capable of rendering the three validated haptic skills was built. To examine the efficacy of this novel salient haptic skills training simulator, novice participants were tested for training improvements in a detailed study. Results from the study demonstrated that simulator training enabled users to significantly improve force application for all three haptic tasks. Research outcomes from this project could greatly influence surgical skills simulator design, resulting in more efficient training.

    Virtual reality training for micro-robotic cell injection

    This research was carried out to fill a gap in existing knowledge on approaches to supplementing training for the micro-robotic cell injection procedure by utilising virtual reality and haptic technologies.

    Proceedings of the 1st European conference on disability, virtual reality and associated technologies (ECDVRAT 1996)

    The proceedings of the conference.

    Exploring Cognitive Processes with Virtual Environments

    The scope of this thesis is the study of cognitive processes with, and within, Virtual Environments (VEs). Specifically, the presented work has two main objectives: (1) to outline a framework for situating the applications of VEs in the cognitive sciences, especially those interfacing with the medical domain; and (2) to empirically illustrate the potential of VEs for studying specific aspects of cognitive processes. For the first objective, the framework was built by proposing classifications and discussing several examples of VEs used for assessing and treating disorders of attention, memory, executive functions, visual-spatial skills, and language. Virtual Reality Exposure Therapy was briefly discussed as well, and applications to autism spectrum disorders, schizophrenia, and pain control were touched on. These applications underscore prerogatives that may extend to non-medical applications in the cognitive sciences. The second objective was pursued by studying the time course of attention. Two experiments were undertaken, both relying on dual-target paradigms that cause an attentional blink (AB). The first experiment evaluated the effect of a 7-week Tibetan Yoga training on the performance of habitual meditators in an AB paradigm using letters as distractors and single-digit numbers as targets. The results confirm the evidence that meditation improves the allocation of attentional resources, and extend this conclusion to Yoga, which also incorporates physical exercise. The second experiment compared the AB performance of adult participants using rapid serial presentations of road signs -- hence less abstract stimuli -- under three display conditions: as 2-D images on a computer screen, either with or without a concurrent auditory distraction, and appearing in a 3-D immersive virtual environment depicting a motorway junction.
    The results found a generally weak AB magnitude, which was maximal in the virtual environment and minimal in the condition with the concurrent auditory distraction. However, no lag-1 sparing effect was observed.

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas), and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations, and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation, and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds).
    Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines for designing more natural 3DUIs.