
    An interactive 3D medical visualization system based on a light field display

    This paper presents a prototype medical data visualization system exploiting a light field display and custom direct volume rendering techniques to enhance understanding of massive volumetric data, such as CT, MRI, and PET scans. The system can be integrated with standard medical image archives and extends the capabilities of current radiology workstations by supporting real-time rendering of volumes of potentially unlimited size on light field displays that generate dynamic, observer-independent light fields. The system allows multiple untracked naked-eye users in a sufficiently large interaction area to coherently perceive rendered volumes as real objects, with stereo and motion parallax cues. In this way, an effective collaborative analysis of volumetric data can be achieved. Evaluation tests demonstrate the usefulness of the generated depth cues and the improved performance in understanding complex spatial structures compared with standard techniques.
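    The paper's custom rendering techniques are not detailed in this abstract; as a point of reference, the sketch below shows the standard front-to-back emission-absorption compositing step that underlies most direct volume renderers. The function and parameter names (composite_ray, transfer_function, step_size) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite_ray(samples, transfer_function, step_size):
    """Front-to-back emission-absorption compositing along one viewing ray.

    samples           : 1D array of scalar values sampled along the ray
    transfer_function : maps a scalar value to (r, g, b, alpha) in [0, 1]
    step_size         : sample spacing, relative to a unit reference step
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer_function(s)
        # Opacity correction for the chosen step size.
        a = 1.0 - (1.0 - a) ** step_size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:          # early ray termination
            break
    return color, alpha
```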

    Modeling and rendering for development of a virtual bone surgery system

    A virtual bone surgery system is developed to provide the potential of a realistic, safe, and controllable environment for surgical education. It can be used for training in orthopedic surgery, as well as for planning and rehearsal of bone surgery procedures...Using the developed system, the user can perform virtual bone surgery by simultaneously seeing bone material removal through a graphic display device, feeling the force via a haptic device, and hearing the sound of tool-bone interaction --Abstract, page iii
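    The thesis' actual material-removal and haptic force models are not given in this abstract; the following sketch only illustrates, under assumed names (drill_step, a voxel density grid, a stiffness constant), how a voxel-based simulator might carve bone within a spherical burr and derive a simple resistance force from the removed material.

```python
import numpy as np

def drill_step(density, tool_center, tool_radius, voxel_size, stiffness):
    """One step of voxel-based bone removal with a simple penalty-style
    haptic force magnitude (illustrative sketch, not the thesis' model).

    density     : 3D array of bone density (0 = already removed)
    tool_center : burr-tip position in world coordinates (x, y, z)
    tool_radius : radius of the spherical burr
    """
    # Voxel index range covering the axis-aligned bounding box of the burr.
    lo = np.floor((tool_center - tool_radius) / voxel_size).astype(int)
    hi = np.ceil((tool_center + tool_radius) / voxel_size).astype(int) + 1
    lo = np.maximum(lo, 0)
    hi = np.minimum(hi, density.shape)

    removed = 0.0
    for idx in np.ndindex(*(hi - lo)):
        i, j, k = np.array(idx) + lo
        p = (np.array([i, j, k]) + 0.5) * voxel_size   # voxel center
        if np.linalg.norm(p - tool_center) <= tool_radius and density[i, j, k] > 0:
            removed += density[i, j, k]
            density[i, j, k] = 0.0                     # carve the voxel away

    # Resistance force grows with the material removed in this step;
    # its direction (along the tool axis) is left to the haptic loop.
    return stiffness * removed
```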

    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulators integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of virtual reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
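    The abstract mentions calculating a spatial transformation between the surgical (optical-tracker) and holographic coordinate frames. A common way to obtain such a transform from paired calibration points is a least-squares rigid fit (Kabsch/Horn); the sketch below shows that generic method under assumed argument names and is not the thesis' specific calibration protocol.

```python
import numpy as np

def rigid_transform(points_tracker, points_hologram):
    """Least-squares rigid transform (rotation R, translation t) mapping
    points expressed in the optical-tracker frame onto the same points
    expressed in the AR headset's holographic frame (Kabsch/Horn method).

    Both inputs are (N, 3) arrays of corresponding calibration points.
    """
    ct = points_tracker.mean(axis=0)
    ch = points_hologram.mean(axis=0)
    H = (points_tracker - ct).T @ (points_hologram - ch)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ch - R @ ct
    return R, t                                  # x_holo ~ R @ x_tracker + t
```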

    Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction

    This thesis presents learning-based guidance (LbG) approaches that aim to transfer skills from human to robot. The approaches capture the temporal and spatial information of human motions and teach the robot to assist humans in human-robot collaborative tasks. In such physical human-robot interaction (pHRI) environments, learning from demonstration (LfD) enables this skill transfer. Demonstrations can be provided through kinesthetic teaching and/or teleoperation. In kinesthetic teaching, humans directly guide the robot's body to perform a task, while in teleoperation, demonstrations can be provided through motion/vision-based systems or haptic devices. In this work, the LbG approaches are developed through kinesthetic teaching and teleoperation in both virtual and physical environments. First, this thesis compares and analyzes the capability of two types of statistical models, generative and discriminative, to generate haptic guidance (HG) forces as well as to segment and recognize gestures for pHRI, which can be used in virtual minimally invasive surgery (MIS) training. In this learning-based approach, the knowledge and experience of experts are modeled to improve the unpredictable motions of novice trainees. Two statistical models, the hidden Markov model (HMM) and hidden conditional random fields (HCRF), are used to learn gestures from demonstrations in a virtual MIS-related task. The models are developed to automatically recognize and segment gestures as well as generate guidance forces. In the practice phase, the guidance forces are adaptively calculated in real time based on the similarity between the user's motion and the gesture models. Both statistical models can successfully capture the gestures of the user and provide adaptive HG; however, the results show the superiority of HCRF, as a discriminative method, over HMM, as a generative method, in terms of user performance. In addition, LbG approaches are developed for kinesthetic HRI simulations that aim to transfer the skills of expert surgeons to resident trainees. The discriminative nature of HCRF is incorporated into the approach to produce LbG forces and discriminate the skill levels of users. To experimentally evaluate this kinesthetic-based approach, a femur-bone drilling simulation is developed in which residents are provided haptic feedback based on real computed tomography (CT) data, enabling them to feel the variable stiffness of bone layers. Orthopaedic surgeons must adjust the drilling force because bone layers differ in stiffness. In the learning phase, using the simulation, an expert HCRF model is trained from expert surgeons' demonstrations to learn the stiffness variations of the different bone layers. A novice HCRF model is also developed from the demonstrations of novice residents to discriminate the skill level of a new trainee. During the practice phase, the learning-based approach, which encodes the stiffness variations, guides the trainees to perform training tasks with motions similar to the experts'. Finally, in contrast to the other parts of the thesis, an LbG approach is developed through teleoperation in a physical environment. The approach assists operators in navigating a teleoperated robot through a haptic steering wheel and a haptic gas pedal. A set of expert operator demonstrations is used to develop a maneuvering-skill model. The temporal and spatial variations of the demonstrations are learned using an HMM as the skill model. A modified Gaussian mixture regression (GMR), in combination with the HMM, is also developed to robustly reproduce the motion. The GMR calculates output motions from a joint probability density function of the data rather than directly modeling the regression function. In addition, the distance between the robot and obstacles is incorporated into the impedance control to generate guidance forces that also help operators avoid collisions with obstacles. Using different forms of variable impedance control, guidance forces are computed in real time with respect to the similarity between the user's maneuvers and the skill model. This encourages users to navigate the robot as the expert operators do. The results show that user performance is improved in terms of the number of collisions, task completion time, and average closeness to obstacles.
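    As a rough illustration of the Gaussian mixture regression step described above, the sketch below computes the conditional mean of the output dimensions given an input x under a fitted joint mixture. It is a generic GMR formulation with assumed argument names; the thesis' modified GMR and its coupling to the HMM states are not reproduced here.

```python
import numpy as np

def gmr(x, priors, means, covs, in_dim):
    """Gaussian Mixture Regression: predict the output part of a joint
    Gaussian mixture model given the input part x.

    priors : (K,) mixture weights
    means  : (K, D) component means over the joint [input, output] vector
    covs   : (K, D, D) component covariances
    in_dim : number of leading input dimensions in the joint vector
    """
    K = len(priors)
    resp = np.zeros(K)
    cond_means = []
    for k in range(K):
        mu_i, mu_o = means[k][:in_dim], means[k][in_dim:]
        S_ii = covs[k][:in_dim, :in_dim]
        S_oi = covs[k][in_dim:, :in_dim]
        diff = x - mu_i
        # Responsibility of component k for the observed input x.
        resp[k] = priors[k] * np.exp(-0.5 * diff @ np.linalg.solve(S_ii, diff)) \
                  / np.sqrt(np.linalg.det(2 * np.pi * S_ii))
        # Conditional mean of the output given x under component k.
        cond_means.append(mu_o + S_oi @ np.linalg.solve(S_ii, diff))
    resp /= resp.sum()
    return sum(w * m for w, m in zip(resp, cond_means))
```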

    Multipurpose virtual reality environment for biomedical and health applications

    Virtual reality is a trending, widely accessible, and contemporary technology of increasing utility to biomedical and health applications. However, most implementations of virtual reality environments are tailored to specific applications. We describe the complete development of a novel, open-source virtual reality environment that is suitable for multipurpose biomedical and healthcare applications. This environment can be interfaced with different hardware and data sources, ranging from gyroscopes to fMRI scanners. The developed environment simulates an immersive (first-person perspective) run in the countryside, in a virtual landscape with various salient features. The utility of the developed VR environment has been validated via two test applications: an application in the context of motor rehabilitation following injury of the lower limbs, and an application in the context of real-time functional magnetic resonance imaging neurofeedback, to regulate brain function in specific brain regions of interest. Both applications were tested by pilot subjects who unanimously provided very positive feedback, suggesting that appropriately designed VR environments can indeed be robustly and efficiently used for multiple biomedical purposes. We attribute the versatility of our approach to three principles implicit in the design: selectivity, immersiveness, and adaptability. The software, including both applications, is publicly available free of charge via a GitHub repository, in support of the Open Science Initiative. Although using this software requires specialized hardware and engineering know-how, we anticipate that our contribution will catalyze further progress, interdisciplinary collaborations, and replicability with regard to the use of virtual reality in biomedical and health applications.

    Intrinsic Measures and Shape Analysis of the Intratemporal Facial Nerve

    Hypothesis: To characterize anatomical measurements and shape variation of the facial nerve within the temporal bone, and to create statistical shape models (SSMs) to enhance knowledge of temporal bone anatomy and aid in automated segmentation. Background: The facial nerve is a fundamental structure in otologic surgery, and detailed anatomic knowledge combined with surgical experience is needed to avoid its iatrogenic injury. Trainees can use simulators to practice surgical techniques; however, the manual segmentation required to develop simulations can be time-consuming. Consequently, automated segmentation algorithms have been developed that use atlas registration, SSMs, and deep learning. Methods: Forty cadaveric temporal bones were evaluated using three-dimensional micro-CT (μCT) scans. The image sets were aligned using rigid fiducial registration, and the facial nerve canals were segmented and analyzed. Detailed measurements were performed along the various sections of the nerve. Shape variability was then studied using two SSMs: one involving principal component analysis (PCA) and a second using the Statismo framework. Results: Measurements of the nerve canal revealed the mean diameters and lengths of the labyrinthine, tympanic, and mastoid segments. The landmark PCA analysis demonstrated significant shape variation along one mode at the distal tympanic segment, and along three modes at the distal mastoid segment. The Statismo shape model was consistent with this analysis, emphasizing the variability at the mastoid segment. The models were made publicly available to aid future research and foster collaborative work. Conclusion: The facial nerve exhibits statistical shape variation within the temporal bone. The models form a framework for automated facial nerve segmentation and simulation for trainees.
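    As a generic illustration of the landmark-based PCA model mentioned above, the sketch below builds a statistical shape model from aligned landmark sets and synthesizes new shapes from mode weights. The function names and variance-retention threshold are assumptions; the study's actual models used the Statismo framework and μCT-derived landmarks.

```python
import numpy as np

def build_ssm(shapes, keep_variance=0.95):
    """Landmark-based statistical shape model via PCA.

    shapes : (N, L, 3) array of N shapes, each with L corresponding
             3D landmarks already rigidly aligned to a common frame.
    """
    N, L, _ = shapes.shape
    X = shapes.reshape(N, L * 3)
    mean_shape = X.mean(axis=0)
    Xc = X - mean_shape

    # Eigen-decomposition of the sample covariance via SVD of the data.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    variances = s ** 2 / (N - 1)
    cum = np.cumsum(variances) / variances.sum()
    n_modes = int(np.searchsorted(cum, keep_variance) + 1)

    modes = Vt[:n_modes]            # principal modes of shape variation
    return mean_shape, modes, variances[:n_modes]

def synthesize(mean_shape, modes, variances, b):
    """Generate a new shape from mode weights b (in standard deviations)."""
    return (mean_shape + (b * np.sqrt(variances)) @ modes).reshape(-1, 3)
```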