
    Mixed Reality in Modern Surgical and Interventional Practice: Narrative Review of the Literature

    BACKGROUND Mixed reality (MR) and its potential applications have gained increasing interest within the medical community in recent years. The ability to integrate virtual objects into a real-world environment within a single video-see-through display is a topic that sparks imagination. Given these characteristics, MR could facilitate preoperative and preinterventional planning, provide intraoperative and intrainterventional guidance, and aid in education and training, thereby improving the skills and merits of surgeons and residents alike. OBJECTIVE In this narrative review, we provide a broad overview of the different applications of MR across the entire spectrum of surgical and interventional practice and discuss potential future directions. METHODS A targeted literature search of the PubMed, Embase, and Cochrane databases was performed regarding the application of MR within surgical and interventional practice. Studies were included if they met the criteria for technological readiness level 5 and, as such, had been validated in a relevant environment. RESULTS A total of 57 studies were included and divided into studies on preoperative and preinterventional planning, intraoperative and intrainterventional guidance, and training and education. CONCLUSIONS The overall experience with MR is positive. The main benefits of MR seem to be related to improved efficiency. Limitations primarily seem to be related to constraints associated with head-mounted displays. Future directions should be aimed at improving head-mounted display technology, incorporating MR into surgical microscopes and robots, and designing trials to prove superiority.

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass on the drained, arrested heart. Progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments, tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the necessary visualization and navigation to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect, in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed a versatile and effective visuo-haptic surgical engine that handles a variety of surgical manipulations in real time. Soft tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room.
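    The abstract does not specify the deformation model, but a common real-time baseline for haptic-enabled soft-tissue simulation is a mass-spring system advanced with explicit integration. The sketch below is illustrative only; the function name, parameters, and values are assumptions, and the engine described above uses continuum-mechanics models fitted to biomechanical experiments, not this toy system.

```python
import numpy as np

def step(positions, velocities, rest_lengths, springs, k=50.0, damping=0.9,
         mass=1.0, dt=0.005, pulled=None):
    """Advance one explicit-Euler step of a mass-spring tissue patch.

    positions, velocities : (N, D) arrays of node states
    springs               : list of (i, j) node index pairs
    rest_lengths          : rest length of each spring
    pulled                : optional (node_index, target_point) pair, a crude
                            stand-in for a haptic tool grabbing one node
    """
    forces = np.zeros_like(positions)
    for (i, j), L0 in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            f = k * (length - L0) * d / length  # Hooke's law along the spring
            forces[i] += f
            forces[j] -= f
    velocities = damping * (velocities + dt * forces / mass)
    positions = positions + dt * velocities
    if pulled is not None:  # pin the grabbed node to the tool position
        idx, target = pulled
        positions[idx] = np.asarray(target, float)
        velocities[idx] = 0.0
    return positions, velocities
```

    In a haptic loop, `step` would run at high frequency (commonly around 1 kHz) with the tool force fed back to the device; real engines replace the spring forces with finite-element or other continuum formulations.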

    Co-designing augmented reality books: connections between student distributed cognition and technology design

    This study focuses on the relationship between community-based knowledge construction and participatory technology design. Contextualized against digital divide research, the goals of this study are twofold: to understand how students from a low socioeconomic background perceive an emerging technology as a tool to learn with, and to understand how the iterative and inherently reflective process of technology design may influence student knowledge construction. Grounded in distributed cognition theory, this study examines this relationship by co-designing an augmented reality book with an eighth-grade class in New England, USA. Augmented reality (AR) is an emerging technology that allows users to place virtual models into the physical world using a mobile device or headset. The students created a book about COVID-19 and worked with the researcher to build an iOS app that projected virtual models atop the book. The analysis answers questions about how students perceive and use AR as a tool to learn with, demonstrating that students use AR as an exploratory tool, first broadly across a diverse set of topics, and then more deeply into specific details. The study also examines the influence of iteration, observing how students switched between the roles of producer and consumer as they sought to improve the app, thereby encouraging critical reflection on the material. Ultimately, this study shows the ways in which individual students used the unique affordances of AR to contribute and modify information for the advancement of the community's knowledge.

    Modeling, Simulation, And Visualization Of 3d Lung Dynamics

    Medical simulation has facilitated the understanding of complex biological phenomena through its inherent explanatory power. It is a critical component for planning clinical interventions and analyzing their effect on a human subject. The success of medical simulation is evidenced by the fact that over one third of all medical schools in the United States augment their teaching curricula using patient simulators. Medical simulators present combat medics and emergency providers with video-based descriptions of patient symptoms along with step-by-step instructions on clinical procedures that alleviate the patient's condition. Recent advances in clinical imaging technology have led to effective medical visualization by coupling medical simulations with patient-specific anatomical models and their physically and physiologically realistic organ deformation. 3D physically-based deformable lung models obtained from a human subject are tools for regional lung structure and function analysis. Static imaging techniques such as Magnetic Resonance Imaging (MRI), chest x-rays, and Computed Tomography (CT) are conventionally used to estimate the extent of pulmonary disease and to establish available courses for clinical intervention. The predictive accuracy and evaluative strength of static imaging techniques may be augmented by improved computer technologies and graphical rendering techniques that can transform these static images into dynamic representations of subject-specific organ deformations. By creating physically based 3D simulation and visualization, 3D deformable models obtained from subject-specific lung images will better represent lung structure and function. Variations in overall lung deformation may indicate tissue pathologies, so 3D visualization of functioning lungs may also provide a visual complement to current diagnostic methods.
The feasibility of medical visualization using static 3D lungs as an effective tool for endotracheal intubation was previously shown using Augmented Reality (AR) based techniques in one of several research efforts at the Optical Diagnostics and Applications Laboratory (ODALAB). This research effort also shed light on the potential of coupling such medical visualization with dynamic 3D lungs. The purpose of this dissertation is to develop 3D deformable lung models, built from subject-specific high-resolution CT data, that can be visualized in the AR-based environment. A review of the literature illustrates that techniques for modeling real-time 3D lung dynamics can be roughly grouped into two categories: geometrically-based and physically-based. Additional classifications include treating a 3D lung model as either a volumetric or a surface model, modeling the lungs as a single compartment or multiple compartments, modeling either the air-blood interaction or the air-blood-tissue interaction, and considering either normal or pathophysiological lung behavior. Validating the simulated lung dynamics is a complex problem and has previously been approached by tracking a set of landmarks on the CT images. An area that needs to be explored is the relationship between the choice of deformation method for the 3D lung dynamics and its visualization framework. Constraints on the choice of deformation method and the 3D model resolution arise from the visualization framework; the constraints of interest here are the real-time requirement and the level of interaction required with the 3D lung models. The work presented here discusses a framework that facilitates a physics-based and physiology-based deformation of a single-compartment surface lung model while meeting the frame-rate requirements of the visualization system. 
The framework presented here is part of several research efforts at ODALab toward an AR-based medical visualization framework. The framework consists of three components: (i) modeling the Pressure-Volume (PV) relation, (ii) modeling the lung deformation using a Green's function based deformation operator, and (iii) optimizing the deformation using state-of-the-art Graphics Processing Units (GPUs). The validation of the results obtained in the first two modeling steps is also discussed for normal human subjects. Disease states such as pneumothorax and lung tumors are modeled using the proposed deformation method. Additionally, a method to synchronize the instantiations of the deformation across a network is also discussed.
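    The abstract names a Pressure-Volume (PV) relation as the first component but does not give its form. A standard empirical choice for the quasi-static lung PV curve is the Salazar-Knowles exponential, V(P) = Vmax - A*exp(-K*P), used here as a hedged stand-in; the function name and parameter values below are illustrative assumptions, not subject data from the dissertation.

```python
import numpy as np

def lung_volume(pressure_cmH2O, v_max=6.0, a=4.5, k=0.15):
    """Lung volume (L) at a transpulmonary pressure (cmH2O), using the
    Salazar-Knowles exponential PV relation as an illustrative stand-in.
    v_max : asymptotic maximum volume; a, k : empirical fit constants."""
    return v_max - a * np.exp(-k * pressure_cmH2O)

# A PV driver of this kind samples the curve over a breathing cycle and hands
# each target volume to the deformation operator, which displaces the surface
# mesh to match that volume.
pressures = np.linspace(0.0, 30.0, 8)
volumes = lung_volume(pressures)
```

    The exponential shape captures the decreasing compliance of the lung at high pressures, which is why it is a common fit for normal subjects; disease states would alter the fitted constants.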

    New perspectives in surgical treatment of aortic diseases


    Example Based Caricature Synthesis

    The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions which facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions, which characterize the facial features in proportion form. Finally, we introduce a likeness metric based on the Modified Hausdorff Distance (MHD), which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.