70 research outputs found

    Modeling and rendering for development of a virtual bone surgery system

    A virtual bone surgery system is developed to provide the potential of a realistic, safe, and controllable environment for surgical education. It can be used for training in orthopedic surgery, as well as for planning and rehearsal of bone surgery procedures...Using the developed system, the user can perform virtual bone surgery by simultaneously seeing bone material removal through a graphic display device, feeling the force via a haptic device, and hearing the sound of tool-bone interaction --Abstract, page iii

    Developing a virtual reality environment for petrous bone surgery: a state-of-the-art review

    The increasing power of computers has led to the development of sophisticated systems that aim to immerse the user in a virtual environment. The benefits of this type of approach to the training of physicians and surgeons are immediately apparent. Unfortunately the implementation of “virtual reality” (VR) surgical simulators has been restricted by both cost and technical limitations. The few successful systems use standardized scenarios, often derived from typical clinical data, to allow the rehearsal of procedures. In reality we would choose a system that allows us not only to practice typical cases but also to enter our own patient data and use it to define the virtual environment. In effect we want to re-write the scenario every time we use the environment and to ensure that its behavior exactly duplicates the behavior of the real tissue. If this can be achieved then VR systems can be used not only to train surgeons but also to rehearse individual procedures where variations in anatomy or pathology present specific surgical problems. The European Union has recently funded a multinational 3-year project (IERAPSI, Integrated Environment for Rehearsal and Planning of Surgical Interventions) to produce a virtual reality system for surgical training and for rehearsing individual procedures. Building the IERAPSI system will bring together a wide range of experts and combine the latest technologies to produce a true, patient specific virtual reality surgical simulator for petrous/temporal bone procedures. This article presents a review of the “state of the art” technologies currently available to construct a system of this type and an overview of the functionality and specifications such a system requires

    WebGL-Based Simulation of Bone Removal in Surgical Orthopaedic Procedures

    The effective role of virtual reality simulators in surgical operations has been demonstrated over the last decades. The proposed work gives the surgeon a perspective of actual orthopaedic surgeries, such as a total shoulder arthroplasty, in which the operation site has low visibility to the surgeon. The research in this thesis focuses on the design and implementation of web-based graphical feedback for a total shoulder arthroplasty (TSA) surgery. WebGL is applied for portability of the simulation and its powerful 3D programming features. To simulate the reaming process of the shoulder bone, multiple steps were required to remove the volumetric amount of bone touched by the reamer tool. A fast and accurate collision detection algorithm utilizing the Möller–Trumbore ray-triangle method was implemented to detect the first collision between the bone and the tool, in order to accelerate the computations for the bone removal process. Once a collision is detected, a mesh Boolean operation using the CSG method is invoked to calculate the volumetric amount of bone that intersects the tool and should be removed. This work involves user interaction to transform the tool in a Three.js scene for the simulated operation
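    The Möller–Trumbore test referenced above is a standard ray-triangle intersection algorithm. A minimal sketch in Python with NumPy (the thesis implements it in WebGL/Three.js, so the function and variable names here are illustrative) might look like:

```python
import numpy as np

def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    edge1, edge2 = v1 - v0, v2 - v0
    h = np.cross(direction, edge2)
    a = np.dot(edge1, h)
    if abs(a) < eps:              # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, edge1)
    v = f * np.dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(edge2, q)      # distance along the ray to the hit point
    return t if t > eps else None
```

    A reamer tool would cast rays from its sample points against the bone mesh triangles; the first hit (smallest positive t) marks where removal begins.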

    Virtual Reality Based Environment for Orthopedic Surgery (VEOS)

    The traditional way of teaching surgery involves students observing a "live" surgery and then gradually assisting experienced surgeons. The creation of a virtual reality environment for orthopedic surgery (VEOS) can be beneficial in improving the quality of training while decreasing the time needed for training. Developing such virtual environments for educational and training purposes can supplement existing approaches. In this research, the design and development of a virtual reality based environment for orthopedic surgery is described. The scope of the simulation environment is restricted to an orthopedic surgery process known as Less Invasive Stabilization System (LISS) surgery. The primary knowledge source for the LISS surgical process was Miguel A. Pirela-Cruz (Head of Orthopedic Surgery and Rehabilitation, Texas Tech University Health Sciences Center (TTHSC)). The VEOS was designed and developed on a PC-based platform and was validated through interactions with surgical residents at TTHSC. Feedback from the residents and our collaborator Miguel A. Pirela-Cruz was used to make necessary modifications to the surgical environment. --Industrial Engineering & Management

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    Full text link
    This study developed an effective visuo-haptic surgical engine to handle a variety of surgical manipulations in real time. Soft tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room

    Haptics-based Modeling and Simulation of Micro-Implants Surgery

    Ph.D. (Doctor of Philosophy)

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative and image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to intraoperatively plan the intervention directly on the patient’s bone. The glasses and all the other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach

    Machine learning and interactive real-time simulation for training on relevant total hip replacement skills.

    Virtual reality simulators have proven to be an excellent tool in the medical sector, helping trainees master surgical abilities by providing them with unlimited training opportunities. Total Hip Replacement (THR) is a procedure that can benefit significantly from VR/AR training, given its non-reversible nature. Of all the steps required while performing a THR, doctors agree that a correct fitting of the acetabular component of the implant has the highest relevance to ensure successful outcomes. Acetabular reaming is the step during which the acetabulum is resurfaced and prepared to receive the acetabular implant. The success of this step is directly related to the success of fitting the acetabular component. Therefore, this thesis focuses on developing digital tools that can be used to assist the training of acetabular reaming. Devices such as navigation systems and robotic arms have proven to improve the final accuracy of the procedure. However, surgeons must learn to adapt their instrument movements to be recognised by infrared cameras. When surgeons are initially introduced to these systems, surgical times can be extended by up to 20 minutes, maximising surgical risks. Training opportunities are sparse, given the high investment required to purchase these devices. As a cheaper alternative, we developed an Augmented Reality (AR) simulator for training on the calibration of imageless navigation systems (INS). At the time, no alternative simulator used head-mounted displays to train users in the steps required to calibrate such systems. Our simulator replicates the presence of an infrared camera and its interaction with the reflective markers located on the surgical tools. A group of 6 hip surgeons were invited to test the simulator. All of them expressed their satisfaction with the ease of use and attractiveness of the simulator, as well as the similarity of its interactions to the real procedure. 
The study confirmed that our simulator represents a cheaper and faster option for training multiple surgeons simultaneously in the use of imageless navigation systems than learning exclusively in the surgical theatre. Current reviews of simulators for orthopaedic surgical procedures lack objective metrics of assessment against a standard set of design requirements; instead, most rely exclusively on the level of interaction and functionality provided. We propose a comparative assessment rubric based on three evaluation criteria: immersion, interaction fidelity, and the applied learning theories. After our assessment, we found that none of the simulators available for THR provides an accurate interactive representation of resurfacing procedures such as acetabular reaming based on the force inputs exerted by the user. This feature is indispensable for an orthopaedics simulator, given that hand-eye coordination skills are essential skills to be trained before performing non-reversible bone removal on real patients. Based on the findings of our comparative assessment, we decided to develop a model to simulate the physically-based deformation expected during traditional acetabular reaming, given the user's interaction with a volumetric mesh. Current interactive deformation methods on high-resolution meshes are based on geometrical collision detection and do not consider the contribution of the materials' physical properties. By ignoring the effect of the material mechanics and the force exerted by the user, they become inadequate for training hand-eye coordination skills transferable to the surgical theatre. Volumetric meshes are preferred to geometric ones in surgical simulation, given that they are able to represent the internal evolution of deformable solids resulting from cutting and shearing operations. 
Existing numerical methods for representing linear and corotational FEM cuts can only maintain interactive framerates at low mesh resolutions. Therefore, we decided to train a machine-learning model to learn the continuum-mechanics laws relevant to acetabular reaming and predict deformations at interactive framerates. To the best of our knowledge, no previous research has trained a machine learning model on non-elastic FEM data to achieve results at interactive framerates. As training data, we used the results of XFEM simulations precomputed over 5000 frames for plastic deformations on tetrahedral meshes with 20406 elements each. We selected XFEM simulation as the physically-based deformation ground truth given its accuracy and fast convergence in representing cuts, discontinuities and large strain rates. Our interactive machine-learning model was trained using Graph Neural Network (GNN) blocks. GNNs were selected to learn on tetrahedral meshes because other supervised-learning architectures, such as the multilayer perceptron (MLP) and convolutional neural networks (CNN), are unable to learn relationships between entities with an arbitrary number of neighbours. The learned simulator identifies the elements to be removed on each frame and describes the accumulated stress evolution in the whole machined piece. Using data generated from the results of XFEM allowed us to embed the effects of non-linearities in our interactive simulations without extra processing time. The trained model executed the prediction task on our tetrahedral mesh and unseen reamer orientations faster per frame than the time required to generate the training FEM dataset. Given an unseen orientation of the reamer, the trained GN model updates the value of accumulated stress on each of the 20406 tetrahedral elements that constitute our mesh during the prediction task. 
Once this value is updated, the tetrahedra to be removed from the mesh are identified using a threshold condition. After repeatedly using each single-frame output as input for the following prediction for up to 60 iterations, our model maintains an accuracy of up to 90.8% in identifying the status of each element given its value of accumulated stress. Finally, we demonstrate how the developed estimator can easily be connected to any game engine and incorporated into a fully functional hip arthroplasty simulator
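    The autoregressive rollout described in this abstract (each frame's predicted output fed back as the next input, with a threshold condition marking tetrahedra for removal) can be sketched in Python. The predictor below is a toy placeholder standing in for the trained GN model, and all names are illustrative:

```python
import numpy as np

def predict_stress(stress, tool_load):
    """Placeholder for the learned GN simulator: maps the current per-element
    accumulated stress plus the tool's per-element contribution this frame to
    the next frame's stress. A toy additive update stands in for the network."""
    return stress + tool_load

def rollout(initial_stress, tool_loads, threshold):
    """Reuse each single-frame output as input for the following prediction,
    removing any element whose accumulated stress crosses the threshold."""
    stress = initial_stress.copy()
    active = np.ones(stress.shape, dtype=bool)   # element still in the mesh?
    for load in tool_loads:
        # Only elements still in the mesh accumulate stress.
        stress = np.where(active, predict_stress(stress, load), stress)
        active &= stress < threshold             # threshold condition for removal
    return stress, active
```

    Accuracy over a long rollout (the 90.8% figure above) would be measured by comparing the `active` mask after N iterations against the ground-truth XFEM element status.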

    Virtual Reality Simulation of Glenoid Reaming Procedure

    Glenoid reaming is a bone machining operation in Total Shoulder Arthroplasty (TSA) in which the glenoid bone is resurfaced to make intimate contact with the implant undersurface. While this step is crucial for the longevity of TSA, many surgeons find it technically challenging. With recent advances in Virtual Reality (VR) simulation, it has become possible to realistically replicate complicated operations without any need for patients or cadavers and, at the same time, provide quantitative feedback to improve surgeons' psycho-motor skills. In light of these advantages, the current thesis develops the tools and methods required for the construction of a VR simulator for glenoid reaming, in an attempt to construct a reliable tool for preoperative training and planning for surgeons involved with TSA. Towards this end, the thesis presents computational algorithms to appropriately represent the surgery tool and bone in the VR environment, determine their intersections, and compute realistic haptic feedback based on those intersections. The core of the computations is constituted by sampled geometrical representations of both objects: a point-cloud model of the tool and a voxelized model of the bone, derived from Computed Tomography (CT) images. The thesis shows how to efficiently construct these models and adequately represent them in memory. It also elucidates how to effectively use these models to rapidly determine tool-bone collisions and account for bone removal instantaneously. Furthermore, the thesis applies cadaveric experimental data to study the mechanics of glenoid reaming and proposes a realistic model for haptic computations. The proposed model integrates well with the developed computational tools, enabling real-time haptic and graphic simulation of glenoid reaming. 
Throughout the thesis, a particular emphasis is placed upon computational efficiency, especially on the use of parallel computing using Graphics Processing Units (GPUs). Extensive implementation results are also presented to verify the effectiveness of the developments. Not only do the results of this thesis advance the knowledge in the simulation of glenoid reaming, but they also rigorously contribute to the broader area of surgery simulation, and can serve as a step forward to the wider implementation of VR technology in surgeon training programs
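    The sampled-geometry collision scheme described above (a point-cloud tool tested against a voxelized bone grid) can be sketched in Python. The thesis runs this kind of computation in parallel on the GPU, so the serial NumPy version and its function and parameter names are illustrative assumptions:

```python
import numpy as np

def ream(bone_voxels, tool_points, tool_pose, voxel_size):
    """Remove bone voxels touched by the tool's sample points this frame.

    bone_voxels : 3-D boolean occupancy grid (e.g. derived from CT data)
    tool_points : (N, 3) point-cloud model of the reamer surface
    tool_pose   : (4, 4) homogeneous transform of the tool into bone space
    Returns the number of voxels removed.
    """
    # Transform the tool samples into the bone's voxel frame.
    pts = tool_points @ tool_pose[:3, :3].T + tool_pose[:3, 3]
    idx = np.floor(pts / voxel_size).astype(int)
    # Keep only sample points that fall inside the grid bounds.
    inside = np.all((idx >= 0) & (idx < bone_voxels.shape), axis=1)
    i, j, k = idx[inside].T
    hit = bone_voxels[i, j, k]
    bone_voxels[i, j, k] = False  # carve away the touched material
    return int(hit.sum())
```

    The removed-voxel count per frame is one plausible input to a haptic force model, since resistance grows with the amount of material being machined.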