4,801 research outputs found

    Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery

    Background: Until quite recently, spinal disorder problems in the U.S. were treated by fusing cervical vertebrae rather than replacing the cervical disc with an artificial one. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to deficiencies in available diagnostic tools and the insufficient number of opportunities for surgical practice. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in developing an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed.
    Findings: A mixed reality surgical simulator targeting the rasping procedure in artificial cervical disc replacement surgery was developed with a VICON motion tracking system. There were several challenges in the development of the MR surgical simulator. The first was the assembly of the different hardware components for surgical simulation development, which involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces; this was handled with finite element modeling. The third challenge was the minimization of error when mapping the movements of an actor in the real model to the virtual model, a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method.
    Conclusions: The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated, and the mainstream problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves the tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by 5 different physicians, who found the simulator effective enough to teach the anatomical details of cervical discs and to convey the basics of the ACDR surgery and the rasping procedure.
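
    The registration step described above can be illustrated with a standard rigid point-set alignment between tracked marker positions and their counterparts on the virtual model. The following is a minimal sketch using the SVD-based Kabsch/Horn solution, not the paper's two-way semi-automatic method; all coordinates and names are illustrative.

```python
import numpy as np

def rigid_register(real_pts, virtual_pts):
    """Estimate rotation R and translation t mapping real (tracker-frame) points
    onto their virtual-model counterparts (least-squares, Kabsch/Horn)."""
    real_c = real_pts - real_pts.mean(axis=0)
    virt_c = virtual_pts - virtual_pts.mean(axis=0)
    H = real_c.T @ virt_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    t = virtual_pts.mean(axis=0) - R @ real_pts.mean(axis=0)
    return R, t

# Hypothetical example: four tracked markers and their virtual-model positions (mm)
real = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
virt = real @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [5.0, 2.0, -3.0]
R, t = rigid_register(real, virt)
print(np.allclose(real @ R.T + t, virt))       # True: markers map onto the model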

    DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects

    Applications in fields ranging from home care to warehouse fulfillment to surgical assistance require robots to reliably manipulate the shape of 3D deformable objects. Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape. Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models. We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape to learn a low-dimensional representation of the object shape. This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action to iteratively deform the object toward the target shape. We demonstrate both in simulation and on a physical robot that DeformerNet reliably generalizes to object shapes and material stiffness not seen during training. Crucially, using DeformerNet, the robot successfully accomplishes three surgical sub-tasks: retraction (moving tissue aside to access a site underneath it), tissue wrapping (a sub-task in procedures like aortic stent placements), and connecting two tubular pieces of tissue (a sub-task in anastomosis).
    Comment: Submitted to IEEE Transactions on Robotics (T-RO). 18 pages, 25 figures. arXiv admin note: substantial text overlap with arXiv:2110.0468
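
    The pipeline described above can be sketched roughly as follows, assuming PyTorch: a shared point-cloud encoder embeds the current partial-view cloud and the goal cloud into low-dimensional feature vectors, and a small MLP maps the concatenated embeddings to an end-effector action. This is a generic illustration, not the authors' released implementation; the PointNet-style encoder, all layer sizes, names and the 6-dimensional action are assumptions.

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, pts):                 # pts: (B, N, 3)
        feats = self.mlp(pts)               # (B, N, embed_dim)
        return feats.max(dim=1).values      # (B, embed_dim) global shape embedding

class ShapeServoPolicy(nn.Module):
    """Maps (current shape, goal shape) embeddings to an end-effector action."""
    def __init__(self, embed_dim=256, action_dim=6):   # e.g. a 3-DoF move per arm
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, current_pts, goal_pts):
        z_cur = self.encoder(current_pts)
        z_goal = self.encoder(goal_pts)
        return self.head(torch.cat([z_cur, z_goal], dim=-1))

policy = ShapeServoPolicy()
action = policy(torch.randn(1, 1024, 3), torch.randn(1, 1024, 3))
print(action.shape)                         # torch.Size([1, 6])
```

    In a visual-servo loop, the predicted action would be executed, a new partial-view cloud captured, and the step repeated until the current shape is close enough to the goal.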

    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery.
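
    As a much-simplified illustration of soft-tissue cutting (not the hybrid method this project develops), a mass-spring tissue model can be "cut" by removing every spring whose two endpoints lie on opposite sides of the plane swept by the blade. A minimal sketch with made-up geometry:

```python
import numpy as np

def cut_springs(positions, springs, plane_point, plane_normal):
    """Remove springs whose endpoints lie on opposite sides of the cutting plane.

    positions: (N, 3) particle positions of the mass-spring tissue model
    springs:   list of (i, j) index pairs connecting particles
    """
    side = np.sign((positions - plane_point) @ plane_normal)   # -1, 0, +1 per particle
    return [(i, j) for (i, j) in springs if side[i] * side[j] >= 0]

# Hypothetical patch of tissue particles with a few springs
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
springs = [(0, 1), (2, 3), (0, 2), (1, 3)]
# A vertical cutting plane at x = 0.5 severs the two horizontal springs
remaining = cut_springs(pos, springs, plane_point=np.array([0.5, 0, 0]),
                        plane_normal=np.array([1.0, 0, 0]))
print(remaining)   # [(0, 2), (1, 3)]
```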

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques aimed at minimally invasive, minimally traumatic treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this context, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy is significantly different from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope, and the tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon; it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. This architecture can also be adapted for intra-operative purposes. In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by giving him a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations served as motivation for the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed in our institute in previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since both AR and VR share the same platform, the system can be referred to as a Mixed Reality System for neurosurgery. All the components are open source or at least based on a GPL license.
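
    The AR rendering onto the microscope optics described above essentially amounts to projecting the navigated virtual anatomy into the microscope image using the tracked patient-to-microscope transform and the calibrated optics. A minimal pinhole-projection sketch, with all transforms and calibration values made up for illustration:

```python
import numpy as np

def project_overlay(model_pts, T_model_to_cam, K):
    """Project virtual-model vertices (N, 3) into microscope image pixels (N, 2).

    T_model_to_cam: 4x4 homogeneous transform from the navigated model frame
                    (patient registration) to the microscope camera frame.
    K:              3x3 intrinsic matrix of the calibrated microscope optics.
    """
    homo = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    cam = (T_model_to_cam @ homo.T).T[:, :3]       # points in the camera frame
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                # perspective divide

# Hypothetical calibration and pose
K = np.array([[2000.0, 0, 640], [0, 2000.0, 512], [0, 0, 1]])
T = np.eye(4); T[:3, 3] = [0, 0, 300.0]            # model 300 mm in front of the optics
tumour_pts = np.array([[0, 0, 0], [5.0, 0, 0], [0, 5.0, 0]])
print(project_overlay(tumour_pts, T, K))           # pixel coordinates of the overlay
```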

    Advanced Strategies for Robot Manipulators

    Amongst robotic systems, robot manipulators have proven to be of increasing importance and are widely adopted to substitute for humans in repetitive and/or hazardous tasks. Modern manipulators have complicated designs and need to perform ever more precise, crucial and critical tasks, so simple traditional control methods are no longer sufficient, and advanced control strategies that take special constraints into account need to be established, as illustrated in the sketch below. Although groundbreaking research has been carried out in this realm, there are still many novel aspects that have to be explored.
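
    One classic step beyond simple traditional (e.g. independent-joint PID) control is computed-torque control, which cancels the manipulator dynamics and imposes linear error dynamics. A minimal single-link sketch, assuming a pendulum-like link with known parameters (all values illustrative; real manipulators use the full matrix form tau = M(q)(q_dd_des + Kd*e_dot + Kp*e) + C(q, q_dot)q_dot + g(q)):

```python
import numpy as np

# Illustrative single-link (pendulum) parameters
m, l, I, g = 2.0, 0.5, 0.3, 9.81          # mass, COM distance, inertia, gravity
Kp, Kd = 100.0, 20.0                       # error-dynamics gains (critically damped)
dt = 0.001

def computed_torque(q, dq, q_d, dq_d, ddq_d):
    """tau = I*(ddq_d + Kd*e_dot + Kp*e) + gravity term for a single link (C = 0)."""
    e, de = q_d - q, dq_d - dq
    return I * (ddq_d + Kd * de + Kp * e) + m * g * l * np.sin(q)

q, dq = 0.0, 0.0
for step in range(3000):                   # 3 s of simulation toward q_d = 1 rad
    tau = computed_torque(q, dq, q_d=1.0, dq_d=0.0, ddq_d=0.0)
    ddq = (tau - m * g * l * np.sin(q)) / I    # plant: I*ddq + m*g*l*sin(q) = tau
    dq += ddq * dt
    q += dq * dt
print(round(q, 3))                         # close to 1.0
```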

    Virtuality Supports Reality for e-Health Applications

    Strictly speaking, the word “virtuality” or the expression “virtual reality” refers to things simulated or created by the computer, which do not really exist. More and more often such things are referred to with the adjective “virtual” or “digital”, or mentioned with the prefixes “e-” or “cyber-”: we speak, for instance, of virtual or digital or e- or cyber- communities, cash, business, greetings, books ... and even pets. Virtuality offers interesting advantages with respect to “simple” reality, since it can reproduce, augment and even overcome reality. Reproduction is no longer limited, as it has been so far, to a camera filming a scenario from a fixed point of view and a player showing it; today it is possible to reproduce a scene dynamically, moving the point of view in practically any direction, so that the “real” becomes “realistic”. Virtuality can augment reality in the sense that graphics are pulled out of the television screen (or computer/laptop/palm display) and integrated with the real-world environment; in this way useful, and often somehow essential, information is added for the user. As an example, new apps are now available even for iPhone users, who can obtain graphical information overlapped on the camera view of their real surroundings, reading the heights of mountains, the names of streets or the positions of satellites directly over the real mountains, the real streets, the real sky. But virtuality can even overcome reality, since it can produce and make visible hidden, inaccessible or past reality, and even provide an alternative, non-real world. So we can virtually see deep into matter down to atomic dimensions, take a virtual tour through a past century, or give visibility to hypothetical lands otherwise difficult or impossible to describe simply. These are the fundamental reasons for a naturally growing interest in “producing” virtuality. Here we discuss some of the available methods to “produce” virtuality, in particular pointing out some steps necessary for “crossing” from reality “towards” virtuality. Between these two parallel worlds, the “real” and the “virtual”, interactions can also exist, and this leads to further advantages. We treat both the “production” of and the “interaction” with virtuality, with the aim of focusing attention on how virtuality can be applied in biomedical fields, since it has been demonstrated that virtual reality can furnish important and relevant benefits in e-health applications. For example, virtual tomography combines 3D anatomical imaging features from several CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images with a computer-generated kinesthetic interface, so as to obtain a useful tool for diagnosis and healing. With new endovascular simulation possibilities, a head-mounted display superimposes 3D images on the patient’s skin so as to provide guidance for implantable devices inside blood vessels. Among all the possibilities, we chose to investigate the fields where we believe virtual applications can furnish the most meaningful advantages, i.e. surgery simulation, cognitive and neurological rehabilitation, postural and motor training, and brain-computer interfaces. We furnish the reader with a necessarily partial but nonetheless fundamental view of what virtual reality can do to improve medical treatment and so, in the end, the quality of our life.

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities in today's society, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we communicate information that might reflect a desire to speak a physically-based language that has never been explored before. Due to constant improvement in haptics technology and increasing levels of research into, and development of, haptics-related algorithms, protocols and devices, there is a belief that haptics technology has a promising future.
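
    The core of most haptic rendering algorithms is a high-rate (typically around 1 kHz) loop that converts the penetration of the device probe into a virtual object into a restoring force. A minimal penalty-based sketch against a sphere, with made-up stiffness and geometry (real devices read positions and command forces through the vendor's device API, which is not shown here):

```python
import numpy as np

def sphere_contact_force(probe_pos, center, radius, stiffness=800.0):
    """Penalty-based haptic force: push the probe out along the surface normal,
    proportional to penetration depth; zero force when not in contact."""
    offset = probe_pos - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0:
        return np.zeros(3)                     # no contact
    normal = offset / dist                     # outward surface normal
    return stiffness * penetration * normal    # F = k * depth * n

# One iteration of the (normally ~1 kHz) haptic loop with a made-up probe position
center, radius = np.array([0.0, 0.0, 0.0]), 0.05          # 5 cm virtual sphere
probe = np.array([0.0, 0.0, 0.045])                       # 5 mm inside the surface
print(sphere_contact_force(probe, center, radius))        # ~[0, 0, 4] N outward
```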

    Image-Guided Robotic Dental Implantation With Natural-Root-Formed Implants

    Dental implantation is now recognized as the standard of care for tooth replacement. Although many studies show high short-term survival rates of greater than 95%, long-term studies (> 5 years) have shown success rates as low as 41.9%. Reasons affecting the long-term success rates might include surgical factors such as the limited accuracy of implant placement, lack of spacing control, and overheating during placement. In this dissertation, a comprehensive solution for improving the outcome of current dental implantation is presented, which includes computer-aided preoperative planning for better visualization of patient-specific information and automated robotic site preparation for superior placement and orientation accuracy. Surgical planning is generated using patient-specific three-dimensional (3D) models reconstructed from cone-beam CT images. An innovative image-guided robotic site-preparation system for implant insertion is designed and implemented. The preoperative plan of the implant insertion is transferred into intra-operative operations of the robot using a two-step registration procedure with the help of a Coordinate Measurement Machine (CMM). The natural-root implants mimic the root structure of natural teeth and were shown by the Finite Element Method (FEM) to provide a superior stress distribution compared with current cylinder-shaped implants. However, due to their complicated geometry, manual site preparation for these implants cannot be accomplished. Our innovative image-guided robotic implantation system makes it possible to use this advanced type of implant. Phantom experiments with patient-specific jaw models were performed to evaluate the accuracy of positioning and orientation. Fiducial Registration Error (FRE) values of less than 0.20 mm and final Target Registration Error (TRE) values after the two-step registration of 0.36±0.13 mm (N=5) were achieved. The orientation error was 1.99±1.27° (N=14). Robotic milling of the natural-root implant shape with a single and a double root was also tested, and the results proved that these complicated volumes can be removed by the robot as designed. The milling times for the single- and double-root shapes were 177 s and 1522 s, respectively.
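
    The FRE and TRE figures quoted above can be computed straightforwardly once a registration transform is available: FRE is the RMS residual at the fiducials used to solve the registration, while TRE is the residual at independent target points such as the implant site. A minimal sketch, assuming a rigid transform (R, t) has already been estimated (not necessarily by the dissertation's two-step CMM procedure) and using made-up coordinates in millimetres:

```python
import numpy as np

def rms_error(moving_pts, fixed_pts, R, t):
    """RMS distance between transformed moving points and their fixed counterparts."""
    mapped = moving_pts @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - fixed_pts) ** 2, axis=1)))

# Hypothetical registration result and point sets (mm)
R, t = np.eye(3), np.array([0.1, -0.05, 0.02])       # near-identity transform
fiducials_plan  = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 15]], float)
fiducials_robot = fiducials_plan + t + 0.05           # small residual noise
targets_plan    = np.array([[10.0, 10.0, 5.0]])       # implant site, not used in the fit
targets_robot   = targets_plan + t + 0.2              # larger residual away from fiducials

fre = rms_error(fiducials_plan, fiducials_robot, R, t)   # residual at the fiducials
tre = rms_error(targets_plan, targets_robot, R, t)       # residual at the implant site
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```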