
    Robot-assistive minimally invasive surgery: trends and future directions

    The evolution of medical technologies—such as surgical devices and imaging techniques—has transformed all aspects of surgery. A key area of development is robot-assisted minimally invasive surgery (MIS). This review provides an overview of the evolution of robotic MIS, from its infancy to the present day, and of the challenges envisioned for the future. It surveys breakthrough surgical robotic platforms, their clinical applications, and their evolution over the years. It discusses how the integration of robotic, imaging, and sensing technologies has produced novel surgical platforms that provide surgeons with enhanced dexterity, precision, and surgical navigation while reducing the invasiveness of the intervention. Finally, it offers an outlook on the future of robotic MIS, discussing the opportunities and challenges that the scientific community will have to address in the coming decade. We hope that this review serves as a quick and accessible introduction to this exciting and fast-evolving area of research, and that it inspires future work in the field.

    MR-based navigation for robot-assisted endovascular procedures

    There is increasing interest in robotic and computer technologies to perform endovascular interventions accurately. One major limitation of current endovascular intervention, whether manual or robot-assisted, is surgical navigation, which still relies on 2D fluoroscopy. Recent research efforts are directed towards MRI-guided interventions, to reduce ionizing radiation exposure and to improve the diagnosis, planning, navigation, and execution of endovascular interventions. We propose an MR-based navigation framework for robot-assisted endovascular procedures. The framework allows the acquisition of real-time MR images; segmentation of the vasculature and tracking of vascular instruments; and generation of MR-based guidance, both visual and haptic. The instrument tracking accuracy, a key aspect of the navigation framework, was assessed via four dedicated experiments with different acquisition settings, frame rates, and durations. The experiments showed clinically acceptable tracking accuracy in the range of 1.30–3.80 mm RMSE. We believe that this work represents a valuable first step towards MR-guided robot-assisted intervention.
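    The tracking accuracy above is reported as an RMSE over tracked instrument positions. A minimal sketch of such a computation is shown below; the coordinates are invented for illustration and are not the paper's data.

```python
import math

def rmse_3d(tracked, ground_truth):
    """Root-mean-square error between tracked and ground-truth 3D positions (mm)."""
    assert len(tracked) == len(ground_truth)
    sq_errors = [sum((t - g) ** 2 for t, g in zip(p, q))
                 for p, q in zip(tracked, ground_truth)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical guidewire-tip positions (mm) from an MR tracking sequence.
tracked      = [(10.0, 5.0, 2.0), (12.1, 5.4, 2.2), (14.0, 6.1, 2.4)]
ground_truth = [(10.5, 5.2, 2.1), (12.0, 5.0, 2.0), (13.6, 6.0, 2.6)]
print(round(rmse_3d(tracked, ground_truth), 2))
```

    In a real evaluation the ground truth would come from a reference tracking modality, and the RMSE would be aggregated over full acquisition runs rather than three points.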

    Patient Specific Systems for Computer Assisted Robotic Surgery Simulation, Planning, and Navigation

    The evolving scenario of surgery, from the birth of medical imaging to the introduction of minimally invasive techniques, has in recent years seen the advent of surgical robotics. These systems, by overcoming the difficulties of endoscopic surgery, allow improved surgical performance and a better quality of intervention. Information technology has contributed to this evolution since the beginning of the digital revolution, providing innovative medical imaging devices and computer-assisted surgical systems. Progress in computer graphics subsequently brought innovative visualization modalities for medical datasets, and later the birth of virtual reality paved the way for virtual surgery. Although many surgical simulators already exist, there are no patient-specific solutions. This thesis presents the development of patient-specific software systems for preoperative planning, simulation, and intraoperative assistance, designed for robotic surgery: in particular for the bimanual robots that are becoming the future of single-port interventions. The first software application is a virtual reality simulator for this kind of surgical robot. The system has been designed to validate the initial port placement and the operative workspace for the potential application of this surgical device. Given a bimanual robot with its own geometry and kinematics, and a patient-specific 3D virtual anatomy, the surgical simulator allows the surgeon to choose the optimal positioning of the robot and of the access port in the abdominal wall. Additionally, it makes it possible to evaluate in a virtual environment whether dexterous movability of the robot is achievable, avoiding unwanted collisions with the surrounding anatomy that could cause damage in the real surgical procedure.
    Although the software has been designed for a specific bimanual surgical robot, it supports any open kinematic chain structure, as long as it can be described in our custom format. The robot's capability to accomplish specific tasks can be virtually tested using deformable models, interacting directly with the target virtual organs while avoiding unwanted collisions with the surrounding anatomy not involved in the intervention. Moreover, the surgical simulator has been enhanced with algorithms and data structures that integrate biomechanical parameters into virtual deformable models (based on a mass-spring-damper network) of target solid organs, in order to properly reproduce the physical behaviour of the patient anatomy during interactions. The main biomechanical parameters (Young's modulus and density) have been integrated, allowing the automatic tuning of model network elements such as the node mass and the spring stiffness. The spring damping coefficient has been modeled using the Rayleigh approach. Furthermore, the developed method automatically detects the external layer, allowing the use of both surface and internal Young's moduli, in order to model the main parts of dense organs: the stroma and the parenchyma. Finally, the model can be manually tuned to represent lesions with specific biomechanical properties. Additionally, some software modules of the simulator have been extended for integration into a patient-specific computer guidance system for intraoperative navigation and assistance in robotic single-port interventions. This application provides guidance functionality in three different modalities: passive, as a surgical navigator; assistive, as a guide for single-port placement; and active, as a tutor preventing unwanted collisions during the intervention.
    The simulation system has been tested by five surgeons, simulating the robot access port placement and evaluating the robot's movability and workspace inside the patient's abdomen. The tested functionalities, rated by expert surgeons, have shown good quality and performance of the simulation. Moreover, the integration of biomechanical parameters into deformable models has been tested with various material samples. The results have shown good visual realism while ensuring the performance required by an interactive simulation. Finally, the intraoperative navigator has been tested by performing a cholecystectomy on a synthetic patient mannequin, in order to evaluate the intraoperative navigation accuracy, the network communication latency, and the overall usability of the system. The tests performed demonstrated the effectiveness and usability of the software systems developed, encouraging the introduction of the proposed solution into clinical practice and the implementation of further improvements. Surgical robotics will be enhanced by an advanced integration of medical images into software systems, allowing the detailed planning of surgical interventions by means of virtual surgery simulation based on patient-specific biomechanical parameters. Furthermore, the advanced functionality offered by these systems enables surgical robots to improve intraoperative surgical assistance by benefiting from knowledge of the virtual patient anatomy.
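    The parameter mapping described in the abstract (node masses from tissue density, spring stiffness from Young's modulus, Rayleigh damping for the spring damping coefficient) can be sketched as below. The specific formulas (uniform mass lumping, axial stiffness k = E·A/L) and all numeric values are illustrative assumptions, not the thesis's actual tuning rules.

```python
def node_mass(density, volume, n_nodes):
    """Distribute the total tissue mass uniformly over the mesh nodes (kg)."""
    return density * volume / n_nodes

def spring_stiffness(young_modulus, cross_area, rest_length):
    """Axial stiffness of a spring linking two nodes, k = E * A / L (N/m)."""
    return young_modulus * cross_area / rest_length

def rayleigh_damping(mass, stiffness, alpha=0.1, beta=0.01):
    """Rayleigh damping c = alpha*m + beta*k; alpha and beta are assumed values."""
    return alpha * mass + beta * stiffness

# Hypothetical liver-like parameters: density 1060 kg/m^3, E = 5 kPa,
# a 100 cm^3 organ meshed with 1000 nodes, 1 mm^2 links of 5 mm rest length.
m = node_mass(1060.0, 1e-4, 1000)
k = spring_stiffness(5e3, 1e-6, 5e-3)
c = rayleigh_damping(m, k)
print(round(m, 6), round(k, 3), round(c, 5))
```

    A surface node would use the surface Young's modulus and an interior node the internal one, following the stroma/parenchyma distinction made in the abstract.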

    Modelling and simulation of flexible instruments for minimally invasive surgical training in virtual reality

    Improvements in quality and safety standards in surgical training, reduction in training hours and constant technological advances have challenged the traditional apprenticeship model to create a competent surgeon in a patient-safe way. As a result, pressure on training outside the operating room has increased. Interactive, computer-based Virtual Reality (VR) simulators offer a safe, cost-effective, controllable and configurable training environment free from ethical and patient safety issues. Two prototype, yet fully functional, VR simulator systems for minimally invasive procedures relying on flexible instruments were developed and validated. NOViSE is the first force-feedback-enabled VR simulator for Natural Orifice Transluminal Endoscopic Surgery (NOTES) training supporting a flexible endoscope. VCSim3 is a VR simulator for cardiovascular interventions using catheters and guidewires. The underlying mathematical model of flexible instruments in both simulator prototypes is based on an established theoretical framework – the Cosserat Theory of Elastic Rods. The efficient implementation of the Cosserat rod model allows for an accurate, real-time simulation of instruments at haptic-interactive rates on an off-the-shelf computer. The behaviour of the virtual tools and their computational performance were evaluated using quantitative and qualitative measures. The instruments exhibited near sub-millimetre accuracy compared to their real counterparts. The proposed GPU implementation further accelerated their simulation performance by approximately an order of magnitude. The realism of the simulators was assessed by face, content and, in the case of NOViSE, construct validity studies. The results indicate good overall face and content validity of both simulators and of the virtual instruments. NOViSE also demonstrated early signs of construct validity.
    VR simulation of flexible instruments in NOViSE and VCSim3 can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. Moreover, in the context of an innovative and experimental technique such as NOTES, NOViSE could potentially facilitate its development and contribute to its popularization by keeping practitioners up to date with this new minimally invasive technique.
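    Both simulators model flexible instruments with Cosserat rod theory. As a toy illustration of the kinematic side of such a model, the sketch below reconstructs a rod centerline from a curvature profile. This is a planar simplification with a simple Euler integration, not the simulators' 3D implementation, and the curvature values are invented.

```python
import math

def rod_centerline(curvatures, ds):
    """Reconstruct a planar rod centerline from discrete curvatures.

    In planar Cosserat/Kirchhoff rod kinematics the tangent angle evolves as
    theta'(s) = kappa(s); positions follow by integrating the unit tangent.
    """
    theta, x, y = 0.0, 0.0, 0.0
    points = [(x, y)]
    for kappa in curvatures:
        theta += kappa * ds           # bend the tangent by the local curvature
        x += math.cos(theta) * ds     # advance along the new tangent direction
        y += math.sin(theta) * ds
        points.append((x, y))
    return points

# A straight proximal segment followed by a constant bend,
# loosely resembling a deflected endoscope or guidewire tip.
pts = rod_centerline([0.0] * 5 + [0.5] * 5, ds=1.0)
print(len(pts), round(pts[5][0], 1), round(pts[5][1], 1))
```

    A full Cosserat model additionally tracks material frames, shear, and stretch in 3D, and solves for the curvatures themselves from elasticity and contact forces; here the curvatures are simply given.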

    Learning-based autonomous vascular guidewire navigation without human demonstration in the venous system of a porcine liver

    Purpose: The navigation of endovascular guidewires is a dexterous task where physicians and patients can benefit from automation. Machine-learning-based controllers are promising aids for mastering this task. However, human-generated training data are scarce and resource-intensive to generate. We investigate whether a neural-network-based controller trained without human-generated data can learn human-like behaviors. Methods: We trained and evaluated a neural-network-based controller via deep reinforcement learning in a finite element simulation to navigate the venous system of a porcine liver without human-generated data. The behavior is compared to manual expert navigation, and real-world transferability is evaluated. Results: The controller achieves a success rate of 100% in simulation. The controller applies a wiggling behavior, in which the guidewire tip is continuously rotated alternately clockwise and counterclockwise, as the human expert does. In the ex vivo porcine liver, the success rate drops to 30%, because either the wrong branch is probed or the guidewire becomes entangled. Conclusion: In this work, we show that a learning-based controller is capable of learning human-like guidewire navigation behavior without human-generated data, thereby mitigating the need to produce resource-intensive human-generated training data. Limitations are the restriction to one vessel geometry, the fact that navigation safety was not considered, and the reduced transferability to the real world.
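    The paper trains a neural network with deep RL inside a finite element simulation. As a much-simplified illustration of how reward alone (no demonstrations) can produce branch-selection behavior, here is a toy tabular Q-learning agent; the discrete states, actions, and rewards are invented for the example and bear no relation to the paper's setup.

```python
import random

STATES = 4            # discretised tip orientations
TARGET = 2            # orientation aligned with the desired branch
ACTIONS = ["ccw", "cw", "advance"]

def step(state, action):
    """Advance ends the episode: +1 if aligned with the branch, -1 otherwise.
    Rotations carry a small cost to discourage aimless wiggling."""
    if action == "advance":
        return state, (1.0 if state == TARGET else -1.0), True
    delta = 1 if action == "cw" else -1
    return (state + delta) % STATES, -0.1, False

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = rng.randrange(STATES), False
        while not done:
            # Epsilon-greedy exploration over the discrete action set.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(STATES)}
print(policy[TARGET])
```

    The learned greedy policy advances only when aligned with the target branch and rotates towards it otherwise, which is the tabular analogue of the navigation behavior the paper learns with a neural network.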

    Real-time haptic modeling and simulation for prosthetic insertion

    In this work a surgical simulator is produced which enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific preoperative assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles acquired from the insertion studies conducted in this work are used to validate the haptic model. The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with a focus on simulator design, development, optimization and validation. The techniques may be transferable to other medical applications that involve prosthetic device insertion where the user's vision is obstructed.
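    The abstract does not state the form of the fitted force model. A common first-order simplification for a flexible element sliding along a curved channel, such as an electrode array in the spiral scala tympani, is the capstan friction relation, sketched here with invented parameters; the paper's own model is fitted to measured insertion force data rather than derived this way.

```python
import math

def capstan_force(f0, mu, wrap_angle_rad):
    """Tension needed to pull a flexible element over a curved surface:
    the capstan relation F = F0 * exp(mu * theta), where theta is the
    cumulative wrap angle and mu the friction coefficient."""
    return f0 * math.exp(mu * wrap_angle_rad)

# Hypothetical values: 10 mN baseline force, friction coefficient 0.2,
# cumulative wrap of ~1.5 turns along the cochlear spiral.
theta = 1.5 * 2 * math.pi
print(round(capstan_force(0.010, 0.2, theta), 3))
```

    The exponential growth of friction with wrap angle is one reason insertion forces rise sharply in the deeper cochlear turns, which is the regime a haptic force model must reproduce convincingly.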

    Image-Based Force Estimation and Haptic Rendering For Robot-Assisted Cardiovascular Intervention

    Clinical studies have indicated that the loss of haptic perception is the prime limitation of robot-assisted cardiovascular intervention technology, hindering its global adoption. It compromises the surgeon's situational awareness during the intervention and may pose health risks to the patient. This doctoral research aimed to develop technology addressing the limitation of robot-assisted intervention technology in the provision of haptic feedback. The literature review showed that sensor-free force estimation (the haptic cue) on endovascular devices, intuitive surgeon interface design, and haptic rendering within the surgeon interface were the major knowledge gaps. For sensor-free force estimation, first, an image-based force estimation method based on inverse finite element methods (iFEM) was developed and validated. Next, to address the limitation of the iFEM method in real-time performance, an inverse Cosserat rod model (iCORD) with a computationally efficient solution for endovascular devices was developed and validated. The iCORD was then adopted for analytical tip force estimation on steerable catheters. The experimental studies confirmed the accuracy and real-time performance of the iCORD for sensor-free force estimation. Subsequently, a wearable drift-free rotation measurement device (MiCarp) was developed to facilitate the design of an intuitive surgeon interface by decoupling the rotation measurement from the insertion measurement. The validation studies showed that MiCarp had superior performance for spatial rotation measurement compared to other modalities. Finally, a novel haptic feedback system based on smart magnetoelastic elastomers was developed, analytically modeled, and experimentally validated. The proposed haptics-enabled surgeon module provided an unbounded workspace for interventional tasks and an intuitive interface. 
    Experimental validation, at the component and system levels, confirmed the usability of the proposed methods for robot-assisted intervention systems.

    Virtuality Supports Reality for e-Health Applications

    Strictly speaking, the word “virtuality” or the expression “virtual reality” refers to things simulated or created by the computer, which do not really exist. More and more often such things are referred to with the adjective “virtual” or “digital”, or with the prefixes “e-” or “cyber-”: we speak, for instance, of virtual or digital or e- or cyber- communities, cash, business, greetings, books, and even pets. Virtuality offers interesting advantages with respect to “simple” reality, since it can reproduce, augment and even overcome reality. Reproduction no longer means, as it has so far, that a camera films a scene from a fixed point of view and a player shows it; today it is possible to reproduce the scene dynamically, moving the point of view in practically any direction, and “real” becomes “realistic”. Virtuality can augment reality in the sense that graphics are pulled out of a television screen (or a computer, laptop or handheld display) and integrated with real-world environments. In this way useful, and often essential, information is added for the user. As an example, apps are now available even for iPhone users that overlay graphical information on the camera view of the real surroundings, directly showing the heights of mountains, the names of streets, or the positions of satellites over the real mountains, the real streets, the real sky. But virtuality can even overcome reality, since it can produce and make visible hidden, inaccessible or past reality, and even provide an alternative, non-real world. So we can virtually see deep into matter down to atomic dimensions, take a virtual tour of a past century, or give visibility to hypothetical lands otherwise difficult or impossible to describe. These are the fundamental reasons for the naturally growing interest in “producing” virtuality. 
    Here we discuss some of the different methods available to “produce” virtuality, in particular pointing out some of the steps necessary for “crossing” from reality “towards” virtuality. Between these two parallel worlds, the “real” and the “virtual”, interactions can exist, and this can lead to further advantages. We treat both the “production” and the “interaction”, with the aim of focusing attention on how virtuality can be applied in biomedical fields, since it has been demonstrated that virtual reality can provide important and relevant benefits in e-health applications. As an example, virtual tomography joins 3D anatomical features from several CT (Computerized axial Tomography) or MRI (Magnetic Resonance Imaging) images with a computer-generated kinesthetic interface, so as to obtain a useful tool for diagnosis and healing. With the new endovascular simulation possibilities, a head-mounted display superimposes 3D images on the patient’s skin, so as to provide guidance for implantable devices inside blood vessels. Among all the possibilities, we chose to investigate the fields where we believe virtual applications can provide the most meaningful advantages: surgery simulation, cognitive and neurological rehabilitation, postural and motor training, and brain-computer interfaces. We give the reader a necessarily partial but nonetheless fundamental view of what virtual reality can do to improve medical treatment and, in the end, the quality of our lives.

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on text mining. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased clustering results of the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during the two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus will go. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as providing medical practitioners with the latest technology advancements.
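    The survey's topic discovery rests on LDA. The sketch below is a minimal collapsed Gibbs sampler for LDA, shown only to illustrate the kind of model involved; the survey's pipeline, corpus of 1403 papers, and preprocessing are far larger, and this three-document toy vocabulary is invented for the example.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA; returns per-document topic proportions."""
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # topic totals
    z = []                                             # topic assignment per token
    for d, doc in enumerate(docs):                     # random initialisation
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k); ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):                             # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + vocab_size * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return [[(c + alpha) / (len(doc) + n_topics * alpha) for c in row]
            for row, doc in zip(ndk, docs)]

docs = [["surgery", "robot", "robot", "haptic"],
        ["display", "tracking", "display", "overlay"],
        ["robot", "surgery", "haptic", "haptic"]]
theta = lda_gibbs(docs, n_topics=2)
print(all(abs(sum(row) - 1.0) < 1e-9 for row in theta))
```

    In a trend analysis like the survey's, each paper's topic proportions would then be aggregated by publication year to chart how each discovered topic rises or falls over time.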