36 research outputs found

    Sensorisation of a novel biologically inspired flexible needle

    Percutaneous interventions are commonly performed during minimally invasive brain surgery, where a straight rigid instrument is inserted through a small incision to access a deep lesion in the brain. Puncturing a vessel during this procedure can be a life-threatening complication. Embedding a forward-looking sensor in a rigid needle has been proposed to tackle this problem; however, with a rigid needle, the procedure must be interrupted if a vessel is detected. Steerable needle technology could instead be used to avoid obstacles such as blood vessels, thanks to its ability to follow curvilinear paths, but research to date has been lacking in this respect. This thesis investigates the deployment of forward-looking sensors for vessel detection in a steerable needle. The needle itself is based on a bioinspired programmable bevel-tip needle (PBN), a multi-segment design featuring four hollow working channels. Laser Doppler flowmetry (LDF) is first characterised to ensure that the sensor fulfils the minimum requirements for use in conjunction with the needle. Vessel reconstruction algorithms are then proposed: successive LDF measurements are used to determine the axial and off-axis position of the vessel with respect to the probe. Ideally, full knowledge of the vessel orientation is required to execute an avoidance strategy. Using two LDF probes and a novel signal processing method described in this thesis, the set of possible vessel orientations can be reduced to four, a setup explored here to demonstrate viable obstacle detection with only partial sensor information. Relative measurements from four LDF sensors are also explored to classify possible vessel orientations fully and without ambiguity, under the assumption that the vessel is perpendicular to the needle insertion axis. Experimental results on a synthetic grey matter phantom confirm these findings. 
To relax the perpendicularity assumption, the thesis concludes with the description of a machine learning technique based on a long short-term memory (LSTM) network, which enables a vessel's spatial position, cross-sectional diameter and full pose to be predicted with sub-millimetre accuracy. Simulated and in-vitro examinations of vessel detection with this approach demonstrate effective predictive ability. Collectively, these results show that the proposed steerable needle sensorisation is viable and could lead to improved safety during robot-assisted needle steering interventions.
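The thesis does not publish its LSTM implementation; purely as an illustration of the idea, the sketch below (numpy; all layer sizes, channel counts and weight values are hypothetical) runs a single LSTM cell over a sequence of LDF readings and linearly reads out a 7-vector of vessel position, diameter and orientation.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and hidden state h."""
    z = W @ x + U @ h + b                     # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    g = np.tanh(g)
    c = f * c + i * g                         # new cell state
    h = o * np.tanh(c)                        # new hidden state
    return h, c

def predict_vessel(ldf_sequence, W, U, b, W_out):
    """Run the LSTM over successive LDF readings and regress a 7-vector:
    position (3), cross-sectional diameter (1), orientation (3)."""
    hidden = W.shape[0] // 4
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in ldf_sequence:
        h, c = lstm_step(x, h, c, W, U, b)
    return W_out @ h                          # linear read-out of last state

rng = np.random.default_rng(0)
n_in, hidden = 4, 16                          # 4 LDF channels (hypothetical)
W = rng.normal(0, 0.1, (4*hidden, n_in))
U = rng.normal(0, 0.1, (4*hidden, hidden))
b = np.zeros(4*hidden)
W_out = rng.normal(0, 0.1, (7, hidden))
seq = rng.normal(0, 1, (20, n_in))            # 20 successive sensor readings
print(predict_vessel(seq, W, U, b, W_out).shape)  # (7,)
```

In the real system the weights would of course be trained on simulated and in-vitro data rather than drawn at random.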

    Planning for steerable needles in neurosurgery

    The increasing adoption of robotic-assisted surgery has opened up the possibility of controlling innovative dexterous tools to improve patient outcomes in a minimally invasive way. Steerable needles belong to this category, and their potential has been recognised in various surgical fields, including neurosurgery. However, planning insertions of steerable catheters can appear counterintuitive even to expert clinicians. Strategies and tools to aid the surgeon in selecting a feasible trajectory, and methods to assist them intra-operatively during the insertion process, are currently of great interest, as they could accelerate the translation of steerable needles from research to practical use. However, existing computer-assisted planning (CAP) algorithms are often limited in their ability to meet both operational and kinematic constraints in the context of precise neurosurgery, with its demanding surgical conditions and highly complex environment. The research contributions in this thesis relate to understanding the existing gap in planning curved insertions for steerable needles and implementing intelligent CAP techniques for use in neurosurgery. The contributions of this thesis include: (i) a pre-operative CAP for precise neurosurgery applications, able to generate optimised paths at a safe distance from sensitive brain structures while meeting the kinematic constraints of steerable needles; (ii) an intra-operative CAP able to adjust the current insertion path with high stability while compensating for online tissue deformation; (iii) the integration of both methods into a commercial user front-end interface (NeuroInspire, Renishaw plc.) tested during a series of user-controlled needle steering animal trials, demonstrating successful targeting performance; 
and (iv) an investigation of the use of steerable needles in the context of laser interstitial thermal therapy (LiTT) for mesial temporal lobe epilepsy patients, and the first LiTT CAP for steerable needles within this context. The thesis concludes with a discussion of these contributions and suggestions for future work.
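The abstract does not disclose the planner internals; as an illustrative sketch of the two constraints such a CAP must jointly enforce (kinematic feasibility and safe distance from sensitive structures), the following checks a candidate polyline path for maximum-curvature violation and minimum obstacle clearance. All geometry and thresholds are invented for illustration.

```python
import numpy as np

def discrete_curvature(path):
    """Approximate curvature at interior points of a 2D polyline path
    using the circumscribed-circle (Menger) formula: k = 4A / (|ab||bc||ca|)."""
    p = np.asarray(path, float)
    a, b, c = p[:-2], p[1:-1], p[2:]
    ab, bc, ca = b - a, c - b, a - c
    area2 = np.abs(ab[:, 0]*bc[:, 1] - ab[:, 1]*bc[:, 0])   # 2 * triangle area
    denom = (np.linalg.norm(ab, axis=1) *
             np.linalg.norm(bc, axis=1) *
             np.linalg.norm(ca, axis=1))
    return 2.0 * area2 / np.maximum(denom, 1e-12)

def path_is_feasible(path, obstacles, k_max, clearance):
    """Accept a candidate insertion path only if every point keeps a safe
    distance from each spherical obstacle and curvature never exceeds k_max."""
    p = np.asarray(path, float)
    for centre, radius in obstacles:
        d = np.linalg.norm(p - np.asarray(centre), axis=1)
        if np.any(d < radius + clearance):
            return False
    return bool(np.all(discrete_curvature(p) <= k_max))

# straight path: zero curvature, passes far from the obstacle
straight = [(0.0, t) for t in np.linspace(0, 10, 21)]
obstacles = [((5.0, 5.0), 1.0)]
print(path_is_feasible(straight, obstacles, k_max=0.2, clearance=1.0))  # True
```

A real pre-operative CAP would run such checks inside an optimiser over many candidate paths, in 3D and against segmented anatomy rather than analytic spheres.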

    Design, Development, and Evaluation of a Discretely Actuated Steerable Cannula

    Needle-based procedures require guiding the needle to a target region to deliver therapy or to remove tissue samples for diagnosis. During insertion, needle-tissue interaction causes the needle to deflect and deviate from its insertion direction. Manipulating the needle at the base provides limited control over the needle trajectory after insertion. Furthermore, some sites are inaccessible via straight-line trajectories because of delicate structures that must be avoided. The goal of this research is to develop a discretely actuated steerable cannula to enable active trajectory corrections and achieve accurate targeting in needle-based procedures. The cannula is composed of straight segments connected by shape memory alloy (SMA) actuators and has multiple degrees of freedom. Two approaches have been explored to control the motion of the cannula. The first is to measure the cannula configuration directly from the imaging modality and use this information as feedback to control the joint motion. The second is a model-based controller in which the strain of the SMA actuator is regulated by controlling its temperature. The constitutive model relates the stress, strain and temperature of the SMA actuator. The uniaxial constitutive model of the SMA, which describes tensile behavior, was extended to the one-dimensional pure-bending case to model the phase transformation of the arc-shaped SMA wire. An experimental characterization procedure was devised to obtain the SMA parameters used in the constitutive model. Experimental results demonstrate that temperature feedback can be effectively used to control the strain of the SMA actuator, and that image feedback can be reliably used to control the joint motion. 
Using tools from differential geometry and the configuration control approach, motion planning algorithms were developed to create pre-operative plans that steer the cannula to a desired surgical site (a nodule or suspicious tissue). Ultrasound-based tracking algorithms were developed to automate the needle insertion procedure under 2D ultrasound guidance. The effectiveness of the proposed in-plane and out-of-plane tracking methods was demonstrated through experiments in a gelatin tissue phantom and ex-vivo experiments. An optical coherence tomography probe was integrated into the cannula and in-situ microscale imaging was performed. The results demonstrate the use of the cannula as a delivery mechanism for diagnostic applications. The tools developed in this dissertation form the foundation of a complete steerable-cannula system. It is anticipated that the cannula could be used in image-guided needle-based interventions to introduce therapeutic and diagnostic tools to a target region.
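The temperature-feedback idea can be caricatured in a few lines. The sketch below is not the dissertation's constitutive model or controller: it is a toy PI heater driving a lumped first-order thermal model, with a simplified linear phase-fraction law between assumed austenite start and finish temperatures; every constant is invented.

```python
def simulate_sma(T_target, steps=4000, dt=0.01):
    """PI control of SMA wire temperature; strain is recovered as the
    martensite fraction shrinks between the transformation temperatures."""
    T_amb, tau = 20.0, 2.0            # ambient temperature, thermal time constant
    A_s, A_f = 45.0, 65.0             # austenite start/finish temperatures (C)
    eps_max = 0.04                    # recoverable strain (4 %, typical for SMA)
    kp, ki = 2.0, 0.5                 # PI gains (invented)
    T, integ = T_amb, 0.0
    for _ in range(steps):
        err = T_target - T
        integ += err * dt
        power = max(0.0, kp*err + ki*integ)       # heating only, no active cooling
        T += dt * (power - (T - T_amb)/tau)       # lumped thermal model
    # linear phase kinetics between A_s and A_f (simplified constitutive law)
    xi = min(1.0, max(0.0, (A_f - T)/(A_f - A_s)))  # martensite fraction
    strain = eps_max * xi
    return T, strain

T, strain = simulate_sma(70.0)        # above A_f: fully austenite, strain recovered
print(round(T, 1), round(strain, 4))  # 70.0 0.0
```

Holding an intermediate temperature (e.g. 50 C, between A_s and A_f) leaves a partial martensite fraction and hence a partial strain, which is the handle the model-based controller exploits.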

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from diagnosis to treatment. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical): based on master-slave control, it is used for minimally invasive surgery in several surgical fields such as urology, general surgery, gynecology and cardiothoracic surgery. An accurate study of this system from a technological point of view has been conducted, addressing its drawbacks and advantages. The da Vinci system creates an immersive operating environment for the surgeon by providing both high-quality stereo visualization and a human-machine interface that directly connects the surgeon's hands to the motion of the surgical tool tips inside the patient's body. It has undoubted advantages for the surgeon's work and the patient's health, at least for some interventions, while its very high cost leaves many doubts about its price-benefit ratio. In the robotic surgery field many researchers are working on optimizing and miniaturizing robot mechanics, while others are trying to realize robotic systems with smart functionalities that, "knowing" the patient's anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image-guided systems can be useful to plan and control the motion of medical robots and to provide the surgeon with pre-operative and intra-operative images with augmented reality visualization, enhancing his/her perceptual capacities and, as a consequence, improving the quality of treatments. To demonstrate this thesis, several prototypes have been designed, implemented and tested. 
The development of image-guided medical devices with augmented reality, virtual navigation and robotic surgical features requires several problems to be addressed. The first is the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform. The idea of integrating industrial robot components into the clinical workflow is supported by the da Vinci technical analysis. The algorithms and methods developed, in particular for robot calibration, are based on established theories from the literature and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way this work can be integrated with lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become numerous in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, inexpensive and compact, and it is a highly flexible modality that allows users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, 2D ultrasound imaging has some disadvantages that demand great skill from the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D. Furthermore, freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location. To overcome these problems, an image-guided system has been developed that fuses real-time 2D US images with routine CT or MRI 3D images, previously acquired from the patient, to enhance clinician orientation and probe guidance. 
The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first to speed up the execution of routine and very frequent procedures such as percutaneous biopsy or ablation; the second to improve a new, completely non-invasive type of treatment for solid tumors, HIFU (high-intensity focused ultrasound). An ultrasound-guided robotic system has been developed to assist the clinician in executing complicated biopsies, or percutaneous ablations, in particular for deep abdominal organs. An integrated system was developed that provides the clinician with two types of assistance: a mixed reality visualization allows accurate and easy planning of the needle trajectory and verification of target reaching, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust, by means of cooperative control, the planned trajectory to overcome needle deflection and target motion. The second application consists of an augmented reality navigation system for HIFU treatment. HIFU is a completely non-invasive method for the treatment of solid tumors, hemostasis and other vascular conditions in human tissues. The technology for HIFU treatments is still evolving, and the systems available on the market have some limitations and drawbacks. A disadvantage emerging from our experience with the machinery available in our hospital (JC200 therapeutic system Haifu (HIFU) by Tech Co., Ltd, Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure, due to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem has been addressed by developing an augmented reality navigation system that enhances US guidance during HIFU treatments, allowing easy target localization. 
The system was implemented using an additional freehand ultrasound probe coupled with a localizer and CT-fused imaging. It offers a simple and economical solution for easy HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Furthermore, the work demonstrates the feasibility and potential of a mixed reality navigation system to facilitate target localization and consequently to reduce session times, increase the number of possible diagnoses/treatments and decrease the risk of potential errors. The proposed solutions for the integration of robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures and cost minimization.

    A flexible access platform for robot-assisted minimally invasive surgery

    Advances in Minimally Invasive Surgery (MIS) are driven by the clinical demand to reduce the invasiveness of surgical procedures so patients undergo less trauma and experience faster recoveries. These well-documented benefits of MIS have been achieved through parallel advances in the technology and instrumentation used during procedures. The new and evolving field of Flexible Access Surgery (FAS), where surgeons access the operative site through a single incision or a natural orifice incision, is being promoted as the next potential step in the evolution of surgery. In order to achieve similar levels of success and adoption as MIS, technology again has its role to play in developing new instruments to solve the unmet clinical challenges of FAS. As procedures become less invasive, these instruments should not just address the challenges presented by the complex access routes of FAS, but should also build on the recent advances in pre- and intraoperative imaging techniques to provide surgeons with new diagnostic and interventional decision-making capabilities. The main focus of this thesis is the development and applications of a flexible robotic device that is capable of providing controlled flexibility along curved pathways inside the body. The principal component of the device is its modular mechatronic joint design, which utilises an embedded micromotor-tendon actuation scheme to provide independently addressable degrees of freedom and three internal working channels. Connecting multiple modules together allows a seven degree-of-freedom (DoF) flexible access platform to be constructed. The platform is intended for use as a research test-bed to explore engineering and surgical challenges of FAS. Navigation of the platform is realised using a handheld controller optimised for functionality and ergonomics, or in a "hands-free" manner via a gaze contingent control framework. 
Under this framework, the operator's gaze fixation point is used as feedback to close the servo control loop. The feasibility and potential of integrating multi-spectral imaging capabilities into flexible robotic devices is also demonstrated. A force adaptive servoing mechanism is developed to simplify the deployment, and improve the consistency, of probe-based optical imaging techniques by automatically controlling the contact force between the probe tip and target tissue. The thesis concludes with the description of two FAS case studies performed with the platform during in-vivo porcine experiments. These studies demonstrate the ability of the platform to perform large area explorations within the peritoneal cavity and to provide a stable base for the deployment of interventional instruments and imaging probes.
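The force adaptive servoing idea can be caricatured as a simple proportional loop: advance the probe along its axis until the measured contact force settles at the target. This is a sketch only; the gains, tissue stiffness and geometry below are invented, not the thesis's controller.

```python
def force_servo(f_target, stiffness=0.5, gain=0.3, steps=100):
    """Proportional contact-force controller: advance/retract the probe
    along its axis so the measured force converges to the target (N)."""
    position, contact = 0.0, 2.0          # probe tip and tissue surface (mm)
    force = 0.0
    for _ in range(steps):
        depth = max(0.0, position - contact)   # indentation past the surface
        force = stiffness * depth              # linear elastic tissue model
        position += gain * (f_target - force)  # servo update (mm per cycle)
    return force

print(round(force_servo(0.2), 3))  # 0.2
```

The closed loop is a geometric contraction (each cycle multiplies the force error by 1 - gain*stiffness = 0.85 here), so the probe first travels to the tissue surface in free space and then settles on the target force regardless of where the surface actually lies.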

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with the tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the visualization and navigation necessary to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.

    Cable-driven parallel mechanisms for minimally invasive robotic surgery

    Minimally invasive surgery (MIS) has revolutionised surgery by providing faster recovery times, fewer post-operative complications, improved cosmesis and reduced pain for the patient. Surgical robotics are used to further decrease the invasiveness of procedures, by using yet smaller and fewer incisions or using natural orifices as entry points. However, many robotic systems still face technical challenges, such as achieving sufficient instrument dexterity and payload capacity, leading to limited adoption in clinical practice. Cable-driven parallel mechanisms (CDPMs) have unique properties which can be used to overcome existing challenges in surgical robotics. These beneficial properties include high end-effector payloads, efficient force transmission and a large configurable instrument workspace. However, the use of CDPMs in MIS is largely unexplored. This research presents the first structured exploration of CDPMs for MIS and demonstrates the potential of this type of mechanism through the development of multiple prototypes: the ESD CYCLOPS, CDAQS, SIMPLE, neuroCYCLOPS and microCYCLOPS. One key challenge for MIS is the access method used to introduce CDPMs into the body; three different access methods are presented by the prototypes. By focusing on the minimally invasive access method by which CDPMs are introduced into the body, the thesis provides a framework which can be used by researchers, engineers and clinicians to identify future opportunities for CDPMs in MIS. Additionally, through user studies and pre-clinical studies, these prototypes demonstrate that this type of mechanism has several key advantages for surgical applications in which haptic feedback, safe automation or a high payload is required. These advantages, combined with the different access methods, demonstrate that CDPMs can have a key role in the advancement of MIS technology.

    Flexible robotic device for spinal surgery

    Surgical robots have proliferated in recent years, with well-established benefits including reduced patient trauma, shortened hospitalisation, and improved diagnostic accuracy and therapeutic outcome. Despite these benefits, many challenges remain in their development, including the limited instrument control and poor ergonomics caused by rigid instrumentation and its associated fulcrum effect. Consequently, it is still extremely challenging to utilise such devices in cases that involve complex anatomical pathways such as the spinal column. The focus of this thesis is the development of a flexible robotic surgical cutting device capable of manoeuvring around the spinal column. The target application of the flexible surgical tool is the removal of cancerous tumours surrounding the spinal column, which cannot be excised completely using the straight surgical tools in use today; anterior and posterior sections of the spine must be accessible for complete tissue removal. A parallel robot platform with six degrees of freedom (6 DoFs) has been designed and fabricated to direct a flexible cutting tool and produce the range of movements necessary to reach anterior and posterior sections of the spinal column. A flexible water-jet cutting system and a flexible mechanical drill, which may be assembled interchangeably with the flexible probe, have been developed and successfully tested experimentally. A model predicting the depth of cut by the water jet was developed and experimentally validated. A flexion probe able to guide the surgical cutting device around the spinal column has been fabricated and tested with a human lumbar model. Modelling and simulations show that the flexible surgical system can enter from the posterior side of the human lumbar model and bend around the vertebral body to reach the anterior side of the spinal column. 
A computer simulation with a full graphical user interface (GUI) was created and used to validate the system of inverse kinematic equations for the robot platform. The constraint controller and the inverse kinematics relations are both incorporated into the overall positional control structure of the robot; a haptic feedback controller for the 6-DoF surgical probe has been successfully established and effectively tested in vitro in mock spinal surgery. The flexible surgical system approached the surgery from the posterior side of the human lumbar model and bent around the vertebral body to reach the anterior side of the spinal column. The flexible surgical robot removed 82% of mock cancerous tissue, compared with 16% removed by a rigid tool.
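For a 6-DoF parallel (Stewart-Gough type) platform such as the one described, inverse kinematics reduces to computing each leg length from the commanded platform pose. The sketch below uses hypothetical anchor geometry and is not the thesis's kinematic model.

```python
import numpy as np

def stewart_leg_lengths(base_pts, plat_pts, translation, rpy):
    """Inverse kinematics of a 6-DoF Stewart-Gough platform: each leg length
    is the distance from its base anchor to the transformed platform anchor
    for a commanded pose (translation + roll/pitch/yaw in radians)."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                              # composed rotation matrix
    world = (R @ np.asarray(plat_pts).T).T + np.asarray(translation)
    return np.linalg.norm(world - np.asarray(base_pts), axis=1)

# hypothetical geometry: anchors on circles of radius 3 (base) and 2 (platform)
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[3*np.cos(ang), 3*np.sin(ang), np.zeros(6)]
plat = np.c_[2*np.cos(ang), 2*np.sin(ang), np.zeros(6)]
legs = stewart_leg_lengths(base, plat, translation=(0, 0, 5), rpy=(0, 0, 0))
print(np.round(legs, 3))  # all six equal: sqrt(1 + 25) = 5.099
```

The closed-form leg-length map is what makes parallel platforms attractive for positional control: the inverse problem is trivial even though the forward kinematics is not.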

    Electronically integrated microcatheters based on self-assembling polymer films

    Existing electronically integrated catheters rely on the manual assembly of separate components to integrate sensing and actuation capabilities. This strongly impedes their miniaturization and further integration. Here, we report an electronically integrated self-assembled microcatheter. Electronic components for sensing and actuation are embedded into the catheter wall through the self-assembly of photolithographically processed polymer thin films. With a diameter of only about 0.1 mm, the catheter integrates actuated digits for manipulation and a magnetic sensor for navigation and is capable of targeted delivery of liquids. Fundamental functionalities are demonstrated and evaluated with artificial model environments and ex vivo tissue. Using the integrated magnetic sensor, we develop a strategy for the magnetic tracking of medical tools that facilitates basic navigation with a high resolution below 0.1 mm. These highly flexible and microsized integrated catheters might expand the boundary of minimally invasive surgery and lead to new biomedical applications. Copyright © 2021 The Authors, some rights reserved
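Magnetic tracking of this kind ultimately rests on inverting a field model. As a one-dimensional illustration only (not the paper's method; the on-axis dipole law and the magnet parameters are assumptions), a sensor-to-magnet distance can be recovered from a single field-magnitude reading:

```python
import numpy as np

def dipole_field_on_axis(moment, r):
    """Magnitude of a magnetic dipole's field on its own axis (SI units):
    B = mu0 * m / (2 * pi * r^3)."""
    mu0 = 4e-7 * np.pi
    return mu0 * moment / (2 * np.pi * r**3)

def distance_from_field(moment, B):
    """Invert the on-axis dipole law to recover the sensor-magnet distance."""
    mu0 = 4e-7 * np.pi
    return (mu0 * moment / (2 * np.pi * B)) ** (1/3)

m = 0.05                      # dipole moment (A*m^2), hypothetical magnet
r_true = 0.02                 # 20 mm separation
B = dipole_field_on_axis(m, r_true)
print(round(distance_from_field(m, B) * 1000, 1))  # 20.0 (mm)
```

Full pose estimation, as needed for catheter navigation, generalises this idea to fitting the vector dipole model against multi-axis sensor readings.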

    Augmented Reality and Intraoperative C-arm Cone-beam Computed Tomography for Image-guided Robotic Surgery

    Minimally-invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon's experience and subject to inconsistencies. In order to address these fundamental limitations in minimally-invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only fuses virtual medical information through augmented reality, but also provides tool localization and other dynamic intraoperative updates in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. 
    The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be generalized to other C-arm-based image guidance systems for further extensions in robotic surgery.
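The registration described above is deformable, but its rigid building block, least-squares point-set alignment (the Kabsch algorithm), illustrates how a preoperative plan is anchored to intraoperative coordinates. The sketch below uses synthetic data and is not the dissertation's implementation.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch): find rotation R and
    translation t minimising ||R @ source_i + t - target_i|| over all points."""
    src, tgt = np.asarray(source, float), np.asarray(target, float)
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    H = (src - mu_s).T @ (tgt - mu_t)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = mu_t - R @ mu_s
    return R, t

# synthetic "plan" points and their rigidly moved "intraoperative" counterparts
rng = np.random.default_rng(1)
pts = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = (R_true @ pts.T).T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```

Deformable methods such as those used with CBCT extend this by allowing the transform to vary spatially, but a rigid solve of this form typically provides the initialisation.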