
    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that is able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning of the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 202
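    As a rough illustration of the projective-geometry update described above, the sketch below tracks features between the reference frame and the current frame, estimates a homography with RANSAC, and warps the manually defined scan waypoints so that the trajectory follows the tissue motion. It is not the authors' exact pipeline: the feature detector, matcher and thresholds are assumptions made for illustration.

```python
# Illustrative sketch: keep a 2D scan trajectory attached to deforming tissue by
# re-estimating a homography between the reference frame and the current frame.
import cv2
import numpy as np

def update_trajectory(ref_img, cur_img, trajectory_px):
    """ref_img, cur_img: greyscale frames; trajectory_px: (N, 2) waypoints defined
    on the reference frame, returned warped into the current frame."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    pts = trajectory_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

    In the actual framework the updated waypoints would then be lifted to 3D using the recovered scene structure and sent to the robot controller; the 2D warp above only conveys the core idea.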

    Autonomous pick-and-place using the dVRK.

    PURPOSE: Robotic-assisted partial nephrectomy (RAPN) is a tissue-preserving approach to treating renal cancer, where ultrasound (US) imaging is used for intra-operative identification of tumour margins and localisation of blood vessels. With the da Vinci Surgical System (Sunnyvale, CA), the US probe is inserted through an auxiliary access port, grasped by the robotic tool and moved over the surface of the kidney. Images from the US probe are displayed separately from the surgical site video within the surgical console, leaving the surgeon to interpret and co-register the information, which is challenging and complicates the procedural workflow. METHODS: We introduce a novel software architecture to support a soft robotic hardware rail designed to automate intra-operative US acquisition. As a preliminary step towards complete task automation, we automatically grasp the rail and position it on the tissue surface so that the surgeon can then manually manipulate the US probe along it. RESULTS: A preliminary clinical study, involving five surgeons, was carried out to evaluate the potential performance of the system. Results indicate that the proposed semi-autonomous approach reduced the time needed to complete a US scan compared to manual tele-operation. CONCLUSION: Procedural automation can be an important workflow enhancement functionality in future robotic surgery systems. We have presented a preliminary study on semi-autonomous US imaging, which could support more efficient data acquisition.

    Computational ultrasound tissue characterisation for brain tumour resection

    In brain tumour resection, it is vital to know where critical neurovascular structures and tumours are located to minimise surgical injuries and cancer recurrence. The aim of this thesis was to improve intraoperative guidance during brain tumour resection by integrating both standard ultrasound imaging and elastography in the surgical workflow. Brain tumour resection requires surgeons to identify the tumour boundaries to preserve healthy brain tissue and prevent cancer recurrence. This thesis proposes to use ultrasound elastography in combination with conventional ultrasound B-mode imaging to better characterise tumour tissue during surgery. Ultrasound elastography comprises a set of techniques that measure tissue stiffness, which is a known biomarker of brain tumours. The objectives of the research reported in this thesis are to implement novel learning-based methods for ultrasound elastography and to integrate them in an image-guided intervention framework. Accurate and real-time intraoperative estimation of tissue elasticity can guide towards better delineation of brain tumours and improve the outcome of neurosurgery. We first investigated current challenges in quasi-static elastography, which evaluates tissue deformation (strain) by estimating the displacement between successive ultrasound frames, acquired before and after applying manual compression. Recent approaches in ultrasound elastography have demonstrated that convolutional neural networks can capture ultrasound high-frequency content and produce accurate strain estimates. We proposed a new unsupervised deep learning method for strain prediction, where the training of the network is driven by a regularised cost function, composed of a similarity metric and a regularisation term that preserves displacement continuity by directly optimising the strain smoothness. We further improved the accuracy of our method by proposing a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. We then demonstrate initial results towards extending our ultrasound displacement estimation method to shear wave elastography, which provides a quantitative estimation of tissue stiffness. Furthermore, this thesis describes the development of an open-source image-guided intervention platform, specifically designed to combine intra-operative ultrasound imaging with a neuronavigation system and perform real-time ultrasound tissue characterisation. The integration was conducted using commercial hardware and validated on an anatomical phantom. Finally, preliminary results on the feasibility and safety of the use of a novel intraoperative ultrasound probe designed for pituitary surgery are presented. Prior to the clinical assessment of our image-guided platform, the ability of the ultrasound probe to be used alongside standard surgical equipment was demonstrated in 5 pituitary cases.
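    To make the regularised cost function concrete, the sketch below shows a minimal PyTorch-style formulation: an image-similarity term between the pre-compression frame and the warped post-compression frame, plus a regulariser that penalises non-smooth axial strain computed by finite differences of the predicted displacement. Tensor layouts, the loss weight and all function names are assumptions for illustration, not the thesis implementation.

```python
# Sketch of an unsupervised quasi-static elastography loss: data (similarity)
# term plus a strain-smoothness regulariser. disp has channels (dx, dy) in pixels.
import torch
import torch.nn.functional as F

def warp(moving, disp):
    """Warp a (B, 1, H, W) frame with a (B, 2, H, W) pixel displacement field."""
    _, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=moving.device, dtype=moving.dtype),
        torch.arange(w, device=moving.device, dtype=moving.dtype),
        indexing="ij",
    )
    new_x = xs + disp[:, 0]                        # sampling x-coordinates (B, H, W)
    new_y = ys + disp[:, 1]                        # sampling y-coordinates (B, H, W)
    grid = torch.stack((2 * new_x / (w - 1) - 1,   # grid_sample expects (x, y) in [-1, 1]
                        2 * new_y / (h - 1) - 1), dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)

def elastography_loss(pre, post, disp, lam=0.1):
    """Similarity between the pre-compression and warped post-compression frames,
    plus a penalty on the spatial gradient of the axial strain."""
    similarity = F.mse_loss(warp(post, disp), pre)
    axial_strain = disp[:, 1, 1:, :] - disp[:, 1, :-1, :]   # d(dy)/dy by finite differences
    smoothness = axial_strain.diff(dim=1).abs().mean()      # strain-smoothness regulariser
    return similarity + lam * smoothness
```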

    Service Robots in Healthcare Settings

    Robots will play a part in all aspects of healthcare. The presence of service robots in healthcare demands special attention, whether in the automation of menial labour, prescription distribution, or the offering of comfort. In this chapter, we examine several applications of healthcare-oriented robots in acute, ambulatory and at-home settings. In the acute setting, we discuss the role of robotics in reducing environmental hazards, as well as at the patient's bedside and in the operating room. In the ambulatory setting, we examine how robotics can protect and scale up healthcare services. Finally, in the at-home scenario, we look at how robots can be employed for both rural/remote healthcare delivery and home-based care. In addition to assessing the current state of robotics at the interface of healthcare delivery, we describe critical problems for the future where such technology will be ubiquitous. Patients, healthcare workers, institutions, insurance companies, and governments will realise that service robots will deliver significant benefits in the future in terms of leverage and cost savings, while maintaining or improving access, equity, and high-quality healthcare.

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems, with decision-making capabilities, are not yet available. In 2017, a structure to classify the research efforts toward autonomy achievable with surgical robots was proposed by Yang et al. Six different levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All the commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons' workload and fatigue and pursuing a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample, intelligent information from the system will enhance the surgical outcome and reflect positively on both patients and society. Three main aspects are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities and understand the surgical scene. Besides these main factors, depending on the type of surgery, other aspects might play a fundamental role, such as compliance and stiffness. This thesis addresses three technological challenges encountered when trying to achieve the aforementioned goals, in the specific case of robot-object interaction. First, how to overcome the inaccuracy of cable-driven systems when executing fine and precise movements. Second, how to plan different tasks in dynamically changing environments. Lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task. To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: one is learning from demonstration to pick and place a surgical object, and the second is using a gradient-based approach to trigger a smoother object repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis focuses on developing a simulation environment where multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments proved that automation of the pick-and-place task for different surgical objects is possible. The robot was able to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the developed algorithms to generalise over different environmental conditions and different patients.
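    The pick-and-place automation summarised above can be decomposed into a short, fixed motion sequence. The sketch below uses a hypothetical arm interface (move_to, open_gripper, close_gripper); it is illustrative only and does not reproduce the dVRK control code, calibration or learning-from-demonstration components developed in the thesis.

```python
# Hypothetical pick-and-place subtask skeleton: approach from above, grasp,
# retract, transfer, lower and release. The `arm` object is an assumed interface.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float   # tool-tip position in the camera frame (metres)
    y: float
    z: float

def pick_and_place(arm, grasp_pose: Pose, place_pose: Pose, clearance: float = 0.01):
    above_grasp = Pose(grasp_pose.x, grasp_pose.y, grasp_pose.z + clearance)
    above_place = Pose(place_pose.x, place_pose.y, place_pose.z + clearance)
    arm.open_gripper()
    arm.move_to(above_grasp)   # pre-grasp approach
    arm.move_to(grasp_pose)    # descend onto the object (e.g. a suturing needle)
    arm.close_gripper()        # grasp
    arm.move_to(above_grasp)   # retract
    arm.move_to(above_place)   # transfer
    arm.move_to(place_pose)    # lower
    arm.open_gripper()         # release
    arm.move_to(above_place)   # retract clear of the object
```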

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery time. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing area, and the book provides critical analysis of clinical trials and assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently. As such, it works as a valuable source for researchers interested in the involved subjects, whether they are currently “medical roboticists” or not.

    Registration of ultrasound and computed tomography for guidance of laparoscopic liver surgery

    Laparoscopic Ultrasound (LUS) imaging is a standard tool used for image-guidance during laparoscopic liver resection, as it provides real-time information on the internal structure of the liver. However, LUS probes are difficult to handle and their resulting images hard to interpret. Additionally, some anatomical targets such as tumours are not always visible, making the LUS guidance less effective. To solve this problem, registration between the LUS images and a pre-operative Computed Tomography (CT) scan using information from blood vessels has been previously proposed. By merging these two modalities, the relative position between the LUS images and the anatomy of CT is obtained and both can be used to guide the surgeon. The problem of LUS to CT registration is especially challenging as, besides being a multi-modal registration, the field of view of LUS is significantly smaller than that of CT. Therefore, this problem becomes poorly constrained and typically an accurate initialisation is needed. Also, the liver is highly deformed during laparoscopy, complicating the problem further. So far, the methods presented in the literature are not clinically feasible as they depend on manually set correspondences between both images. In this thesis, a solution for this registration problem that may be more transferable to the clinic is proposed. Firstly, traditional registration approaches comprised of manual initialisation and optimisation of a cost function are studied. Secondly, it is demonstrated that a globally optimal registration without a manual initialisation is possible. Finally, a new globally optimal solution that does not require commonly used tracking technologies is proposed and validated. The resulting approach provides clinical value as it does not require manual interaction in the operating room or tracking devices. Furthermore, the proposed method could potentially be applied to other image-guidance problems that require registration between ultrasound and a pre-operative scan.
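    The vessel-based formulation can be made concrete with a basic rigid point-set registration between vessel centreline points extracted from LUS and from CT. Note that the thesis specifically targets globally optimal registration without manual initialisation; the locally optimal ICP sketch below (NumPy/SciPy, illustrative function names) is shown only to illustrate the underlying point-based problem, not the thesis method.

```python
# Illustrative rigid ICP between LUS and CT vessel centreline points (N x 3, M x 3).
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(lus_pts, ct_pts, iters=50):
    """Iteratively match each LUS point to its closest CT point and re-fit."""
    tree = cKDTree(ct_pts)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = lus_pts @ R.T + t
        _, idx = tree.query(moved)            # closest CT vessel point per LUS point
        R, t = rigid_fit(lus_pts, ct_pts[idx])
    return R, t
```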

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases in the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is visualized in a display spatially separate from the one that displays the laparoscopic video. Therefore, reasoning about the geometry of hidden targets requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required where the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the camera-centred coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of the target locations in space.
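    The core geometric step, mapping a 2D LUS pixel into the laparoscopic video frame, can be sketched as below. The transform names (T_probe_img, T_cam_probe) and the pinhole intrinsics K are assumptions made for illustration; they stand in for whatever calibration and registration the thesis actually develops.

```python
# Illustrative mapping of a LUS pixel into the laparoscopic camera image.
import numpy as np

def us_pixel_to_camera_image(px, py, sx, sy, T_probe_img, T_cam_probe, K):
    """px, py: LUS pixel coordinates; sx, sy: pixel spacing in mm;
    T_probe_img, T_cam_probe: 4x4 homogeneous transforms; K: 3x3 camera intrinsics.
    Returns the corresponding (u, v) pixel in the laparoscopic video frame."""
    p_img = np.array([px * sx, py * sy, 0.0, 1.0])   # US pixel -> mm point on the image plane
    p_cam = T_cam_probe @ T_probe_img @ p_img        # chain image->probe->camera transforms
    uvw = K @ p_cam[:3]                              # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```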

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has become established in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. For focusing of the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform is realised for manipulation of the focusing optics. This is based on a variable-length continuum manipulator, which, together with a mechatronic actuation unit, enables movement of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally. The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator during teleoperation in setting an optimal working distance. Advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
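    For readers unfamiliar with continuum kinematics, a generic single-section constant-curvature forward-kinematics sketch is given below (a Webster-and-Jones-style parameterisation). It is illustrative only and not necessarily the specific variable-length model developed in the dissertation.

```python
# Constant-curvature forward kinematics of one continuum section.
import numpy as np

def section_fk(kappa, phi, ell):
    """Tip pose (4x4 homogeneous matrix) of a section with curvature kappa [1/m],
    bending-plane angle phi [rad] and arc length ell [m]."""
    if abs(kappa) < 1e-9:                          # straight-section limit
        p = np.array([0.0, 0.0, ell])
        R_bend = np.eye(3)
    else:
        theta = kappa * ell                        # total bending angle
        p = np.array([np.cos(phi) * (1 - np.cos(theta)),
                      np.sin(phi) * (1 - np.cos(theta)),
                      np.sin(theta)]) / kappa
        c, s = np.cos(theta), np.sin(theta)
        R_bend = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about the y-axis
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ R_bend @ Rz.T                 # rotate the bending plane by phi
    T[:3, 3] = p
    return T
```

    Varying the arc length ell alongside kappa and phi is what gives a variable-length section its additional degree of freedom.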

    Medical robots for MRI guided diagnosis and therapy

    Magnetic Resonance Imaging (MRI) provides the capability of imaging tissue with fine resolution and superior soft tissue contrast, when compared with conventional ultrasound and CT imaging, which makes it an important tool for clinicians to perform more accurate diagnosis and image-guided therapy. Medical robotic devices combining the high-resolution anatomical images with real-time navigation are ideal for precise and repeatable interventions. Despite these advantages, the MR environment imposes constraints on mechatronic devices operating within it. This thesis presents a study on the design and development of robotic systems for particular MR interventions, in which the issues of testing the MR compatibility of mechatronic components, actuation control, kinematics and workspace analysis, and mechanical and electrical design of the robot have been investigated. Two types of robotic systems have therefore been developed and evaluated along the above aspects. (i) A device for MR-guided transrectal prostate biopsy: the system was designed from components which are proven to be MR compatible, actuated by pneumatic motors and ultrasonic motors, and tracked by optical position sensors and fiducial markers. Clinical trials have been performed with the device on three patients, and the results reported have demonstrated its capability to perform needle positioning under MR guidance, with a procedure time of around 40 minutes and with no compromised image quality, which achieved our system specifications. (ii) Limb positioning devices to facilitate the magic angle effect for diagnosis of tendinous injuries: two systems were designed, for lower and upper limb positioning respectively, which are actuated and tracked by methods similar to those of the first device. A group of volunteers was recruited to conduct tests to verify the functionality of the systems. The results demonstrate a clear enhancement of image quality, with an increase in signal intensity of up to 24 times in the tendon tissue caused by the magic angle effect, showing the feasibility of the proposed devices to be applied in clinical diagnosis.
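    For context, the magic angle is the fibre-to-field orientation at which the dipolar coupling term 3cos²θ − 1 vanishes, which lengthens T2 and recovers tendon signal that is otherwise suppressed; the snippet below simply computes that angle (illustrative, not part of the thesis software).

```python
# Compute the magic angle: the orientation where 3*cos^2(theta) - 1 = 0.
import numpy as np

theta_magic = np.degrees(np.arccos(np.sqrt(1.0 / 3.0)))
print(f"magic angle = {theta_magic:.1f} degrees")   # ~54.7 degrees
```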