
    Robotic Assistant Systems for Otolaryngology-Head and Neck Surgery

    Recently, there has been a significant movement in otolaryngology-head and neck surgery (OHNS) toward minimally invasive techniques, particularly those utilizing natural orifices. While these techniques can reduce the risk of complications associated with classic open approaches, such as scarring, infection, and damage to the healthy tissue traversed to access the surgical site, significant challenges remain in both visualization and manipulation, including poor sensory feedback, reduced visibility, a limited working area, and decreased precision due to long instruments. This work presents two robotic assistance systems that address different aspects of these challenges. The first is the Robotic Endo-Laryngeal Flexible (Robo-ELF) Scope, which assists surgeons in manipulating flexible endoscopes. Flexible endoscopes can provide superior visualization compared to microscopes or rigid endoscopes by allowing views not constrained by line-of-sight. However, they are seldom used in the operating room because of the difficulty of manually manipulating and stabilizing them precisely for long periods of time. The Robo-ELF Scope enables stable, precise robotic manipulation of flexible scopes and frees the surgeon's hands to operate bimanually. The Robo-ELF Scope has been demonstrated and evaluated in human cadavers and is moving toward a human subjects study. The second is the Robotic Ear Nose and Throat Microsurgery System (REMS), which assists surgeons in manipulating rigid instruments and endoscopes. Manipulating rigid instruments poses two main challenges: reduced precision from hand tremor amplified by long instruments, and difficulty navigating through complex anatomy surrounded by sensitive structures. The REMS enables precise manipulation by allowing the surgeon to hold the surgical instrument while filtering out unwanted movement such as hand tremor.
The REMS also enables augmented navigation by calculating the position of the instrument with high accuracy and combining this information with registered preoperative imaging data to enforce virtual safety barriers around sensitive anatomy. The REMS has been demonstrated and evaluated in user studies with synthetic phantoms and human cadavers.
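A virtual safety barrier of the kind described above is often implemented by removing the component of the commanded tool velocity that would drive the tip past a registered boundary. The sketch below is a minimal illustration of that idea for a planar barrier; the function name, the standoff distance, and the planar-boundary simplification are assumptions for illustration, not the REMS implementation.

```python
import numpy as np

def apply_safety_barrier(tip_pos, cmd_vel, plane_point, plane_normal, standoff=1.0):
    """Clamp a commanded tool velocity so the tip cannot cross a planar
    virtual barrier. plane_normal points from the barrier toward safe space."""
    n = plane_normal / np.linalg.norm(plane_normal)
    dist = np.dot(tip_pos - plane_point, n)   # signed distance to the barrier
    toward = np.dot(cmd_vel, n)               # velocity component along n
    if dist <= standoff and toward < 0.0:
        # remove the component that drives the tip into the barrier
        cmd_vel = cmd_vel - toward * n
    return cmd_vel
```

Far from the barrier the command passes through unchanged; inside the standoff zone, only the tangential motion survives, so the surgeon can still slide along the boundary.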

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally-invasive robotic-assisted surgery is a rapidly-growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. Standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon's experience and subject to inconsistencies. In order to address these fundamental limitations in minimally-invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative behavior in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality.
The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance for additional extensions in robotic surgery.
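A core step of the augmented-reality overlay described above is projecting registered 3D anatomy (in the CT/CBCT frame) into the endoscope image through the registration transform and a calibrated pinhole camera model. The sketch below shows that projection under assumed names; the function, matrix names, and the rigid-registration simplification are illustrative, not the dissertation's implementation.

```python
import numpy as np

def project_points(points_3d, K, T_cam_from_ct):
    """Project Nx3 points given in the CT/CBCT frame into endoscope pixel
    coordinates via a 4x4 rigid registration and a 3x3 intrinsic matrix K."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    cam = (T_cam_from_ct @ pts_h.T)[:3]    # points expressed in the camera frame
    uvw = K @ cam                          # pinhole projection
    return (uvw[:2] / uvw[2]).T            # pixel coordinates (u, v)
```

The projected pixel coordinates can then be drawn on top of the live endoscope video to render the registered plan in the surgeon's view.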

    From teleoperation to autonomous robot-assisted microsurgery: A survey

    Robot-assisted microsurgery (RAMS) has many benefits compared to traditional microsurgery. Microsurgical platforms with advanced control strategies, high-quality micro-imaging modalities, and micro-sensing systems are worth developing to further enhance the clinical outcomes of RAMS. Within only a few decades, microsurgical robotics has evolved into a rapidly developing research field attracting increasing attention worldwide. Despite the appreciated benefits, significant challenges remain to be solved. In this review paper, the emerging concepts and achievements of RAMS are presented. We trace the development of RAMS from teleoperation toward autonomous systems, and highlight upcoming research opportunities that will require joint efforts from both clinicians and engineers in the years to come.

    Modeling, Sensorization and Control of Concentric-Tube Robots

    Since the concept of the Concentric-Tube Robot (CTR) was proposed in 2006, CTRs have been a popular research topic in the field of surgical robotics. The unique mechanical design of this robot allows it to navigate through narrow channels in the human anatomy and operate in highly constrained environments. It is therefore likely to become the next generation of surgical robots, overcoming challenges that cannot be addressed by current technologies. In CSTAR, we have had ongoing work over the past several years aimed at developing novel techniques and technologies for CTRs. This thesis describes the contributions made in this context, focusing primarily on modeling, sensorization, and control of CTRs. Prior to this work, one of the main challenges with CTRs was developing a kinematic model that balances numerical accuracy against computational efficiency for surgical applications. In this thesis, a fast kinematic model of CTRs is proposed, which can be solved at a comparatively fast rate (0.2 ms) with minimal loss of accuracy (0.1 mm) for a 3-tube CTR. A Jacobian matrix is derived based on this model, leading to the development of a real-time trajectory tracking controller for CTRs. For tissue-robot interactions, a force-rejection controller is proposed for position control of CTRs under time-varying force disturbances. In contrast to rigid-link robots, instability of position control can be caused by non-unique solutions to the forward kinematics of CTRs. This phenomenon is modeled and analyzed, resulting in design criteria that can ensure kinematic stability of a CTR throughout its entire workspace. Force sensing is another major difficulty for CTRs. To address this issue, commercial force/torque sensors (Nano43, ATI Industrial Automation, United States) are integrated into one of our CTR prototypes. These force/torque sensors are then replaced by Fiber-Bragg Grating (FBG) sensors that are helically wrapped and embedded in the CTRs.
A strain-force calculation algorithm is proposed to convert the reflected wavelengths of the FBGs into force measurements with 0.1 N force resolution at a 100 Hz sampling rate. In addition, this thesis reports on our innovations in prototyping drive units for CTRs. Three designs of CTR prototypes are proposed, the latest being significantly more compact and cost-efficient than most designs in the literature. All of these contributions have brought this technology a few steps closer to use in operating rooms. Some of the techniques and technologies mentioned above are not limited to CTRs, but are also suitable for problems arising in other types of surgical robots, for example, sensorizing da Vinci surgical instruments for force sensing (see Appendix A).
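The wavelength-to-force conversion described above typically proceeds in two steps: strain is recovered from the relative Bragg-wavelength shift, and a pre-computed calibration matrix maps the strain vector to forces. The sketch below assumes the common photo-elastic gauge factor of roughly 0.78 for silica FBGs and a hypothetical least-squares calibration matrix; it is an illustration of the general technique, not the thesis's algorithm.

```python
import numpy as np

def wavelengths_to_forces(wl, wl_ref, calib_matrix, k_eps=0.78):
    """Convert FBG reflected wavelengths (nm) to forces.

    Strain from the relative wavelength shift: eps = (wl - wl_ref) / (k_eps * wl_ref),
    where k_eps ~ 0.78 is the typical strain-sensitivity factor of silica FBGs.
    A pre-computed calibration matrix then maps strains to force components.
    """
    strain = (wl - wl_ref) / (k_eps * wl_ref)
    return calib_matrix @ strain
```

In practice the calibration matrix would be identified by loading the sensorized tube with known forces and solving a least-squares fit from recorded strains to applied loads.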

    The design and analysis of a novel 5 degree of freedom parallel kinematic manipulator.

    Masters Degree. University of KwaZulu-Natal, Durban. Abstract available in PDF.

    Medical Ultrasound Imaging and Interventional Component (MUSiiC) Framework for Advanced Ultrasound Image-guided Therapy

    Medical ultrasound (US) imaging is a popular and convenient medical imaging modality thanks to its mobility, lack of ionizing radiation, ease of use, and real-time data acquisition. Conventional US brightness mode (B-Mode) is a diagnostic imaging modality that represents tissue morphology by collecting and displaying the intensity information of a reflected acoustic wave. Moreover, US B-Mode imaging is frequently integrated with tracking systems and robotic systems in image-guided therapy (IGT) systems. Recently, these systems have also begun to incorporate advanced US imaging such as US elasticity imaging, photoacoustic imaging, and thermal imaging. Several software frameworks and toolkits have been developed for US imaging research and for integrating US data acquisition, processing, and display with existing IGT systems. However, no existing software framework or toolkit supports advanced US imaging research and advanced US IGT systems by providing the low-level US data (channel data or radio-frequency (RF) data) essential for advanced US imaging. In this dissertation, we propose a new medical US imaging and interventional component framework for advanced US image-guided therapy based on network-distributed modularity, real-time computation and communication, and open-interface design specifications. Consequently, the framework can provide a modular research environment by supporting communication interfaces between heterogeneous systems, allowing flexible interventional US imaging research and easy reconfiguration of an entire interventional US imaging system by adding or removing devices or equipment specific to each therapy. In addition, our proposed framework offers real-time synchronization between data from multiple data acquisition devices for advanced interventional US imaging research and for integration of the US imaging system with other IGT systems.
Moreover, new advanced ultrasound imaging techniques can easily be implemented and tested inside the proposed framework in real time, because the software framework is designed and optimized for advanced ultrasound research. The system's flexibility, real-time performance, and open interface are demonstrated and evaluated through experimental tests for several applications.
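Real-time synchronization across multiple acquisition devices, as described above, usually reduces to pairing samples from independent streams by timestamp. The sketch below shows one simple rule, nearest-timestamp matching within a tolerance; the function name, tolerance rule, and data layout are assumptions for illustration, not the MUSiiC framework's mechanism.

```python
import bisect

def pair_nearest(ts_a, ts_b, tol):
    """Pair each timestamp in stream A with the nearest timestamp in sorted
    stream B, keeping only pairs closer than `tol` seconds."""
    pairs = []
    for t in ts_a:
        i = bisect.bisect_left(ts_b, t)
        # the nearest neighbour is either just before or just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        j = min(candidates, key=lambda j: abs(ts_b[j] - t))
        if abs(ts_b[j] - t) <= tol:
            pairs.append((t, ts_b[j]))
    return pairs
```

Samples with no sufficiently close partner are dropped rather than paired, which keeps downstream processing from mixing data acquired at different moments.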

    DESIGN, DEVELOPMENT, AND EVALUATION OF A DISCRETELY ACTUATED STEERABLE CANNULA

    Needle-based procedures require guiding the needle to a target region to deliver therapy or to remove tissue samples for diagnosis. During needle insertion, needle deflection occurs due to needle-tissue interaction, which deviates the needle from its insertion direction. Manipulating the needle at the base provides limited control over the needle trajectory after insertion. Furthermore, some sites are inaccessible via straight-line trajectories due to delicate structures that must be avoided. The goal of this research is to develop a discretely actuated steerable cannula to enable active trajectory corrections and achieve accurate targeting in needle-based procedures. The cannula is composed of straight segments connected by shape memory alloy (SMA) actuators and has multiple degrees-of-freedom. To control the motion of the cannula, two approaches have been explored. One approach is to measure the cannula configuration directly from the imaging modality and to use this information as feedback to control the joint motion. The second approach is a model-based controller in which the strain of the SMA actuator is controlled by controlling its temperature. The constitutive model relates the stress, strain, and temperature of the SMA actuator. The uniaxial constitutive model of the SMA that describes its tensile behavior was extended to the one-dimensional pure-bending case to model the phase transformation of the arc-shaped SMA wire. An experimental characterization procedure was devised to obtain the SMA parameters used in the constitutive model. Experimental results demonstrate that temperature feedback can be effectively used to control the strain of the SMA actuator and that image feedback can be reliably used to control the joint motion.
Using tools from differential geometry and the configuration control approach, motion planning algorithms were developed to create preoperative plans that steer the cannula to a desired surgical site (nodule or suspicious tissue). Ultrasound-based tracking algorithms were developed to automate the needle insertion procedure under 2D ultrasound guidance. The effectiveness of the proposed in-plane and out-of-plane tracking methods was demonstrated through experiments inside a gelatin tissue phantom and in ex-vivo experiments. An optical coherence tomography probe was integrated into the cannula and in-situ microscale imaging was performed. The results demonstrate the use of the cannula as a delivery mechanism for diagnostic applications. The tools developed in this dissertation form the foundation of a complete steerable-cannula system. It is anticipated that the cannula could be used as a delivery mechanism in image-guided needle-based interventions to introduce therapeutic and diagnostic tools to a target region.
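The temperature-feedback strain control described above can be sketched as a feedback loop that regulates SMA wire temperature by modulating heating power, with the strain then following from the constitutive model. Below is a minimal PI step in that spirit; the gains, saturation limit, and heating-only assumption are illustrative, not values from the dissertation.

```python
def pi_temperature_step(T_meas, T_set, integ, dt, kp=2.0, ki=0.5, u_max=5.0):
    """One step of a PI controller driving SMA wire temperature toward T_set
    by commanding heating power u (illustrative gains and limits).

    Returns (u, integ): the saturated heating command and updated integrator.
    """
    err = T_set - T_meas
    integ += err * dt                  # accumulate integral of the error
    u = kp * err + ki * integ
    u = max(0.0, min(u_max, u))        # heating only, saturated at u_max
    return u, integ
```

In a real loop the commanded power would heat the wire, the measured temperature would be fed back each cycle, and the resulting temperature would set the actuator strain through the phase-transformation model.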

    Robotic manipulators for in situ inspections of jet engines

    Jet engines need to be inspected periodically and, in some instances, repaired. Currently, some of these maintenance operations require the engine to be removed from the wing and dismantled, which has a significant associated cost. The capability of performing some of these inspections and repairs while the engine is on-wing could lead to important cost savings. However, existing technology for on-wing operations is limited and does not meet some of these needs. In this work, the problem of performing on-wing operations such as inspection and repair is analysed, and after an extensive literature review, a novel robotic system for the on-wing insertion and deployment of probes or other tools is proposed. The system consists of a fine-positioner, which is a miniature and dexterous robotic manipulator; a gross-positioner, which is a device to insert the fine-positioner into the engine region of interest; an end-effector, such as a probe; a deployment mechanism, which is a passive device to ensure correct contact between probe and component; and a feedback system that provides information about the robot state for control. The research and development work conducted to address the main challenges in creating this robotic system is presented in this thesis. The work is focussed on the fine-positioner, as it is the most relevant and complex part of the system. After a literature review of relevant work, and as part of the exploration of potential robot concepts for the system, the kinematic capabilities of concentric tube robots (CTRs) are first investigated. The complete set of stable trajectories that can be traced in follow-the-leader motion is discovered. A case study involving simulations and an experiment is then presented to showcase and verify the work. The research findings indicate that CTRs are not suitable for the fine-positioner. However, they show that CTRs with non-annular cross section can be used for the gross-positioner.
In addition, the new trajectories discovered show promise in minimally invasive surgery (MIS). Soft robotic manipulators with fluidic actuation are then selected as the most suitable concept for the fine-positioner. The design of soft robotic manipulators with fluidic actuation is investigated from a general perspective. A general framework for the design of these devices is proposed, and a set of design principles are derived. These principles are first applied in a MIS case study to illustrate and verify the work. Finite element (FE) simulations are then reported to perform design optimisation, and thus complete the case study. The design study is then applied to determine the most suitable design for the fine-positioner. An additional analytical derivation is developed, followed by FE simulations, which extend those of the case study. Eventually, this work yields a final design of the fine-positioner. The final design found is different from existing ones, and is shown to provide an important performance improvement with respect to existing soft robots in terms of wrenches it can support. The control of soft and continuum robots relevant to the fine-positioner is also studied. The full kinematics of continuum robots with constant curvature bending and extending capabilities are first investigated, which correspond to a preliminary design concept conceived for the fine-positioner. Closed-form solutions are derived, closing an open problem. These kinematics, however, do not exactly match the final fine-positioner design selected. Thus, an alternative control approach based on closed-loop control laws is then adopted. For this, a mechanical model is first developed. Closed-loop control laws are then derived based on this mechanical model for planar operation of a segment of the fine-positioner. The control laws obtained represent the foundation for the subsequent development of control laws for a full fine-positioner operating in 3D. 
Furthermore, work on path planning for nonholonomic systems is also reported, and a new algorithm is presented, which can be applied to the insertion of the overall robotic system. Solutions to the other parts of the robotic system for on-wing operations are also reported. A gross-positioner consisting of a non-annular CTR is proposed. Solutions for a deployment mechanism are also presented. Potential feedback systems are outlined. In addition, methods for the fabrication of the systems are reported, and the electronics and systems required for the assembly of the different parts are described. Finally, the use of the robotic system to perform on-wing inspections in a representative case study is studied to determine its viability. Inspection strategies are shortlisted, and simulations and experiments are used to study them. The results, however, indicate that inspection is not viable since the signal-to-noise ratio is excessively low. Nonetheless, the robotic system proposed, and the research conducted, are still expected to be useful for a range of on-wing operations that require the insertion and deployment of a probe or other end-effector. In addition, the trajectories discovered for CTRs, the design found for the fine-positioner, and the advances on control also have significant potential in MIS, where there is an important need for miniature robotic manipulators and similar devices.
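The constant-curvature kinematics mentioned above build on the standard single-segment arc model, in which a segment's tip position follows in closed form from its curvature, bending-plane angle, and arc length. The sketch below shows that standard textbook relation, not the thesis's extended closed-form solutions, which also cover extension and full pose.

```python
import numpy as np

def cc_tip(kappa, phi, s):
    """Tip position of one constant-curvature segment (standard arc model):
    curvature kappa, bending-plane angle phi, arc length s."""
    if abs(kappa) < 1e-9:                 # straight-segment limit (kappa -> 0)
        return np.array([0.0, 0.0, s])
    r = (1.0 - np.cos(kappa * s)) / kappa
    return np.array([r * np.cos(phi),
                     r * np.sin(phi),
                     np.sin(kappa * s) / kappa])
```

The straight-segment branch handles the singular limit kappa -> 0, where the arc degenerates to a line of length s along the segment axis.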

    Integrated navigation and visualisation for skull base surgery

    Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel Diffusion Magnetic Resonance Imaging (dMRI) acquisition and processing protocol for imaging the facial nerve in patients with VS was developed to improve preoperative delineation of the facial nerve. An automated Artificial Intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation, and intraoperative 3D ultrasound was developed and validated using an anatomically-realistic acoustic phantom model of a head including the skull, brain, and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine different types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an Intraoperative Hyperspectral Imaging (iHSI) system. Finally, functional and technical requirements of an iHSI system were established, and a prototype system was developed and tested in a first-in-patient study.

    Enabling Technologies for Co-Robotic Translational Ultrasound and Photoacoustic Imaging

    Among many medical imaging modalities, medical ultrasound possesses the unique advantages of being non-ionizing, real-time, and non-invasive. With its safety, ease of use, and cost-effectiveness, ultrasound imaging has been used in a wide variety of diagnostic applications. Photoacoustic imaging is a hybrid imaging modality merging light and ultrasound, and reveals tissue metabolism and molecular distribution with the utilization of endo- and exogenous contrast agents. With the emergence of photoacoustic imaging, ultrasound and photoacoustic imaging can comprehensively depict not only anatomical but also functional information of biological tissue. To broaden the impact of translational ultrasound and photoacoustic imaging, this dissertation focuses on the development of enabling technologies and the exploration of associated applications. The goals of these technologies are: (1) Enabling Technologies for Translational Photoacoustic Imaging. We investigated the potential of maximizing access to translational photoacoustic imaging using a clinical ultrasound scanner and a low-cost light source, instead of the widely used customized data acquisition systems and expensive high-power lasers. (2) Co-robotic Ultrasound and Photoacoustic Imaging. We introduced a co-robotic paradigm to make ultrasound/photoacoustic imaging more comprehensive and capable of imaging deeper with higher resolution and a wider field-of-view. (3) Advancements in Translational Photoacoustic Imaging. We explored the new use of translational photoacoustic imaging for molecular-based cancer detection and the sensing of neurotransmitter activity in the brain. Together, these parts explore the feasibility of co-robotic translational ultrasound and photoacoustic imaging.
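Photoacoustic image formation on a clinical ultrasound scanner, as described above, commonly uses delay-and-sum beamforming with one-way (tissue-to-transducer) delays, since the acoustic wave originates at the optically absorbing tissue rather than from a transmitted pulse. The sketch below reconstructs one image pixel under that standard scheme; the function name, geometry, and parameters are illustrative assumptions, not the dissertation's pipeline.

```python
import numpy as np

def das_photoacoustic(channel_data, elem_x, pixel, c, fs):
    """Delay-and-sum one pixel of a photoacoustic frame.

    channel_data: (n_elems, n_samples) RF data, one row per array element;
    elem_x: lateral element positions (m); pixel: (x, z) position (m);
    c: speed of sound (m/s); fs: sampling rate (Hz). One-way delays only.
    """
    px, pz = pixel
    out = 0.0
    for ch, x in enumerate(elem_x):
        d = np.hypot(px - x, pz)           # distance from element to pixel
        idx = int(round(d / c * fs))       # one-way time of flight, in samples
        if idx < channel_data.shape[1]:
            out += channel_data[ch, idx]   # coherently sum the delayed samples
    return out
```

Sweeping the pixel over a grid yields the full image; in conventional pulse-echo ultrasound the same loop would instead use two-way (transmit plus receive) delays.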