32 research outputs found

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has prevailed in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability, and it limits the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for the endoscopic delivery of focused laser radiation is available to date. Likewise, no known devices incorporate scene information from endoscopic imaging into ablation planning and execution. To focus the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform based on a variable-length continuum manipulator is realised for manipulation of the focusing optics; combined with a mechatronic actuation unit, it enables movements of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally.
The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator during teleoperation in setting an optimal working distance. The advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot; the mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
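Variable-length continuum manipulators like the one above are commonly modeled under a piecewise constant-curvature assumption. The dissertation's actual kinematic model is not reproduced here; the following is a minimal sketch of the forward kinematics of a single constant-curvature section, parameterized by curvature, bending-plane angle, and arc length (this parameterization is an assumption, not the thesis's formulation):

```python
import numpy as np

def cc_forward_kinematics(kappa, phi, ell):
    """Tip frame of one constant-curvature section.

    kappa: curvature [1/m], phi: bending-plane angle [rad],
    ell: arc length [m]. Returns a 4x4 homogeneous transform.
    """
    if abs(kappa) < 1e-9:               # straight section: pure translation
        T = np.eye(4)
        T[2, 3] = ell
        return T
    theta = kappa * ell                 # total bending angle of the arc
    c, s = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    # rotate into the bending plane, bend by theta, rotate back
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rz.T
    T[:3, 3] = Rz @ np.array([(1 - c) / kappa, 0.0, s / kappa])
    return T
```

Changing the arc length ell gives the extra "variable-length" degree of freedom; stacking several such transforms models a multi-section manipulator.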

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery: the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area; many chapters concern advanced research in this growing field. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. It is certainly a small sample of the research activity on medical robotics going on around the globe as you read it, but it covers a good deal of what has been done in the field recently, and as such it serves as a valuable source for researchers interested in the subjects involved, whether they are currently "medical roboticists" or not.

    Methods, Models, and Datasets for Visual Servoing and Vehicle Localisation

    Machine autonomy has become a vibrant part of industrial and commercial aspirations. A growing demand exists for dexterous and intelligent machines that can work in unstructured environments without any human assistance. An autonomously operating machine should sense its surroundings, classify the different kinds of objects it observes, and interpret sensory information to perform the necessary operations. This thesis summarizes original methods aimed at enhancing a machine's autonomous operation capability. These methods and the corresponding results are grouped into two main categories. The first category consists of research that focuses on improving visual servoing systems for robotic manipulators to accurately position workpieces. We start our investigation with the hand-eye calibration problem, which concerns calibrating visual sensors with a robotic manipulator. We thoroughly investigate the problem from various perspectives and provide alternative formulations of the problem and error objectives. The experimental results demonstrate that the proposed methods are robust and yield accurate solutions when tested on real and simulated data. The work is bundled as a toolkit and available online for public use. In an extension, we propose a constrained multiview pose estimation approach for robotic manipulators. The approach exploits the geometric constraints available on the robotic system and infuses them directly into the pose estimation method. The empirical results demonstrate higher accuracy and significantly higher precision compared to other studies. In the second part of this research, we tackle problems pertaining to the field of autonomous vehicles and related applications. First, we introduce a pose estimation and mapping scheme that extends the application of visual Simultaneous Localization and Mapping (SLAM) to unstructured dynamic environments. We identify, extract, and discard dynamic entities from the pose estimation step.
Moreover, we track the dynamic entities and actively update the map based on changes in the environment. Having observed the limitations of existing datasets during our earlier work, we introduce FinnForest, a novel dataset for testing and validating the performance of visual odometry and SLAM methods in an unstructured environment. We explored an environment with a forest landscape and recorded data with multiple stereo cameras, an IMU, and a GNSS receiver. The dataset offers unique challenges owing to the nature of the environment, the variety of trajectories, and changes in season, weather, and daylight conditions. Building upon the future work proposed with the FinnForest dataset, we introduce a novel scheme that can localize an observer under extreme perspective changes. More specifically, we tailor the problem to autonomous vehicles such that they can recognize a previously visited place irrespective of the direction in which they previously traveled the route. To the best of our knowledge, this is the first study to accomplish bi-directional loop closure on monocular images with a nominal field of view. To solve the localisation problem, we separate place identification from pose regression by using deep learning in two steps. We demonstrate that bi-directional loop closure on monocular images is indeed possible when the problem is posed correctly and the training data is adequately leveraged. All methodological contributions of this thesis are accompanied by extensive empirical analysis and discussion, demonstrating the need, novelty, and improvement in performance over existing methods for pose estimation, odometry, mapping, and place recognition.
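The hand-eye calibration problem mentioned above is classically posed as AX = XB, where A and B are relative motions of the manipulator and the camera and X is the unknown sensor-to-manipulator transform. The thesis's own alternative formulations are not reproduced here; the sketch below implements the standard Park-Martin least-squares solution on synthetic data (all names and the test transforms are illustrative):

```python
import numpy as np

def _rot_log(R):
    """Axis-angle vector of a rotation matrix (angle in (0, pi))."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2 * np.sin(angle)) * w

def hand_eye_AX_XB(As, Bs):
    """Least-squares solution of A_i X = X B_i (Park & Martin, 1994)."""
    # rotation: stack log-map correspondences alpha_i = R_X beta_i
    M = sum(np.outer(_rot_log(B[:3, :3]), _rot_log(A[:3, :3]))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T                       # orthogonal polar factor of M^T
    # translation: (R_Ai - I) t = R t_Bi - t_Ai, solved jointly
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X

# --- synthetic check: build B_i = X^-1 A_i X from a known X ---
def _rot(axis, a):
    K = np.cross(np.eye(3), axis)        # skew-symmetric matrix of axis
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * K @ K  # Rodrigues

def _T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

X_true = _T(_rot([0, 0, 1], 0.5), [0.1, -0.2, 0.3])
As = [_T(_rot([1, 0, 0], 0.7), [0.2, 0.0, 0.1]),
      _T(_rot([0, 1, 0], 1.1), [0.0, 0.3, -0.1]),
      _T(_rot([0, 0, 1], 0.9), [0.05, 0.05, 0.0])]
Bs = [np.linalg.inv(X_true) @ A @ X_true for A in As]
X_est = hand_eye_AX_XB(As, Bs)
```

At least two motions with non-parallel rotation axes are needed for the rotation to be identifiable; three are used here so that the translation system is also full rank.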

    A Control Architecture for Unmanned Aerial Vehicles Operating in Human-Robot Team for Service Robotic Tasks

    In this thesis a control architecture for an Unmanned Aerial Vehicle (UAV) is presented. The aim of the thesis is to address the problem of controlling a flying robot operating in a human-robot team at different levels of abstraction. For this purpose, three layers are considered in the design of the architecture: the high-level, middle-level, and low-level layers. The special case of a UAV operating in service robotics tasks, and in particular in Search & Rescue missions in an alpine scenario, is considered. Different methodologies for each layer are presented with simulated or real-world experimental validation.

    Vision-based control of multi-agent systems

    Scope and Methodology of Study: Creating systems with multiple autonomous vehicles places severe demands on the design of decision-making supervisors, cooperative control schemes, and communication strategies. In recent years, several approaches have been developed in the literature. Most of them solve the vehicle coordination problem assuming some kind of communication between team members. However, communication makes the group sensitive to failure and restricts the applicability of the controllers to teams of friendly robots. This dissertation deals with the problem of designing decentralized controllers that use only local sensor information to achieve group goals.
    Findings and Conclusions: This dissertation presents a decentralized architecture for vision-based stabilization of unmanned vehicles moving in formation. The architecture consists of two main components: (i) a vision system, and (ii) vision-based control algorithms. The vision system is capable of recognizing and localizing robots. It is a model-based scheme composed of three main components: image acquisition and processing, robot identification, and pose estimation. Using vision information, we address the problem of stabilizing groups of mobile robots in leader-follower or two-leader-follower formations. The strategies use the relative pose between a robot and its designated leader or leaders to achieve formation objectives. Several leader-follower formation control algorithms, which ensure asymptotic coordinated motion, are described and compared. Analysis based on Lyapunov's stability theory and numerical simulations in a realistic three-dimensional environment show the stability properties of the control approaches.
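The core idea of communication-free, relative-measurement-based formation keeping can be illustrated with a much simpler model than the dissertation's vision-based controllers: a single-integrator follower that regulates its measured offset to the leader with a proportional law. This is a toy sketch under those simplifying assumptions, not the thesis's algorithm:

```python
import numpy as np

def formation_step(p_f, p_l, d_des, k=1.0, dt=0.05):
    """One control step for a single-integrator follower.

    Only the locally measured relative position (p_l - p_f) is used;
    no communication with the leader is required.
    """
    error = (p_l - p_f) - d_des        # formation-keeping error
    return p_f + dt * k * error        # proportional velocity command

# leader drives along the x-axis; follower keeps a 1 m trailing offset
p_l = np.array([0.0, 0.0])
p_f = np.array([2.0, -1.5])
d_des = np.array([1.0, 0.0])           # desired leader-minus-follower offset
for _ in range(400):
    p_l = p_l + 0.05 * np.array([0.2, 0.0])     # leader speed 0.2 m/s
    p_f = formation_step(p_f, p_l, d_des, k=2.0)
```

For any k > 0 the error dynamics are a stable linear system (V = ||error||^2 is a Lyapunov function for a static leader); with a moving leader the proportional law leaves a small bounded tracking lag along the direction of motion, which the asymptotic controllers described in the abstract are designed to eliminate.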

    On Providing Efficient Real-Time Solutions to Motion Planning Problems of High Complexity

    The holy grail of robotics is producing robotic systems capable of efficiently executing all the tasks that are hard, or even impossible, for humans. Humans, from both a hardware and a software perspective, are undoubtedly extremely complex systems capable of executing many complicated tasks. Thus, the complexity of many state-of-the-art robotic systems is also expected to increase progressively, with the goal of matching or even surpassing human abilities. Recent developments have emphasized mostly hardware, providing highly complex robots with exceptional capabilities. On the other hand, they have illustrated that one important bottleneck to realizing such systems as a common reality is real-time motion planning. This thesis aims to assist the development of complex robotic systems from a computational perspective. The primary focus is developing novel methodologies for real-time motion planning that enable robots to accomplish their goals safely and that provide the building blocks for developing robust, advanced robot behavior in the future. The proposed methods utilize and enhance state-of-the-art approaches to overcome three different types of complexity:
    1. Motion planning for high-dimensional systems. RRT+, a new family of general sampling-based planners, was introduced to accelerate solving the motion planning problem for robotic systems with many degrees of freedom by iteratively searching in lower-dimensional subspaces of increasing dimension. RRT+ variants computed solutions orders of magnitude faster than state-of-the-art planners. Experiments in simulation with kinematic chains of up to 50 degrees of freedom and with the Baxter humanoid robot validate the effectiveness of the proposed technique.
    2. Underwater navigation for robots in cluttered environments. AquaNav, a real-time navigation pipeline for robots moving efficiently in challenging, unknown, and unstructured environments, was developed for Aqua2, a hexapod swimming robot with complex, yet to be fully discovered, dynamics. AquaNav was tested offline in known maps and online in unknown maps using vision-based SLAM. Rigorous testing in simulation, in-pool, and open-water trials shows the robustness of the method in providing efficient and safe performance, enabling the robot to navigate while avoiding static and dynamic obstacles in open-water settings with turbidity and surge.
    3. Active perception of areas of interest during underwater operation. AquaVis, an extension of AquaNav, is a real-time navigation technique enabling robots with arbitrary multi-sensor configurations to safely reach their target while observing multiple areas of interest from a desired proximity. Extensive simulations show safe behavior and strong potential for improving underwater state estimation, monitoring, tracking, inspection, and mapping of objects of interest in the underwater domain, such as coral reefs, shipwrecks, marine life, and human infrastructure.
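The RRT+ idea of searching lower-dimensional subspaces of increasing dimension can be sketched independently of any particular sampling-based planner. The skeleton below is a simplified illustration, not the published algorithm: `try_plan` stands in for a full planner restricted to a set of active joints, and the toy 6-DOF "reachability" test is invented for the demo:

```python
import random

def plan_with_expanding_subspace(n_dof, try_plan, k_init=2):
    """Attempt planning in a low-dimensional subspace first, adding
    degrees of freedom only on failure (the core RRT+ intuition).

    try_plan(active_dofs) returns a solution or None; active_dofs lists
    the joint indices the underlying sampler is allowed to vary.
    """
    k = min(k_init, n_dof)
    while k <= n_dof:
        solution = try_plan(list(range(k)))   # sample only first k joints
        if solution is not None:
            return solution, k
        k += 1                                # expand the search subspace
    return None, n_dof

def toy_reach(active):
    # hypothetical 6-DOF chain with joints in [0, 1]: "success" when the
    # active joints alone can make the configuration sum exceed 2.2,
    # which requires at least three free joints
    for _ in range(300):
        q = [random.random() if i in active else 0.0 for i in range(6)]
        if sum(q) > 2.2:
            return q
    return None

solution, k_used = plan_with_expanding_subspace(6, toy_reach)
```

When a low-dimensional subspace already contains a solution, the sampler never pays the cost of the full configuration space; here the search fails with two joints and succeeds once a third is released.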

    Robust localization and navigation with linear programming

    Linear programming is an established, well-understood optimization technique; the goal of this thesis is to show that it can still be used to advance the state of the art in two important building blocks of modern robotic systems, namely perception and control. In the context of perception, we study the effects of outliers on the solution of localization problems. In essence, this problem reduces to finding the coordinates of a set of nodes in a common reference frame starting from relative pairwise measurements, and it is at the core of many applications such as Structure from Motion (SfM), sensor networks, and Simultaneous Localization And Mapping (SLAM). In practical situations, the accuracy of the relative measurements is marred by noise and outliers (large-magnitude errors). In particular, outliers might introduce significant errors in the final result; hence, we face the problem of quantifying how much we should trust the solution returned by a given localization solver. In this work, we focus on the question of whether an L1-norm robust optimization formulation can recover a solution identical to the ground truth, under the scenario of translation-only measurements corrupted exclusively by outliers and no noise. In the context of control, we study the problem of robust path planning. Path planning deals with finding a path from an initial state to a goal state while avoiding collisions. We propose a novel approach for navigating in polygonal environments by synthesizing controllers that take as input relative displacement measurements with respect to a set of landmarks. Our algorithm is based on solving a sequence of robust min-max Linear Programming problems on the elements of a cell decomposition of the environment.
The optimization problems are formulated using linear Control Lyapunov Function (CLF) and Control Barrier Function (CBF) constraints to provide stability and safety guarantees, respectively. We integrate the CBF and CLF constraints with sampling-based path planning methods to drop the assumption of a polygonal environment, and we add an implementation that learns the constraints and estimates the controller when the environment is not fully known. We also introduce a method for controller synthesis from bearing-only measurements, so that monocular camera measurements can be used. We show through simulations that the resulting controllers are robust to significant deformations of the environment. These works provide a computationally simple approach to studying the robustness of the localization and navigation problems.
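The L1-norm robust localization formulation above becomes a linear program once a slack variable is introduced for each absolute residual. The following is a minimal 1-D sketch using `scipy.optimize.linprog`; the thesis treats the general translation-only case, and the node layout, edge set, and gauge choice (node 0 fixed at the origin) here are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def l1_localize(n, edges):
    """L1-norm localization of 1-D node positions from relative
    measurements (i, j, m) with m ~ t[j] - t[i]; node 0 is fixed at 0.

    LP form: minimize sum_e u_e  s.t.  -u_e <= t[j] - t[i] - m_e <= u_e.
    """
    m = len(edges)
    nv = (n - 1) + m                     # free positions + residual slacks
    c = np.concatenate([np.zeros(n - 1), np.ones(m)])
    A, b = [], []
    for e, (i, j, meas) in enumerate(edges):
        row = np.zeros(nv)
        if j > 0:
            row[j - 1] += 1.0
        if i > 0:
            row[i - 1] -= 1.0
        slack = np.zeros(nv)
        slack[n - 1 + e] = 1.0
        A.append(row - slack)            #  (t_j - t_i) - u_e <= m_e
        b.append(meas)
        A.append(-row - slack)           # -(t_j - t_i) - u_e <= -m_e
        b.append(-meas)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * (n - 1) + [(0, None)] * m)
    return np.concatenate([[0.0], res.x[:n - 1]])

# four collinear nodes, five consistent measurements, one gross outlier
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),
         (0, 2, 2.0), (1, 3, 2.0), (0, 3, 10.0)]   # last edge is the outlier
t_hat = l1_localize(4, edges)
```

Because the consistent measurements outvote the single outlier, the L1 solution recovers the exact positions [0, 1, 2, 3], whereas a least-squares fit would smear the outlier's error across all nodes.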

    SPATIO-TEMPORAL REGISTRATION IN AUGMENTED REALITY

    The overarching goal of Augmented Reality (AR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective, persistent illusion requires accurate registration between the real and the virtual objects, registration that is spatially and temporally coherent. However, visible misregistration can be caused by many inherent error sources, such as errors in calibration, tracking, and modeling, as well as system delay. This dissertation focuses on new methods that could be considered part of "the last mile" of spatio-temporal registration in AR: closed-loop spatial registration and low-latency temporal registration.
    1. For spatial registration, the primary insight is that calibration, tracking, and modeling are means to an end: the ultimate goal is registration. In this spirit, I present a novel pixel-wise closed-loop registration approach that can automatically minimize registration errors using a reference model comprised of the real scene model and the desired virtual augmentations. Registration errors are minimized both in global world space, via camera pose refinement, and in local screen space, via pixel-wise adjustments. This approach is presented in the context of Video See-Through AR (VST-AR) and projector-based Spatial AR (SAR), where registration results are measurable using a commodity color camera.
    2. For temporal registration, the primary insight is that the real-virtual relationships evolve throughout the tracking, rendering, scanout, and display steps, and registration can be improved by leveraging fine-grained processing and display mechanisms. In this spirit, I introduce a general end-to-end system pipeline with low latency and propose an algorithm for minimizing latency in displays (DLP DMD projectors in particular). This approach is presented in the context of Optical See-Through AR (OST-AR), where system delay is the most detrimental source of error.
    I also discuss future steps that may further improve spatio-temporal registration. In particular, I discuss possibilities for using custom virtual or physical-virtual fiducials for closed-loop registration in SAR. The custom fiducials can be designed to elicit desirable optical signals that directly indicate any error in the relative pose between the physical and projected virtual objects.
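A standard way to mitigate the system-delay problem described above is to extrapolate the tracked pose to the instant a frame actually reaches the display. This is only a generic constant-velocity predictor for a single scalar pose component, shown for intuition; the dissertation's fine-grained, display-level mechanisms are considerably more involved:

```python
def predict_pose(p_prev, t_prev, p_curr, t_curr, t_display):
    """Constant-velocity extrapolation of one tracked pose component
    to the moment the corresponding pixels are displayed."""
    v = (p_curr - p_prev) / (t_curr - t_prev)   # finite-difference velocity
    return p_curr + v * (t_display - t_curr)    # extrapolate over the delay

# head moved 1 unit over 10 ms of tracking; scanout happens 5 ms later
predicted = predict_pose(0.0, 0.000, 1.0, 0.010, 0.015)
```

The shorter the interval between the last tracker sample and scanout, the smaller the extrapolation error, which is one motivation for the low-latency pipeline the abstract describes.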