
    3D vessel reconstruction based on intra-operative intravascular ultrasound for robotic autonomous catheter navigation

    In recent years, robotic technology has improved the precision and accuracy of instrument navigation and helped reduce the complexity of minimally invasive surgery. Still, the inherently restricted access to the patient's anatomy severely complicates many procedures. Because of the limited view of the surgical scene, interventionists frequently depend on external technologies for visual guidance, usually employing ionizing radiation. In endovascular procedures, fluoroscopy is the common imaging modality used for visualization; it is based on X-rays and offers only a two-dimensional (2D) view of the surgical scene. A real-time, up-to-date understanding of the environment surrounding the surgical instruments within the vasculature, without depending on ionizing radiation, would not only be very helpful for interventionists but also paramount for the navigation of an intraluminal robot. The aim of this thesis is therefore to develop an algorithm capable of intra-operative, real-time three-dimensional (3D) vessel reconstruction. The algorithm is divided into two parts: reconstruction and merging. The first obtains the 3D reconstruction of one section of the vessel; the second combines the reconstructed vessel sections. A mesh of a real vessel is used to compute the fitting errors of the reconstructed vessel, which are very small.
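A minimal sketch of the kind of fitting-error computation the abstract describes: nearest-neighbor distances from reconstructed points to a reference vessel point set. This is a hypothetical illustration, not the thesis code; the function name, data, and scales are assumptions.

```python
import numpy as np

def fitting_errors(reconstructed, reference):
    """Nearest-neighbor distance from each reconstructed point
    to a reference vessel point set (both (N, 3) arrays)."""
    # Brute-force pairwise distances; a KD-tree would scale better.
    diffs = reconstructed[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1)

# Tiny example: a reconstructed cross-section ring offset from the reference.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ref = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
rec = ref + 0.01  # small uniform offset (arbitrary units)
err = fitting_errors(rec, ref)
```

In practice such per-point errors would be aggregated (mean, max) against the real vessel mesh to quantify reconstruction quality.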

    Autonomous robotic intracardiac catheter navigation using haptic vision

    While all minimally invasive procedures involve navigating from a small incision in the skin to the site of the intervention, it had not previously been demonstrated how this can be done autonomously. To show that autonomous navigation is possible, we investigated it in the hardest place to do it: inside the beating heart. We created a robotic catheter that can navigate through the blood-filled heart using wall-following algorithms inspired by positively thigmotactic animals. The catheter employs haptic vision, a hybrid sense using imaging for both touch-based surface identification and force sensing, to accomplish wall following inside the blood-filled heart. Through in vivo animal experiments, we demonstrate that the performance of an autonomously controlled robotic catheter rivals that of an experienced clinician. Autonomous navigation is a fundamental capability on which more sophisticated levels of autonomy can be built, e.g., to perform a procedure. Similar to the role of automation in fighter aircraft, such capabilities can free the clinician to focus on the most critical aspects of the procedure while providing precise and repeatable tool motions independent of operator experience and fatigue.
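The wall-following idea can be sketched as a simple regulation of the standoff distance to the wall. This is a deliberately simplified illustration under assumed units and gains, not the controller from the paper:

```python
def wall_follow_step(distance_to_wall, target=2.0, gain=0.5):
    """One step of a simplified wall-following rule: steer toward the
    wall when the standoff exceeds the target, away when too close.
    Gains and units (mm) are illustrative, not from the paper."""
    error = distance_to_wall - target
    return -gain * error  # corrective motion toward the desired standoff

# The standoff converges to the 2 mm target over repeated steps.
d = 5.0
for _ in range(20):
    d += wall_follow_step(d)
```

The real system closes this loop on haptic-vision estimates of wall distance and contact rather than a direct range measurement.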

    Control and Estimation Methods Towards Safe Robot-assisted Eye Surgery

    Vitreoretinal surgery is among the most delicate surgical tasks, in which physiological hand tremor may severely diminish surgeon performance and put the eye at high risk of injury. Unerring targeting accuracy is required to perform precise operations on micro-scale tissues. Tool-tip-to-tissue interaction forces are usually below human tactile perception, which may result in the exertion of excessive force on the retinal tissue, leading to irreversible damage. Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from robots relies heavily on accurate sensing and intelligent control of important surgical states and situations (e.g. instrument tip position measurements and control of interaction forces). This dissertation provides novel control and state estimation methods to improve safety during robot-assisted eye surgery. The integration of robotics into retinal microsurgery reduces the surgeon's perception of tool-to-tissue forces at the sclera. This blunting of human tactile sensory input, due to the inflexible inertia of the robot, is a potential iatrogenic risk during robotic eye surgery. To address this issue, a sensorized surgical instrument equipped with Fiber Bragg Grating (FBG) sensors, capable of measuring scleral forces and instrument insertion depth into the eye, is integrated into the Steady-Hand Eye Robot (SHER). An adaptive control scheme is then customized and implemented on the robot, intended to autonomously mitigate the risk of unsafe scleral forces and excessive instrument insertion. Various preliminary and multi-user clinician studies are then conducted to evaluate the effectiveness of the control method during mock retinal surgery procedures.
    In addition, due to the inherent flexibility and resulting deflection of eye surgical instruments, as well as the need for targeting accuracy, we have developed a method to enhance estimation of the deflected instrument tip position. Using an iterative method and microscope data, we develop a calibration- and registration-independent (RI) framework to provide online estimates of the instrument stiffness (least-squares and adaptive). These estimates are then combined with a state-space model for tip position evolution obtained from the forward kinematics (FWK) of the robot and FBG sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve instrument tip position estimation during robotic surgery. The entire framework is independent of camera-to-robot coordinate frame registration and is evaluated in various phantom experiments to demonstrate its effectiveness.
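The core of a Kalman-filter fusion step can be illustrated in one dimension: a model-based prediction of the tip position is combined with a sensor measurement, weighted by their variances. This is a generic textbook sketch, not the dissertation's filter; all numbers are assumptions.

```python
def kf_update(x_pred, P_pred, z, R):
    """Scalar Kalman update: fuse a model-based prediction (mean x_pred,
    variance P_pred) with a sensor measurement z of variance R."""
    K = P_pred / (P_pred + R)      # Kalman gain
    x = x_pred + K * (z - x_pred)  # fused state estimate
    P = (1.0 - K) * P_pred         # posterior variance (always smaller)
    return x, P

# Fuse a kinematics-based tip position with an FBG-derived measurement.
x, P = kf_update(x_pred=10.0, P_pred=4.0, z=10.8, R=1.0)
```

The fused estimate lands between the prediction and the measurement, closer to whichever source is more certain, with reduced posterior variance.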

    Hospital management


    Internet of robotic things : converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT presents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, together with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, along with comprehensive coverage of future challenges, developments and applications.

    Implementation of safe human robot collaboration for ultrasound guided radiation therapy

    This thesis shows that safe human-robot interaction and collaboration are possible for ultrasound (US) guided radiotherapy. With the chosen methodology, all components (US, optical room monitoring and robot) could be linked, integrated and realized in a realistic clinical workflow. US guided radiotherapy offers a complement and an alternative to existing image-guided therapy approaches. The real-time capability of US and its high soft-tissue contrast allow target structures to be tracked and radiation delivery to be modulated. However, ultrasound guided radiation therapy (USgRT) is not yet clinically established and is still under development, as reliable and safe methods of image acquisition are not yet available. In particular, loss of contact between the US probe and the patient surface poses a problem under patient movements such as breathing. For this purpose, a breathing and motion compensation (BaMC) was developed in this work, which, together with the safe control of a lightweight robot, represents a new development for USgRT. The developed BaMC can be used to control the US probe while it remains in contact with the patient. The conducted experiments confirmed that the developed methodology ensures steady contact with the patient surface and thus continuous image acquisition. In addition, the image position in space can be maintained accurately, in the submillimeter range. The BaMC integrates seamlessly into the developed clinical workflow. The graphical user interfaces developed for this purpose, as well as direct haptic control of the robot, provide an easy interaction option for the clinical user. The developed autonomous positioning of the transducer is a good example of the feasibility of the approach: with the help of the user interface, an acoustic plane can be defined and autonomously approached by the robot in a time-efficient and precise manner.
    The tests carried out show that this methodology is suitable for a wide range of transducer positions. Safety in a human-robot interaction task is essential and requires individually customized concepts. In this work, adequate monitoring mechanisms were found to ensure both patient and staff safety. Collision tests showed that the implemented detection measures work and that the robot moves into a safe parking position; the forces acting on the patient could thus be kept well below the limits required by the standard. This work has demonstrated the first important steps towards safe robot-assisted ultrasound imaging, which is applicable beyond USgRT. The developed interfaces provide the basis for further investigations in this field, especially in the area of image recognition, for example to determine the position of the target structure. With the safety of the developed system demonstrated, first studies in humans can now follow.
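Keeping the probe in contact at a bounded force can be sketched as a proportional admittance loop: the probe advances while the measured contact force is below a target and retracts when above. This is an illustrative toy model with assumed stiffness, gain, and force values, not the controller implemented in the thesis:

```python
def probe_velocity(measured_force, target_force=5.0, kp=0.002):
    """Proportional admittance rule: advance the probe while contact
    force is below target, retract when above. All values illustrative."""
    return kp * (target_force - measured_force)  # m/s along probe axis

# Simulate tissue contact as a linear spring and iterate the loop.
stiffness, depth, dt = 1000.0, 0.0, 0.01  # N/m, m, s
for _ in range(500):
    force = stiffness * max(depth, 0.0)
    depth += probe_velocity(force) * dt
```

The indentation depth settles where the spring force equals the target force, which is the behavior needed to keep the transducer coupled to a breathing patient without exceeding safe contact limits.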