163 research outputs found

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting with the technology needed to analyze the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of 3D reconstruction at a greater distance than traditional endoscopes allow. The problem of hand-eye calibration, which unites the vision system and the robot in a single reference frame, is then tackled, increasing the accuracy of the surgical work plan. The second part of the thesis addresses 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM.
Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
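A common pattern for the dynamic-feature handling described above is to look up each detected feature in a per-pixel semantic segmentation mask and discard features that fall on dynamic classes such as surgical instruments. The following is a minimal sketch of that idea; the function name, label IDs, and mask convention are illustrative assumptions, not the thesis implementation.

```python
# Sketch: reject features that fall on pixels labelled as dynamic objects
# (e.g., surgical tools) in a semantic segmentation mask.

import numpy as np

DYNAMIC_CLASSES = {1, 2}  # hypothetical label IDs for instruments

def filter_static_keypoints(keypoints, seg_mask):
    """Keep only keypoints lying on pixels labelled as static tissue.

    keypoints: list of (x, y) pixel coordinates of detected features
    seg_mask:  2D integer array of per-pixel class labels
    """
    static = []
    for x, y in keypoints:
        if seg_mask[int(y), int(x)] not in DYNAMIC_CLASSES:
            static.append((x, y))
    return static

# Example: a 4x4 mask whose right half is an instrument (label 1)
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 1
kps = [(0, 0), (3, 1), (1, 3)]
print(filter_static_keypoints(kps, mask))  # [(0, 0), (1, 3)]
```

Only the surviving static features would then be passed to the SLAM front end for tracking and mapping.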

    Approach for reducing the computational cost of environment classification systems for mobile robots

    This dissertation addresses the problem of environment changes in mobile robotics tasks. In particular, it focuses on using one-dimensional non-visual sensors in order to reduce computational cost. The work presents a new system for detecting and classifying the robot's environment based on data from a camera and from non-visual sensors. The non-visual sensors serve as detectors of an ongoing change of environment, which triggers classification of the environment using camera data. This can significantly reduce computational demands compared with a situation where every frame, or every n-th frame, of the image stream is processed. The system is evaluated on the case of a change between indoor and outdoor environments. The contributions of this work are the following: (1) a proposed system for detecting and classifying the environment of a mobile robot; (2) a state-of-the-art analysis in the field of Simultaneous Localization and Mapping (SLAM) to identify open issues that need to be addressed; (3) an analysis of non-visual sensors with respect to their suitability for the change-detection problem; (4) an analysis of existing methods for detecting changes in a 2D signal, and the introduction of two simple approaches to this problem; (5) a state-of-the-art analysis of environment classification, with a focus on classifying indoor versus outdoor environments; (6) an experiment comparing the methods studied in the previous point; to the best of my knowledge, this is the most extensive comparison of these methods on a single dataset, and classifiers based on neural networks, which achieve better results than classical approaches, are also included in the experiment; (7) creation of a dataset for testing the designed system on an assembled six-wheel mobile robot; to the best of my knowledge, there has been no dataset that, in addition to the data needed to solve the SLAM task, adds data that allow the environment to be detected and classified using non-visual sensors; (8) implementation of the proposed system as an open-source package for the Robot Operating System, published on GitHub; (9) implementation of a library for computing the Centrist global descriptor in C++ and Python, also available as open source on GitHub.
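The triggering idea described above, where a cheap one-dimensional non-visual signal gates the expensive camera-based classifier, can be sketched as follows. The window size, threshold, and function names are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch: run the costly image classifier only when a 1-D non-visual
# signal (e.g., ambient light level) shows a significant change.

def detect_change(signal, window=5, threshold=3.0):
    """Flag a change when the latest sample deviates strongly
    from the mean of the preceding window of samples."""
    if len(signal) <= window:
        return False
    recent = signal[-window - 1:-1]          # the window before the latest sample
    mean = sum(recent) / window
    return abs(signal[-1] - mean) > threshold

def classify_if_triggered(signal, frame, classify_frame):
    """Invoke the expensive classifier only on a detected change."""
    if detect_change(signal):
        return classify_frame(frame)         # e.g., indoor/outdoor classifier
    return None                              # skip the frame, save compute

# Example: the light level jumps when the robot drives outdoors
readings = [10, 11, 10, 9, 10, 80]
print(detect_change(readings))  # True
```

Compared with classifying every frame (or every n-th frame), the classifier here runs only around detected transitions, which is the source of the computational savings the abstract claims.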

    A Multi-Sensor Fusion-Based Underwater Slam System

    This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies for autonomous robots to navigate unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure, e.g., bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, an underwater robot is an excellent fit to build the map of the environment while simultaneously localizing itself in it. The main contribution of this dissertation is the design and development of a real-time, robust SLAM algorithm for small- and large-scale underwater environments. We present SVIn, a novel tightly-coupled keyframe-based non-linear optimization framework fusing Sonar, Visual, Inertial, and water-depth information with robust initialization, loop-closing, and relocalization capabilities. Introducing acoustic range information to aid the visual data improves reconstruction and localization. The availability of depth information from water pressure enables a robust initialization, refines the scale factor, and helps reduce the drift of the tightly-coupled integration.
The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them the ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and numerous real-world scenarios. It has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle (AUV) Aqua2 in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well-defined edges between well-lit areas and darkness. In particular, low lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours. The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point cloud from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, using the denser 3D point cloud to detect obstacles and achieve higher-resolution reconstructions.
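One idea in the abstract above, using water pressure to obtain an absolute depth measurement that fixes the metric scale of an unscaled visual estimate, can be sketched with the standard hydrostatic relation. The constants and function names are illustrative assumptions, not the SVIn implementation.

```python
# Sketch: depth from gauge pressure via the hydrostatic relation
# p = rho * g * h, and a scale factor from pressure- vs. vision-derived
# vertical displacement.

RHO = 1025.0   # sea-water density, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2

def depth_from_pressure(p_gauge_pa):
    """Depth below the surface (m) from gauge pressure (Pa)."""
    return p_gauge_pa / (RHO * G)

def scale_from_depth(visual_dz, pressure_dz):
    """Metric scale factor: ratio of the pressure-derived depth change
    to the (unscaled) vertical translation from visual odometry."""
    return pressure_dz / visual_dz

# Example: ~10055 Pa of gauge pressure corresponds to about 1 m of sea water
h = depth_from_pressure(10055.25)
```

In a tightly-coupled system this constraint would enter the optimization as a residual on the vertical position rather than a one-shot ratio, but the sketch shows why pressure resolves the scale ambiguity of vision alone.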

    UVC Dose Mapping by Mobile Robots

    Hospital-acquired infections are a persistent and growing problem, and their prevention involves disinfecting areas and surfaces. The need for effective disinfection methods has increased greatly as a consequence of the Covid-19 pandemic. One effective method is UVC exposure, because UVC radiation is absorbed by nucleic acids and is therefore able to inactivate microorganisms. This method also brings many advantages compared with traditional disinfection methods. UVC disinfection can be performed by fixed equipment that has to be moved from place to place to disinfect an entire area, or by autonomous mobile equipment that requires minimal human intervention to completely disinfect an environment. This dissertation focuses on mobile robots that disinfect an environment using UVC radiation. These mobile robots are able to move autonomously while mapping the surrounding environment and simultaneously disinfecting it. The robots keep track of the dose applied to each area of the environment in order to build a dose map and differentiate areas that are completely disinfected from those that are not. This solution has the advantage that the robot performs UVC disinfection without needing to stop in each area or having prior knowledge of the environment. The validation of this solution was performed using rviz, a Robot Operating System (ROS) tool, and the LiDAR Camera L515. The camera was used to capture the information necessary for creating the map of the environment, and rviz was used to visualize the dose map.
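The dose-map bookkeeping described above can be sketched as follows, assuming a simple point-source inverse-square model for the lamp; the grid layout, lamp power, and target dose are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch: accumulate UVC dose per grid cell as irradiance (inverse-square
# falloff from the lamp) times exposure time, then mark cells that have
# reached a target dose as disinfected.

import math

def accumulate_dose(dose_map, lamp_xy, lamp_power, dt, cell_size=0.1):
    """Add dt seconds of exposure from a point source at lamp_xy (meters).

    dose_map:   dict mapping (i, j) grid cells to accumulated dose (J/m^2)
    lamp_power: radiant power of the lamp, W (point-source model)
    """
    for (i, j) in dose_map:
        cx, cy = i * cell_size, j * cell_size
        r2 = (cx - lamp_xy[0]) ** 2 + (cy - lamp_xy[1]) ** 2
        # clamp r2 to avoid a singularity at the lamp's own cell
        irradiance = lamp_power / (4 * math.pi * max(r2, cell_size ** 2))
        dose_map[(i, j)] += irradiance * dt

def disinfected(dose_map, target=100.0):
    """Return the cells whose accumulated dose reached the target (J/m^2)."""
    return {cell for cell, dose in dose_map.items() if dose >= target}

# Example: two cells, with the lamp sitting over the first one
dmap = {(0, 0): 0.0, (10, 0): 0.0}
accumulate_dose(dmap, lamp_xy=(0.0, 0.0), lamp_power=5.0, dt=30.0)
```

As the robot moves, repeating the accumulation step at each pose builds up the dose map, and the `disinfected` set distinguishes fully treated areas from those still needing exposure.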

    15th SC@RUG 2018 proceedings 2017-2018

