484 research outputs found

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, initially examining the available technology and what is needed to analyze the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference frame and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM.
Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
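The dynamic-feature rejection described above can be sketched in a few lines: given SLAM keypoints and a per-pixel segmentation mask of moving structures, only keypoints landing on static tissue are kept for tracking and mapping. This is an illustrative sketch, not the thesis's implementation; the function name and array conventions are assumptions.

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that fall outside dynamically segmented regions.

    keypoints    : (N, 2) array of (row, col) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   network labels the pixel as a moving structure
    """
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    is_dynamic = dynamic_mask[rows, cols]   # look up each keypoint in the mask
    return keypoints[~is_dynamic]           # retain the static features only
```

In an ORB-SLAM-style pipeline, such a filter would run after feature extraction and before matching, so that moving instruments or tissue do not corrupt the map.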

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only as a visualization device improving traditional workflows. Consequently, the technology has gained only a minimum of the maturity it requires to redefine new procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems for various tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays when entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
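The "principles governing image formation" invoked above reduce, in the simplest case, to projective geometry: a 3D point is moved into the imaging device's frame and projected through an intrinsic matrix. A minimal pinhole-style sketch follows; the matrices and numbers are illustrative assumptions, not the dissertation's calibration values.

```python
import numpy as np

def project_points(K, T_cam_world, pts_world):
    """Project 3D world points through a pinhole-style imaging model.

    K           : (3, 3) intrinsic matrix
    T_cam_world : (4, 4) rigid transform from the world frame to the camera frame
    pts_world   : (N, 3) points expressed in the world frame
    Returns an (N, 2) array of pixel coordinates.
    """
    pts_h = np.hstack([pts_world, np.ones((len(pts_world), 1))])  # homogeneous coords
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]                    # into camera frame
    proj = (K @ pts_cam.T).T                                      # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]                             # perspective divide
```

Co-registering a head-mounted display with an X-ray device amounts to expressing both frustums in a shared frame so that points can be moved between them with such transforms.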

    NASA Tech Briefs, November 1993

    Topics covered: Advanced Manufacturing; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences

    Intelligent collision avoidance system for industrial manipulators

    Dual-degree Master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná. The new paradigm of Industry 4.0 demands collaboration between robots and humans. Humans and robots should be able to help and collaborate with each other without any additional safety barriers, unlike conventional manipulators. For this, the robot must be able to perceive its environment and plan (or re-plan) its motion on the fly, avoiding obstacles and people. This work proposes a system that captures the environment with a Kinect sensor, identifies free space in the resulting point cloud, and executes manipulator trajectories through that free space. The simulated system performs path planning for a UR5 manipulator in pick-and-place tasks while avoiding the objects around it, based on the Kinect point cloud. The results obtained in simulation made it possible to apply the system in real situations. The basic structure of the system is ROS, which supports robotic applications with a powerful set of libraries and tools; MoveIt! and Rviz are examples of these tools, and with them it was possible to carry out simulations and obtain collision-free planning results. Results are reported through log files indicating whether the UR5's motion plan was successful and how many manipulator poses were needed to create the final movement. This last step validates the proposed system using the RRT and PRM algorithms, which were chosen because they are among the most widely used in robot path planning.
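The RRT planner named in the abstract can be illustrated with a toy 2D sketch. This is not the MoveIt!/ROS implementation used in the work: the workspace bounds, step size, goal bias, and circular obstacles are all illustrative assumptions.

```python
import math
import random

def rrt(start, goal, obstacles, bounds=(0.0, 10.0), step=0.5,
        iters=5000, goal_tol=0.5, seed=0):
    """Toy 2D RRT: grow a tree from `start` until a node lands within
    `goal_tol` of `goal`, rejecting nodes inside circular obstacles.

    obstacles : list of (cx, cy, radius) circles
    Returns the path as a list of (x, y) nodes, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(iters):
        # 5% goal bias: occasionally steer straight toward the goal
        sample = goal if rng.random() < 0.05 else (
            rng.uniform(*bounds), rng.uniform(*bounds))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if collides(new):
            continue                       # discard samples inside obstacles
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:           # walk parents back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the actual system, MoveIt! delegates this sampling-based search to OMPL in the manipulator's joint space, with collisions checked against the Kinect point cloud rather than hand-coded circles.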

    Medical Robotics

    The first generation of surgical robots are already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery time. The aim of this book is to provide an overview of the state of the art, and to present new ideas, original results, and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing field. The book provides critical analysis of clinical trials and assessment of the benefits and risks of the application of these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the involved subjects, whether they are currently "medical roboticists" or not.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer-assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. Benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes, through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, through emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty, and three pre–clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user-interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers, calculating a spatial transformation between surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted-cutting. 
Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
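The registration protocol above boils down to chaining rigid transforms so that a point planned in the surgical (optically tracked) frame can be rendered in the holographic frame. A minimal sketch follows; the frame names and numeric poses are hypothetical, not the calibration results reported in the thesis.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration outputs: the optical tracker's pose expressed in the
# holographic (headset) frame, and the surgical frame as measured by the tracker.
# Numbers are illustrative, in metres.
T_holo_tracker = make_transform(np.eye(3), [0.0, 0.0, 1.0])
T_tracker_surg = make_transform(np.eye(3), [0.1, -0.2, 0.5])

# Chaining the two maps surgical coordinates directly into holographic ones.
T_holo_surg = T_holo_tracker @ T_tracker_surg

p_surg = np.array([0.0, 0.0, 0.0, 1.0])   # a planned landmark on the patient
p_holo = T_holo_surg @ p_surg             # the same landmark, ready to render
```

With real rotations from the infrared calibration step, the same composition places planning, registration, and cutting guidance holograms in situ on the anatomy.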

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.

    Smart Technologies for Precision Assembly

    This open access book constitutes the refereed post-conference proceedings of the 9th IFIP WG 5.5 International Precision Assembly Seminar, IPAS 2020, held virtually in December 2020. The 16 revised full papers and 10 revised short papers presented together with 1 keynote paper were carefully reviewed and selected from numerous submissions. The papers address topics such as assembly design and planning; assembly operations; assembly cells and systems; human centred assembly; and assistance methods in assembly

    NASA Tech Briefs, December 1989

    Topics include: Electronic Components and Circuits, Electronic Systems, Physical Sciences, Materials, Computer Programs, Mechanics, Machinery, Fabrication Technology, Mathematics and Information Sciences, and Life Sciences