
    Autonomous model building using vision and manipulation

    It is often the case that robotic systems require models in order to successfully control themselves and to interact with the world. Models take many forms, including kinematic models to plan motions, dynamics models to understand the interaction of forces, and models of 3D geometry to check for collisions, to name but a few. Traditionally, models are provided to the robotic system by the designers who build the system. However, for long-term autonomy it becomes important for the robot to be able to build and maintain models of itself and of objects it might encounter. In this thesis, the argument for enabling robotic systems to autonomously build models is advanced and explored. The main contribution of this research is to show how a layered approach can be taken to building models: a robot, starting with a limited amount of information, can autonomously build a number of models, including a kinematic model, which describes the robot’s body and allows it to plan and perform future movements. Key to this incremental, autonomous approach is the use of exploratory actions: actions that the robot can perform in order to gain more information, either about itself or about an object with which it is interacting. A method is then presented whereby a robot, after being powered on, can home its joints using vision alone, without traditional methods such as absolute encoders or limit switches. The ability to interact with objects in order to extract information is one of the main advantages that a robotic system has over a purely passive system when attempting to learn about or build models of objects. In light of this, the next contribution of this research is to look beyond the robot’s body and to present methods with which a robot can autonomously build models of objects in the world around it. The first class of objects examined is flat-pack cardboard boxes, a class of articulated objects with a number of interesting properties. It is shown how exploratory actions can be used to build a model of a flat-pack cardboard box and to locate any hinges the box may have. Specifically, it is shown how, when interacting with an object, a robot can combine haptic feedback from force sensors with visual feedback from a camera to obtain more information about an object than would be possible using a single sensor modality. The final contribution of this research is to present a series of exploratory actions for a robotic text reading system that allow text to be found and read from an object. The text reading system highlights how models of objects can take many forms, from a representation of their physical extents to the text that is written on them.
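    A minimal sketch of the kind of two-modality fusion described above: a visual and a haptic estimate of a hinge angle are combined by inverse-variance weighting, one simple way two sensors can yield a better estimate than either alone. The function, names, and noise figures here are illustrative assumptions, not taken from the thesis.

    def fuse_estimates(theta_vision, var_vision, theta_haptic, var_haptic):
        """Inverse-variance weighted fusion of two scalar angle estimates."""
        w_v = 1.0 / var_vision          # weight of the visual estimate
        w_h = 1.0 / var_haptic          # weight of the haptic estimate
        theta = (w_v * theta_vision + w_h * theta_haptic) / (w_v + w_h)
        var = 1.0 / (w_v + w_h)         # fused estimate is never less certain
        return theta, var

    # Example: vision sees the box flap at 0.52 rad (noisy), force sensing in
    # contact implies 0.48 rad (less noisy); the fused estimate leans haptic.
    theta, var = fuse_estimates(0.52, 0.04, 0.48, 0.01)
    print(f"fused hinge angle: {theta:.3f} rad (variance {var:.4f})")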

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
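    As an illustration of the gaze-contingent interaction pattern described above, the sketch below implements dwell-time gaze selection: a screen region (e.g. an instrument icon) is selected once the tracked gaze point has stayed inside it for a threshold duration. The tracker interface, region layout, and dwell threshold are hypothetical assumptions, not the thesis's actual system.

    DWELL_SECONDS = 1.0  # fixation time required to confirm a selection

    def select_by_dwell(gaze_stream, regions):
        """Yield a region name each time gaze dwells in it long enough.

        gaze_stream: iterable of (timestamp_s, x, y) eye tracker samples.
        regions: dict of name -> (xmin, ymin, xmax, ymax) screen rectangles.
        """
        current, dwell_start = None, None
        for t, x, y in gaze_stream:
            hit = next((name for name, (x0, y0, x1, y1) in regions.items()
                        if x0 <= x <= x1 and y0 <= y <= y1), None)
            if hit != current:
                current, dwell_start = hit, t      # gaze moved to a new region
            elif hit is not None and t - dwell_start >= DWELL_SECONDS:
                yield hit                           # selection confirmed
                current, dwell_start = None, None   # reset until gaze moves

    # Example: 60 Hz samples fixating the "scalpel" region for 1.5 seconds.
    regions = {"scalpel": (0, 0, 100, 100), "forceps": (200, 0, 300, 100)}
    samples = ((i / 60.0, 50, 50) for i in range(90))
    print(list(select_by_dwell(samples, regions)))  # -> ['scalpel']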

    Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance

    Image guided therapy procedures under MRI guidance have been a focused research area over the past decade. Over this period, various MRI guided robotic devices have been developed and used clinically for percutaneous interventions, such as prostate biopsy, brachytherapy, and tissue ablation. Though MRI provides better soft tissue contrast than Computed Tomography and Ultrasound, it poses various challenges, such as constrained space, less ergonomic patient access, and limited material choices due to its high magnetic field. Even with advancements in MRI compatible actuation methods and the robotic devices that use them, most MRI guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI guided robotic system for prostate biopsy is presented, comprising an MRI compatible 4-DOF robotic manipulator, a robot controller, and a control application with a Clinical User Interface (CUI) and surgical planning applications (3DSlicer and RadVision). This system utilizes intraoperative images, acquired after each full or partial needle insertion, for needle tip localization. The presented system was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. The successful translation of such a system utilizing intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI guided percutaneous interventions. Robot assisted, closed-loop intervention could help in accurate positioning and localization of the therapy delivery instrument, improve physician and patient comfort, and allow real-time therapy monitoring. Utilizing real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery. Two applications validating the presented architecture, closed-loop needle steering and MRI guided brain tumor ablation, are demonstrated under real-time MRI guidance.
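    To make the closed-loop principle concrete, the sketch below shows one schematic control cycle: the needle tip is localized in a fresh intraoperative image, the residual error to the target is computed, and a bounded correction is commanded before the next image is acquired. The function, gain, and step limit are illustrative assumptions, not the presented system's interface.

    import numpy as np

    def insertion_step(tip_mm, target_mm, gain=0.8, max_step_mm=5.0):
        """Return a bounded correction vector toward the target (mm)."""
        error = np.asarray(target_mm) - np.asarray(tip_mm)
        step = gain * error                 # proportional correction
        norm = np.linalg.norm(step)
        if norm > max_step_mm:              # cap per-cycle motion for safety
            step *= max_step_mm / norm
        return step

    # One cycle: tip localized in the latest image vs. the planned target,
    # both in scanner coordinates.
    step = insertion_step(tip_mm=[10.0, -42.5, 3.0], target_mm=[12.0, -40.0, 0.0])
    print("commanded correction (mm):", np.round(step, 2))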

    Teleoperation of MRI-Compatible Robots with Hybrid Actuation and Haptic Feedback

    Image guided surgery (IGS), which has been developing rapidly in recent years, benefits significantly from the superior accuracy of robots and from magnetic resonance imaging (MRI), an excellent soft tissue imaging modality. Teleoperation is especially desired in the MRI setting because of the highly constrained space inside the closed-bore MRI and the lack of haptic feedback in fully autonomous robotic systems. It also keeps the human in the loop, which significantly enhances safety. This dissertation describes the development of teleoperation approaches and their implementation on an example system for MRI, with details of the key components. The dissertation first describes the general teleoperation architecture with modular software and hardware components. The MRI-compatible robot controller, the driving technology, and the robot navigation and control software are introduced. As a crucial step in determining the robot location inside the MRI, two methods of registration and tracking are discussed. The first method utilizes the existing Z-shaped fiducial frame design, but with a newly developed multi-image registration method that achieves higher accuracy with a smaller fiducial frame. The second method is a new fiducial design with a cylindrical frame, which is especially suitable for registration and tracking of needles. Alongside it, a single-image based algorithm is developed that is not only more accurate but also faster. In addition, a performance-enhanced fiducial frame is studied by integrating self-resonant coils. A surgical master-slave teleoperation system for percutaneous interventional procedures under continuous MRI guidance is then presented. The slave robot is a piezoelectric-actuated needle insertion robot with an integrated fiber optic force sensor. The master robot is a pneumatically driven haptic device which not only controls the position of the slave robot, but also renders the force associated with needle placement interventions to the surgeon. The mechanical design, kinematics, force sensing, and feedback technologies of both the master and slave robots are discussed. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system, and MRI compatibility is evaluated extensively. Teleoperated needle steering is also demonstrated under live MR imaging. Finally, a control system for a clinical grade MRI-compatible parallel 4-DOF surgical manipulator for minimally invasive in-bore prostate percutaneous interventions through the patient’s perineum is discussed. The proposed manipulator takes advantage of four sliders actuated by piezoelectric motors with incremental rotary encoders, which are compatible with the MRI environment. Two generations of optical limit switches are designed to provide better safety features for real clinical use, and the performance of both generations is tested. The MRI guided accuracy and MRI compatibility of the whole robotic system are also evaluated. Two clinical prostate biopsy cases have been conducted with this assistive robot.
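    A minimal sketch of the position-forward / force-back exchange at the core of such a master-slave loop: the master's displacement is scaled into a slave position command, and the slave's measured needle force is scaled back to drive the haptic device. The scaling factors and interfaces are illustrative assumptions, not the dissertation's implementation.

    POSITION_SCALE = 0.5   # master mm -> slave mm (motion scaled for precision)
    FORCE_SCALE = 1.0      # slave N -> master N (force reflected to surgeon)

    def teleop_cycle(master_pos_mm, slave_force_n):
        """One control cycle: return (slave command, haptic force command)."""
        slave_cmd_mm = POSITION_SCALE * master_pos_mm
        haptic_cmd_n = FORCE_SCALE * slave_force_n
        return slave_cmd_mm, haptic_cmd_n

    # Example: the surgeon advances the master 4 mm while the fiber optic
    # sensor on the slave reads 1.2 N of needle resistance.
    print(teleop_cycle(4.0, 1.2))   # -> (2.0, 1.2)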

    A Unified Task Priority Control Framework Design for Autonomous Underwater Vehicles

    In this thesis, we investigate the problem of bringing various behaviours of Autonomous Underwater Vehicles (AUVs) under a common control framework. To this end, we propose a unified guidance and control framework for AUVs based on the task priority control approach. It incorporates various behaviours such as path following, terrain following, and obstacle avoidance, as well as homing and docking to stationary and moving docking stations. The integration of homing and docking manoeuvres into the task priority framework is thus a novel contribution of this thesis. This integration makes it possible, for example, to execute homing manoeuvres close to an uneven seafloor or to obstacles, ensuring the safety of the AUV by giving the highest priority to the safety tasks. Furthermore, the proposed approach tackles a wide range of scenarios without ad hoc solutions. Indeed, it is well suited both for the emerging trend of resident AUVs, which stay underwater for long periods inside garage stations, exiting to perform inspection and maintenance missions and homing back to them, and for AUVs that are required to dock to moving stations such as surface vehicles or towed docking stations. The proposed techniques are further studied in a simulation setting, taking into account the rich set of aforementioned scenarios.
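    The core mechanism of task priority control can be sketched briefly: a lower-priority task's command is projected into the null space of the higher-priority task's Jacobian, so path following, for instance, can never disturb a safety task. The sketch below uses toy matrices, not an AUV model from the thesis.

    import numpy as np

    def task_priority(J1, xdot1, J2, xdot2):
        """Two-level task priority inverse kinematics (pseudoinverse based)."""
        J1_pinv = np.linalg.pinv(J1)
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of task 1
        qdot = J1_pinv @ xdot1                    # primary (safety) task first
        qdot += N1 @ np.linalg.pinv(J2 @ N1) @ (xdot2 - J2 @ qdot)  # secondary
        return qdot

    # Toy example on a 3-DOF vehicle: the primary task depends only on q3
    # (e.g. keep altitude); the secondary uses q1 and q2 (e.g. follow a path).
    J1 = np.array([[0.0, 0.0, 1.0]])
    J2 = np.array([[1.0, 1.0, 0.0]])
    qdot = task_priority(J1, np.array([0.1]), J2, np.array([0.2]))
    print(np.round(qdot, 3))   # both task velocities are achieved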