
    Precise localization for aerial inspection using augmented reality markers

    This chapter is devoted to explaining a method for precise localization using augmented reality markers. The method achieves a position precision of less than 5 mm at a distance of 0.7 m, using a visual marker of 17 mm × 17 mm, and it can be used by a controller while the aerial robot performs a manipulation task. The localization method is based on optimizing the alignment of deformable contours from textureless images, working from the raw vertices of the observed contour. The algorithm optimizes the alignment by minimizing the XOR area computed by means of computer graphics clipping techniques. The method can run at 25 frames per second.
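    As a rough illustration of the XOR-area alignment idea (not the chapter's implementation, which works on deformable contours with graphics clipping), the following sketch aligns an observed quadrilateral to a square marker model by minimizing the symmetric-difference area; the shapely/scipy usage and the observed vertices are assumptions made for the example.

```python
# Sketch: align an observed quadrilateral contour to a square marker model
# by minimizing the XOR (symmetric-difference) area of the two polygons.
# Illustrative only: the chapter's method uses graphics clipping, not shapely.
import numpy as np
from shapely.geometry import Polygon
from scipy.optimize import minimize

MARKER_MM = 17.0  # 17 mm x 17 mm visual marker (from the abstract)
model = Polygon([(0, 0), (MARKER_MM, 0), (MARKER_MM, MARKER_MM), (0, MARKER_MM)])

def xor_area(params, observed_pts):
    """XOR area between the model and the transformed observed contour."""
    tx, ty, theta, scale = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = scale * (observed_pts @ R.T) + np.array([tx, ty])
    observed = Polygon(pts)
    if not observed.is_valid:
        return 1e9  # penalize degenerate polygons
    return model.symmetric_difference(observed).area

# Hypothetical raw corner estimates of the observed contour (mm).
observed_pts = np.array([(1.0, 0.5), (17.5, 0.2), (18.0, 17.1), (0.8, 17.6)])

res = minimize(xor_area, x0=[0, 0, 0, 1], args=(observed_pts,), method="Nelder-Mead")
print("alignment parameters (tx, ty, theta, scale):", res.x)
print("residual XOR area:", res.fun)
```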

    Passive Resonant Coil Based Fast Registration And Tracking System For Real-Time Mri-Guided Minimally Invasive Surgery

    This thesis presents a fast, single-slice-based stereotactic registration and tracking technique, along with a corresponding modular system, for guiding a robotic mechanism or interventional instrument performing needle-based interventions under live MRI guidance. The system provides full 6-degree-of-freedom (DOF) tracking in stereotactic interventional surgery based upon a single, rapidly acquired cross-sectional image. The whole system is built on a modular data-transmission software framework and mechanical structure, so that it supports remote supervision and manipulation between a 3D Matlab tracking user interface (UI) and an existing MRI robot controller using the OpenIGTLink network communication protocol. It provides better closed-loop control by implementing a feedback output interface to the MRI-guided robot. A new compact fiducial frame design is presented, and the fiducial is wrapped with a passive resonant coil. The coil resonates at the Larmor frequency for 3T MRI to enhance signal strength and enable rapid imaging. The fiducial can be attached near the distal end of the robot, coaxially with the needle, so as to visualize target tissue and track the surgical tool synchronously. The MRI-compatible fiducial frame design, robust tracking algorithm and modular interface allow this tracking system to be used conveniently on different robots or devices and in MRI bores of different sizes. Several iterations of the tracking fiducial and passive resonant coils were constructed and evaluated in a Philips Achieva 3T MRI scanner. To assess the accuracy and robustness of the tracking algorithm, 25 groups of images with different poses were successively scanned along a specific sequence in an MRI experiment. The translational RMS error along depth is 0.271 mm with a standard deviation of 0.277 mm over a total of 100 samples. The overall angular RMS error is less than 0.426 degrees with a standard deviation of 0.526 degrees over a total of 150 samples. The passive resonant coils were shown to significantly increase signal intensity in the fiducial relative to the surroundings and provide for rapid imaging with low flip angles.
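    The thesis' single-slice algorithm is not reproduced here; as a hedged illustration of the point-based rigid registration that such fiducial tracking reduces to, the sketch below fits a rotation and translation between a known fiducial geometry and detected image points with an SVD-based (Kabsch) least-squares fit. The fiducial coordinates and noise model are invented for the example.

```python
# Sketch: recover a rigid transform (rotation + translation) mapping the known
# 3D geometry of a fiducial frame onto points detected in an MR image, using
# the SVD-based (Kabsch) least-squares fit. Point sets are made up.
import numpy as np

def rigid_fit(model_pts, detected_pts):
    """Least-squares R, t such that detected ~= R @ model + t."""
    mc, dc = model_pts.mean(axis=0), detected_pts.mean(axis=0)
    H = (model_pts - mc).T @ (detected_pts - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ mc
    return R, t

# Hypothetical fiducial geometry (mm) and noisy detections in scanner coordinates.
model = np.array([[0, 0, 0], [30, 0, 0], [30, 30, 0], [0, 30, 0], [15, 15, 10]], float)
rng = np.random.default_rng(0)
true_t = np.array([5.0, -2.0, 40.0])
detected = model + true_t + rng.normal(scale=0.1, size=model.shape)

R, t = rigid_fit(model, detected)
rms = np.sqrt(np.mean(np.sum((model @ R.T + t - detected) ** 2, axis=1)))
print("estimated translation (mm):", t)
print("RMS registration residual (mm):", rms)
```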

    Design and control of 3-DOF needle positioner for MRI-guided laser ablation of liver tumours

    This article presents the design and control of a pneumatic needle positioner for laser ablation of liver tumours under guidance by magnetic resonance imaging (MRI). The prototype was developed to provide accurate point-to-point remote positioning of a needle guide inside an MR scanner, with the aim of evaluating the potential advantages over the manual procedure. In order to minimise alterations to the MR environment, the system employs plastic pneumatic actuators and 9 m long supply lines connecting to the control hardware located outside the magnet room. An improved sliding mode control (SMC) scheme was designed for the position control of the device. Wireless micro-coil fiducials are used for automatic registration in the reference frame of the MR scanner. The MRI compatibility and the accuracy of the prototype are demonstrated with experiments in the MR scanner.
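    The article's improved SMC scheme is not reproduced here; the following is a generic, hedged sketch of a boundary-layer sliding mode position controller running on a toy second-order plant, with the plant model and all gains invented for illustration.

```python
# Sketch: boundary-layer sliding mode control of a toy second-order plant,
# illustrating the kind of SMC position loop described in the article.
# Plant parameters and gains are invented, not the real pneumatic actuator.
import numpy as np

dt, T = 0.001, 2.0
m, b = 0.5, 2.0                 # toy mass (kg) and damping (Ns/m)
lam, K, phi = 20.0, 15.0, 0.02  # surface slope, switching gain, boundary layer

x, v = 0.0, 0.0                 # position (m), velocity (m/s)
x_ref = 0.05                    # 50 mm step target
for _ in range(int(T / dt)):
    e, e_dot = x_ref - x, -v
    s = e_dot + lam * e                      # sliding surface
    u = K * np.clip(s / phi, -1.0, 1.0)      # saturated switching term reduces chattering
    a = (u - b * v) / m                      # toy plant dynamics
    v += a * dt
    x += v * dt
print(f"final position: {x * 1000:.2f} mm (target {x_ref * 1000:.0f} mm)")
```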

    Teleoperation of MRI-Compatible Robots with Hybrid Actuation and Haptic Feedback

    Image-guided surgery (IGS), which has been developing rapidly, benefits significantly from the superior accuracy of robots and from magnetic resonance imaging (MRI), an excellent soft-tissue imaging modality. Teleoperation is especially desirable in MRI because of the highly constrained space inside the closed bore and the lack of haptic feedback in fully autonomous robotic systems. It also keeps the human in the loop, which significantly enhances safety. This dissertation describes the development of teleoperation approaches and their implementation on an example MRI-guided system, with details of the key components. It first describes the general teleoperation architecture with modular software and hardware components. The MRI-compatible robot controller, the driving technology, and the robot navigation and control software are introduced. As a crucial step in determining the robot's location inside the MRI scanner, two methods of registration and tracking are discussed. The first utilizes the existing Z-shaped fiducial frame design but with a newly developed multi-image registration method that achieves higher accuracy with a smaller fiducial frame. The second is a new fiducial design with a cylindrical frame that is especially suitable for needle registration and tracking. Alongside it, a single-image-based algorithm is developed that not only reaches higher accuracy but also runs faster. In addition, a performance-enhanced fiducial frame is studied by integrating self-resonant coils. A surgical master-slave teleoperation system for percutaneous interventional procedures under continuous MRI guidance is presented. The slave robot is a piezoelectrically actuated needle-insertion robot with an integrated fiber-optic force sensor. The master robot is a pneumatically driven haptic device that not only controls the position of the slave robot but also renders the force associated with needle placement to the surgeon. The mechanical design, kinematics, force sensing and feedback technologies of both the master and slave robots are discussed. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system. MRI compatibility is evaluated extensively, and teleoperated needle steering is demonstrated under live MR imaging. Finally, the control system of a clinical-grade, MRI-compatible, parallel 4-DOF surgical manipulator for minimally invasive in-bore percutaneous prostate interventions through the patient's perineum is discussed. The proposed manipulator uses four sliders actuated by piezoelectric motors with incremental rotary encoders, which are compatible with the MRI environment. Two generations of optical limit switches are designed to provide better safety for real clinical use, and the performance of both generations is tested. The MRI-guided accuracy and MRI compatibility of the whole robotic system are also evaluated. Two clinical prostate biopsy cases have been conducted with this assistive robot.
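    As a hedged, one-dimensional illustration of the position-forward / force-feedback architecture described above (not the dissertation's controllers), the sketch below lets a simulated slave track a master command while reflecting a toy needle-tissue force back to the master; all gains and the tissue model are invented.

```python
# Sketch: one-DOF bilateral teleoperation loop. The master commands the slave
# needle position; the measured insertion force is rendered back on the master.
# Gains and the tissue model are invented for illustration.
import numpy as np

dt = 0.001
kp, kd = 400.0, 30.0          # slave PD gains (illustrative)
k_tissue = 150.0              # toy linear needle-tissue stiffness (N/m)
force_scale = 1.0             # scaling of the force reflected to the master

x_s, v_s = 0.0, 0.0           # slave needle position (m) and velocity (m/s)
log = []
for k in range(2000):
    t = k * dt
    x_m = 0.02 * min(t, 1.0)             # master motion: slow 20 mm insertion
    f_tissue = k_tissue * max(x_s, 0.0)  # force sensed at the needle
    a = kp * (x_m - x_s) - kd * v_s - f_tissue  # slave PD tracking of the master
    v_s += a * dt
    x_s += v_s * dt
    f_haptic = force_scale * f_tissue    # force rendered on the haptic master
    log.append((t, x_m, x_s, f_haptic))

t, x_m, x_s, f = log[-1]
print(f"t={t:.2f}s master={x_m*1000:.1f}mm slave={x_s*1000:.1f}mm feedback={f:.2f}N")
```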

    J-PET Framework: Software platform for PET tomography data reconstruction and analysis

    J-PET Framework is an open-source software platform for data analysis, written in C++ and based on the ROOT package. It provides a common environment for implementation of reconstruction, calibration and filtering procedures, as well as for user-level analyses of Positron Emission Tomography data. The library contains a set of building blocks that can be combined, even by users with little programming experience, into chains of processing tasks through a convenient, simple and well-documented API. The generic input-output interface allows processing data from various sources: low-level data from the tomography acquisition system or from diagnostic setups such as digital oscilloscopes, as well as high-level tomography structures, e.g. sinograms or a list of lines of response. Moreover, the environment can be interfaced with Monte Carlo simulation packages such as GEANT and GATE, which are commonly used in the medical scientific community.
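    To illustrate the "building blocks chained into processing tasks" idea, here is a minimal task-chain sketch; it is not the actual J-PET Framework API (which is C++ and ROOT-based), and the task names, thresholds and Manager class are hypothetical.

```python
# Sketch of a chain-of-processing-tasks pattern. NOT the real J-PET C++ API;
# task names and the Manager class are hypothetical, purely illustrative.
class Task:
    def run(self, events):
        raise NotImplementedError

class CalibrateTimes(Task):
    def __init__(self, offset_ns=0.5):           # made-up calibration constant
        self.offset_ns = offset_ns
    def run(self, events):
        return [{**e, "t": e["t"] - self.offset_ns} for e in events]

class FilterByEnergy(Task):
    def __init__(self, threshold_kev=200.0):     # made-up energy threshold
        self.threshold_kev = threshold_kev
    def run(self, events):
        return [e for e in events if e["energy"] >= self.threshold_kev]

class Manager:
    """Runs registered tasks in order, piping each task's output to the next."""
    def __init__(self):
        self.tasks = []
    def add(self, task):
        self.tasks.append(task)
        return self
    def process(self, events):
        for task in self.tasks:
            events = task.run(events)
        return events

raw = [{"t": 10.2, "energy": 350.0}, {"t": 11.7, "energy": 120.0}]
chain = Manager().add(CalibrateTimes()).add(FilterByEnergy())
print(chain.process(raw))  # calibrated events above the energy threshold
```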

    Motion capture based on RGBD data from multiple sensors for avatar animation

    With recent advances in technology and the emergence of affordable RGB-D sensors for a wider range of users, markerless motion capture has become an active field of research in both computer vision and computer graphics. In this thesis, we designed a proof of concept (POC) for a new tool that performs motion capture using a variable number of commodity RGB-D sensors of different brands and technical specifications, in environments with no constraints on layout. The main goal of this work is to provide motion capture capabilities with a handful of RGB-D sensors, without imposing strong requirements on lighting, background or the extent of the capture area. Of course, the number of RGB-D sensors needed is inversely proportional to their resolution and directly proportional to the size of the area to track. Built on top of the OpenNI 2 library, the POC is compatible with most non-high-end RGB-D sensors currently available on the market. Because a single computer lacks the resources to support more than a couple of sensors working simultaneously, a setup composed of multiple computers is needed. To keep data coherent and synchronized across sensors and computers, the tool uses a semi-automatic calibration method and a message-oriented network protocol. From the color and depth data given by a sensor, we can also obtain a 3D point cloud representation of the environment. By combining point clouds from multiple sensors, we obtain a complete, animated 3D point cloud that can be visualized from any viewpoint. Given a 3D avatar model and its corresponding attached skeleton, we can use an iterative optimization method (e.g. Simplex) to find a fit between each point cloud frame and a skeleton configuration, resulting in 3D avatar animation when these skeleton configurations are used as key frames.
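    As a toy illustration of the per-frame fit between a point cloud and a skeleton configuration using the Simplex (Nelder-Mead) optimizer, the sketch below fits a two-bone planar chain to synthetic points; the skeleton model, cost function and data are invented and are not the thesis pipeline.

```python
# Sketch: fit a toy 2-bone planar "skeleton" to a point cloud frame with the
# Nelder-Mead (Simplex) optimizer. Skeleton, cost and data are invented.
import numpy as np
from scipy.optimize import minimize

BONE1, BONE2 = 0.30, 0.25  # made-up bone lengths (m)

def joint_positions(angles, root=np.zeros(2)):
    """Forward kinematics of a planar 2-bone chain: root -> elbow -> tip."""
    a1, a2 = angles
    elbow = root + BONE1 * np.array([np.cos(a1), np.sin(a1)])
    tip = elbow + BONE2 * np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
    return np.vstack([root, elbow, tip])

def cost(angles, cloud):
    """Sum of distances from each cloud point to its nearest skeleton joint."""
    joints = joint_positions(angles)
    d = np.linalg.norm(cloud[:, None, :] - joints[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Fake "point cloud" frame sampled around a ground-truth pose, plus noise.
rng = np.random.default_rng(1)
truth = joint_positions([0.6, -0.4])
cloud = np.repeat(truth, 40, axis=0) + rng.normal(scale=0.01, size=(120, 2))

res = minimize(cost, x0=[0.0, 0.0], args=(cloud,), method="Nelder-Mead")
print("estimated joint angles (rad):", res.x, "cost:", round(res.fun, 3))
```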