
    Autonomous Robotic Screening of Tubular Structures based only on Real-Time Ultrasound Imaging Feedback

    Ultrasound (US) imaging is widely employed for diagnosis and staging of peripheral vascular diseases (PVD), mainly due to its high availability and the fact that it does not emit radiation. However, high inter-operator variability and a lack of repeatability of US image acquisition hinder the implementation of extensive screening programs. To address this challenge, we propose an end-to-end workflow for automatic robotic US screening of tubular structures using only the real-time US imaging feedback. We first train a U-Net for real-time segmentation of the vascular structure from cross-sectional US images. Then, we represent the detected vascular structure as a 3D point cloud and use it to estimate the longitudinal axis of the target tubular structure and its mean radius by solving a constrained non-linear optimization problem. By iterating these processes, the US probe is automatically aligned to the orientation normal to the target tubular tissue and adjusted online to center the tracked tissue based on the spatial calibration. The real-time segmentation result is evaluated both on a phantom and in vivo on brachial arteries of volunteers. In addition, the whole process is validated both in simulation and on physical phantoms. The mean absolute radius error and orientation error (±SD) in simulation are 1.16±0.1 mm and 2.7±3.3°, respectively. On a gel phantom, these errors are 1.95±2.02 mm and 3.3±2.4°. This shows that the method is able to automatically screen tubular tissues with an optimal probe orientation (i.e., normal to the vessel) while accurately estimating the mean radius, both in real time. Comment: Accepted for publication in IEEE Transactions on Industrial Electronics. Video: https://www.youtube.com/watch?v=VAaNZL0I5i
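    The axis-and-radius estimation step described above can be sketched in a few lines. This is a minimal PCA-based stand-in for the paper's constrained non-linear optimization, run on a synthetic cylindrical point cloud; all names and values are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_axis_and_radius(points):
    """Estimate the longitudinal axis and mean radius of a tubular
    3D point cloud. PCA-based sketch: the dominant principal
    direction approximates the vessel axis."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors are ordered by decreasing variance;
    # the first one is the dominant (longitudinal) direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Distance of each point from the fitted axis line.
    along = centered @ axis
    radial = centered - np.outer(along, axis)
    return axis, float(np.linalg.norm(radial, axis=1).mean())

# Synthetic vessel: a 3 mm-radius cylinder along the z axis.
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
z = np.linspace(0.0, 40.0, 30)
T, Z = np.meshgrid(theta, z)
pts = np.column_stack([3.0 * np.cos(T).ravel(),
                       3.0 * np.sin(T).ravel(),
                       Z.ravel()])
axis, radius = estimate_axis_and_radius(pts)
```

    On this noise-free cylinder the recovered axis aligns with z and the mean radius matches 3 mm; a real implementation would add the paper's constraints and handle segmentation noise.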

    Robot-Assisted Image-Guided Interventions

    Image guidance is a common methodology in minimally invasive procedures. Depending on the type of intervention, various imaging modalities are available; common ones are computed tomography, magnetic resonance tomography, and ultrasound. Robotic systems have been developed to enable and improve procedures using these imaging techniques, although spatial and technological constraints limit the development of versatile robotic systems. This paper offers a brief overview of developments in robotic systems for image-guided interventions since 2015 and includes samples of our current research in this field.

    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim at overcoming this shortcoming by offering reproducibility, while also aiming at improved dexterity and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also hold the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence are the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that the research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisitions could be considered as valuable and essential as the progress made in the robotic US examination itself. This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying underlying techniques. Comment: Accepted by Medical Image Analysis

    Toward Fully Automated Robotic Platform for Remote Auscultation

    Since most developed countries are facing an increase in the number of patients per healthcare worker due to a declining birth rate and an aging population, relatively simple and safe diagnostic tasks may need to be performed using robotics and automation technologies, without specialists and hospitals. This study presents an automated robotic platform for remote auscultation, which is a highly cost-effective screening tool for detecting abnormal clinical signs. The developed robotic platform is composed of a 6-degree-of-freedom cooperative robotic arm, a light detection and ranging (LiDAR) camera, and a spring-based mechanism holding an electric stethoscope. The platform enables autonomous stethoscope positioning based on external body information acquired using LiDAR camera-based multi-way registration; the platform also ensures safe and flexible contact, maintaining the contact force within a certain range through the passive mechanism. Our preliminary results confirm that the robotic platform enables estimation of the landing positions required for cardiac examinations based on the depth and landmark information of the body surface. It also handles the stethoscope while maintaining the contact force without relying on the push-in displacement by the robotic arm. Comment: 8 pages, 11 figures
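    The force-holding behaviour of the spring-based mechanism can be sketched with Hooke's law: the passive spring converts a commanded compression into a contact force, which must stay inside an admissible window. Stiffness and force limits below are illustrative, not the platform's actual values.

```python
# Illustrative values (not from the paper): stiffness of the passive
# stethoscope holder and the admissible contact-force window.
K = 200.0                  # spring stiffness [N/m]
F_MIN, F_MAX = 2.0, 6.0    # acceptable contact-force range [N]

def compression_for_force(desired_force):
    """Spring compression (Hooke's law, F = K*x) that yields the
    desired contact force."""
    return desired_force / K

def contact_force(compression):
    """Force produced by the spring at a given compression, and
    whether that force lies within the safe window."""
    f = K * compression
    return f, F_MIN <= f <= F_MAX

x = compression_for_force(4.0)   # aim for 4 N of contact force
f, ok = contact_force(x)
```

    The point of the passive design is that small body-surface motion changes the compression, not the commanded arm pose, so the force stays bounded without fast feedback from the arm.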

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 2020
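    The projective trajectory update can be sketched as applying a homography to the reference-frame scan points: each 2D point is lifted to homogeneous coordinates, warped, and de-homogenised. The homography and trajectory below are hypothetical, not taken from the paper.

```python
import numpy as np

def update_trajectory(traj, H):
    """Warp a 2D scanning trajectory (N x 2 array, reference frame)
    into the current frame with a 3x3 homography H."""
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    pts_h = np.hstack([traj, np.ones((len(traj), 1))])
    warped = pts_h @ H.T
    # De-homogenise by dividing through by the third coordinate.
    return warped[:, :2] / warped[:, 2:3]

# Hypothetical tissue motion: a pure translation of (5, -2) pixels.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
traj = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
new_traj = update_trajectory(traj, H)
```

    In the real system the homography would be re-estimated every frame from the tracked tissue features, so the scanning path follows the deforming surface.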

    Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations

    Medical ultrasound has become a routine examination approach and is widely adopted for different medical applications, making it desirable to have a robotic ultrasound system that performs the scanning autonomously. However, ultrasound scanning is a considerably complex skill that depends heavily on the experience of the ultrasound physician. In this paper, we propose a learning-based approach to acquire robotic ultrasound scanning skills from human demonstrations. First, the robotic ultrasound scanning skill is encapsulated into a high-dimensional multi-modal model, which takes the ultrasound images, the pose/position of the probe, and the contact force into account. Second, we leverage imitation learning to train the multi-modal model with data collected from the demonstrations of experienced ultrasound physicians. Finally, a post-optimization procedure with guided explorations is proposed to further improve the performance of the learned model. Robotic experiments are conducted to validate the advantages of the proposed framework and the learned models.
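    The multi-modal encapsulation and imitation step can be sketched, under strong simplifications, as concatenating the three modalities into one state vector and fitting a policy to demonstration data by least squares. The feature dimensions and the hidden linear "expert" below are invented purely for illustration; the paper's model is high-dimensional and non-linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data, one row per time step:
# image features (8-D), probe pose (6-D), contact force (1-D).
N = 200
img_feat = rng.normal(size=(N, 8))
pose = rng.normal(size=(N, 6))
force = rng.normal(size=(N, 1))

# Multi-modal state: simple concatenation of the three modalities.
X = np.hstack([img_feat, pose, force])   # shape (N, 15)

# Expert action (e.g. next probe velocity, 6-D), generated here by a
# hidden linear expert so that the sketch has a known ground truth.
W_true = rng.normal(size=(15, 6))
Y = X @ W_true

# Behaviour cloning: least-squares fit of a linear policy to the demos.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
err = np.abs(W_hat - W_true).max()
```

    With noise-free linear demonstrations the policy is recovered exactly; the guided-exploration post-optimization in the paper would then refine such a model beyond the demonstrated states.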

    A Passive Variable Impedance Control Strategy with Viscoelastic Parameters Estimation of Soft Tissues for Safe Ultrasonography

    In the context of telehealth, robotic approaches have proven a valuable alternative to in-person visits in remote areas, with decreased costs for patients and lower infection risks. In particular, in ultrasonography, robots have the potential to reproduce the skills required to acquire high-quality images while reducing the sonographer's physical efforts. In this paper, we address the control of the interaction of the probe with the patient's body, a critical aspect of ensuring safe and effective ultrasonography. We introduce a novel approach based on variable impedance control, allowing real-time optimisation of the parameters of a compliant controller during ultrasound procedures. This optimisation is formulated as a quadratic programming problem and incorporates physical constraints derived from viscoelastic parameter estimations. Safety and passivity constraints, including an energy tank, are also integrated to minimise potential risks during human-robot interaction. The proposed method's efficacy is demonstrated through experiments on a patient dummy torso, highlighting its potential for achieving safe behaviour and accurate force control during ultrasound procedures, even in cases of contact loss. Comment: 7 pages, 7 figures, submitted to ICRA 202
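    A one-variable caricature of the constrained impedance optimisation: when the quadratic program reduces to tracking a desired stiffness under box constraints, its minimiser is simply a projection (clip) onto the feasible interval, with the energy-tank check tightening the upper bound. Every symbol and the fallback rule below are assumptions for illustration, not the paper's actual formulation.

```python
def optimise_stiffness(k_des, k_min, k_max, tank_energy, e_min, drain):
    """Sketch of the variable-impedance QP in one variable.

    k_des          desired stiffness from the task [N/m]
    k_min, k_max   bounds from viscoelastic parameter estimates
    tank_energy    current energy-tank level [J]
    e_min          minimum admissible tank level [J]
    drain          energy the candidate stiffness change would drain [J]
    """
    # Passivity guard: if applying the change would empty the tank
    # below e_min, cap the admissible stiffness (illustrative rule).
    if tank_energy - drain < e_min:
        k_max = min(k_max, 0.5 * k_des)
    # Box-constrained 1-D QP: the optimum is the clipped target.
    return max(k_min, min(k_des, k_max))

k_ok = optimise_stiffness(500.0, 100.0, 800.0,
                          tank_energy=5.0, e_min=1.0, drain=1.0)
k_capped = optimise_stiffness(500.0, 100.0, 800.0,
                              tank_energy=2.0, e_min=1.0, drain=2.0)
```

    When the tank is healthy the desired stiffness passes through unchanged; when it would be depleted, the controller falls back to a softer, passivity-preserving value.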

    Learning Ultrasound Scanning Skills from Human Demonstrations

    Recently, robotic ultrasound systems have become an emerging topic owing to the widespread use of medical ultrasound. However, it is still a challenging task to model and transfer the scanning skill of an ultrasound physician. In this paper, we propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations. First, the ultrasound scanning skills are encapsulated into a high-dimensional multi-modal model in terms of the interactions among the ultrasound images, the probe pose, and the contact force. The parameters of the model are learned using data collected from skilled sonographers' demonstrations. Second, a sampling-based strategy is proposed with the learned model to adjust the extracorporeal ultrasound scanning process, guiding a novice sonographer or a robot arm. Finally, the robustness of the proposed framework is validated with experiments on real data from sonographers.
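    The sampling-based adjustment strategy can be sketched as drawing candidate probe configurations around the current one, scoring each under the learned model, and suggesting the best. The Gaussian quality model and the single tilt parameter below are invented for illustration; the paper's model scores full image/pose/force states.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the learned model: image quality falls off as a
# Gaussian around an optimal probe tilt (value is illustrative).
OPT_TILT = 0.2   # radians

def quality(tilt):
    """Predicted scan quality of a candidate probe tilt."""
    return np.exp(-((tilt - OPT_TILT) ** 2) / 0.01)

def suggest_adjustment(current_tilt, n_samples=200, spread=0.3):
    """Sample candidate tilts around the current one, score them
    under the learned quality model, and return the best candidate."""
    candidates = current_tilt + rng.normal(0.0, spread, n_samples)
    scores = quality(candidates)
    return float(candidates[np.argmax(scores)])

best = suggest_adjustment(0.0)   # probe starts at zero tilt
```

    The same loop can either display the suggested adjustment to a novice sonographer or be sent directly to a robot arm as the next probe command.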