Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
Ultrasound (US) is one of the most widely used modalities for clinical
intervention and diagnosis due to the merits of providing non-invasive,
radiation-free, and real-time images. However, free-hand US examinations are
highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this
shortcoming by offering reproducibility, while also improving dexterity and
enabling intelligent, anatomy- and disease-aware imaging. In addition to
enhancing diagnostic outcomes, RUSS also holds the potential to provide medical
interventions for populations suffering from the shortage of experienced
sonographers. In this paper, we categorize RUSS as teleoperated or autonomous.
Regarding teleoperated RUSS, we summarize their technical developments and
clinical evaluations. This survey then focuses on the review of
recent work on autonomous robotic US imaging. We demonstrate that machine
learning and artificial intelligence represent the key techniques enabling
intelligent, patient- and process-specific, motion- and deformation-aware
robotic image acquisition. We also show that the research on artificial intelligence
for autonomous RUSS has directed the research community toward understanding
and modeling expert sonographers' semantic reasoning and action. Here, we call
this process the recovery of the "language of sonography". This side result of
research on autonomous robotic US acquisitions could be considered as valuable
and essential as the progress made in the robotic US examination itself. This
article will provide both engineers and clinicians with a comprehensive
understanding of RUSS by surveying underlying techniques. Comment: Accepted by Medical Image Analysis.
Autonomous Robotic Screening of Tubular Structures based only on Real-Time Ultrasound Imaging Feedback
Ultrasound (US) imaging is widely employed for diagnosis and staging of
peripheral vascular diseases (PVD), mainly due to its high availability and the
fact it does not emit radiation. However, high inter-operator variability and a
lack of repeatability of US image acquisition hinder the implementation of
extensive screening programs. To address this challenge, we propose an
end-to-end workflow for automatic robotic US screening of tubular structures
using only the real-time US imaging feedback. We first train a U-Net for
real-time segmentation of the vascular structure from cross-sectional US
images. Then, we represent the detected vascular structure as a 3D point cloud
and use it to estimate the longitudinal axis of the target tubular structure
and its mean radius by solving a constrained non-linear optimization problem.
By iterating these processes, the US probe is automatically aligned normal to
the target tubular tissue and adjusted online, based on the spatial
calibration, to keep the tracked tissue centered. The real-time segmentation
result is evaluated both on a phantom and in-vivo on brachial arteries of
volunteers. In addition, the whole process is validated both in simulation and
physical phantoms. The mean absolute radius error and orientation error (
SD) in the simulation are and ,
respectively. On a gel phantom, these errors are and
. This shows that the method is able to automatically screen
tubular tissues with an optimal probe orientation (i.e. normal to the vessel)
and at the same to accurately estimate the mean radius, both in real-time.Comment: Accepted for publication in IEEE Transactions on Industrial
Electronics Video: https://www.youtube.com/watch?v=VAaNZL0I5i
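As a self-contained illustration of the axis-and-radius estimation step, the sketch below fits a cylinder (axis direction and mean radius) to a synthetic 3D point cloud with a nonlinear least-squares solver. It is a minimal stand-in for the paper's constrained optimization, not the authors' implementation; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder(points):
    """Fit a cylinder (axis point p0, axis direction d, radius r) to a
    3D point cloud by minimizing radial distance residuals."""
    def residuals(x):
        p0, d, r = x[:3], x[3:6], x[6]
        d = d / np.linalg.norm(d)                      # unit axis direction
        v = points - p0
        dist = np.linalg.norm(np.cross(v, d), axis=1)  # point-to-axis distance
        return dist - r

    x0 = np.concatenate([points.mean(axis=0), [0.0, 0.0, 1.0], [1.0]])
    sol = least_squares(residuals, x0)
    d = sol.x[3:6] / np.linalg.norm(sol.x[3:6])
    return sol.x[:3], d, sol.x[6]

# synthetic "vessel": points on a cylinder of radius 3 mm along the z axis
theta = np.linspace(0.0, 2.0 * np.pi, 200)
z = np.linspace(0.0, 10.0, 200)
pts = np.stack([3.0 * np.cos(theta), 3.0 * np.sin(theta), z], axis=1)
p0, axis, radius = fit_cylinder(pts)
print(round(radius, 2), round(abs(axis[2]), 2))  # recovered radius and z-axis alignment
```

In the paper's pipeline the point cloud would come from stacked U-Net segmentations mapped through the spatial calibration; here it is synthetic.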
Learning Ultrasound Scanning Skills from Human Demonstrations
Recently, robotic ultrasound systems have become an emerging topic owing to
the widespread use of medical ultrasound. However, it remains challenging to
model and transfer scanning skill from an experienced ultrasound
physician. In this paper, we propose a learning-based framework to acquire
ultrasound scanning skills from human demonstrations. First, the ultrasound
scanning skills are encapsulated into a high-dimensional multi-modal model in
terms of interactions among ultrasound images, the probe pose and the contact
force. The parameters of the model are learned using the data collected from
skilled sonographers' demonstrations. Second, a sampling-based strategy using
the learned model is proposed to adjust the extracorporeal ultrasound scanning
process, guiding a novice sonographer or a robot arm. Finally, the robustness
of the proposed framework is validated in experiments on real data from
sonographers.
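A minimal sketch of a multimodal skill model with sampling-based adjustment, assuming (hypothetically) that each demonstration reduces to an image-quality feature, a probe tilt, and a contact force, and using a Gaussian mixture as the probability model. This is an illustration of the idea, not the paper's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical demonstration data: rows of (image-quality feature,
# probe tilt in rad, contact force in N) from skilled scans.
demos = np.column_stack([
    rng.normal(0.8, 0.05, 500),
    rng.normal(0.1, 0.02, 500),
    rng.normal(5.0, 0.5, 500),
])
model = GaussianMixture(n_components=2, random_state=0).fit(demos)

def suggest_adjustment(image_feat, n_samples=256):
    """Sampling-based strategy: draw candidate (tilt, force) actions and
    keep the one most likely under the learned skill model."""
    tilts = rng.uniform(-0.5, 0.5, n_samples)
    forces = rng.uniform(1.0, 10.0, n_samples)
    cands = np.column_stack([np.full(n_samples, image_feat), tilts, forces])
    best = cands[np.argmax(model.score_samples(cands))]
    return best[1], best[2]

tilt, force = suggest_adjustment(0.8)
print(f"suggested tilt {tilt:.2f} rad, force {force:.1f} N")
```

The same selection scheme could guide either a robot arm or a trainee, since it only scores candidate actions against the demonstrated distribution.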
Intelligent Robotic Sonographer: Mutual Information-based Disentangled Reward Learning from Few Demonstrations
Ultrasound (US) imaging is widely used for biometric measurement and
diagnosis of internal organs due to the advantages of being real-time and
radiation-free. However, due to high inter-operator variability, the resulting
images depend heavily on the operator's experience. In this work, an intelligent
robotic sonographer is proposed to autonomously "explore" target anatomies and
navigate a US probe to a relevant 2D plane by learning from experts. The
underlying high-level physiological knowledge from experts is inferred by a
neural reward function using ranked pairwise image comparisons in a
self-supervised fashion. This process can be referred to as understanding the
"language of sonography". To generalize across inter-patient variations,
mutual information is estimated by a network to explicitly disentangle
task-related and domain features in the latent space.
In addition, a Gaussian distribution-based filter is developed to
automatically evaluate the quality of the expert's demonstrations and take it
into account. The robotic localization is carried out in a coarse-to-fine
manner based on the predicted reward associated with B-mode images. To
demonstrate the performance of
the proposed approach, representative experiments for the "line" target and
"point" target are performed on a vascular phantom and two ex-vivo animal
organ phantoms (chicken heart and lamb kidney), respectively. The results
demonstrate that the proposed framework can work robustly on different kinds
of known and unseen phantoms.
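Reward learning from ranked pairwise comparisons is commonly formulated with a Bradley-Terry model. The sketch below learns a linear reward on made-up 2D "image features" with that loss; it illustrates the ranking idea only and is not the paper's neural reward network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up 2D image features and a hidden expert preference direction.
true_w = np.array([2.0, -1.0])
feats = rng.normal(size=(200, 2))
scores = feats @ true_w

# ranked pairs (a, b): the expert prefers image a over image b
idx = rng.integers(0, 200, size=(500, 2))
pairs = [(i, j) if scores[i] > scores[j] else (j, i)
         for i, j in idx if i != j]

# Bradley-Terry: P(a > b) = sigmoid(r(a) - r(b)), with linear reward r(x) = w.x
w = np.zeros(2)
for _ in range(300):
    grad = np.zeros(2)
    for a, b in pairs:
        diff = feats[a] - feats[b]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))
        grad += (p - 1.0) * diff        # gradient of -log P(a > b)
    w -= 0.01 * grad / len(pairs)

# the learned reward should order images the way the expert does
corr = np.corrcoef(feats @ w, scores)[0, 1]
print(f"rank-reward correlation: {corr:.3f}")
```

Only the ordering induced by the reward matters for navigation, which is why the comparison is against the ranking (correlation of scores) rather than the raw weights.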
Learning Autonomous Ultrasound via Latent Task Representation and Robotic Skills Adaptation
As medical ultrasound becomes an increasingly prevalent examination approach,
robotic ultrasound systems can facilitate the scanning process and relieve
professional sonographers of repetitive and tedious work. Despite the recent
progress, it is still a challenge to enable robots to autonomously accomplish
the ultrasound examination, which is largely due to the lack of a proper task
representation method, and also an adaptation approach to generalize learned
skills across different patients. To solve these problems, we propose the
latent task representation and the robotic skills adaptation for autonomous
ultrasound in this paper. During the offline stage, the multimodal ultrasound
skills are merged and encapsulated into a low-dimensional probability model
through a fully self-supervised framework, which takes clinically demonstrated
ultrasound images, probe orientations, and contact forces into account. During
the online stage, the probability model will select and evaluate the optimal
prediction. For unstable singularities, an adaptive optimizer fine-tunes them
toward nearby, stable predictions in high-confidence regions. Experimental results
show that the proposed approach can generate complex ultrasound strategies for
diverse populations and achieve significantly better quantitative results than
our previous method.
Intraoperative Navigation Systems for Image-Guided Surgery
Recent technological advancements in medical imaging equipment have resulted in
a dramatic improvement of image accuracy, now capable of providing useful information
previously not available to clinicians. In the surgical context, intraoperative
imaging provides a crucial value for the success of the operation.
Many nontrivial scientific and technical problems need to be addressed in order to
efficiently exploit the different information sources nowadays available in advanced
operating rooms. In particular, it is necessary to provide: (i) accurate tracking of
surgical instruments, (ii) real-time matching of images from different modalities, and
(iii) reliable guidance toward the surgical target. Satisfying all of these requisites
is needed to realize effective intraoperative navigation systems for image-guided
surgery.
Various solutions have been proposed and successfully tested in the field of image
navigation systems in the last ten years; nevertheless several problems still arise in
most of the applications regarding precision, usability and capabilities of the existing
systems. Identifying and solving these issues represents an urgent scientific challenge.
This thesis investigates the current state of the art in the field of intraoperative
navigation systems, focusing in particular on the challenges related to efficient and
effective usage of ultrasound imaging during surgery.
The main contributions of this thesis to the state of the art are:
- Techniques for automatic motion compensation and therapy monitoring applied
  to a novel ultrasound-guided surgical robotic platform in the context of
  abdominal tumor thermoablation.
- Novel image-fusion-based navigation systems for ultrasound-guided
  neurosurgery in the context of brain tumor resection, highlighting their
  applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of
two international research projects, have been tested in real or simulated surgical
scenarios, showing promising results toward their application in clinical practice.
Design and Quantitative Assessment of Teleoperation-Based Human-Robot Collaboration Method for Robot-Assisted Sonography
Tele-echography has emerged as a promising and effective solution, leveraging the expertise of sonographers and the autonomy of robots to perform ultrasound scanning for patients residing in remote areas, without the need for in-person visits by the sonographer. Designing effective and natural human-robot interfaces for tele-echography remains challenging, with patient safety being a critical concern. In this article, we develop a teleoperation system for robot-assisted sonography with two different interfaces, a haptic device-based interface and a low-cost 3D Mouse-based interface, which can achieve continuous and intuitive telemanipulation by a leader device with a small workspace. To achieve compliant interaction with patients, we design impedance controllers in Cartesian space to track the desired position and orientation for these two teleoperation interfaces. We also propose comprehensive evaluation metrics for robot-assisted sonography, including subjective and objective evaluation, to assess tele-echography interfaces and control performance. We evaluate the ergonomic performance based on estimated muscle fatigue and the acquired ultrasound image quality. We conduct user studies based on the NASA Task Load Index to evaluate the performance of these two human-robot interfaces. The tracking performance and the quantitative comparison of the two teleoperation interfaces are evaluated on the Franka Emika Panda robot. The results and findings provide guidance on human-robot collaboration design and implementation for robot-assisted sonography. Note to Practitioners: Robot-assisted sonography has demonstrated efficacy in medical diagnosis during clinical trials. However, deploying fully autonomous robots for ultrasound scanning remains challenging due to various constraints in practice, such as patient safety, dynamic tasks, and environmental uncertainties.
Semi-autonomous or teleoperation-based robotic sonography represents a promising approach for practical deployment. Previous work has produced various expensive teleoperation interfaces but lacks user studies to guide teleoperation interface selection. In this article, we present two typical teleoperation interfaces and implement a continuous and intuitive teleoperation control system. We also propose a comprehensive evaluation metric for assessing their performance. Our findings show that the haptic device outperforms the 3D Mouse, based on operators' feedback and acquired image quality. However, the haptic device requires more learning time and effort in the training stage. Furthermore, the developed teleoperation system offers a solution for shared control and human-robot skill transfer. Our results provide valuable guidance for designing and implementing human-robot interfaces for robot-assisted sonography in practice.
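The Cartesian impedance control mentioned above follows the standard law F = K(x_d - x) + D(v_d - v). The sketch below is a generic translational version with illustrative gains, not the authors' controller.

```python
import numpy as np

# Illustrative translational impedance gains; softer along the probe (z) axis
# so contact with the patient stays compliant. Values are not tuned for any robot.
K = np.diag([400.0, 400.0, 200.0])   # stiffness, N/m
D = np.diag([40.0, 40.0, 20.0])      # damping, N*s/m

def impedance_force(x, x_des, v, v_des=np.zeros(3)):
    """Commanded Cartesian force F = K (x_des - x) + D (v_des - v)."""
    return K @ (x_des - x) + D @ (v_des - v)

# probe at rest, 1 cm above the desired contact point
F = impedance_force(x=np.array([0.0, 0.0, 0.01]),
                    x_des=np.zeros(3),
                    v=np.zeros(3))
print(F)   # [ 0.  0. -2.] -> 2 N pushing the probe toward the surface
```

Making the stiffness anisotropic (lower along the contact axis) is a common way to trade tracking accuracy for patient safety in probe-contact tasks.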
Robotic Platforms for Ultrasound Diagnostics and Treatment
Medical imaging introduced the greatest paradigm change in the history of modern medicine, and ultrasound (US) in particular is becoming the most widespread imaging modality. The integration of digital imaging into the surgical domain opens new frontiers in diagnostics and intervention, and its combination with robotics leads to improved accuracy and targeting capabilities. This paper reviews the state of the art in US-based robotic platforms, identifying the main research and clinical trends and reviewing current capabilities and limitations. The focus of the study includes non-autonomous US-based systems, US-based automated robotic navigation systems, and US-guided autonomous tools. These areas outline future development, projecting a wave of new applications in the computer-assisted surgical domain.
DefCor-Net: Physics-Aware Ultrasound Deformation Correction
The recovery of morphologically accurate anatomical images from deformed ones
is challenging in ultrasound (US) image acquisition, but crucial to accurate
and consistent diagnosis, particularly in the emerging field of
computer-assisted diagnosis. This article presents a novel anatomy-aware
deformation correction approach based on a coarse-to-fine, multi-scale deep
neural network (DefCor-Net). To achieve pixel-wise performance, DefCor-Net
incorporates biomedical knowledge by estimating pixel-wise stiffness online
using a U-shaped feature extractor. The deformation field is then computed
using polynomial regression by integrating the measured force applied by the US
probe. Based on real-time estimation of pixel-by-pixel tissue properties, the
learning-based approach enables the potential for anatomy-aware deformation
correction. To demonstrate the effectiveness of the proposed DefCor-Net, images
recorded at multiple locations on forearms and upper arms of six volunteers are
used to train and validate DefCor-Net. The results demonstrate that DefCor-Net
can significantly improve the accuracy of deformation correction to recover the
original geometry, as measured by an improved Dice coefficient under applied
probe force. Comment: Accepted by MedIA. Code is available.
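The force-integrated polynomial regression can be pictured as fitting displacement against probe force and scaling it by an estimated pixel-wise stiffness map. The sketch below uses synthetic calibration numbers and a random stiffness map as stand-ins for the quantities DefCor-Net estimates from images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration: axial tissue displacement (mm) vs probe force (N).
forces = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
displ = 0.3 * forces + 0.02 * forces**2
coeffs = np.polyfit(forces, displ, 2)        # polynomial regression d(f)

# Stand-in for the network's pixel-wise stiffness estimate (4x4 "image").
stiffness = rng.uniform(0.5, 1.5, size=(4, 4))

def deformation_field(force):
    """Per-pixel axial displacement: softer pixels (low stiffness) deform more."""
    return np.polyval(coeffs, force) / stiffness

field = deformation_field(5.0)
print(round(float(np.polyval(coeffs, 5.0)), 3))   # 2.0 mm at 5 N, before stiffness scaling
```

Correcting the image then amounts to warping each pixel back against its predicted displacement, which is where the pixel-wise stiffness estimate matters.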