326 research outputs found
Machine Learning in Robotic Ultrasound Imaging: Challenges and Perspectives
This article reviews the recent advances in intelligent robotic ultrasound
(US) imaging systems. We commence by presenting the commonly employed robotic
mechanisms and control techniques in robotic US imaging, along with their
clinical applications. Subsequently, we focus on the deployment of machine
learning techniques in the development of robotic sonographers, emphasizing
crucial developments aimed at enhancing the intelligence of these systems. The
methods for achieving autonomous action reasoning are categorized into two sets
of approaches: those relying on implicit environmental data interpretation and
those using explicit interpretation. Throughout this exploration, we also
discuss practical challenges, including those related to the scarcity of
medical data, the need for a deeper understanding of the physical aspects
involved, and effective data representation approaches. Moreover, we conclude
by highlighting the open problems in the field and analyzing different possible
perspectives on how the community could move forward in this research area.
Comment: Accepted by Annual Review of Control, Robotics, and Autonomous Systems
Cardiac ultrasound simulation for autonomous ultrasound navigation
Ultrasound is well-established as an imaging modality for diagnostic and
interventional purposes. However, the image quality varies with operator skills
as acquiring and interpreting ultrasound images requires extensive training due
to the imaging artefacts, the range of acquisition parameters and the
variability of patient anatomies. Automating the image acquisition task could
improve acquisition reproducibility and quality but training such an algorithm
requires large amounts of navigation data, not saved in routine examinations.
Thus, we propose a method to generate large amounts of ultrasound images from
other modalities and from arbitrary positions, such that this pipeline can
later be used by learning algorithms for navigation. We present a novel
simulation pipeline which uses segmentations from other modalities, an
optimized volumetric data representation and GPU-accelerated Monte Carlo path
tracing to generate view-dependent and patient-specific ultrasound images. We
extensively validate the correctness of our pipeline with a phantom experiment,
where structures' sizes, contrast and speckle noise properties are assessed.
Furthermore, we demonstrate its usability to train neural networks for
navigation in an echocardiography view classification experiment by generating
synthetic images from more than 1000 patients. Networks pre-trained with our
simulations achieve significantly superior performance in settings where large
real datasets are not available, especially for under-represented classes. The
proposed approach allows for fast and accurate patient-specific ultrasound
image generation, and its usability for training networks for
navigation-related tasks is demonstrated.
Comment: 24 pages, 10 figures, 5 tables
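To make the idea of a Monte Carlo, view-dependent ultrasound simulator concrete, here is a minimal, uncalibrated sketch (not the authors' pipeline): rays are cast down each scanline of a hypothetical label map, accumulating Rayleigh-distributed speckle, specular reflections at impedance boundaries, and depth attenuation, then averaging many stochastic rays per line. All property values and the scene are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D label map (0 = background, 1 = soft tissue, 2 = bone-like)
labels = np.zeros((128, 64), dtype=int)
labels[40:90, 16:48] = 1
labels[70:80, 24:40] = 2

# Per-label acoustic properties (illustrative values, not calibrated)
attenuation = {0: 0.002, 1: 0.010, 2: 0.080}   # energy loss per step
impedance   = {0: 1.00,  1: 1.60,  2: 7.80}
scatter_amp = {0: 0.01,  1: 0.20,  2: 0.05}

def simulate_scanline(col, n_rays=64):
    """Average many stochastic rays down one image column (Monte Carlo)."""
    depth = labels.shape[0]
    acc = np.zeros(depth)
    for _ in range(n_rays):
        energy = 1.0
        for z in range(depth):
            lab = labels[z, col]
            # speckle: random sub-resolution scatterers
            echo = energy * scatter_amp[lab] * rng.rayleigh(1.0)
            if z > 0 and labels[z - 1, col] != lab:
                # specular reflection at an acoustic impedance boundary
                z1, z2 = impedance[labels[z - 1, col]], impedance[lab]
                r = ((z2 - z1) / (z2 + z1)) ** 2
                echo += energy * r
                energy *= (1.0 - r)          # transmitted fraction continues
            energy *= np.exp(-attenuation[lab])  # depth attenuation
            acc[z] += echo
    return acc / n_rays

bmode = np.stack([simulate_scanline(c) for c in range(labels.shape[1])], axis=1)
bmode = np.log1p(50 * bmode)  # log compression for display
```

A real pipeline as described in the abstract would replace the hand-built label map with segmentations from CT or MR, run the ray loop on the GPU, and trace multiple reflected/refracted paths per ray rather than a single straight one.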
Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging
We present a physics-enhanced implicit neural representation (INR) for
ultrasound (US) imaging that learns tissue properties from overlapping US
sweeps. Our proposed method leverages a ray-tracing-based neural rendering for
novel view US synthesis. Recent publications demonstrated that INR models could
encode a representation of a three-dimensional scene from a set of
two-dimensional US frames. However, these models fail to consider the
view-dependent changes in appearance and geometry intrinsic to US imaging. In
our work, we discuss direction-dependent changes in the scene and show that a
physics-inspired rendering improves the fidelity of US image synthesis. In
particular, we demonstrate experimentally that our proposed method generates
geometrically accurate B-mode images for regions with ambiguous representation
owing to view-dependent differences of the US images. We conduct our
experiments using simulated B-mode US sweeps of the liver and acquired US
sweeps of a spine phantom tracked with a robotic arm. The experiments
corroborate that our method generates US frames that enable consistent volume
compounding from previously unseen views. To the best of our knowledge, the
presented work is the first to address view-dependent US image synthesis using
INR.
Comment: submitted to MID
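The core idea of a physics-enhanced INR for ultrasound can be sketched in a few lines: a small network maps a 3-D point to tissue properties (here attenuation and reflectivity), and a ray-tracing-style renderer integrates them along the probe direction, so the same scene yields different images from different views. The toy MLP below has random weights purely to show the structure; in the actual method the representation is optimized against overlapping US sweeps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny MLP standing in for the INR: maps a 3-D point to per-point
# attenuation and reflectivity. Weights are random (hypothetical) here;
# in practice they would be fitted to the observed US frames.
W1 = rng.normal(0, 1.0, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 2)); b2 = np.array([-2.0, -2.0])

def tissue(points):
    h = np.tanh(points @ W1 + b1)
    out = h @ W2 + b2
    # sigmoid keeps attenuation and reflectivity in (0, 1)
    return 1.0 / (1.0 + np.exp(-out))

def render_ray(origin, direction, n=128, step=0.02):
    """Ray-tracing-style B-mode rendering: the echo at depth t is the
    reflectivity there, dimmed by attenuation accumulated along the ray.
    Accumulation order along the ray makes the image view-dependent."""
    ts = step * np.arange(n)
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    att, refl = tissue(pts).T
    transmission = np.cumprod(1.0 - step * att)   # Beer-Lambert-like decay
    return refl * transmission

# Same scene rendered from two opposing probe directions -> different scanlines
line_a = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
line_b = render_ray(np.array([0.0, 0.0, 2.56]), np.array([0.0, 0.0, -1.0]))
```

The two scanlines traverse the same tissue but differ because attenuation is accumulated in opposite orders, which is exactly the view dependence that a plain (view-agnostic) NeRF-style model fails to capture.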
Proceedings Virtual Imaging Trials in Medicine 2024
This submission comprises the proceedings of the 1st Virtual Imaging Trials in Medicine conference, organized by Duke University on April 22-24, 2024. The listed authors serve as the program directors for this conference. The VITM conference is a pioneering summit uniting experts from academia, industry, and government in the fields of medical imaging and therapy to explore the transformative potential of in silico virtual trials and digital twins in revolutionizing healthcare. The proceedings are categorized by the respective days of the conference: Monday presentations, Tuesday presentations, and Wednesday presentations, followed by the abstracts for the posters presented on Monday and Tuesday.
Virtual clinical trials in medical imaging: a review
The accelerating complexity and variety of medical imaging devices and methods have outpaced the ability to evaluate and optimize their design and clinical use. This is a significant and increasing challenge for both scientific investigations and clinical applications. Evaluations would ideally be done using clinical imaging trials. These experiments, however, are often not practical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs), also known as in silico imaging trials or virtual imaging trials, offer an alternative means to efficiently evaluate medical imaging technologies virtually. They do so by simulating the patients, imaging systems, and interpreters. The field of VCTs has advanced steadily over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging. We review the core components of a VCT: computational phantoms, simulators of different imaging modalities, and interpretation models. We also highlight some of the applications of VCTs across various imaging modalities.
Medical image registration using unsupervised deep neural network: A scoping literature review
In medicine, image registration is vital in image-guided interventions and
other clinical applications. It is, however, a difficult problem; with the
advent of machine learning, considerable progress in algorithmic performance
has recently been achieved for medical image registration. Deep neural
networks open up opportunities for medical applications such as performing
image registration in less time with high accuracy, playing a key role in
targeting tumors during surgery. The current study presents a comprehensive
scoping review of the state-of-the-art literature on medical image
registration based on unsupervised deep neural networks, encompassing all
related studies published in this field to date. Here, we summarize the
latest developments and applications of unsupervised deep learning-based
registration methods in the medical field. Fundamental concepts, techniques,
statistical analyses from different viewpoints, novelties, and future
directions are discussed in detail in this comprehensive scoping review.
We hope this review helps readers who are drawn to this field achieve deep
insight into this exciting area.
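The unsupervised registration methods surveyed above typically share one objective: warp the moving image with a predicted displacement field and minimize an image-similarity term plus a smoothness regularizer on the field, with no ground-truth deformations needed. Here is a minimal sketch of that objective (the function names and toy images are illustrative, not from any specific surveyed method):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, disp):
    """Apply a dense displacement field of shape (2, H, W) to a 2-D image."""
    h, w = moving.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    # sample the moving image at the displaced grid (bilinear interpolation)
    return map_coordinates(moving, grid + disp, order=1, mode="nearest")

def unsupervised_loss(fixed, moving, disp, lam=0.1):
    """Similarity term (MSE) plus a smoothness regularizer on the field:
    the typical objective minimized by unsupervised registration networks,
    which predict `disp` from the image pair."""
    sim = np.mean((warp(moving, disp) - fixed) ** 2)
    grads = np.gradient(disp, axis=(1, 2))          # spatial derivatives
    smooth = sum(np.mean(g ** 2) for g in grads)
    return sim + lam * smooth

# Toy check: a square shifted by 2 pixels, and the field that undoes the shift
fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 1.0
moving = np.zeros((32, 32)); moving[12:22, 10:20] = 1.0
good = np.zeros((2, 32, 32)); good[0] = 2.0         # sample moving 2 px down
zero = np.zeros((2, 32, 32))
```

In the papers reviewed, a convolutional network replaces the hand-set `disp`, predicting it from the concatenated image pair, and the similarity term is often normalized cross-correlation or mutual information rather than MSE; the structure of the loss is otherwise the same.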