Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
Ultrasound (US) is one of the most widely used modalities for clinical
intervention and diagnosis due to the merits of providing non-invasive,
radiation-free, and real-time images. However, free-hand US examinations are
highly operator-dependent. Robotic US System (RUSS) aims at overcoming this
shortcoming by offering reproducibility, while also aiming at improving
dexterity, and intelligent anatomy and disease-aware imaging. In addition to
enhancing diagnostic outcomes, RUSS also holds the potential to provide medical
interventions for populations suffering from the shortage of experienced
sonographers. In this paper, we categorize RUSS as teleoperated or autonomous.
Regarding teleoperated RUSS, we summarize their technical developments and
clinical evaluations. This survey then focuses on the review of
recent work on autonomous robotic US imaging. We demonstrate that machine
learning and artificial intelligence are the key techniques enabling
intelligent patient- and process-specific, motion- and deformation-aware robotic
image acquisition. We also show that the research on artificial intelligence
for autonomous RUSS has directed the research community toward understanding
and modeling expert sonographers' semantic reasoning and action. Here, we call
this process the recovery of the "language of sonography". This side result of
research on autonomous robotic US acquisitions could be considered as valuable
and essential as the progress made in the robotic US examination itself. This
article will provide both engineers and clinicians with a comprehensive
understanding of RUSS by surveying the underlying techniques.
Comment: Accepted by Medical Image Analysis
Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking and Mapping
Quantifying performance of methods for tracking and mapping tissue in
endoscopic environments is essential for enabling image guidance and automation
of medical interventions and surgery. Datasets developed so far either use
rigid environments, visible markers, or require annotators to label salient
points in videos after collection. These are respectively: not general, visible
to algorithms, or costly and error-prone. We introduce a novel labeling
methodology along with a dataset that uses said methodology, Surgical Tattoos
in Infrared (STIR). STIR has labels that are persistent but invisible to
visible spectrum algorithms. This is done by labelling tissue points with
IR-fluorescent dye, indocyanine green (ICG), and then collecting visible light
video clips. STIR comprises hundreds of stereo video clips in both in-vivo and
ex-vivo scenes with start and end points labelled in the IR spectrum. With over
3,000 labelled points, STIR will help to quantify and enable better analysis of
tracking and mapping methods. After introducing STIR, we analyze multiple
different frame-based tracking methods on STIR using both 3D and 2D endpoint
error and accuracy metrics. STIR is available at
https://dx.doi.org/10.21227/w8g4-g548
Comment: © 2024 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works.
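The 2D and 3D endpoint-error and accuracy metrics mentioned above can be sketched as follows. This is a minimal illustration under an assumed point-list format, not the dataset's official evaluation code:

```python
import math

def endpoint_error(predicted, ground_truth):
    """Mean Euclidean distance between tracked and labelled end points;
    each point is a 2D (pixel) or 3D (millimetre) coordinate tuple."""
    dists = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(dists) / len(dists)

def endpoint_accuracy(predicted, ground_truth, threshold):
    """Fraction of tracked points whose endpoint error falls below a
    chosen threshold (e.g. pixels in 2D, millimetres in 3D)."""
    dists = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(d < threshold for d in dists) / len(dists)
```

Reporting both a mean error and a threshold-based accuracy, as done here, separates average drift from the fraction of points that are tracked reliably.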
Tracking and Mapping in Medical Computer Vision: A Review
As computer vision algorithms are becoming more capable, their applications
in clinical systems will become more pervasive. These applications include
diagnostics such as colonoscopy and bronchoscopy, guiding biopsies and
minimally invasive interventions and surgery, automating instrument motion and
providing image guidance using pre-operative scans. Many of these applications
depend on the specific visual nature of medical scenes and require designing
and applying algorithms to perform in this environment.
In this review, we provide an update to the field of camera-based tracking
and scene mapping in surgery and diagnostics in medical computer vision. We
begin with describing our review process, which results in a final list of 515
papers that we cover. We then give a high-level summary of the state of the art
and provide relevant background for those who need tracking and mapping for
their clinical applications. We then review datasets provided in the field and
the clinical needs therein. Then, we delve in depth into the algorithmic side,
and summarize recent developments, which should be especially useful for
algorithm designers and to those looking to understand the capability of
off-the-shelf methods. We focus on algorithms for deformable environments while
also reviewing the essential building blocks in rigid tracking and mapping
since there is a large amount of crossover in methods. Finally, we discuss the
current state of the tracking and mapping methods along with needs for future
algorithms, needs for quantification, and the viability of clinical
applications in the field. We conclude that new methods need to be designed or
combined to support clinical applications in deformable environments, and more
focus needs to be put into collecting datasets for training and evaluation.
Comment: 31 pages, 17 figures
Automatic Search for Photoacoustic Marker Using Automated Transrectal Ultrasound
Real-time transrectal ultrasound (TRUS) image guidance during robot-assisted
laparoscopic radical prostatectomy has the potential to enhance surgery
outcomes. Whether conventional or photoacoustic TRUS is used, the robotic
system and the TRUS must be registered to each other. Accurate registration can
be performed using photoacoustic (PA) markers. However, this requires a manual
search by an assistant [19]. This paper introduces the first automatic search
for PA markers using a transrectal ultrasound robot. This effectively reduces
the challenges associated with the da Vinci-TRUS registration. This paper
investigated the performance of three search algorithms in simulation and
experiment: Weighted Average (WA), Golden Section Search (GSS), and Ternary
Search (TS). For validation, a surgical prostate scenario was mimicked and
various ex vivo tissues were tested. As a result, the WA algorithm can achieve
0.53 degree average error after 9 data acquisitions, while the TS and GSS
algorithms can achieve 0.29 degree and 0.48 degree average errors after 28 data
acquisitions.
Comment: 13 pages, 9 figures
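Of the three search strategies, golden section search is a classic derivative-free method for maximizing a unimodal objective with a fixed evaluation budget. A minimal sketch, assuming a generic scalar signal-quality function of probe angle (this is not the paper's implementation):

```python
import math

def golden_section_search(f, lo, hi, n_acquisitions):
    """Locate the maximum of a unimodal function f on [lo, hi]
    (e.g. PA-marker signal strength versus probe rotation angle)
    using exactly n_acquisitions evaluations of f."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)          # interior probe points
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    for _ in range(n_acquisitions - 2):
        if fc > fd:                    # maximum lies in [a, d]
            b, d, fd = d, c, fc        # reuse fc as the new fd
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                          # maximum lies in [c, b]
            a, c, fc = c, d, fd        # reuse fd as the new fc
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2                 # angle estimate
```

Each iteration reuses one previous evaluation, so the bracketing interval shrinks by a factor of about 0.618 per acquisition, which is why a few dozen acquisitions suffice for sub-degree accuracy.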
3D Ultrafast Shear Wave Absolute Vibro-Elastography using a Matrix Array Transducer
3D ultrasound imaging provides more spatial information compared to
conventional 2D frames by considering the volumes of data. One of the main
bottlenecks of 3D imaging is the long data acquisition time which reduces
practicality and can introduce artifacts from unwanted patient or sonographer
motion. This paper introduces the first shear wave absolute vibro-elastography
(S-WAVE) method with real-time volumetric acquisition using a matrix array
transducer. In S-WAVE, an external vibration source generates mechanical
vibrations inside the tissue. The tissue motion is then estimated and used in
solving a wave equation inverse problem to provide the tissue elasticity. A
matrix array transducer is used with a Verasonics ultrasound machine and frame
rate of 2000 volumes/s to acquire 100 radio frequency (RF) volumes in 0.05 s.
Using plane wave (PW) and compounded diverging wave (CDW) imaging methods, we
estimate axial, lateral and elevational displacements over 3D volumes. The curl
of the displacements is used with local frequency estimation to estimate
elasticity in the acquired volumes. Ultrafast acquisition substantially
extends the possible S-WAVE excitation frequency range, now up to 800 Hz,
enabling new
tissue modeling and characterization. The method was validated on three
homogeneous liver fibrosis phantoms and on four different inclusions within a
heterogeneous phantom. The homogeneous phantom results show less than 8% (PW)
and 5% (CDW) difference between the manufacturer values and the corresponding
estimated values over a frequency range of 80 Hz to 800 Hz. The estimated
elasticity values for the heterogeneous phantom at 400 Hz excitation frequency
show average errors of 9% (PW) and 6% (CDW) compared to the provided average
values by MRE. Furthermore, both imaging methods were able to detect the
inclusions within the elasticity volumes.
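The inversion step described above, from a local spatial frequency estimate to an elasticity value, can be illustrated at its simplest. This sketch assumes a locally homogeneous, nearly incompressible medium and omits the curl operation and full 3D local frequency estimation:

```python
import math

def shear_elasticity(excitation_hz, local_k_rad_per_m, density=1000.0):
    """Estimate Young's modulus (Pa) from the excitation frequency and
    the local spatial frequency k of the shear wave pattern.  Assumes
    the shear wave speed is c = 2*pi*f / k, the shear modulus is
    mu = rho * c**2, and E ~ 3 * mu for nearly incompressible tissue."""
    c = 2 * math.pi * excitation_hz / local_k_rad_per_m  # wave speed, m/s
    mu = density * c ** 2                                # shear modulus, Pa
    return 3 * mu                                        # Young's modulus, Pa
```

For example, a shear wave travelling at 2 m/s under 400 Hz excitation corresponds to roughly 12 kPa, in the range of soft-tissue phantoms.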
Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy
Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, the
integration of a transrectal ultrasound (TRUS) imaging system, the most widely
used imaging modality in prostate imaging, is essential. However, manual
manipulation of the ultrasound transducer during the procedure significantly
interferes with the surgery. Therefore, we propose an image
co-registration algorithm based on a photoacoustic marker method, where the
ultrasound / photoacoustic (US/PA) images can be registered to the endoscopic
camera images to ultimately enable the TRUS transducer to automatically track
the surgical instrument. Methods: An optimization-based algorithm is proposed to
co-register the images from the two different imaging modalities. The
principles of light propagation and the uncertainty in PM detection are
incorporated in this algorithm to improve its stability and accuracy. The
algorithm is validated using the previously developed US/PA image-guided system
with a da Vinci surgical robot. Results: The target-registration-error (TRE) is
measured to evaluate the proposed algorithm. In both simulation and
experimental demonstration, the proposed algorithm achieved sub-centimeter
accuracy, which is acceptable in clinical practice. The result is also
comparable with our previous approach, and the proposed method can be
implemented with a normal white-light stereo camera and does not require highly
accurate localization of the PM. Conclusion: The proposed frame registration
algorithm enabled a simple yet efficient integration of commercial US/PA
imaging system into laparoscopic surgical setting by leveraging the
characteristic properties of acoustic wave propagation and laser excitation,
contributing to automated US/PA image-guided surgical intervention
applications.
Comment: 12 pages, 9 figures
Fast Elastic Registration of Soft Tissues under Large Deformations
A fast and accurate fusion of intra-operative images with pre-operative data is a key component of computer-aided interventions, which aim at improving the outcome of the intervention while reducing the patient's discomfort. In this paper, we focus on the problem of intra-operative navigation during abdominal surgery, which requires an accurate registration of tissues undergoing large deformations. Such a scenario occurs in the case of partial hepatectomy: to facilitate access to the pathology, e.g. a tumor located in the posterior part of the right lobe, the surgery is performed on a patient in lateral position. Due to the change in the patient's position, the resection plan based on the pre-operative CT scan acquired in the supine position must be updated to account for the deformations. We suppose that an imaging modality, such as cone-beam CT, provides information about the intra-operative shape of an organ; however, due to the reduced radiation dose and contrast, the actual locations of the internal structures necessary to update the planning are not available. To this end, we propose a method allowing for fast registration of the pre-operative data, represented by a detailed 3D model of the liver and its internal structure, to the actual configuration given by the organ surface extracted from the intra-operative image. The algorithm behind the method combines the iterative closest point technique with a biomechanical model based on a co-rotational formulation of linear elasticity, which accounts for large deformations of the tissue. The performance, robustness and accuracy of the method are quantitatively assessed on a control semi-synthetic dataset with known ground truth and a real dataset composed of nine pairs of abdominal CT scans acquired in supine and flank positions.
It is shown that the proposed surface-matching method is capable of reducing the target registration error, evaluated on the internal structures of the organ, from more than 40 mm to less than 10 mm. Moreover, the control data are used to demonstrate the compatibility of the method with the intra-operative clinical scenario, while the real datasets are utilized to study the impact of parametrization on the accuracy of the method. The method is also compared to a state-of-the-art intensity-based registration technique in terms of accuracy and performance.
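The rigid building block of this registration pipeline, point-to-point ICP with the SVD (Kabsch) alignment step, can be sketched as follows. This is only the rigid component; the paper couples the correspondence step with a co-rotational finite-element model, which this minimal version does not include:

```python
import numpy as np

def icp_rigid(source, target, n_iters=20):
    """Minimal point-to-point ICP: rigidly aligns `source` (N, 3) to
    `target` (M, 3).  Returns the accumulated rotation R and
    translation t such that source @ R.T + t approximates target."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        # 1. nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # 2. best rigid transform via the Kabsch/SVD solution
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # 3. apply the increment and accumulate the total transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Replacing step 2 with a solve of the elastic system driven by the correspondence forces is, roughly, what turns this rigid loop into the deformable registration described above.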
Towards Transcervical Ultrasound Image Guidance for Transoral Robotic Surgery
Purpose: Transoral robotic surgery (TORS) using the da Vinci surgical robot
is a new minimally invasive method to treat oropharyngeal tumors, but
it is a challenging operation. Augmented reality (AR) based on intra-operative
ultrasound (US) has the potential to enhance the visualization of the anatomy
and cancerous tumors to provide additional tools for decision-making in
surgery. Methods: We propose and carry out preliminary evaluations of a
US-guided AR system for TORS, with the transducer placed on the neck for a
transcervical view. Firstly, we perform a novel MRI-transcervical 3D US
registration study. Secondly, we develop a US-robot calibration method with an
optical tracker and an AR system to display the anatomy mesh model in the
real-time endoscope images inside the surgeon console. Results: Our AR system
reaches a mean projection error of 26.81 and 27.85 pixels for the projection
from the US to stereo cameras in a water bath experiment. The average target
registration error for MRI to 3D US is 8.90 mm for the 3D US transducer and
5.85 mm for freehand 3D US, and the average distance between the vessel
centerlines is 2.32 mm. Conclusion: We demonstrate the first proof-of-concept
transcervical US-guided AR system for TORS and the feasibility of
transcervical 3D US-MRI registration. Our results show that transcervical 3D
US is a promising technique for TORS image guidance.
Comment: 12 pages, 8 figures. Accepted by Information Processing for Computer Assisted Interventions (IPCAI 2023)