
    Deep Learning for Ultrasound Image Formation: CUBDL Evaluation Framework and Open Datasets

    Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier of the field, with much promise to balance both image quality and display speed. Despite this promise, one challenge in identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the challenge on ultrasound beamforming with deep learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used to train future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database.
The CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
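The gCNR metric cited above is defined as one minus the overlap of the echo-amplitude distributions inside and outside a target, so it is bounded in [0, 1] regardless of dynamic-range transformations. A minimal histogram-based sketch (the function name and bin count are our own choices, not taken from the CUBDL evaluation code):

```python
import numpy as np

def gcnr(region_in, region_out, bins=256):
    """Generalized contrast-to-noise ratio (gCNR) between two pixel regions.

    gCNR = 1 - overlap area of the two normalized amplitude histograms;
    1.0 means perfectly separable target/background distributions,
    0.0 means the distributions are identical.
    """
    lo = min(region_in.min(), region_out.min())
    hi = max(region_in.max(), region_out.max())
    # Histogram both regions over a shared range so the bins align.
    h_in, _ = np.histogram(region_in, bins=bins, range=(lo, hi))
    h_out, _ = np.histogram(region_out, bins=bins, range=(lo, hi))
    p_in = h_in / h_in.sum()
    p_out = h_out / h_out.sum()
    return 1.0 - np.minimum(p_in, p_out).sum()
```

Two well-separated amplitude distributions yield a gCNR near 1, while comparing a region with itself yields 0.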

    Online 4D ultrasound guidance for real-time motion compensation by MLC tracking

    PURPOSE: With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being successfully used for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these two promising techniques, online 4D ultrasound guidance and MLC tracking, in a phantom. METHODS: A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaption on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between marker position and MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories.
Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2%/2 mm γ-tests. RESULTS: The overall tracking system latency was 172 ms. The mean 2D root-mean-square tracking error was 1.03 mm (0.80 mm prostate, 1.31 mm lung). MLC tracking improved the dose delivery in all cases, with an overall reduction in the γ-failure rate of 91.2% (3%/3 mm) and 89.9% (2%/2 mm) compared to no motion compensation. Low modulation VMAT plans had no (3%/3 mm) or minimal (2%/2 mm) residual γ-failures, while tracking reduced the γ-failure rate from 17.4% to 2.8% (3%/3 mm) and from 33.9% to 6.5% (2%/2 mm) for plans with high modulation. CONCLUSIONS: Real-time 4D ultrasound tracking was successfully integrated with online MLC tracking for the first time. The developed framework showed an accuracy and latency comparable with other MLC tracking methods while holding the potential to measure and adapt to target motion, including rotation and deformation, noninvasively.
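The γ-test used above combines a dose-difference tolerance (e.g. 3%) with a distance-to-agreement tolerance (e.g. 3 mm): a reference point passes if some nearby evaluated point is close enough in both dose and position. A simplified 1-D global-gamma sketch follows; it is illustrative only, not the authors' biplanar diode array analysis, and the function name and tolerances are our own:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol, dist_tol):
    """1-D gamma analysis: fraction of reference points with gamma <= 1.

    dose_tol is an absolute dose tolerance (e.g. 3% of a reference dose)
    and dist_tol is the distance-to-agreement in the units of `positions`.
    """
    pass_count = 0
    for i, d_ref in enumerate(dose_ref):
        # Normalized dose difference and distance to every evaluated point.
        dd = (dose_eval - d_ref) / dose_tol
        dr = (positions - positions[i]) / dist_tol
        gamma = np.sqrt(dd ** 2 + dr ** 2).min()
        if gamma <= 1.0:
            pass_count += 1
    return pass_count / len(dose_ref)
```

A spatial shift smaller than the distance tolerance still passes (the distance term absorbs it), whereas a uniform dose error larger than the dose tolerance produces failures that no nearby point can rescue.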

    A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients

    The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise for diagnosing COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.

    Improved Endocardial Border Definition with Short-Lag Spatial Coherence (SLSC) Imaging

    Clutter is a problematic noise artifact in a variety of ultrasound applications. Clinical tasks complicated by the presence of clutter include detecting cancerous lesions in abdominal organs (e.g., livers, bladders) and visualizing endocardial borders to assess cardiovascular health. In this dissertation, an analytical expression for contrast loss due to clutter is derived, clutter is quantified in abdominal images, and sources of abdominal clutter are identified. Novel clutter reduction methods are also presented and tested in abdominal and cardiac images.

    One of the novel clutter reduction methods is Short-Lag Spatial Coherence (SLSC) imaging. Instead of applying a conventional delay-and-sum beamformer to measure the amplitude of received echoes and form B-mode images, the spatial coherence of received echoes is measured to form SLSC images. The world's first SLSC images of simulated, phantom, and in vivo data are presented herein. They demonstrate reduced clutter and improved contrast, contrast-to-noise, and signal-to-noise ratios compared to conventional B-mode images. In addition, the resolution characteristics of SLSC images are quantified and compared to resolution in B-mode images.

    A clinical study with 14 volunteers was conducted to demonstrate that SLSC imaging offers 19-33% improvement in the visualization of endocardial borders when the quality of B-mode images formed from the same echo data was poor. There were no statistically significant improvements in endocardial border visualization with SLSC imaging when the quality of matched B-mode images was medium to good.
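The SLSC beamformer described above sums normalized correlations between receive-element signals over a small number of short lags, rather than summing delayed echo amplitudes. A minimal per-pixel sketch, assuming focused (time-delayed) channel data over a short axial kernel; the array shapes and names are illustrative, not the dissertation's implementation:

```python
import numpy as np

def slsc_pixel(channel_data, max_lag):
    """Short-lag spatial coherence value for one image pixel.

    channel_data: (n_channels, kernel_len) array of time-delayed RF samples,
    one short axial kernel per receive element.
    Returns the sum over lags m = 1..max_lag of the average normalized
    correlation between elements separated by m.
    """
    n_ch = channel_data.shape[0]
    slsc = 0.0
    for m in range(1, max_lag + 1):
        r_m = 0.0
        for i in range(n_ch - m):
            a, b = channel_data[i], channel_data[i + m]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
            if denom > 0:
                r_m += np.dot(a, b) / denom
        slsc += r_m / (n_ch - m)  # average correlation at this lag
    return slsc
```

A perfectly coherent wavefront (identical waveform on every element) gives a correlation of 1 at every lag, so the pixel value approaches max_lag, while spatially incoherent clutter or noise averages toward zero; this gap is what suppresses clutter in SLSC images.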