
    Lung ultrasound education: simulation and hands-on

    COVID-19 can cause damage to the lung, which can result in progressive respiratory failure and potential death. Chest radiography and CT are the imaging tools used to diagnose and monitor patients with COVID-19. Lung ultrasound (LUS) is being used in some areas during COVID-19 to aid decision-making and improve patient care. Its increased use could also help improve existing practice for patients with suspected COVID-19 or other lung disease. A limitation of LUS is that it requires practitioners with sufficient competence to ensure timely, safe, and diagnostic clinical/imaging assessments. This commentary discusses the role and governance of LUS during and beyond the COVID-19 pandemic, and how increased education and training in this discipline can be undertaken given the restrictions on imaging highly infectious patients. The use of simulation, whether through numerical methods or dedicated scan trainers, together with machine learning algorithms could further improve the accuracy of LUS, whilst helping to reduce its learning curve for greater uptake in clinical practice.

    Detection of Line Artefacts in Lung Ultrasound Images of COVID-19 Patients via Non-Convex Regularization

    In this paper, we present a novel method for line artefact quantification in lung ultrasound (LUS) images of COVID-19 patients. We formulate this as a non-convex regularisation problem involving a sparsity-enforcing, Cauchy-based penalty function and the inverse Radon transform. We employ a simple local maxima detection technique in the Radon transform domain, associated with known clinical definitions of line artefacts. Despite being non-convex, the proposed formulation is guaranteed to converge through our proposed Cauchy proximal splitting (CPS) method and accurately identifies both horizontal and vertical line artefacts in LUS images. In order to reduce the number of false and missed detections, our method includes a two-stage validation mechanism, which is performed in both the Radon and image domains. We evaluate the performance of the proposed method in comparison to the current state-of-the-art B-line identification method and show a considerable performance gain, with 87% of B-lines correctly detected in LUS images of nine COVID-19 patients. In addition, owing to its fast convergence, our proposed method is readily applicable for processing LUS image sequences.
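
    The Cauchy proximal splitting optimisation cannot be reproduced from the abstract alone, but its core geometric idea, that straight-line artefacts map to local maxima in the Radon domain, can be sketched in a few lines. Below is a minimal illustration in Python, assuming scikit-image is available; the function name, angle grid, and peak parameters are hypothetical, and the non-convex Cauchy penalty and two-stage validation are omitted.

```python
# A minimal sketch of Radon-domain line detection, not the authors' full
# Cauchy proximal splitting (CPS) algorithm: bright straight lines in a
# lung-ultrasound frame appear as local maxima of its Radon transform.
import numpy as np
from skimage.transform import radon
from skimage.feature import peak_local_max

def detect_line_artefacts(image, n_lines=5):
    """Return (angle_deg, offset_px) pairs for the strongest line artefacts."""
    theta = np.arange(180.0)                   # projection angles in degrees
    sinogram = radon(image, theta=theta, circle=False)
    peaks = peak_local_max(sinogram, min_distance=10, num_peaks=n_lines)
    lines = []
    for row, col in peaks:
        offset = row - sinogram.shape[0] // 2  # signed distance from centre
        angle = theta[col]                     # ~0 deg: vertical (B-line-like);
        lines.append((angle, offset))          # ~90 deg: horizontal (A-line-like)
    return lines
```

    In the actual method, candidate peaks like these would then pass through the Radon- and image-domain validation stages described above.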

    Localizing B-lines in lung ultrasonography by weakly-supervised deep learning, in-vivo results

    Lung ultrasound (LUS) is nowadays gaining growing attention from both the clinical and technical world. Of particular interest are several imaging artifacts, e.g., A-line and B-line artifacts. While A-lines are a visual pattern that essentially represents a healthy lung surface, B-line artifacts correlate with a wide range of pathological conditions affecting the lung parenchyma. In fact, the appearance of B-lines correlates with an increase in extravascular lung water, interstitial lung diseases, cardiogenic and non-cardiogenic lung edema, interstitial pneumonia, and lung contusion. Detection and localization of B-lines in a LUS video are therefore tasks of great clinical interest, with accurate, objective, and timely evaluation being critical. This is particularly true in environments such as emergency units, where timely decisions may be crucial. In this work, we present and describe a method aimed at supporting clinicians by automatically detecting and localizing B-lines in an ultrasound scan. To this end, we employ modern deep learning strategies and train a fully convolutional neural network to perform this task on B-mode images of dedicated ultrasound phantoms in-vitro, and on patients in-vivo. Accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of 0.917, 0.915, 0.918, 0.950, and 0.864, respectively, were achieved in-vitro. Using a clinical system in-vivo, these statistics were 0.892, 0.871, 0.930, 0.798, and 0.958, respectively. We moreover calculate neural attention maps that visualize which components in the image triggered the network, thereby offering simultaneous weakly-supervised localization. These promising results confirm the capability of the proposed method to identify and localize the presence of B-lines in clinical lung ultrasonography.
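
    As a rough illustration of this class of approach, the sketch below (assuming PyTorch; the architecture, layer sizes, and names are hypothetical rather than the authors' exact network) shows how a small fully convolutional classifier with global average pooling produces both a frame-level B-line score and a coarse attention map that serves as weakly-supervised localization.

```python
# A minimal sketch, not the paper's network: a fully convolutional
# classifier whose last feature map doubles as a B-line attention map.
import torch
import torch.nn as nn

class BLineFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)     # 1x1 conv: per-location logit

    def forward(self, x):
        fmap = self.head(self.features(x))  # (B, 1, H/4, W/4) spatial map
        logit = fmap.mean(dim=(2, 3))       # global average pooling
        return logit, fmap                  # frame score + attention map

model = BLineFCN().eval()
frame = torch.randn(1, 1, 128, 128)         # one B-mode frame (toy input)
with torch.no_grad():
    logit, attention = model(frame)
prob = torch.sigmoid(logit)                 # P(B-line present in frame)
# High values in 'attention' indicate regions that drove the decision.
```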

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision-making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook on future opportunities for XAI in medical image analysis.
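
    Among the post-hoc techniques such a survey typically covers, plain gradient saliency is one of the simplest to state. A minimal sketch, assuming PyTorch and any classifier that maps an image batch to class logits; the function name is illustrative.

```python
# A minimal sketch of vanilla gradient saliency, one of many XAI methods:
# |d(class score)/d(pixel)| shows which pixels most affect the prediction.
import torch

def gradient_saliency(model, image, target_class):
    """image: (1, C, H, W) tensor; returns a (C, H, W) saliency map."""
    model.eval()
    image = image.clone().requires_grad_(True)  # make pixels differentiable
    score = model(image)[0, target_class]       # scalar logit for the class
    score.backward()                            # gradients w.r.t. the pixels
    return image.grad.abs().squeeze(0)
```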

    Toward Clinical Translation of Microvascular Ultrasound Imaging: Advancements in Superharmonic Ultrasound Technology

    Ultrasound imaging is perhaps the safest, most affordable, and most available biomedical imaging modality. However, it suffers from poor specificity for cancer detection, particularly in breast cancer, which affects one in eight women and leads to a high incidence of unnecessary biopsies from inconclusive screening. It is well-known that malignant cancers are accompanied by abnormal angiogenesis, leading to tortuous and disorganized vasculature. Acoustic angiography, a microvascular contrast-enhanced ultrasound technique, was developed to visualize and harness this aberrant vasculature as a biomarker of malignancy. This technique applies a dual-frequency superharmonic strategy to isolate intravascular microbubble contrast from the surrounding tissue with low-frequency transmit and high-frequency receive, resulting in high-resolution microvascular maps. Preclinically, acoustic angiography has been a valuable tool for differentiating tumors from healthy tissue by quantifying vascular features like tortuosity. The preclinical success of this technique is attributed to the single-element dual-frequency transducers used, which provide contrast sensitivity and focal depth best suited for imaging small animals at high microbubble doses. In an exploratory clinical study in which these transducers were used to image the human breast, limited imaging depth, low sensitivity, and motion artifacts significantly degraded image quality. For acoustic angiography to be successfully translated to clinical use, the technique must be optimized for clinical imaging. In this dissertation, we explore three ways in which acoustic angiography may be improved for the clinic. First, we evaluate microbubble contrast agents to determine the composition that maximizes superharmonic generation. The results indicate that lipid-shelled microbubbles with perfluorocarbon cores, like the commercial agent DEFINITY, produce the greatest superharmonic signal. Then, we present a novel transducer, a stacked dual-frequency array, as the next-generation device for acoustic angiography and demonstrate improvements in imaging depth and sensitivity of up to 10 mm and 13 dB, respectively. We go on to apply this device in a clinical pilot study and elucidate the challenges that remain to be overcome for clinical acoustic angiography. Finally, we propose custom simulations for superharmonic imaging and identify optimal frequency combinations for imaging at depths up to 8 cm, which can be used to design dedicated clinical dual-frequency arrays in the future.
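
    The receive side of the dual-frequency strategy can be illustrated schematically: transmit low, then band-pass the received RF well above the transmit band, so tissue echoes concentrated at the fundamental and low harmonics are suppressed while microbubble superharmonics survive. A minimal sketch, assuming NumPy/SciPy; all frequencies, the filter design, and the toy signal are assumptions for illustration, not values from the dissertation.

```python
# A schematic sketch of superharmonic filtering: low-frequency transmit,
# high-frequency receive. The toy echo mixes a strong tissue component at
# the (assumed) 2 MHz transmit with a weak microbubble superharmonic.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100e6                          # RF sampling rate, Hz (assumed)
f_tx = 2e6                          # low-frequency transmit, Hz (assumed)
t = np.arange(0, 20e-6, 1 / fs)

tissue = np.sin(2 * np.pi * f_tx * t)             # fundamental tissue echo
bubble = 0.05 * np.sin(2 * np.pi * 5 * f_tx * t)  # superharmonic at 10 MHz
rx = tissue + bubble

# "High-frequency receive": keep only energy well above the transmit band.
sos = butter(4, [8e6, 15e6], btype="bandpass", fs=fs, output="sos")
superharmonic = sosfiltfilt(sos, rx)  # tissue fundamental is rejected
```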