3,005 research outputs found

    Deep Learning in Cardiology

    Full text link
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
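The "layers that transform the data non-linearly" described above can be sketched as a stack of weighted non-linear transforms. This is a minimal illustrative forward pass, not the review's method; the layer sizes and weights are arbitrary assumptions.

```python
import numpy as np

def relu(x):
    # Element-wise non-linearity; stacking such layers is what lets the
    # network build hierarchical representations of the input.
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass a feature vector through a stack of non-linear layers."""
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

rng = np.random.default_rng(0)
# Hypothetical structured record: 8 tabular features mapped to a
# 4-dimensional and then a 2-dimensional learned representation.
layers = [rng.standard_normal((4, 8)), rng.standard_normal((2, 4))]
z = forward(rng.standard_normal(8), layers)
print(z.shape)  # (2,)
```

In a trained model the weight matrices would be fitted to the clinical data rather than drawn at random; the shape of the computation is the same.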

    Super-Resolution Contrast Enhanced Ultrasound Methodology for the Identification of in-Vivo Vascular Dynamics in 2D

    Get PDF
Objectives: The aim of this study was to provide an ultrasound-based super-resolution methodology that can be implemented using clinical 2-dimensional ultrasound equipment and standard contrast-enhanced ultrasound modes. In addition, the aim was to achieve this under true-to-life patient imaging conditions, including realistic examination times of a few minutes and adequate image penetration depths that can be used to scan entire organs, without sacrificing current super-resolution ultrasound imaging performance. Methods: Standard contrast-enhanced ultrasound was used along with bolus or infusion injections of SonoVue (Bracco, Geneva, Switzerland) microbubble (MB) suspensions. An image analysis methodology, translated from light microscopy algorithms, was developed for use with ultrasound contrast imaging video data. New features tailored for ultrasound contrast image data were developed for MB detection and segmentation, so that the algorithm can deal with single and overlapping MBs. The method was tested initially on synthetic data, then with a simple microvessel phantom, and then with in vivo ultrasound contrast video loops from sheep ovaries. Tracks detailing the vascular structure and the corresponding velocity map of the sheep ovary were reconstructed. Images acquired from light microscopy, optical projection tomography, and optical coherence tomography were compared with the vasculature network revealed in the ultrasound contrast data. The final method was applied to clinical prostate data as a proof of principle. Results: Features of the ovary identified in the optical modalities mentioned previously were also identified in the ultrasound super-resolution density maps. Follicular areas, follicle wall, vessel diameter, and tissue dimensions were very similar. An approximately 8.5-fold resolution gain was demonstrated in vessel width, as vessels of width down to 60 μm were detected and verified (λ = 514 μm).
The best agreement was found between ultrasound measurements and optical coherence tomography, with a 10% difference in the measured vessel widths, whereas ex vivo microscopy measurements were significantly lower, by 43% on average. The results were mostly achieved using video loops under 2 minutes in duration that included respiratory motion. A feasibility study on a human prostate showed good agreement between the density and velocity ultrasound maps and the histological evaluation of the location of a tumor. Conclusions: The feasibility of a 2-dimensional contrast-enhanced ultrasound-based super-resolution method was demonstrated using synthetic, in vitro, and in vivo animal data. The method reduces examination times to a few minutes using state-of-the-art ultrasound equipment and can provide super-resolution maps for an entire prostate with resolution similar to that achieved in other studies.
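The core localise-and-accumulate idea behind contrast-enhanced super-resolution can be sketched as follows. This is a deliberately simplified stand-in (3x3 local-maximum peak picking on each frame, counts accumulated into a density map); the paper's actual MB detection, overlap handling, and tracking are considerably more elaborate.

```python
import numpy as np

def detect_microbubbles(frame, threshold):
    """Return (row, col) positions of pixels that exceed the threshold
    and are local maxima of their 3x3 neighbourhood (isolated MBs)."""
    peaks = []
    for r in range(1, frame.shape[0] - 1):
        for c in range(1, frame.shape[1] - 1):
            patch = frame[r - 1:r + 2, c - 1:c + 2]
            if frame[r, c] >= threshold and frame[r, c] == patch.max():
                peaks.append((r, c))
    return peaks

def density_map(frames, threshold):
    """Accumulate per-frame MB detections over a video loop into a
    density map; vessels emerge as high-count ridges."""
    dmap = np.zeros(frames[0].shape, dtype=int)
    for f in frames:
        for r, c in detect_microbubbles(f, threshold):
            dmap[r, c] += 1
    return dmap
```

Velocity maps would additionally require linking detections between consecutive frames into tracks, which this sketch omits.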

    Learning-based Framework for US Signals Super-resolution

    Full text link
We propose a novel deep-learning framework for super-resolution ultrasound images and videos in terms of spatial resolution and line reconstruction. We up-sample the acquired low-resolution image through a vision-based interpolation method; then, we train a learning-based model to improve the quality of the up-sampling. We qualitatively and quantitatively test our model on images of different anatomical districts (e.g., cardiac, obstetric) and with different up-sampling resolutions (i.e., 2X, 4X). Our method improves the median PSNR with respect to SOTA methods by 1.7% on obstetric 2X raw images, 6.1% on cardiac 2X raw images, and 4.4% on abdominal 4X raw images; it also increases the number of pixels with a low prediction error by 9.0% on obstetric 4X raw images, 5.2% on cardiac 4X raw images, and 6.2% on abdominal 4X raw images. The proposed method is then applied to the spatial super-resolution of 2D videos by optimising the sampling of the lines acquired by the probe in terms of acquisition frequency. Our method specialises trained networks to predict the high-resolution target through the design of the network architecture and the loss function, taking into account the anatomical district and the up-sampling factor and exploiting a large ultrasound data set. The use of deep learning on large data sets overcomes the limitations of vision-based algorithms, which are general and do not encode the characteristics of the data. Furthermore, the data set can be enriched with images selected by medical experts to further specialise the individual networks. Through learning and high-performance computing, our super-resolution is specialised to different anatomical districts by training multiple networks. Furthermore, the computational demand is shifted to centralised hardware resources, with real-time execution of the network's prediction on local devices.
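The two building blocks of the evaluation above can be sketched directly: the PSNR metric used for the comparisons, and a coarse interpolation-based up-sampling step of the kind the learned model then refines. Nearest-neighbour interpolation here is an assumption standing in for the unspecified vision-based method, and the refinement network itself is omitted.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth
    high-resolution image and a super-resolved estimate."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def upsample2x_nearest(img):
    """Coarse 2X up-sampling (nearest neighbour); a learned model
    would be trained to improve the quality of this estimate."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
```

A percentage gain in median PSNR, as reported above, would then be computed over the per-image PSNR values of two methods on the same test set.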

Hemodynamic Quantifications By Contrast-Enhanced Ultrasound: From In-Vitro Modelling To Clinical Validation

    Get PDF

    Constrained CycleGAN for Effective Generation of Ultrasound Sector Images of Improved Spatial Resolution

    Full text link
Objective. A phased or a curvilinear array produces ultrasound (US) images with a sector field of view (FOV), which inherently exhibits spatially-varying image resolution, with inferior quality in the far zone and towards the two sides azimuthally. Sector US images with improved spatial resolution are favorable for accurate quantitative analysis of large and dynamic organs, such as the heart. Therefore, this study aims to translate US images with spatially-varying resolution to ones with less spatially-varying resolution. CycleGAN has been a prominent choice for unpaired medical image translation; however, it neither guarantees structural consistency nor preserves backscattering patterns between input and generated images for unpaired US images. Approach. To circumvent this limitation, we propose a constrained CycleGAN (CCycleGAN), which directly performs US image generation with unpaired images acquired by different ultrasound array probes. In addition to the conventional adversarial and cycle-consistency losses of CycleGAN, CCycleGAN introduces an identical loss and a correlation coefficient loss based on intrinsic US backscattered signal properties to constrain structural consistency and backscattering patterns, respectively. Instead of post-processed B-mode images, CCycleGAN uses envelope data directly obtained from beamformed radio-frequency signals without any other non-linear post-processing. Main Results. In vitro phantom results demonstrate that CCycleGAN successfully generates images with improved spatial resolution as well as higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with benchmarks. Significance. CCycleGAN-generated US images of the in vivo human beating heart further facilitate higher-quality heart wall motion estimation than benchmark-generated ones, particularly in deep regions.
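The correlation-coefficient constraint can be illustrated with a Pearson-based loss term that is small when the backscattering pattern of the generated envelope data matches the input. This is a sketch only: the loss weights and the exact formulation are assumptions, not the paper's values.

```python
import numpy as np

def correlation_loss(x, y, eps=1e-8):
    """1 minus the Pearson correlation between input and generated
    envelope data; near 0 when backscattering patterns are preserved."""
    x = x.ravel() - x.mean()
    y = y.ravel() - y.mean()
    r = (x @ y) / (np.sqrt((x @ x) * (y @ y)) + eps)
    return 1.0 - r

def total_loss(adv, cyc, idt, corr,
               lam_cyc=10.0, lam_idt=5.0, lam_corr=1.0):
    # Weighted sum in the spirit of CycleGAN-style objectives:
    # adversarial + cycle-consistency + identity-type + correlation
    # terms. The weights here are illustrative.
    return adv + lam_cyc * cyc + lam_idt * idt + lam_corr * corr
```

During training, the generator would be updated to minimise this combined objective, so that images gain resolution without losing the structural and backscattering content of the source.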