87 research outputs found

    Improving ultrasound video classification: an evaluation of novel deep learning methods in echocardiography

    Echocardiography is the commonest medical ultrasound examination, but automated interpretation is challenging and hinges on correct recognition of the 'view' (imaging plane and orientation). Current state-of-the-art methods for identifying the view computationally involve 2-dimensional convolutional neural networks (CNNs), but these merely classify individual frames of a video in isolation, and ignore information describing the movement of structures throughout the cardiac cycle. Here we explore the efficacy of novel CNN architectures, including time-distributed networks and two-stream networks, which are inspired by advances in human action recognition. We demonstrate that these new architectures more than halve the error rate of traditional CNNs, from 8.1% to 3.9%. These advances in accuracy may be due to these networks' ability to track the movement of specific structures, such as heart valves, throughout the cardiac cycle. Finally, we show that the accuracies of these new state-of-the-art networks are approaching expert agreement (3.6% discordance), with a similar pattern of discordance between views.
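
    The abstract above describes applying a shared 2D CNN to every frame of the echo clip and aggregating the per-frame features over the cardiac cycle (the two-stream variant would add an optical-flow stream). As a rough, hedged illustration only, the PyTorch sketch below shows one way a time-distributed view classifier can be wired up; the ResNet-18 backbone, GRU aggregator, 16-frame clips, and 14 view classes are placeholder assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a "time-distributed" view classifier: a shared 2D CNN
# encodes each frame, and the per-frame embeddings are summarised over the
# cardiac cycle before classification. Backbone, clip length and class count
# are assumptions, not the published architecture.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TimeDistributedViewClassifier(nn.Module):
    def __init__(self, num_views: int = 14, feat_dim: int = 512):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                    # keep the 512-d pooled features
        self.backbone = backbone
        self.temporal = nn.GRU(feat_dim, 256, batch_first=True)
        self.head = nn.Linear(256, num_views)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        frames = clips.view(b * t, c, h, w)
        feats = self.backbone(frames).view(b, t, -1)   # (b, t, feat_dim)
        _, hidden = self.temporal(feats)               # summarise the cycle
        return self.head(hidden[-1])                   # (b, num_views)

if __name__ == "__main__":
    model = TimeDistributedViewClassifier()
    dummy_clip = torch.randn(2, 16, 3, 224, 224)       # 16 frames per clip (assumed)
    print(model(dummy_clip).shape)                     # torch.Size([2, 14])
```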

    Neural architecture search of echocardiography view classifiers

    Purpose: Echocardiography is the most commonly used modality for assessing the heart in clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from different orientations and positions, thereby creating different viewpoints for assessing cardiac function. The determination of the probe viewpoint forms an essential step in automatic echocardiographic image analysis. Approach: In this study, convolutional neural networks are used for the automated identification of 14 different anatomical echocardiographic views (more than in any previous study) in a dataset of 8732 videos acquired from 374 patients. A differentiable architecture search approach was used to design small neural network architectures for rapid inference while maintaining high accuracy. The impact of image quality and resolution, the size of the training dataset, and the number of echocardiographic view classes on the efficacy of the models was also investigated. Results: In contrast to the deeper classification architectures, the proposed models had a significantly lower number of trainable parameters (up to 99.9% reduction), achieved comparable classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1% to 95.1%), and delivered real-time performance with an inference time per image of 3.6 to 12.6 ms. Conclusion: Compared with standard classification neural network architectures, the proposed models are faster and achieve comparable classification performance. They also require less training data. Such models can be used for real-time detection of the standard views.
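
    The abstract does not disclose the searched architecture, so the sketch below is only a hypothetical lightweight view classifier built from depthwise-separable convolutions; it illustrates how a compact model's trainable-parameter count and per-image inference time (the quantities reported above) can be checked in PyTorch. The layer sizes, grayscale input, and 14 output classes are assumptions.

```python
# Hypothetical lightweight view-classification CNN, used only to illustrate how a
# compact architecture's parameter count and per-image inference time can be
# measured; it is not the architecture found by the differentiable search.
import time
import torch
import torch.nn as nn

def sep_conv(cin, cout, stride=1):
    # depthwise-separable convolution: cheap in parameters, common in NAS cells
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, stride, 1, groups=cin, bias=False),
        nn.Conv2d(cin, cout, 1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class TinyViewNet(nn.Module):
    def __init__(self, num_views: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            sep_conv(1, 16, stride=2), sep_conv(16, 32, stride=2),
            sep_conv(32, 64, stride=2), sep_conv(64, 128, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_views)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = TinyViewNet().eval()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(1, 1, 224, 224)                    # single grayscale echo frame
    with torch.no_grad():
        start = time.perf_counter()
        model(x)
        elapsed_ms = 1000 * (time.perf_counter() - start)
    print(f"params={n_params}, inference={elapsed_ms:.1f} ms")
```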

    Detecting Heart Disease from Multi-View Ultrasound Images via Supervised Attention Multiple Instance Learning

    Aortic stenosis (AS) is a degenerative valve condition that causes substantial morbidity and mortality. The condition is under-diagnosed and under-treated. In clinical practice, AS is diagnosed by expert review of transthoracic echocardiography, which produces dozens of ultrasound images of the heart; only some of these views show the aortic valve. To automate screening for AS, deep networks must learn to mimic a human expert's ability to identify views of the aortic valve and then aggregate across these relevant images to produce a study-level diagnosis. We find that previous approaches to AS detection yield insufficient accuracy because they rely on inflexible averages across images. We further find that off-the-shelf attention-based multiple instance learning (MIL) performs poorly. We contribute a new end-to-end MIL approach with two key methodological innovations. First, a supervised attention technique guides the learned attention mechanism to favor relevant views. Second, a novel self-supervised pretraining strategy applies contrastive learning to the representation of the whole study instead of individual images, as is commonly done in prior literature. Experiments on an open-access dataset and an external validation set show that our approach yields higher accuracy while reducing model size. Comment: multiple-instance learning; self-supervised learning; semi-supervised learning; medical imaging.
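
    As a minimal sketch of the attention-based MIL idea described above, the code below pools per-image embeddings of one study with a learned attention distribution and adds a simple supervised-attention term that steers attention toward images labelled as relevant views. The feature dimension, number of diagnosis classes, loss weighting, and the exact form of the supervised-attention loss are assumptions, not the paper's formulation.

```python
# Minimal sketch of attention-based multiple-instance learning over the images of
# one echo study, with a supervised attention loss that pushes attention toward
# images labelled as showing the relevant (e.g. aortic-valve) view. The feature
# extractor, dimensions and loss weighting are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128, num_classes: int = 3):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, instance_feats: torch.Tensor):
        # instance_feats: (num_images_in_study, feat_dim)
        scores = self.attn(instance_feats).squeeze(-1)           # (n,)
        weights = torch.softmax(scores, dim=0)                   # attention per image
        study_feat = (weights.unsqueeze(-1) * instance_feats).sum(0)
        return self.classifier(study_feat), weights

def supervised_attention_loss(weights, relevant_mask, eps: float = 1e-8):
    # Encourage the attention distribution to match a uniform distribution over
    # the images marked as relevant views (a simple cross-entropy surrogate).
    target = relevant_mask.float()
    target = target / (target.sum() + eps)
    return -(target * torch.log(weights + eps)).sum()

if __name__ == "__main__":
    feats = torch.randn(12, 512)                 # 12 images in one study (assumed)
    relevant = torch.tensor([1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], dtype=torch.bool)
    model = AttentionMIL()
    logits, attn = model(feats)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([1])) \
           + 0.5 * supervised_attention_loss(attn, relevant)
    loss.backward()
    print(logits.shape, attn.shape, float(loss))
```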

    Automated assessment of echocardiographic image quality using deep convolutional neural networks

    Myocardial ischemia tops the list of causes of death around the globe, and its diagnosis and early detection rely on clinical echocardiography. Although echocardiography offers the great advantage of non-intrusive, low-cost point-of-care diagnosis, its image quality is inherently subjective, with a strong dependence on the operator's experience level and acquisition skill. In some countries, echocardiography specialists must undertake supplementary years of training to achieve 'gold standard' free-hand acquisition skill; without this, the reliability of the echocardiogram suffers and the possibility of misdiagnosis increases. These drawbacks pose significant challenges to adopting echocardiography as an authoritative modality for cardiac diagnosis. The prevailing, currently adopted solution is to carry out quality evaluation manually, whereby an echocardiography specialist visually inspects several acquired images and makes clinical decisions about their perceived quality and prognostic value. This is a lengthy process subject to variability of opinion, which consequently affects diagnostic responses. The goal of this research is to provide a multi-disciplinary, state-of-the-art solution that allows objective quality assessment of echocardiograms and guarantees the reliability of clinical quantification processes. Computer graphics processing unit (GPU) simulations, medical image analysis, and deep convolutional neural network models were employed to achieve this goal. From a finite pool of echocardiographic patient datasets, 1650 random samples of echocardiogram cine-loops from different patients, aged 17 to 85 years, who had undergone echocardiography between 2010 and 2020, were evaluated. We defined a set of pathological and anatomical criteria of image quality by which apical four-chamber and parasternal long-axis frames can be evaluated, with feasibility for real-time optimization. The selected samples were annotated for multivariate model development and validation of the predicted quality score per frame. The outcome is a robust artificial intelligence algorithm that indicates each frame's quality rating, provides real-time visualisation of the elements of quality, and updates quality optimization in real time. Prediction errors of 0.052, 0.062, 0.069, and 0.056 were achieved for the visibility, clarity, depth-gain, and foreshortening attributes, respectively. The model achieved a combined error rate of 3.6% with an average prediction speed of 4.24 ms per frame. The novel method establishes a superior approach to two-dimensional image quality estimation, assessment, and clinical adequacy at acquisition of the echocardiogram, prior to quantification and diagnosis of myocardial infarction.
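
    A hedged sketch of the per-frame quality scoring described above: a small CNN regresses one score in [0, 1] for each of the four named attributes (visibility, clarity, depth-gain, foreshortening). The network size, input resolution, and loss are assumptions and are not the model developed in the thesis.

```python
# Illustrative per-frame quality regressor: a small CNN predicts one score in
# [0, 1] for each quality attribute named in the abstract. Backbone size and
# training details are assumptions.
import torch
import torch.nn as nn

ATTRIBUTES = ["visibility", "clarity", "depth_gain", "foreshortening"]

class QualityRegressor(nn.Module):
    def __init__(self, num_attrs: int = len(ATTRIBUTES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_attrs)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 1, H, W); output: (batch, num_attrs), each in [0, 1]
        return torch.sigmoid(self.head(self.encoder(frame).flatten(1)))

if __name__ == "__main__":
    model = QualityRegressor()
    scores = model(torch.randn(1, 1, 256, 256))
    print(dict(zip(ATTRIBUTES, scores.squeeze(0).tolist())))
    # Training would minimise e.g. nn.MSELoss() against expert-annotated scores.
```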

    A Study of Spatio-Temporal Learning Approaches Using Echocardiograms for Risk Assessment of Thoracic Aortic Aneurysms

    Aortic dissection and rupture are complications that occur when the integrity of the aortic tissue is compromised, leading to fatal consequences. Once an aortic dissection takes place, 41% of patients do not even make it to the hospital. Unfortunately, the diagnostic outlook is not much brighter: it is estimated that 40% of patients presenting with aortic dissection do not meet the current diagnostic criteria. This thesis aims to assess the risk of dissection and rupture of thoracic aortic aneurysms from patients' echocardiograms. To do this, we study the effects of spatial and temporal learning of the heart's movement in the echocardiograms. We investigate purely visual learning from still 2D images extracted from the echocardiogram sequence, then assess temporal learning across frames in the echocardiogram video, both by incorporating 3D convolutions over the whole sequence and by aggregating the visually learned content of each frame over the sequence length. We also experiment with a visual attention mechanism to filter the visual context. Finally, we study the effect of adding a tabular data learning stream to the architecture, which learns from the patient's tabular information and incorporates it into the best-performing model. The results of this thesis, although not conclusive, suggest that temporal dependencies are present between echocardiogram frames throughout the video, which points to the diagnostic importance of analyzing the movement of the beating heart tissue through time.
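
    The sketch below illustrates the fusion described above: a 3D convolutional branch over the echo clip provides spatio-temporal features, a small MLP encodes the patient's tabular data, and the two embeddings are concatenated for risk classification. All dimensions, the number of tabular features, and the fusion scheme are assumptions, not the thesis architecture.

```python
# Illustrative two-stream risk model: a 3D-convolutional branch over the echo clip
# fused with an MLP over the patient's tabular data. Dimensions and fusion scheme
# are assumptions for the sketch only.
import torch
import torch.nn as nn

class SpatioTemporalTabularModel(nn.Module):
    def __init__(self, num_tabular: int = 8, num_classes: int = 2):
        super().__init__()
        self.video_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.tabular_branch = nn.Sequential(
            nn.Linear(num_tabular, 32), nn.ReLU(inplace=True), nn.Linear(32, 16),
        )
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, clip: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 1, frames, H, W); tabular: (batch, num_tabular)
        v = self.video_branch(clip).flatten(1)     # spatio-temporal embedding
        t = self.tabular_branch(tabular)           # patient-record embedding
        return self.head(torch.cat([v, t], dim=1))

if __name__ == "__main__":
    model = SpatioTemporalTabularModel()
    clip = torch.randn(2, 1, 16, 112, 112)         # 16-frame grayscale clip (assumed)
    tab = torch.randn(2, 8)
    print(model(clip, tab).shape)                  # torch.Size([2, 2])
```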

    Automated Echocardiographic Image Interpretation Using Artificial Intelligence

    In addition to remaining one of the leading causes of global mortality, cardiovascular disease has a significant impact on overall health, well-being, and life expectancy. Therefore, early detection of anomalies in cardiac function has become essential for early treatment and thereby a reduction in mortality. Echocardiography is the most commonly used modality for evaluating the structure and function of the heart. Analysis of echocardiographic images plays an important role in clinical practice in assessing cardiac morphology and function and thereby reaching a diagnosis. The interpretation of echocardiographic images is considered challenging for several reasons. Manual annotation is still daily work in the clinical routine due to the lack of reliable automatic interpretation methods, leading to time-consuming tasks that are prone to intra- and inter-observer variability. Echocardiographic images also inherently suffer from a high level of noise and poor quality. Therefore, although several studies have attempted to automate the process, this remains a challenging task, and improving the accuracy of automatic echocardiography interpretation is an ongoing field. Advances in artificial intelligence and deep learning can help to construct an automated, scalable pipeline for the steps of echocardiographic image interpretation, including view classification, phase detection, image segmentation with a focus on border detection, quantification of structure, and measurement of clinical markers. This thesis aims to develop optimised automated methods for three individual steps forming part of an echocardiographic exam, namely view classification, left ventricle segmentation, and quantification and measurement of left ventricle structure. Various neural architecture search methods were employed to design efficient neural network architectures for the above tasks. Finally, an optimisation-based speckle tracking echocardiography algorithm was proposed to estimate the myocardial tissue velocities and cardiac deformation. The algorithm was adopted to measure cardiac strain, which is used for detecting myocardial ischaemia. All proposed techniques were compared with existing state-of-the-art methods. To this end, publicly available patient datasets, as well as two private datasets provided by the clinical partners to this project, were used for development and comprehensive performance evaluation of the proposed techniques. Results demonstrated the feasibility of using automated tools for reliable echocardiographic image interpretation, which can serve as assistive tools for clinicians in obtaining clinical measurements.
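
    As a loose, simplified illustration of speckle tracking and strain measurement (not the optimisation-based algorithm proposed in the thesis), the sketch below matches a patch around a myocardial point between consecutive frames by normalised cross-correlation and computes Lagrangian strain from a change in segment length; the patch size, search window, and synthetic test frames are assumptions.

```python
# Very simplified block-matching "speckle tracking" stand-in: a patch around a
# myocardial point in frame t is matched in frame t+1 by normalised
# cross-correlation, and strain is computed from segment-length change.
import numpy as np

def track_point(frame0, frame1, pt, half=8, search=6):
    y, x = pt
    ref = frame0[y - half:y + half + 1, x - half:x + half + 1]
    ref = (ref - ref.mean()) / (ref.std() + 1e-8)
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1]
            cand = (cand - cand.mean()) / (cand.std() + 1e-8)
            score = float((ref * cand).mean())      # normalised cross-correlation
            if score > best:
                best, best_dyx = score, (dy, dx)
    return (y + best_dyx[0], x + best_dyx[1])

def lagrangian_strain(length_t, length_0):
    # strain = (L - L0) / L0, e.g. along a myocardial segment between tracked points
    return (length_t - length_0) / length_0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame0 = rng.random((128, 128))
    frame1 = np.roll(frame0, shift=(2, 1), axis=(0, 1))     # synthetic 2 px / 1 px motion
    print(track_point(frame0, frame1, (64, 64)))            # -> (66, 65) for this shift
    print(lagrangian_strain(length_t=18.5, length_0=20.0))  # -> -0.075 (shortening)
```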

    Artificial Intelligence and Echocardiography

    Artificial intelligence (AI) is evolving in the field of diagnostic medical imaging, including echocardiography. Although the dynamic nature of echocardiography presents challenges beyond those of static images from X-ray, computed tomography, magnetic resonance, and radioisotope imaging, AI has influenced all steps of echocardiography, from image acquisition to automatic measurement and interpretation. Considering that echocardiography is often affected by inter-observer variability and shows a strong dependence on the level of experience, AI could be extremely advantageous in minimizing observer variation and providing reproducible measures, enabling accurate diagnosis. Currently, most reported AI applications in echocardiographic measurement have focused on improved image acquisition and automation of repetitive and tedious tasks; however, the role of AI applications should not be limited to conventional processes. Rather, AI could provide clinically important insights from subtle and non-specific data, such as changes in myocardial texture in patients with myocardial disease. Recent initiatives to develop large echocardiographic databases can facilitate the development of AI applications. The ultimate goal of applying AI to echocardiography is automation of the entire process of echocardiogram analysis. Once automatic analysis becomes reliable, workflows in clinical echocardiography will change radically. The human expert will remain the master controlling the overall diagnostic process, will not be replaced by AI, and will obtain significant support from AI systems to guide acquisition, perform measurements, and integrate and compare data on request.

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain-machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.