46 research outputs found

    Automated speckle tracking algorithm to aid on-axis imaging in echocardiography

    Obtaining a “correct” view in echocardiography is a subjective process in which an operator attempts to obtain images conforming to consensus standard views. Real-time objective quantification of image alignment may assist less experienced operators, but no reliable index yet exists. We present a fully automated algorithm for detecting incorrect medial/lateral translation of an ultrasound probe by image analysis. The ability of the algorithm to distinguish optimal from sub-optimal four-chamber images was compared to that of specialists, the current gold standard. The orientation assessments produced by the automated algorithm correlated well with the consensus visual assessments of the specialists (r = 0.87) and compared favourably with the correlation between individual specialists and the consensus (0.82 ± 0.09). Each individual specialist’s assessments were within the consensus of the other specialists 75 ± 14% of the time, and the algorithm’s assessments were within the consensus of specialists 85% of the time. The mean discrepancy in probe translation values between individual specialists and their consensus was 0.97 ± 0.87 cm, and between the automated algorithm and the specialists’ consensus was 0.92 ± 0.70 cm. This technology could be incorporated into hardware to provide real-time guidance for image optimisation, a potentially valuable tool for both training and quality control.
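The agreement statistics quoted above (Pearson correlation with the consensus, mean discrepancy in probe translation) can be sketched as follows. The numbers here are made up for illustration and are not the study data:

```python
import numpy as np

# Hypothetical probe-translation assessments in cm (illustrative only,
# not the study data).
algorithm = np.array([0.5, -1.0, 2.0, 0.0, -0.5, 1.5])
consensus = np.array([0.4, -1.2, 1.8, 0.1, -0.7, 1.6])

# Pearson correlation between the algorithm's assessments and the
# specialists' consensus (the abstract reports r = 0.87 on real data).
r = np.corrcoef(algorithm, consensus)[0, 1]

# Mean discrepancy in probe translation, analogous to the reported
# 0.92 cm figure.
mean_discrepancy = np.abs(algorithm - consensus).mean()
```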

    Frame rate required for speckle tracking echocardiography: A quantitative clinical study with open-source, vendor-independent software

    Background: Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution at which reliable results can be obtained. Material and methods: 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Results: Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. At lower frame rates, the inter-frame interval containing the instant of highest velocity also contained lower velocities, so the average velocity in that interval underestimated the clinically desired instantaneous maximum velocity. Conclusions: Measured peak velocity becomes more accurate as frame rate increases, up to about 40 frames/s; above this rate there is little further increase in measured peak velocity. We provide, in an online supplement, the vendor-independent software we used for automatic speckle-tracked velocity assessment, to help others working in this field.
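The frame-dropping procedure described in the methods, and the resulting peak-velocity underestimation, can be sketched as below. The velocity trace is synthetic and the code is purely illustrative, not the authors' supplementary software:

```python
import numpy as np

def drop_frames(signal, keep_every):
    """Simulate a lower frame rate by keeping only every k-th frame."""
    return signal[::keep_every]

# Synthetic tissue-velocity trace sampled at 80 frames/s: one sharp
# velocity peak of 1.0 at t = 0.3 s (illustrative, not patient data).
fps = 80
t = np.arange(0, 1, 1 / fps)
velocity = np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))

peak_80 = velocity.max()                  # full frame rate
peak_40 = drop_frames(velocity, 2).max()  # 40 frames/s
peak_8 = drop_frames(velocity, 10).max()  # 10-fold reduction, 8 frames/s

# With sparse sampling, the instant of true peak velocity can fall
# between frames, so the measured peak is an underestimate.
```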

    Automated multi-beat tissue Doppler echocardiography analysis using deep neural networks

    Tissue Doppler imaging is an essential echocardiographic technique for the non-invasive assessment of myocardial velocities. Image acquisition and interpretation are performed by trained operators who visually localise landmarks representing Doppler peak velocities. Current clinical guidelines recommend averaging measurements over several heartbeats. However, this manual process is both time-consuming and disruptive to workflow, so an automated system for accurate beat isolation and landmark identification would be highly desirable. A dataset of tissue Doppler images was annotated by three cardiologist experts, providing a gold standard and allowing for observer variability comparisons. Deep neural networks were trained to make fully automated predictions on multiple heartbeats and tested on tissue Doppler strips of arbitrary length. Automated measurements of peak Doppler velocities show good Bland–Altman agreement with consensus expert values (average standard deviation of 0.40 cm/s), less than the inter-observer variability (0.65 cm/s), and performance akin to that of individual experts (standard deviations of 0.40 to 0.75 cm/s). Our approach allows more than 26 times as many heartbeats to be analysed compared with a manual approach. The proposed automated models can accurately and reliably make measurements on tissue Doppler images spanning several heartbeats, with performance indistinguishable from that of human experts but with significantly shorter processing time.
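The Bland–Altman agreement cited above summarises paired differences between two measurement methods by their bias (mean difference) and standard deviation. A minimal sketch, using hypothetical peak-velocity values rather than the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and SD of the paired differences between
    two measurement methods, e.g. automated vs expert consensus."""
    diff = np.asarray(a) - np.asarray(b)
    return diff.mean(), diff.std(ddof=1)

# Hypothetical peak-velocity measurements in cm/s (not the study data).
automated = [8.2, 9.1, 7.6, 10.3, 8.8]
consensus = [8.0, 9.4, 7.7, 10.0, 9.1]

bias, sd = bland_altman(automated, consensus)
```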

    Multibeat echocardiographic phase detection using deep neural networks

    Background: Accurate identification of end-diastolic and end-systolic frames in echocardiographic cine loops is important, yet challenging, even for human experts. Manual frame selection is subject to uncertainty, affecting crucial clinical measurements such as myocardial strain, so the ability to automatically detect frames of interest is highly desirable. Methods: We have developed deep neural networks, trained and tested on multi-centre patient data, for the accurate identification of end-diastolic and end-systolic frames in apical four-chamber 2D multibeat cine loop recordings of arbitrary length. Seven experienced cardiologist experts independently labelled the frames of interest, providing reliable ground-truth annotations and allowing for observer variability measurements. Results: Compared with the ground truth, our model shows an average frame difference of −0.09 ± 1.10 frames for end-diastolic and 0.11 ± 1.29 frames for end-systolic events. When applied to patient datasets from a different clinical site, to which the model was blind during its development, average frame differences of −1.34 ± 3.27 and −0.31 ± 3.37 frames were obtained for the two frames of interest. All detection errors fall within the range of inter-observer variability: [−0.87, −5.51] ± [2.29, 4.26] frames for end-diastolic and [−0.97, −3.46] ± [3.67, 4.68] frames for end-systolic events. Conclusions: The proposed automated model can identify multiple end-systolic and end-diastolic frames in echocardiographic videos of arbitrary length with performance indistinguishable from that of human experts, but with significantly shorter processing time.
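A toy illustration of what "end-diastolic and end-systolic frames" mean: ED corresponds to maximal and ES to minimal ventricular size within each beat. The paper's actual method is a deep neural network operating on image sequences; this sketch merely extracts extrema from a synthetic area-like signal:

```python
import numpy as np

def detect_ed_es(area):
    """Toy phase detection: treat local maxima of a ventricular-area-like
    signal as end-diastole (ED) and local minima as end-systole (ES)."""
    ed, es = [], []
    for i in range(1, len(area) - 1):
        if area[i] > area[i - 1] and area[i] >= area[i + 1]:
            ed.append(i)
        elif area[i] < area[i - 1] and area[i] <= area[i + 1]:
            es.append(i)
    return ed, es

# Synthetic two-beat "area" signal with a period of 30 frames:
# largest at ED, smallest at ES (illustrative only).
t = np.arange(60)
area = np.cos(2 * np.pi * t / 30)

ed_frames, es_frames = detect_ed_es(area)
```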

    Improving ultrasound video classification: an evaluation of novel deep learning methods in echocardiography

    Echocardiography is the commonest medical ultrasound examination, but automated interpretation is challenging and hinges on correct recognition of the ‘view’ (imaging plane and orientation). Current state-of-the-art methods for identifying the view computationally involve 2-dimensional convolutional neural networks (CNNs), but these merely classify individual frames of a video in isolation and ignore information describing the movement of structures throughout the cardiac cycle. Here we explore the efficacy of novel CNN architectures, including time-distributed networks and two-stream networks, which are inspired by advances in human action recognition. We demonstrate that these new architectures more than halve the error rate of traditional CNNs, from 8.1% to 3.9%. This gain in accuracy may stem from the networks’ ability to track the movement of specific structures, such as heart valves, throughout the cardiac cycle. Finally, we show that the accuracies of these new state-of-the-art networks approach expert agreement (3.6% discordance), with a similar pattern of discordance between views.
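The frame-in-isolation baseline criticised above amounts to classifying each frame independently and then aggregating, e.g. by majority vote, which discards inter-frame motion. A minimal sketch; the view labels and per-frame predictions are hypothetical:

```python
from collections import Counter

def video_view(frame_predictions):
    """Baseline video-level label: majority vote over independent
    per-frame CNN predictions. Temporal ordering is ignored, which is
    exactly the information time-distributed and two-stream networks
    exploit."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Hypothetical per-frame predictions for one video (standard echo view
# abbreviations; the values are made up).
frames = ["A4C", "A4C", "PLAX", "A4C", "A2C", "A4C"]
label = video_view(frames)
```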

    Neural architecture search of echocardiography view classifiers

    Purpose: Echocardiography is the most commonly used modality for assessing the heart in clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from different orientations and positions, creating different viewpoints for assessing cardiac function. Determining the probe viewpoint is an essential step in automatic echocardiographic image analysis. Approach: In this study, convolutional neural networks are used for the automated identification of 14 different anatomical echocardiographic views (more than in any previous study) in a dataset of 8732 videos acquired from 374 patients. A differentiable architecture search approach was used to design small neural network architectures for rapid inference while maintaining high accuracy. The impact of image quality and resolution, the size of the training dataset, and the number of echocardiographic view classes on the efficacy of the models was also investigated. Results: In contrast to deeper classification architectures, the proposed models have a significantly lower number of trainable parameters (up to a 99.9% reduction), achieve comparable classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1% to 95.1%), and run in real time with an inference time per image of 3.6 to 12.6 ms. Conclusion: Compared with standard classification neural network architectures, the proposed models are faster, achieve comparable classification performance, and require less training data. Such models can be used for real-time detection of the standard views.

    Automated aortic Doppler flow tracing for reproducible research and clinical measurements

    In clinical practice, echocardiographers are often reluctant to invest the significant time needed to make multiple additional measurements of Doppler velocity. The main hurdle to obtaining multiple measurements is the time required to manually trace a series of Doppler envelopes. To make it easier to analyze more beats, we present a system for automated aortic Doppler envelope quantification, compatible with a range of hardware platforms. It analyses long Doppler strips spanning many heartbeats and does not require an electrocardiogram to separate individual beats. We tested its measurement of velocity-time integral and peak velocity against a reference standard defined as the average of three experts who each made three separate measurements. The automated measurements of velocity-time integral showed strong correspondence (R² = 0.94) and good Bland-Altman agreement (SD = 1.39 cm) with the reference consensus expert values, performing as well as the individual experts (R² = 0.90 to 0.96, SD = 1.05 to 1.53 cm). The same held for peak velocities: R² = 0.98, SD = 3.07 cm/s for the automated system, versus R² = 0.93 to 0.98, SD = 2.96 to 5.18 cm/s for individual experts. This automated technology allows more than 10 times as many beats to be analyzed compared with the conventional manual approach, making clinical and research protocols more precise for the same operator effort.
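The velocity-time integral measured above is the area under the traced Doppler velocity envelope. A minimal sketch with a synthetic half-sine ejection envelope (illustrative only, not the presented system):

```python
import numpy as np

def velocity_time_integral(envelope, dt):
    """VTI: area under the traced Doppler velocity envelope,
    computed with the trapezoidal rule."""
    env = np.asarray(envelope, dtype=float)
    return float(np.sum((env[1:] + env[:-1]) / 2) * dt)

# Synthetic half-sine ejection envelope: peak 100 cm/s over 0.3 s
# (illustrative; the real system traces envelopes from Doppler strips).
fs = 1000                              # samples per second
t = np.arange(0, 0.3, 1 / fs)
envelope = 100 * np.sin(np.pi * t / 0.3)

vti = velocity_time_integral(envelope, 1 / fs)  # cm; analytically 60/pi
peak = envelope.max()                           # cm/s
```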