
    Accelerating magnetic induction tomography‐based imaging through heterogeneous parallel computing

    Magnetic Induction Tomography (MIT) is a non‐invasive imaging technique, which has applications in both industrial and clinical settings. In essence, it is capable of reconstructing the electromagnetic parameters of an object from measurements made on its surface. With the exploitation of parallelism, it is possible to achieve high-quality, inexpensive MIT images for biomedical applications on clinically relevant time scales. In this paper we investigate the performance of different parallel implementations of the forward eddy current problem, which is the main computational component of the inverse problem through which measured voltages are converted into images. We show that a heterogeneous parallel method that exploits multiple CPUs and GPUs can provide a high level of parallel scaling, leading to considerably improved runtimes. We also show how multiple GPUs can be used in conjunction with deal.II, a widely‐used open source finite element library.
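    As a rough illustration of the heterogeneous scheduling idea (not the authors' deal.II/C++ implementation), the sketch below assumes that the forward eddy-current solves for separate coil excitations are independent and can be farmed out to CPU and GPU workers; solve_forward_cpu and solve_forward_gpu are hypothetical stand-ins for the real solver.

```python
# Conceptual sketch only: distribute independent forward eddy-current solves
# across CPU workers and GPU devices. The two solver functions are hypothetical
# placeholders, not the paper's deal.II-based implementation.
from concurrent.futures import ThreadPoolExecutor

def solve_forward_cpu(excitation):
    # Placeholder: a CPU forward solve returning simulated coil voltages.
    return [v * 0.5 for v in excitation]

def solve_forward_gpu(excitation, device_id):
    # Placeholder: a GPU-accelerated forward solve on the given device.
    return [v * 0.5 for v in excitation]

def solve_all(excitations, n_cpu_workers=4, gpu_ids=(0, 1)):
    """Round-robin the excitation list over GPU devices and CPU workers."""
    results = [None] * len(excitations)
    n_slots = len(gpu_ids) + n_cpu_workers
    with ThreadPoolExecutor(max_workers=n_slots) as pool:
        futures = {}
        for i, exc in enumerate(excitations):
            slot = i % n_slots
            if slot < len(gpu_ids):
                fut = pool.submit(solve_forward_gpu, exc, gpu_ids[slot])
            else:
                fut = pool.submit(solve_forward_cpu, exc)
            futures[fut] = i
        for fut, i in futures.items():
            results[i] = fut.result()
    return results

print(solve_all([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
```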

    Energy Efficiency Improvements in Dry Drilling with Optimised Diamond-Like Carbon Coating

    We demonstrate enhancements of performance and energy efficiency of cutting tools by deposition of diamond-like carbon (DLC) coatings on machine parts. DLC was deposited on steel drill bits, using plasma-enhanced chemical vapour deposition (PECVD) with an acetylene precursor diluted with argon, to produce a surface with low friction and a low wear rate. Drill bit performance in dry drilling of aluminium was quantified by analysis of power consumption and swarf flow. Optimised deposition conditions produced drill bits with greatly enhanced performance over uncoated drill bits, showing a 25% reduction in swarf clogging, a 36% reduction in power consumption and a greater than five-fold increase in lifetime. Surface analysis with scanning electron microscopy shows that DLC-coated drills exhibit much lower aluminium build-up on the trailing shank of the drill, enhancing the anti-adhering properties of the drill and reducing heat generation during operation, resulting in the observed improvements in efficiency. Variation of drilling efficiency with argon dilution of the precursor is related to changes in the microstructure of the DLC coating.

    Automated Assessment of Image Quality in 2D Echocardiography Using Deep Learning

    Echocardiography is the most widely used modality for assessing cardiac function. The reliability of echocardiographic measurements, however, depends on the quality of the images. Currently, image quality assessment is a subjective process in which an echocardiography specialist visually inspects the images. An automated image quality assessment system is therefore required. Here, we report on the feasibility of using deep learning to develop such an automated quality scoring system. A scoring system was proposed to include specific quality attributes for on-axis alignment, contrast/gain and left ventricular (LV) foreshortening of the apical view. We prepared and used 1,039 echocardiographic patient datasets for model development and testing. An average accuracy of at least 86% was obtained, with a computation speed of 0.013 ms per frame, indicating feasibility for real-time deployment.
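    To make the setup concrete, a minimal sketch of a multi-attribute quality scoring network in PyTorch is shown below; the three output heads mirror the on-axis, gain/contrast and LV foreshortening attributes mentioned above, but the architecture itself is an assumption, not the authors' published model.

```python
# Illustrative sketch (assumed architecture): a small CNN that outputs one
# quality score per attribute (on-axis, gain/contrast, LV foreshortening).
import torch
import torch.nn as nn

class EchoQualityNet(nn.Module):
    def __init__(self, n_attributes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_attributes)   # one score per quality attribute

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.head(x))        # scores in [0, 1]

# Example: score a batch of 4 grayscale echo frames (128x128 pixels).
model = EchoQualityNet()
scores = model(torch.rand(4, 1, 128, 128))        # shape: (4, 3)
```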

    Doppler assessment of aortic stenosis: a 25-operator study demonstrating why reading the peak velocity is superior to velocity time integral

    Aims Measurements with superior reproducibility are useful for both clinical and research purposes. Previous reproducibility studies of Doppler assessment of aortic stenosis (AS) have compared only a pair of observers and have not explored the mechanism by which disagreement between operators occurs. Using custom-designed software which stored operators’ traces, we investigated the reproducibility of peak velocity and velocity time integral (VTI) measurements across a much larger group of operators and explored the mechanisms by which disagreement arose. Methods and results Twenty-five observers reviewed continuous wave (CW) aortic valve (AV) and pulsed wave (PW) left ventricular outflow tract (LVOT) Doppler traces from 20 sequential cases of AS in random order. Each operator unknowingly measured each peak velocity and VTI twice. VTI tracings were stored for comparison. Measuring the peak is much more reproducible than the VTI for both PW (coefficient of variation 10.1 vs. 18.0%; P < 0.001) and CW traces (coefficient of variation 4.0 vs. 10.2%; P < 0.001). VTI is inferior because the steep early and late parts of the envelope are difficult to trace reproducibly. The dimensionless index improves reproducibility because operators tended to consistently over-read or under-read on LVOT and AV traces from the same patient (coefficient of variation 9.3 vs. 17.1%; P < 0.001). Conclusion It is far more reproducible to measure the peak of a Doppler trace than the VTI, a strategy that reduces measurement variance by approximately six-fold. Peak measurements are superior to VTI because tracing the steep slopes in the early and late parts of the VTI envelope is difficult to achieve reproducibly.
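    For context, the coefficient of variation used to compare reproducibility can be computed within each case across operators and then averaged over cases; the sketch below uses made-up numbers and an assumed definition (per-case SD divided by per-case mean), not the study data.

```python
# Minimal sketch: within-case coefficient of variation (CoV) across operators,
# averaged over cases, as a way to compare peak-velocity vs. VTI reproducibility.
# The numbers are illustrative, not the study's measurements.
import numpy as np

def mean_cov(measurements):
    """measurements: array of shape (n_cases, n_operators)."""
    case_means = measurements.mean(axis=1)
    case_sds = measurements.std(axis=1, ddof=1)
    return float(np.mean(case_sds / case_means))

rng = np.random.default_rng(0)
peak = rng.normal(4.0, 0.16, size=(20, 25))    # illustrative peak velocities (m/s)
vti = rng.normal(100.0, 10.0, size=(20, 25))   # illustrative VTIs (cm)
print(f"CoV peak: {mean_cov(peak):.1%}, CoV VTI: {mean_cov(vti):.1%}")
```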

    Automated speckle tracking algorithm to aid on-axis imaging in echocardiography

    Obtaining a “correct” view in echocardiography is a subjective process in which an operator attempts to obtain images conforming to consensus standard views. Real-time objective quantification of image alignment may assist less experienced operators, but no reliable index yet exists. We present a fully automated algorithm for detecting incorrect medial/lateral translation of an ultrasound probe by image analysis. The ability of the algorithm to distinguish optimal from sub-optimal four-chamber images was compared to that of specialists—the current “gold standard.” The orientation assessments produced by the automated algorithm correlated well with consensus visual assessments of the specialists (r = 0.87) and compared favourably with the correlation between individual specialists and the consensus (0.82 ± 0.09). Each individual specialist’s assessments were within the consensus of the other specialists 75 ± 14% of the time, and the algorithm’s assessments were within the consensus of specialists 85% of the time. The mean discrepancy in probe translation values between individual specialists and their consensus was 0.97 ± 0.87 cm, and between the automated algorithm and the specialists’ consensus was 0.92 ± 0.70 cm. This technology could be incorporated into hardware to provide real-time guidance for image optimisation—a potentially valuable tool for both training and quality control.
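    Speckle tracking of this kind is commonly built on block matching between consecutive frames; the sketch below shows a minimal normalised cross-correlation patch tracker as an illustration of the general technique, not the paper's algorithm.

```python
# Minimal block-matching illustration: estimate the displacement of a speckle
# patch between two frames by maximising normalised cross-correlation.
# Generic sketch, not the published algorithm.
import numpy as np

def track_patch(prev, curr, top, left, size=16, search=8):
    patch = prev[top:top + size, left:left + size].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                      # candidate window outside the image
            cand = curr[y:y + size, x:x + size].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((patch * cand).mean())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best                               # (dy, dx) displacement in pixels

# Example on synthetic frames: a speckle texture shifted 3 pixels to the right.
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=3, axis=1)
print(track_patch(frame0, frame1, top=24, left=24))   # expected ~(0, 3)
```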

    Frame rate required for speckle tracking echocardiography: A quantitative clinical study with open-source, vendor-independent software

    Background Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution at which reliable results can be obtained. Material and methods 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Results Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. Conclusions The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking; below 40 frames/s peak velocities are underestimated, while above this threshold there is little further increase in the measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field.
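    The underestimation mechanism described above can be illustrated numerically: with frame-to-frame tracking, the reported velocity is the mean velocity over each inter-frame interval, so a narrow velocity peak is progressively smoothed out as the frame rate falls. Below is a small simulation sketch on a synthetic velocity waveform (illustrative assumptions, not patient data).

```python
# Sketch: the peak velocity recovered from frame-to-frame displacement is the
# average velocity over each inter-frame interval, so low frame rates
# underestimate a narrow peak (synthetic waveform, illustrative only).
import numpy as np

t = np.linspace(0.0, 1.0, 2001)                      # 1 s of "truth" at 2 kHz
velocity = 8.0 * np.exp(-((t - 0.3) / 0.02) ** 2)    # narrow 8 cm/s peak
position = np.cumsum(velocity) * (t[1] - t[0])       # integrate to displacement

for fps in (100, 60, 40, 20, 10):
    idx = np.arange(0, len(t), int(round(2000 / fps)))  # keep every k-th frame
    est = np.diff(position[idx]) / np.diff(t[idx])      # mean velocity per interval
    print(f"{fps:3d} frames/s -> estimated peak {est.max():.2f} cm/s (true 8.00)")
```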

    Automated aortic Doppler flow tracing for reproducible research and clinical measurements

    In clinical practice, echocardiographers are often reluctant to make the significant time investment required to take additional measurements of Doppler velocity. The main hurdle to obtaining multiple measurements is the time required to manually trace a series of Doppler envelopes. To make it easier to analyze more beats, we present an application for automated aortic Doppler envelope quantification, compatible with a range of hardware platforms. It analyses long Doppler strips, spanning many heartbeats, and does not require an electrocardiogram to separate individual beats. We tested its measurement of velocity time integral and peak velocity against a reference standard defined as the average of three experts who each made three separate measurements. The automated measurements of velocity time integral showed strong correspondence (R2 = 0.94) and good Bland-Altman agreement (SD = 1.39 cm) with the reference consensus expert values, and indeed performed as well as the individual experts (R2 = 0.90 to 0.96, SD = 1.05 to 1.53 cm). The same performance was observed for peak velocities (R2 = 0.98, SD = 3.07 cm/s; individual experts R2 = 0.93 to 0.98, SD = 2.96 to 5.18 cm/s). This automated technology allows more than 10 times as many beats to be analyzed compared to the conventional manual approach. This would make clinical and research protocols more precise for the same operator effort.
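    For reference, once the envelope of a single beat has been traced, the two reported quantities reduce to the envelope maximum (peak velocity) and its time integral (VTI); the arithmetic is sketched below on a synthetic half-sine ejection profile (assumed shape, not real data).

```python
# Minimal sketch: peak velocity and velocity-time integral (VTI) from a traced
# aortic Doppler envelope for one beat (envelope extraction itself not shown).
import numpy as np

def beat_measurements(times_s, envelope_cm_per_s):
    """times_s, envelope_cm_per_s: 1-D arrays covering a single ejection period."""
    peak_velocity = float(np.max(envelope_cm_per_s))        # cm/s
    vti = float(np.trapz(envelope_cm_per_s, times_s))       # cm
    return peak_velocity, vti

# Example with a synthetic half-sine ejection profile (~300 ms, peak 400 cm/s).
t = np.linspace(0.0, 0.3, 301)
env = 400.0 * np.sin(np.pi * t / 0.3)
print(beat_measurements(t, env))   # peak ~400 cm/s, VTI ~76 cm
```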

    Open-source, vendor-independent, automated multi-beat tissue Doppler echocardiography analysis

    Current guidelines for measuring cardiac function by tissue Doppler recommend using multiple beats, but this has a time cost for human operators. We present an open-source, vendor-independent, drag-and-drop software package capable of automating the measurement process. A database of ~8000 tissue Doppler beats (48 patients) from the septal and lateral annuli was analyzed by three expert echocardiographers. We developed an intensity- and gradient-based automated algorithm to measure tissue Doppler velocities. We tested its performance against manual measurements from the expert human operators. Our algorithm showed strong agreement with the expert human operators, and its performance was indistinguishable from that of a human operator: for the algorithm, the mean difference and SDD from the mean of the human operators’ estimates was 0.48 ± 1.12 cm/s (R2 = 0.82); for the humans individually this was 0.43 ± 1.11 cm/s (R2 = 0.84), −0.88 ± 1.12 cm/s (R2 = 0.84) and 0.41 ± 1.30 cm/s (R2 = 0.78). Agreement between operators and the automated algorithm was preserved when measuring at either the edge or the middle of the trace. The algorithm was 10-fold quicker than manual measurement (p < 0.001). This open-source, vendor-independent, drag-and-drop software can make peak velocity measurements from pulsed wave tissue Doppler traces as accurately as human experts. This automation permits rapid, bias-resistant multi-beat analysis from spectral tissue Doppler images. Funding: European Research Council and British Heart Foundation.
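    As a loose, intensity-only simplification of the intensity- and gradient-based idea, the sketch below scans each time column of a spectral image for the outermost velocity bin whose intensity exceeds a threshold and reads the peak velocity from the resulting envelope; this is an assumed stand-in, not the published algorithm.

```python
# Rough sketch (assumed, simplified): for each time column of a spectral tissue
# Doppler image, take the outermost velocity bin whose intensity exceeds a
# threshold as the envelope edge, then read the peak velocity over the beat.
import numpy as np

def trace_envelope(spectrum, velocity_axis, threshold=0.2):
    """spectrum: 2-D array (n_velocity_bins, n_time_columns), intensities in [0, 1].
    velocity_axis: velocity value (cm/s) of each row, increasing with row index."""
    envelope = np.zeros(spectrum.shape[1])
    for col in range(spectrum.shape[1]):
        above = np.nonzero(spectrum[:, col] > threshold)[0]
        if above.size:
            envelope[col] = velocity_axis[above[-1]]   # outermost bright bin
    return envelope

# Usage on a synthetic spectrum whose brightness fades with velocity.
rng = np.random.default_rng(1)
spec = rng.random((128, 200)) * np.linspace(1, 0, 128)[:, None]
vel_axis = np.linspace(0, 20, 128)                      # cm/s
print(trace_envelope(spec, vel_axis).max())             # peak velocity estimate
```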

    Early esophageal adenocarcinoma detection using deep learning methods

    Purpose This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods for automatically identifying esophageal adenocarcinoma (EAC) regions in high-definition white light endoscopy (HD-WLE) images. Method Several state-of-the-art object detection methods based on Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, using VGG-16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN and the Single-Shot Multibox Detector (SSD). For the evaluation of the different methods, 100 images from 39 patients, manually annotated by five experienced clinicians as ground truth, were used for testing. Results Experimental results show that the SSD and Faster R-CNN networks perform well, with the SSD outperforming the other methods, achieving a sensitivity of 0.96, specificity of 0.92 and F-measure of 0.94. Additionally, the average recall rate of the Faster R-CNN in locating the EAC region accurately is 0.83. Conclusion In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation of the methods demonstrated their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may aid early detection and treatment of EAC, and can also improve automatic tumor segmentation for monitoring tumor growth and treatment outcome.
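    For clarity on the reported metrics, sensitivity, specificity and F-measure follow directly from the detection confusion counts; the counts in the sketch below are hypothetical, chosen only to show the arithmetic, not the paper's raw numbers.

```python
# Quick sketch: sensitivity, specificity and F-measure from detection counts
# (tp/fp/tn/fn values below are hypothetical, not the paper's raw numbers).
def detection_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # recall on abnormal regions
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure

print(detection_metrics(tp=48, fp=4, tn=46, fn=2))
```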