Deep Learning in Cardiology
The medical field generates large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
for a wide range of medical problems such as diagnosis, prediction, and
intervention. Deep learning is a representation learning method consisting
of layers that transform the data non-linearly, thus revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal, and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply to medicine in general, and propose certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
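The abstract's description of deep learning, layers that transform data non-linearly to reveal hierarchical structure, can be sketched minimally. This is an illustrative toy with random weights (a real model would learn them); all array sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # Affine map followed by a non-linearity (ReLU): one "layer"
    # of a representation-learning stack.
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(4, 16))           # e.g. 4 samples of a 16-dim signal
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

h1 = layer(x, w1, b1)                  # first non-linear representation
h2 = layer(h1, w2, b2)                 # deeper, more abstract representation
```

Each successive layer re-represents the previous one, which is what lets depth capture hierarchical relationships in the data.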
Data fusion for NDE signal characterization
The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is to draw inferences that may not be feasible with data from a single sensor alone. In this study, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL field. Data is fused at the signal level: the two signals are combined as the real and imaginary components of a complex-valued signal, and signals from an array of sensors are arranged in contiguous rows to obtain a complex-valued image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. A boundary extraction algorithm is used not only to estimate the defect size more accurately, but also to segment the defect area. A wavelet basis function neural network (WBFNN) is then employed to map the complex-valued image appropriately to obtain the geometric profile of the defect. The feasibility of the approach was evaluated using data obtained from the MFL inspection of natural gas transmission pipelines. The results obtained by fusing the axial and circumferential components appear to be better than those obtained using the axial component alone. Finally, a WBFNN-based boundary extraction scheme is employed for the proposed fusion approach. The boundary-based adaptive weighted average (BBAWA) offers superior performance compared to three alternative fusion methods employing weighted average (WA), principal component analysis (PCA), and adaptive weighted average (AWA).
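The signal-level fusion step described above is straightforward to sketch: the axial and circumferential MFL components become the real and imaginary parts of one complex-valued signal, and sensor rows stack into a complex-valued image. Array names and sizes here are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_samples = 8, 64

# Stand-in measurements for the two MFL field components.
axial = rng.normal(size=(n_sensors, n_samples))
circumferential = rng.normal(size=(n_sensors, n_samples))

# Fuse at the signal level: one complex-valued image, one row per sensor.
fused_image = axial + 1j * circumferential

# The magnitude jointly reflects both components for downstream processing.
magnitude = np.abs(fused_image)
```

A convenient property of this encoding is that the magnitude combines both components symmetrically, so neither channel is privileged before the later WBFNN mapping.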
LV Volume Quantification via Spatiotemporal Analysis of Real-Time 3-D Echocardiography
This paper presents a method of four-dimensional (4-D) (3-D+Time) space-frequency analysis for directional denoising and enhancement of real-time three-dimensional (RT3D) ultrasound and quantitative measures in diagnostic cardiac ultrasound. Expansion of echocardiographic volumes is performed with complex exponential wavelet-like basis functions called brushlets. These functions offer good localization in time and frequency and decompose a signal into distinct patterns of oriented harmonics, which are invariant to intensity and contrast range. Deformable-model segmentation is carried out on denoised data after thresholding of transform coefficients. This process attenuates speckle noise while preserving cardiac structure location. The superiority of 4-D over 3-D analysis for decorrelating additive white noise and multiplicative speckle noise on a 4-D phantom volume expanding in time is demonstrated. Quantitative validation, computed for contours and volumes, is performed on in vitro balloon phantoms. Clinical applications of this spatiotemporal analysis tool are reported for six patient cases, providing measures of left ventricular volumes and ejection fraction.
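The core denoising step, expand the data in a frequency-domain basis, zero small coefficients, and invert, can be illustrated on a toy 2-D image. A plain FFT stands in for the brushlet basis here purely for demonstration; sizes, the noise level, and the threshold rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "structure" plus additive noise (a stand-in for speckle).
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# Transform, hard-threshold small coefficients, invert.
coeffs = np.fft.fft2(noisy)
threshold = 3 * 0.1 * np.sqrt(noisy.size)   # ~3-sigma rule for FFT noise
coeffs[np.abs(coeffs) < threshold] = 0.0
denoised = np.fft.ifft2(coeffs).real
```

The paper's brushlet expansion plays the role of `fft2` here, with the added benefit that its oriented harmonics are invariant to intensity and contrast range, which a raw FFT is not.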
Reconstructive Sparse Code Transfer for Contour Detection and Semantic Labeling
We frame the task of predicting a semantic labeling as a sparse
reconstruction procedure that applies a target-specific learned transfer
function to a generic deep sparse code representation of an image. This
strategy partitions training into two distinct stages. First, in an
unsupervised manner, we learn a set of generic dictionaries optimized for
sparse coding of image patches. We train a multilayer representation via
recursive sparse dictionary learning on pooled codes output by earlier layers.
Second, we encode all training images with the generic dictionaries and learn a
transfer function that optimizes reconstruction of patches extracted from
annotated ground-truth given the sparse codes of their corresponding image
patches. At test time, we encode a novel image using the generic dictionaries
and then reconstruct using the transfer function. The output reconstruction is
a semantic labeling of the test image.
Applying this strategy to the task of contour detection, we demonstrate
performance competitive with state-of-the-art systems. Unlike almost all prior
work, our approach obviates the need for any form of hand-designed features or
filters. To illustrate general applicability, we also show initial results on
semantic part labeling of human faces.
The effectiveness of our approach opens new avenues for research on deep
sparse representations. Our classifiers utilize this representation in a novel
manner. Rather than acting on nodes in the deepest layer, they attach to nodes
along a slice through multiple layers of the network in order to make
predictions about local patches. Our flexible combination of a generatively
learned sparse representation with discriminatively trained transfer
classifiers extends the notion of sparse reconstruction to encompass arbitrary
semantic labeling tasks.
Comment: to appear in Asian Conference on Computer Vision (ACCV), 201
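The two-stage strategy above can be sketched in miniature: (1) sparse-code patches against a fixed dictionary, (2) fit a transfer function that reconstructs ground-truth labels from those codes, and reuse both at test time. The random dictionary, the one-step soft-threshold encoder, and the linear least-squares transfer are drastic simplifications for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_patches, patch_dim, n_atoms, label_dim = 200, 25, 40, 9

# Stage 1 stand-in: a fixed "generic" dictionary with unit-norm atoms.
D = rng.normal(size=(patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

def encode(X, D, lam=0.5):
    # One-step soft-threshold encoding: a crude ISTA-style sparse code.
    Z = X @ D
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

X_train = rng.normal(size=(n_patches, patch_dim))  # image patches
Y_train = rng.normal(size=(n_patches, label_dim))  # ground-truth patches

# Stage 2: learn a linear transfer function W with Z @ W ~= Y.
Z = encode(X_train, D)
W, *_ = np.linalg.lstsq(Z, Y_train, rcond=None)

# Test time: encode a novel patch, then reconstruct its labeling.
x_new = rng.normal(size=(1, patch_dim))
y_pred = encode(x_new, D) @ W
```

The key design point survives even in this toy: the dictionary is learned (here, fixed) without labels and shared across tasks, while only the cheap transfer map `W` is task-specific.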
Sparse Representation-Based Framework for Preprocessing Brain MRI
This thesis addresses the use of sparse representations, specifically Dictionary Learning and Sparse Coding, for preprocessing brain MRI so that the processed image retains the fine details of the original, with the goals of improving the segmentation of brain structures and assessing whether there is any relationship between alterations in brain structures and the behavior of young offenders. Denoising an MRI while keeping fine details is a difficult task; however, the proposed method, based on sparse representations, NLM, and SVD, can filter noise while preventing blurring, artifacts, and residual noise. Segmenting an MRI is also non-trivial, because the limits between regions in these images may be neither clear nor well defined, due to the problems that affect MRI. However, this method, starting from both the label matrix of the segmented MRI and the original image, yields a new label matrix with improved boundaries between regions.
Doctorado: Doctor en Ingeniería de Sistemas y Computación
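One ingredient the abstract names, SVD-based filtering, can be illustrated with truncated-SVD denoising of a matrix of similar patches: keep the dominant components, discard the rest as noise. The patch matrix, noise level, and rank are assumptions for this sketch, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# A rank-1 "signal": 10 similar patches of length 16, plus noise.
patches = np.outer(np.ones(10), np.linspace(0.0, 1.0, 16))
noisy = patches + 0.05 * rng.normal(size=patches.shape)

# Truncated SVD: keep the top-k components, zero the rest.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 1
denoised = (U[:, :k] * s[:k]) @ Vt[:k]
```

Because the stacked patches are nearly redundant, the signal concentrates in a few singular components while noise spreads across all of them, which is why truncation filters noise without blurring shared structure.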