Transfer learning in ECG classification from human to horse using a novel parallel neural network architecture
Automatic or semi-automatic analysis of the equine electrocardiogram (eECG) is currently not possible because human and small-animal ECG analysis software is unreliable for horses, whose different cardiac innervation produces a different ECG morphology. Filtering, beat detection, and beat classification for eECGs are all poorly described, or not described at all, in the literature, and no public eECG databases exist comparable to those for human ECGs. In this paper we propose the use of wavelet transforms for both filtering and QRS detection in eECGs. In addition, we propose a robust deep neural network with a novel parallel convolutional architecture for ECG beat classification. The network was trained and tested on both the MIT-BIH arrhythmia database and our own eECG dataset of 26,440 beats over 4 classes: normal, premature ventricular contraction, premature atrial contraction, and noise. The network was optimized using a genetic algorithm, achieving accuracies of 97.7% and 92.6% on the MIT-BIH and eECG databases respectively. Finally, transfer learning from the MIT-BIH dataset to the eECG database was applied, after which the average accuracy, recall, positive predictive value, and F1 score of the network all increased, with accuracy reaching 97.1%.
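The wavelet-based filtering and QRS detection described above can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the authors' pipeline: the Haar-style detail filter, the `detect_qrs` helper, the threshold fraction, and the synthetic signal are all invented for illustration.

```python
import numpy as np

def haar_detail(ecg, level):
    """Detail band of a crude Haar-style filter at dyadic scale 2**level
    (a stand-in for a proper wavelet filter bank)."""
    step = 2 ** level
    kernel = np.concatenate([np.ones(step), -np.ones(step)]) / step
    return np.convolve(ecg, kernel, mode="same")

def detect_qrs(ecg, fs, level=3, frac=0.3, refractory=0.2):
    """Mark beats where squared detail energy exceeds a fraction of its
    maximum, then enforce a refractory period between detections."""
    energy = haar_detail(ecg, level) ** 2
    above = energy > frac * energy.max()
    edges = np.flatnonzero(np.diff(above.astype(int)) == 1)  # rising edges
    beats, last = [], -np.inf
    for e in edges:
        if e - last > refractory * fs:
            beats.append(e)
            last = e
    return np.asarray(beats)

# synthetic ECG: 5 sharp "QRS" spikes on a noisy baseline, sampled at 250 Hz
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = 0.02 * np.random.default_rng(0).standard_normal(t.size)
for b in np.arange(0.5, 5.0, 1.0):
    ecg[int(b * fs)] += 1.0
print(detect_qrs(ecg, fs).size)
```

On real eECGs the scale, threshold, and refractory period would all need tuning to the slower equine heart rate.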
Spatio-temporal wavelet regularization for parallel MRI reconstruction: application to functional MRI
Parallel MRI is a fast imaging technique that enables the acquisition of
images highly resolved in space and/or time. The performance of parallel
imaging strongly depends on the reconstruction algorithm, which can proceed
either in the original k-space (GRAPPA, SMASH) or in the image domain
(SENSE-like methods). To improve the performance of the widely used SENSE
algorithm, 2D- or slice-specific regularization in the wavelet domain has been
investigated in depth. In this paper, we extend this approach using 3D-wavelet
representations in order to handle all slices together and address
reconstruction artifacts which propagate across adjacent slices. The gain
induced by such extension (3D-Unconstrained Wavelet Regularized -SENSE:
3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal
acquisition is considered. Another important extension accounts for temporal
correlations that exist between successive scans in functional MRI (fMRI). In
addition to the case of 2D+t acquisition schemes addressed by some other
methods like kt-FOCUSS, our approach allows us to deal with 3D+t acquisition
schemes which are widely used in neuroimaging. The resulting 3D-UWR-SENSE and
4D-UWR-SENSE reconstruction schemes are fully unsupervised in the sense that
all regularization parameters are estimated by maximum likelihood on
a reference scan. The gain induced by such extensions is illustrated on both
anatomical and functional image reconstruction, and also measured in terms of
statistical sensitivity for the 4D-UWR-SENSE approach during a fast
event-related fMRI protocol. Our 4D-UWR-SENSE algorithm outperforms the SENSE
reconstruction at the subject and group levels (15 subjects) for different
contrasts of interest (e.g., motor or computation tasks) and using different
parallel acceleration factors (R=2 and R=4) on 2x2x3 mm3 EPI images.
Comment: arXiv admin note: substantial text overlap with arXiv:1103.353
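The regularization at the heart of the UWR-SENSE schemes is wavelet-domain shrinkage. The sketch below is not the authors' SENSE reconstruction; it only illustrates the core ingredient, soft thresholding of detail coefficients after an orthonormal (here one-level 2D Haar) transform, on a toy denoising problem. All function names and the threshold value are invented.

```python
import numpy as np

def dwt2(x):
    """One-level orthonormal 2D Haar transform of an even-sized array."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    def cols(y):
        return (y[0::2] + y[1::2]) / np.sqrt(2), (y[0::2] - y[1::2]) / np.sqrt(2)
    ll, lh = cols(lo)
    hl, hh = cols(hi)
    return ll, lh, hl, hh

def idwt2(ll, lh, hl, hh):
    """Inverse of dwt2 (perfect reconstruction for the orthonormal Haar basis)."""
    def icols(a, d):
        y = np.empty((2 * a.shape[0], a.shape[1]))
        y[0::2] = (a + d) / np.sqrt(2)
        y[1::2] = (a - d) / np.sqrt(2)
        return y
    lo, hi = icols(ll, lh), icols(hl, hh)
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft-thresholding operator: the proximal map of the l1 penalty."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(img, t):
    """Shrink only the detail sub-bands; keep the coarse approximation."""
    ll, lh, hl, hh = dwt2(img)
    return idwt2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[:8, :] = 1.0                       # piecewise-constant toy "anatomy"
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = wavelet_denoise(noisy, t=0.3)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

In the actual reconstruction problem this shrinkage appears inside an iterative scheme coupled with the SENSE data-fidelity term, and the paper estimates the threshold-like parameters by maximum likelihood rather than fixing them by hand.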
Source detection using a 3D sparse representation: application to the Fermi gamma-ray space telescope
The multiscale variance stabilization transform (MSVST) has recently been
proposed for Poisson data denoising. This procedure, which is nonparametric, is
based on thresholding wavelet coefficients. We present in this paper an
extension of the MSVST to 3D data (in fact 2D-1D data) when the third dimension
is not a spatial dimension, but the wavelength, the energy, or the time. We
show that the MSVST can be used for detecting and characterizing astrophysical
sources of high-energy gamma rays, using realistic simulated observations with
the Large Area Telescope (LAT). The LAT was launched in June 2008 on the Fermi
Gamma-ray Space Telescope mission. The MSVST algorithm is very fast relative to
traditional likelihood model fitting, and permits efficient detection across
the time dimension and immediate estimation of spectral properties.
Astrophysical sources of gamma rays, especially active galaxies, are typically
quite variable, and our current work may lead to a reliable method to quickly
characterize the flaring properties of newly detected sources.
Comment: Accepted. Full paper with figures available at
http://jstarck.free.fr/aa08_msvst.pd
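The MSVST combines a multiscale decomposition with variance stabilization of Poisson data. Its classical single-scale ancestor is the Anscombe transform, which maps Poisson counts to approximately unit-variance Gaussian-like data; the snippet below is only a numerical check of that stabilization property, not the MSVST itself.

```python
import numpy as np

def anscombe(counts):
    """Anscombe variance-stabilizing transform: for Poisson counts with a
    moderate rate, the output has standard deviation close to 1."""
    return 2.0 * np.sqrt(counts + 3.0 / 8.0)

rng = np.random.default_rng(1)
for rate in (5.0, 20.0, 80.0):
    stabilized = anscombe(rng.poisson(rate, 200_000))
    print(rate, round(float(stabilized.std()), 3))  # close to 1 at every rate
```

Once the variance is stabilized, standard Gaussian wavelet thresholding applies; the MSVST's contribution is to perform the stabilization consistently at every scale of the (here 2D-1D) decomposition.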
Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation
Copyright @ 2011 Shadi AlZubi et al. This article has been made available through the Brunel Open Access Publishing Fund. The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROI) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is particularly challenging: organ shapes change across different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms that aims to deal with interesting phenomena occurring along curves. Curvelet transforms have been tested on medical data sets, and results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise.
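The simplest version of the multiresolution idea behind such segmentation systems can be sketched as follows: decompose the image into sub-bands and use windowed detail energy as a texture feature that separates smooth from textured regions. This is a hypothetical one-level Haar illustration, far from the paper's curvelet pipeline; `texture_energy`, the window size, and the toy image are all invented.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar split into approximation and three detail sub-bands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def texture_energy(img, win=8):
    """Windowed mean energy of the detail sub-bands: a crude MRA texture map."""
    _, lh, hl, hh = haar_subbands(img)
    detail = lh ** 2 + hl ** 2 + hh ** 2
    h, w = detail.shape
    blocks = detail[: h - h % win, : w - w % win].reshape(
        h // win, win, w // win, win)
    return blocks.mean(axis=(1, 3))

# toy scan: smooth "organ" on the left half, textured tissue on the right
rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[:, 32:] += rng.normal(0.0, 1.0, (64, 32))
energy = texture_energy(img)   # 4x4 map; right columns carry the texture
print(float(energy[:, :2].mean()), float(energy[:, 2:].mean()))
```

Ridgelets and curvelets refine this picture by making the detail bands directional, which is what lets them follow tissue boundaries that run along curves rather than axes.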
Watermarking for multimedia security using complex wavelets
This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to adapt better to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised; this combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
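Spread transform embedding can be sketched in a few lines: quantization index modulation (QIM) hides a bit by snapping a value onto one of two interleaved lattices, and the spread transform applies that quantization only to the projection of a block of host coefficients onto a spreading direction. This is a generic textbook construction, not the paper's algorithm; the spreading vector, quantization step, and noise level below are arbitrary choices.

```python
import numpy as np

def qim_embed(x, bit, delta=1.0):
    """QIM: quantize x onto one of two lattices offset by delta/2."""
    offset = 0.5 * delta * bit
    return np.round((x - offset) / delta) * delta + offset

def qim_extract(y, delta=1.0):
    """Decode by choosing the lattice whose nearest point is closer to y."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return (d1 < d0).astype(int)

def st_embed(block, bit, u, delta=1.0):
    """Spread transform: quantize only the projection of the block onto u,
    spreading the embedding distortion across all coefficients."""
    p = block @ u
    return block + (qim_embed(p, bit, delta) - p) * u

rng = np.random.default_rng(2)
u = np.ones(8) / np.sqrt(8.0)              # unit-norm spreading direction
bits = rng.integers(0, 2, 32)
host = rng.normal(0.0, 5.0, (32, 8))       # host (e.g. wavelet) coefficients
marked = np.array([st_embed(b, bit, u) for b, bit in zip(host, bits)])
noisy = marked + rng.normal(0.0, 0.05, marked.shape)  # mild additive attack
recovered = qim_extract(noisy @ u)
print(float((recovered == bits).mean()))
```

In the watermarking system described above, the host coefficients would come from the complex wavelet transform of the image, and a perceptual visual model would set the quantization step per coefficient.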
Automated analysis of quantitative image data using isomorphic functional mixed models, with application to proteomics data
Image data are increasingly encountered and are of growing importance in many
areas of science. Much of these data are quantitative image data, which are
characterized by intensities that represent some measurement of interest in the
scanned images. The data typically consist of multiple images on the same
domain and the goal of the research is to combine the quantitative information
across images to make inference about populations or interventions. In this
paper we present a unified analysis framework for the analysis of quantitative
image data using a Bayesian functional mixed model approach. This framework is
flexible enough to handle complex, irregular images with many local features,
and can model the simultaneous effects of multiple factors on the image
intensities and account for the correlation between images induced by the
design. We introduce a general isomorphic modeling approach to fitting the
functional mixed model, of which the wavelet-based functional mixed model is
one special case. With suitable modeling choices, this approach leads to
efficient calculations and can result in flexible modeling and adaptive
smoothing of the salient features in the data. The proposed method has the
following advantages: it can be run automatically, it produces inferential
plots indicating which regions of the image are associated with each factor, it
simultaneously considers the practical and statistical significance of
findings, and it controls the false discovery rate.
Comment: Published in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) at http://dx.doi.org/10.1214/10-AOAS407 by the
Institute of Mathematical Statistics (http://www.imstat.org)
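The wavelet-based special case of this strategy can be caricatured in a few lines: transform each observed curve or image into the wavelet domain, fit an independent linear model to every coefficient, shrink the estimated effects, and map the chosen effect back. The sketch below replaces the paper's Bayesian functional mixed model with per-coefficient least squares and hard thresholding; every name and setting is invented for illustration.

```python
import numpy as np

def haar(x):
    """Orthonormal Haar analysis of a length-2**k signal, coarse-to-fine."""
    coeffs, a = [], np.asarray(x, float)
    while a.size > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    coeffs.append(a)
    return np.concatenate(coeffs[::-1])

def ihaar(c):
    """Inverse of haar()."""
    a, pos = c[:1], 1
    while pos < c.size:
        d = c[pos:2 * pos]
        nxt = np.empty(2 * a.size)
        nxt[0::2] = (a + d) / np.sqrt(2)
        nxt[1::2] = (a - d) / np.sqrt(2)
        a, pos = nxt, 2 * pos
    return a

def effect_curve(Y, X, col, thresh=0.5):
    """Fit an independent linear model per wavelet coefficient, hard-threshold
    the chosen effect, and map it back to the original domain."""
    W = np.array([haar(y) for y in Y])          # curves -> wavelet domain
    B, *_ = np.linalg.lstsq(X, W, rcond=None)   # per-coefficient OLS
    beta = np.where(np.abs(B[col]) > thresh, B[col], 0.0)
    return ihaar(beta)

# simulate 40 noisy curves of length 32; group 2 is shifted on the first half
rng = np.random.default_rng(4)
group = np.repeat([0, 1], 20)
true_effect = np.where(np.arange(32) < 16, 2.0, 0.0)
Y = rng.normal(0.0, 0.5, (40, 32)) + np.outer(group, true_effect)
X = np.column_stack([np.ones(40), group])      # intercept + group indicator
est = effect_curve(Y, X, col=1)
```

The paper's framework goes well beyond this caricature: it handles random effects and between-image correlation, works with general isomorphic transforms rather than only wavelets, and replaces the ad hoc threshold with posterior inference that controls the false discovery rate.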