Improving the performance of translation wavelet transform using BMICA
Research has shown the Wavelet Transform to be one of the best methods for denoising biosignals. The Translation-Invariant
form of this method has been found to give the best performance. In this paper we merge this method with our newly created Independent Component Analysis method, BMICA. Different EEG signals are used to verify the method within the MATLAB environment. Results are then compared with those of the original Translation-Invariant algorithm and evaluated using the performance measures Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Signal to Distortion Ratio (SDR), and Signal to Interference Ratio (SIR). Experiments revealed that the BMICA Translation-Invariant Wavelet Transform outperformed the basic Translation-Invariant Wavelet Transform algorithm in all four measures, producing cleaner EEG signals which can influence diagnosis as well as clinical studies of the brain.
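For reference, the first three measures named above have standard closed forms. A minimal NumPy sketch (not the paper's code; the function names are ours) might look like:

```python
import numpy as np

def mse(clean, estimate):
    """Mean Square Error between a reference signal and its estimate."""
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean((clean - estimate) ** 2))

def psnr(clean, estimate, peak=None):
    """Peak Signal to Noise Ratio in dB; higher means less residual noise."""
    clean = np.asarray(clean, dtype=float)
    peak = float(np.max(np.abs(clean))) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse(clean, estimate))

def sdr(clean, estimate):
    """Signal to Distortion Ratio in dB: target energy over error energy."""
    clean = np.asarray(clean, dtype=float)
    err = np.asarray(estimate, dtype=float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```

SIR additionally requires decomposing the error into an interference component attributable to the other sources, as done in BSS evaluation toolkits, so it is omitted from this sketch.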
Robust Cardiac Motion Estimation using Ultrafast Ultrasound Data: A Low-Rank-Topology-Preserving Approach
Cardiac motion estimation is an important diagnostic tool to detect heart
diseases and it has been explored with modalities such as MRI and conventional
ultrasound (US) sequences. US cardiac motion estimation still presents
challenges because of the complex motion patterns and the presence of noise. In
this work, we propose a novel approach to estimate the cardiac motion using
ultrafast ultrasound data. Our solution is based on a variational
formulation characterized by the L2-regularized class. The displacement is
represented by a lattice of b-splines and we ensure robustness by applying a
maximum likelihood type estimator. While this is an important part of our
solution, the main highlight of this paper is to combine a low-rank data
representation with topology preservation. Low-rank data representation
(achieved by finding the k-dominant singular values of a Casorati Matrix
arranged from the data sequence) speeds up the global solution and achieves
noise reduction. On the other hand, topology preservation (achieved by
monitoring the Jacobian determinant) allows us to rule out distortions
while carefully controlling the size of allowed expansions and contractions.
Our variational approach is carried out on a realistic dataset as well as on a
simulated one. We demonstrate how our proposed variational solution deals with
complex deformations through careful numerical experiments. While maintaining
the accuracy of the solution, the low-rank preprocessing is shown to speed up
the convergence of the variational problem. Beyond cardiac motion estimation,
our approach is promising for the analysis of other organs that experience
motion.
Comment: 15 pages, 10 figures, Physics in Medicine and Biology, 201
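The low-rank step described above (keeping the k dominant singular values of a Casorati matrix built from the frame sequence) can be sketched in a few lines of NumPy. This is an illustration of the general technique, not the authors' implementation; the function name is ours.

```python
import numpy as np

def casorati_lowrank(frames, k):
    """Rank-k approximation of an image sequence via its Casorati matrix.

    frames: array of shape (T, H, W); the Casorati matrix has one vectorized
    frame per column, so temporal redundancy shows up as low rank.
    """
    T, H, W = frames.shape
    C = frames.reshape(T, H * W).T               # Casorati matrix, (H*W) x T
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]         # keep k dominant singular values
    return C_k.T.reshape(T, H, W)
```

Because periodic cardiac sequences concentrate their energy in a few singular components, truncating the rest both denoises the data and shrinks the problem handed to the variational solver.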
The curvelet transform for image denoising
We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
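The "simple thresholding of the coefficients" step is transform-agnostic. A curvelet transform needs a dedicated implementation (e.g. CurveLab), so the sketch below uses the 2-D FFT as a stand-in transform purely to show the thresholding mechanics; the function and the threshold rule are our assumptions, not the paper's.

```python
import numpy as np

def threshold_denoise(noisy, sigma, k=3.0):
    """Zero transform coefficients below k * (noise level), then invert.

    The unnormalized 2-D FFT multiplies white-noise standard deviation by
    sqrt(N), hence the sqrt(noisy.size) factor in the threshold.
    """
    coeffs = np.fft.fft2(noisy)
    thresh = k * sigma * np.sqrt(noisy.size)
    coeffs[np.abs(coeffs) < thresh] = 0.0        # hard thresholding
    return np.real(np.fft.ifft2(coeffs))
```

With a real curvelet transform the same rule preserves edge and curve energy far better than a Fourier or wavelet basis, which is the point of the comparison in the abstract.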
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
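The three steps (a)-(c) above can be sketched in 1-D with NumPy. A shearlet dictionary needs a dedicated library, so the Fourier basis stands in for it here; the function, parameters, and threshold value are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def interp_refine(low, factor, iters=20, lam=5.0):
    """Upsample `low` by `factor`, then refine by iterative thresholding.

    (a) initial estimate via linear interpolation,
    (b) sparsity promoted by hard thresholding in a transform domain
        (FFT here, as a stand-in for a shearlet dictionary),
    (c) known low-resolution samples re-imposed each iteration.
    """
    low = np.asarray(low, dtype=float)
    N = len(low) * factor
    known = np.arange(0, N, factor)              # positions of known samples
    est = np.interp(np.arange(N), known, low)    # step (a)
    for _ in range(iters):
        c = np.fft.fft(est)
        c[np.abs(c) < lam] = 0.0                 # step (b)
        est = np.real(np.fft.ifft(c))
        est[known] = low                         # step (c)
    return est
```

For signals that are genuinely sparse in the chosen dictionary, the alternation converges toward an estimate that is both sparse and consistent with the observed samples.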
Audio Source Separation Using Sparse Representations
This is the author's final version of the article, first published as A. Nesbit, M. G. Jafari, E. Vincent and M. D. Plumbley. Audio Source Separation Using Sparse Representations. In W. Wang (Ed), Machine Audition: Principles, Algorithms and Systems. Chapter 10, pp. 246-264. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch010.
The authors address the problem of audio source separation, namely, the recovery of audio signals from recordings of mixtures of those signals. The sparse component analysis framework is a powerful method for achieving this. Sparse orthogonal transforms, in which only a few transform coefficients differ significantly from zero, are developed; once the signal has been transformed, energy is apportioned from each transform coefficient to each estimated source, and, finally, the signal is reconstructed using the inverse transform. The overriding aim of this chapter is to demonstrate how this framework, as exemplified here by two different decomposition methods which adapt to the signal to represent it sparsely, can be used to solve different problems in different mixing scenarios. To address the instantaneous (neither delays nor echoes) and underdetermined (more sources than mixtures) mixing model, a lapped orthogonal transform is adapted to the signal by selecting a basis from a library of predetermined bases. This method is closely related to the windowing methods used in the MPEG audio coding framework. In considering the anechoic (delays but no echoes) and determined (equal number of sources and mixtures) mixing case, a greedy adaptive transform is used based on orthogonal basis functions that are learned from the observed data, instead of being selected from a predetermined library of bases.
This is found to encode the signal characteristics by introducing a feedback system between the bases and the observed data. Experiments on mixtures of speech and music signals demonstrate that these methods give good signal approximations and separation performance, and indicate promising directions for future research.
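The apportioning of energy from transform coefficients to sources can be illustrated for the instantaneous mixing case with a toy winner-take-all (binary-masking) rule. This is a hedged sketch of the general principle, not the chapter's adaptive transforms: the FFT stands in for the adapted orthogonal dictionary, and the mixing directions are assumed known.

```python
import numpy as np

def separate(mixtures, A):
    """Separate an instantaneous stereo mixture by binary masking.

    mixtures: (2, N) time-domain channels; A: (2, n_src) mixing matrix whose
    columns give each source's direction. In a transform where sources rarely
    overlap, each coefficient points along one source's direction, so it is
    apportioned entirely to the best-matching source.
    """
    N = mixtures.shape[1]
    X = np.fft.rfft(mixtures, axis=1)            # (2, F) coefficients
    A = A / np.linalg.norm(A, axis=0)            # unit-norm mixing directions
    score = np.abs(A.T @ X)                      # (n_src, F) direction match
    mask = score == score.max(axis=0)            # winner takes the coefficient
    proj = A.T @ X                               # coefficient along each direction
    return np.fft.irfft(mask * proj, n=N, axis=1)
```

The chapter's methods differ precisely in how the transform is adapted so that this disjointness assumption holds as well as possible.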
Minimizing the residual topography effect on interferograms to improve DInSAR results: estimating land subsidence in Port-Said City, Egypt
The accurate detection of land subsidence rates in urban areas is important to identify damage-prone areas and provide decision-makers with useful information. Meanwhile, no precise measurements of land subsidence have been undertaken within the coastal Port-Said City in Egypt to evaluate its hazard in relationship to sea-level rise. In order to address this shortcoming, this work introduces and evaluates a methodology that substantially improves small subsidence rate estimations in an urban setting. Eight ALOS/PALSAR-1 scenes were used to estimate the land subsidence rates in Port-Said City, using the Small BAseline Subset (SBAS) DInSAR technique. A stereo pair of ALOS/PRISM was used to generate an accurate DEM to minimize the residual topography effect on the generated interferograms. A total of 347 well-distributed ground control points (GCPs) were collected in Port-Said City using a leveling instrument to calibrate the generated DEM. Moreover, the eight PALSAR scenes were co-registered using 50 well-distributed GCPs and used to generate 22 interferogram pairs. These PALSAR interferograms were subsequently filtered and used together with the coherence data to calculate the phase unwrapping. The phase-unwrapped interferogram pairs were then evaluated to discard four interferograms that were affected by phase jumps and phase ramps. Results confirmed that using an accurate DEM (ALOS/PRISM) was essential for accurately detecting small deformations. The vertical displacement rate during the investigated period (2007–2010) was estimated to be −28 mm. The results further indicate that the northern area of Port-Said City has been subjected to higher land subsidence rates compared to the southern area. Such land subsidence rates might induce significant environmental changes with respect to sea-level rise.
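Phase unwrapping, mentioned above, is needed because interferometric phase is only observed modulo 2*pi. Real SBAS processing uses 2-D, coherence-weighted unwrapping (e.g. SNAPHU); the 1-D principle, where jumps larger than pi are corrected by adding multiples of 2*pi, can be shown with numpy.unwrap:

```python
import numpy as np

# A deformation ramp worth three fringes, observed only as wrapped phase.
true_phase = np.linspace(0.0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))    # wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)                 # jumps > pi corrected by 2*pi steps
# `unwrapped` recovers the ramp (up to the unknowable constant 2*pi multiple,
# which is zero here because the ramp starts at phase 0).
```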
BMICA-independent component analysis based on B-spline mutual information estimator
The information theoretic concept of mutual information provides a general framework to evaluate dependencies between variables. Its estimation using B-splines, however, has not previously been used to build an approach to Independent Component Analysis. In this paper we present a B-spline estimator of mutual information to find the independent components in mixed signals. Tested on electroencephalography (EEG) signals, the resulting BMICA (B-Spline Mutual Information Independent Component Analysis)
exhibits better performance than the standard Independent Component Analysis algorithms FastICA, JADE, SOBI and EFICA in similar simulations. BMICA was also found to be more reliable than the renowned FastICA.
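The mutual-information quantity at the heart of this approach can be sketched with a plain histogram estimator. Note the hedge: the paper's contribution is a B-spline estimator, which spreads each sample over neighbouring bins; the hard-binned version below is only the simplest special case of that idea, with names of our choosing.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) = sum p(x,y) log[p(x,y) / (p(x)p(y))] in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                             # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An ICA algorithm built on such an estimator searches for an unmixing matrix whose outputs have (pairwise) mutual information as close to zero as possible.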
Wavelets and Imaging Informatics: A Review of the Literature
Modern medicine is a field that has been revolutionized by the emergence of computer and imaging technology. It is increasingly difficult, however, to manage the ever-growing enormous amount of medical imaging information available in digital formats. Numerous techniques have been developed to make the imaging information more easily accessible and to perform analysis automatically. Among these techniques, wavelet transforms have proven prominently useful not only for biomedical imaging but also for signal and image processing in general. Wavelet transforms decompose a signal into frequency bands, the widths of which are determined by a dyadic scheme. This particular way of dividing frequency bands matches the statistical properties of most images very well. During the past decade, there has been active research in applying wavelets to various aspects of imaging informatics, including compression, enhancements, analysis, classification, and retrieval. This review represents a survey of the most significant practical and theoretical advances in the field of wavelet-based imaging informatics.
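The dyadic decomposition described above is easy to see in the simplest wavelet, the Haar transform: one level splits a signal into a half-rate approximation (low band) and detail (high band), and recursing on the approximation yields the dyadic band structure. A minimal 1-D sketch (full-featured transforms are provided by libraries such as PyWavelets):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass, downsampled
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass, downsampled
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x
```

Because the transform is orthonormal, it preserves energy, which is what makes coefficient-domain processing (compression, enhancement, denoising) well behaved.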
Bayesian nonparametric models for peak identification in MALDI-TOF mass spectroscopy
We present a novel nonparametric Bayesian approach based on Lévy Adaptive
Regression Kernels (LARK) to model spectral data arising from MALDI-TOF (Matrix
Assisted Laser Desorption Ionization Time-of-Flight) mass spectrometry. This
model-based approach provides identification and quantification of proteins
through model parameters that are directly interpretable as the number of
proteins, mass and abundance of proteins and peak resolution, while having the
ability to adapt to unknown smoothness as in wavelet based methods. Informative
prior distributions on resolution are key to distinguishing true peaks from
background noise and resolving broad peaks into individual peaks for multiple
protein species. Posterior distributions are obtained using a reversible jump
Markov chain Monte Carlo algorithm and provide inference about the number of
peaks (proteins), their masses and abundance. We show through simulation
studies that the procedure has desirable true-positive and false-discovery
rates. Finally, we illustrate the method on five example spectra: a blank
spectrum, a spectrum with only the matrix of a low-molecular-weight substance
used to embed target proteins, a spectrum with known proteins, and a single
spectrum and average of ten spectra from an individual lung cancer patient.
Comment: Published at http://dx.doi.org/10.1214/10-AOAS450 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
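The forward model underlying the abstract above writes a spectrum as a superposition of kernels, each with directly interpretable parameters: mass (location), abundance (height) and resolution (width). As a hedged illustration only, the sketch below evaluates such a model with Gaussian kernels for fixed parameters; the paper's inference over the number of peaks and their parameters is done by reversible jump MCMC, which is not attempted here.

```python
import numpy as np

def spectrum(mz, masses, abundances, resolutions):
    """Evaluate sum_j abundance_j * exp(-(mz - mass_j)^2 / (2 * width_j^2)).

    mz: 1-D array of mass-to-charge values; the other arguments are 1-D
    arrays with one entry per peak (protein species).
    """
    mz = np.asarray(mz, dtype=float)[:, None]
    m = np.asarray(masses, dtype=float)[None, :]
    a = np.asarray(abundances, dtype=float)[None, :]
    w = np.asarray(resolutions, dtype=float)[None, :]
    return np.sum(a * np.exp(-((mz - m) ** 2) / (2.0 * w ** 2)), axis=1)
```

In the Bayesian treatment, the number of kernels and their parameters are random, and informative priors on the widths are what separate true peaks from broad background structure.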
- …