Sparse Bayesian mass-mapping with uncertainties: hypothesis testing of structure
A crucial aspect of mass-mapping, via weak lensing, is quantification of the
uncertainty introduced during the reconstruction process. Properly accounting
for these errors has been largely ignored to date. We present results from a
new method that reconstructs maximum a posteriori (MAP) convergence maps by
formulating an unconstrained Bayesian inference problem with Laplace-type
ℓ1-norm sparsity-promoting priors, which we solve via convex
optimization. Approaching mass-mapping in this manner allows us to exploit
recent developments in probability concentration theory to infer theoretically
conservative uncertainties for our MAP reconstructions, without relying on
assumptions of Gaussianity. For the first time these methods allow us to
perform hypothesis testing of structure, from which it is possible to
distinguish between physical objects and artifacts of the reconstruction. Here
we present this new formalism, demonstrate the method on illustrative examples,
before applying the developed formalism to two observational datasets of the
Abell 520 cluster. In our Bayesian framework it is found that neither Abell 520
dataset can conclusively determine the physicality of individual local massive
substructure at significant confidence. However, in both cases the recovered
MAP estimators are consistent with both sets of data.
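The sparsity-promoting MAP estimation described above can be sketched with a generic proximal-gradient (ISTA) solver for an ℓ1-regularised inverse problem. This is an illustrative toy, not the authors' mass-mapping pipeline: the operator, data, and regularisation weight below are all invented for the example.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Proximal gradient descent for the MAP problem
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1  (Laplace-type l1 prior)."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Toy inverse problem: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_map = ista(A, y, lam=0.5)
```

The soft-thresholding step is exactly the proximal map of the ℓ1 prior, which is what makes the MAP problem solvable by simple first-order iterations.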
Image Decomposition and Separation Using Sparse Representations: An Overview
This paper gives essential insights into the use of sparsity and morphological diversity in image decomposition and source separation by reviewing our recent work in this field. The idea of morphologically decomposing a signal into its building blocks is an important problem in signal processing and has far-reaching applications in science and technology. Starck et al. proposed a novel decomposition method, morphological component analysis (MCA), based on sparse representation of signals. MCA assumes that each (monochannel) signal is the linear mixture of several layers, the so-called morphological components, that are morphologically distinct, e.g., sines and bumps. The success of this method relies on two tenets: sparsity and morphological diversity. That is, each morphological component is sparsely represented in a specific transform domain, and that transform is highly inefficient in representing the other content in the mixture. Once such transforms are identified, MCA is an iterative thresholding algorithm that is capable of decoupling the signal content. Sparsity and morphological diversity have also been used as a novel and effective source of diversity for blind source separation (BSS), hence extending MCA to multichannel data. Building on these ingredients, we will provide an overview of the generalized MCA introduced by the authors as a fast and efficient BSS method. We will illustrate the application of these algorithms on several real examples. We conclude our tour by briefly describing our software toolboxes, made available for download on the Internet, for sparse signal and image decomposition and separation.
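The iterative-thresholding core of MCA can be sketched with a minimal two-dictionary toy: an orthonormal DCT that sparsely represents the smooth layer, and the identity that sparsely represents spikes. The threshold schedule and test signal are invented for the example; this is not the authors' full implementation.

```python
import numpy as np

def dct_mat(n):
    """Orthonormal DCT-II analysis matrix (rows are the atoms)."""
    t = np.arange(n)
    M = np.cos(np.pi * (2 * t[None, :] + 1) * t[:, None] / (2 * n))
    M[0] /= np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def hard_threshold(c, t):
    return c * (np.abs(c) > t)

def mca(y, n_iter=100, t0=5.0, t_min=0.5):
    """Alternating thresholding: split y into a DCT-sparse (smooth) layer
    and an identity-sparse (spike) layer, lowering the threshold each pass."""
    n = len(y)
    D = dct_mat(n)                # analysis: c = D @ x; synthesis: x = D.T @ c
    smooth = np.zeros(n)
    spikes = np.zeros(n)
    for i in range(n_iter):
        t = max(t0 * (1.0 - i / n_iter), t_min)
        # re-estimate the smooth layer from the current residual
        smooth = D.T @ hard_threshold(D @ (y - spikes), t)
        # re-estimate the spike layer from what the smooth layer cannot explain
        spikes = hard_threshold(y - smooth, t)
    return smooth, spikes

# Toy mixture: one DCT atom (smooth content) plus two spikes.
n = 128
idx = np.arange(n)
clean_smooth = np.cos(np.pi * (2 * idx + 1) * 8 / (2 * n))
clean_spikes = np.zeros(n)
clean_spikes[30], clean_spikes[90] = 3.0, -3.0
smooth, spikes = mca(clean_smooth + clean_spikes)
```

Because each dictionary is inefficient at representing the other layer's content, the decreasing threshold assigns the large coefficients of each transform to the correct morphological component.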
Analysis, Visualization, and Transformation of Audio Signals Using Dictionary-based Methods
MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization
Composite convex optimization models arise in several applications, and are
especially prevalent in inverse problems with a sparsity inducing norm and in
general convex optimization with simple constraints. The most widely used
algorithms for convex composite models are accelerated first order methods,
however they can take a large number of iterations to compute an acceptable
solution for large-scale problems. In this paper we propose to speed up first
order methods by taking advantage of the structure present in many applications
and in image processing in particular. Our method is based on multi-level
optimization methods and exploits the fact that many applications that give
rise to large scale models can be modelled using varying degrees of fidelity.
We use Nesterov's acceleration techniques together with the multi-level
approach to achieve an O(1/√ε) convergence rate, where ε
denotes the desired accuracy. The proposed method has a better
convergence rate than any other existing multi-level method for convex
problems, and in addition has the same rate as accelerated methods, which is
known to be optimal for first-order methods. Moreover, as our numerical
experiments show, on large-scale face recognition problems our algorithm is
several times faster than the state of the art.
Enhancing face recognition at a distance using super resolution
The characteristics of surveillance video generally include low-resolution images and blurred images. Decreases in image resolution lead to loss of high frequency facial components, which is expected to adversely affect recognition rates. Super resolution (SR) is a technique used to generate a higher resolution image from a given low-resolution, degraded image. Dictionary based super resolution pre-processing techniques have been developed to overcome the problem of low-resolution images in face recognition. However, super resolution reconstruction process, being ill-posed, and results in visual artifacts that can be visually distracting to humans and/or affect machine feature extraction and face recognition algorithms. In this paper, we investigate the impact of two existing super-resolution methods to reconstruct a high resolution from single/multiple low-resolution images on face recognition. We propose an alternative scheme that is based on dictionaries in high frequency wavelet subbands. The performance of the proposed method will be evaluated on databases of high and low-resolution images captured under different illumination conditions and at different distances. We shall demonstrate that the proposed approach at level 3 DWT decomposition has superior performance in comparison to the other super resolution methods
DWT and SWT based Image Super Resolution without Degrading Clarity
This project presents a self-similarity-based approach that is able to use large groups of similar patches extracted from the input image to solve the SISR problem. It introduce a novel prior leading to the collaborative filtering of patch groups in a 1D similarity domain and couple it with an iterative back-projection framework. The performance of the proposed algorithm is evaluated on a number of SISR benchmark data sets. Without using any external data, the proposed approach outperforms the current non-convolutional neural network-based methods on the tested data sets for various scaling factors. As an extension of this project, Discrete and Stationary Wavelet Decomposition is proposed to improve accuracy levels
Effective sparse representation of X-Ray medical images
Effective sparse representation of X-Ray medical images within the context of data reduction is considered. The proposed framework is shown to render an enormous reduction in the cardinality of the data set required to represent this class of images at very good quality. The goal is achieved by a) creating a dictionary of suitable elements for the image decomposition in the wavelet domain and b) applying effective greedy strategies for selecting the particular elements which enable the sparse decomposition of the wavelet coefficients. The particularity of the approach is that it can be implemented at very competitive processing time and low memory requirements
- âŚ