A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters
This paper proposes a hierarchical, multi-resolution framework for the
identification of model parameters and their spatial variability from noisy
measurements of the response or output. Such parameters are frequently
encountered in PDE-based models and correspond to quantities such as density or
pressure fields, elasto-plastic moduli and internal variables in solid
mechanics, conductivity fields in heat diffusion problems, permeability fields
in fluid flow through porous media etc. The proposed model has all the
advantages of traditional Bayesian formulations such as the ability to produce
measures of confidence for the inferences made and providing not only
predictive estimates but also quantitative measures of the predictive
uncertainty. In contrast to existing approaches it utilizes a parsimonious,
non-parametric formulation that favors sparse representations and whose
complexity can be determined from the data. The proposed framework is
non-intrusive and makes use of a sequence of forward solvers operating at
various resolutions. As a result, inexpensive, coarse solvers are used to
identify the most salient features of the unknown field(s) which are
subsequently enriched by invoking solvers operating at finer resolutions. This
leads to significant computational savings particularly in problems involving
computationally demanding forward models, but also improvements in accuracy.
The framework rests on a novel, adaptive scheme built on Sequential Monte
Carlo sampling, which is embarrassingly parallelizable and circumvents the
slow-mixing issues encountered in Markov Chain Monte Carlo schemes.
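As a rough illustration of the sampling machinery (not the paper's adaptive
multi-resolution scheme), here is a minimal tempered Sequential Monte Carlo
sketch on a made-up one-dimensional inference problem; all names and numbers
below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (not the paper's PDE setting): infer a scalar
# parameter theta from one noisy observation y = theta + noise, moving a
# particle population from prior to posterior by likelihood tempering.
y_obs, sigma = 1.5, 0.5

def log_lik(theta):
    return -0.5 * ((y_obs - theta) / sigma) ** 2

n = 2000
theta = rng.normal(0.0, 2.0, n)     # particles drawn from the N(0, 4) prior
logw = np.zeros(n)

betas = np.linspace(0.0, 1.0, 11)   # tempering schedule beta: 0 -> 1
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_lik(theta)          # incremental importance weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n / 2:            # effective sample size too low?
        theta = theta[rng.choice(n, size=n, p=w)]   # multinomial resampling
        logw[:] = 0.0
    # a full scheme would now apply an MCMC "move" step to rejuvenate particles

w = np.exp(logw - logw.max()); w /= w.sum()
posterior_mean = float(np.sum(w * theta))
```

Each tempering step reweights the whole population at once, which is why the
approach parallelizes trivially across particles.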
Constrained Approximation of Effective Generators for Multiscale Stochastic Reaction Networks and Application to Conditioned Path Sampling
Efficient analysis and simulation of multiscale stochastic systems of
chemical kinetics is an active area of research and the source of many
theoretical and computational challenges. In this paper, we present a
significant improvement to the constrained approach, which is a method for
computing effective dynamics of slowly changing quantities in these systems,
but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA
can cause errors in the estimation of effective dynamics for systems where the
difference in timescales between the "fast" and "slow" variables is not so
pronounced.
This new application of the constrained approach allows us to compute the
effective generator of the slow variables, without the need for expensive
stochastic simulations. This is achieved by finding the null space of the
generator of the constrained system. For complex systems where this is not
possible, or where the constrained subsystem is itself multiscale, the
constrained approach can then be applied iteratively. This results in breaking
the problem down into finding the solutions to many small eigenvalue problems,
which can be efficiently solved using standard methods.
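The null-space computation at the heart of the method can be illustrated on a
tiny example. In the sketch below the generator matrix and the slow-reaction
propensities are made up for illustration: the stationary distribution of a
constrained fast subsystem is extracted as the zero-eigenvalue eigenvector of
the transposed generator, then a state-dependent slow propensity is averaged
against it.

```python
import numpy as np

# Hypothetical generator Q of a constrained fast subsystem: a 3-state
# continuous-time Markov chain (rows sum to zero). The method needs the
# null space of Q^T, i.e. the stationary distribution pi with pi @ Q = 0.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# A small eigenvalue problem: the eigenvector of Q^T with eigenvalue 0.
vals, vecs = np.linalg.eig(Q.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals))])
pi /= pi.sum()                      # normalise to a probability distribution

# One entry of the effective generator: average a (made-up) state-dependent
# slow-reaction propensity against the fast stationary distribution.
slow_propensity = np.array([0.1, 0.4, 0.9])
effective_rate = float(pi @ slow_propensity)
```

Iterating this over the slow states yields the small eigenvalue problems the
abstract refers to, each solvable with standard dense methods.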
Since this methodology does not rely on the quasi steady-state assumption,
the effective dynamics that are approximated are highly accurate, and in the
case of systems with only monomolecular reactions, are exact. We will
demonstrate this with some numerics, and also use the effective generators to
sample paths of the slow variables which are conditioned on their endpoints, a
task which would be computationally intractable for the generator of the full
system.
Comment: 31 pages, 7 figures
Polarized wavelets and curvelets on the sphere
The statistics of the temperature anisotropies in the primordial cosmic
microwave background radiation field provide a wealth of information for
cosmology and for estimating cosmological parameters. Even sharper
inference should stem from the study of maps of the polarization state of the
CMB radiation. Measuring the extremely weak CMB polarization signal requires
very sensitive instruments. The full-sky maps of both temperature and
polarization anisotropies of the CMB to be delivered by the upcoming Planck
Surveyor satellite experiment are hence being awaited with excitement.
Multiscale methods, such as isotropic wavelets, steerable wavelets, or
curvelets, have been proposed in the past to analyze the CMB temperature map.
In this paper, we contribute to enlarging the set of available transforms for
polarized data on the sphere. We describe a set of new multiscale
decompositions for polarized data on the sphere, including decimated and
undecimated Q-U or E-B wavelet transforms and Q-U or E-B curvelets. The
proposed transforms are invertible and so allow for applications in data
restoration and denoising.
Comment: Accepted. Full paper with figures available at
http://jstarck.free.fr/aa08_pola.pd
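On a small flat patch the Q-U to E-B conversion has a simple Fourier-space
form, which gives a feel for why such decompositions are invertible. The
sketch below is a flat-sky toy analogue, not the spherical transforms of the
paper: it rotates the spin-2 pair (Q, U) by twice the wavevector angle and
back.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
Q = rng.normal(size=(n, n))        # toy Stokes Q map on a flat patch
U = rng.normal(size=(n, n))        # toy Stokes U map

kx = np.fft.fftfreq(n)[None, :]
ky = np.fft.fftfreq(n)[:, None]
phi = np.arctan2(ky, kx)           # angle of the wavevector

Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
Ek =  Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)   # E: "gradient" part
Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)   # B: "curl" part

# Rotating back recovers Q and U exactly -- the decomposition is invertible.
Q_back = np.real(np.fft.ifft2(Ek * np.cos(2 * phi) - Bk * np.sin(2 * phi)))
U_back = np.real(np.fft.ifft2(Ek * np.sin(2 * phi) + Bk * np.cos(2 * phi)))
```

The spherical constructions in the paper replace the Fourier rotation with
spin-2 spherical harmonics, but the invertibility argument is the same.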
Sparse component separation for accurate CMB map estimation
The Cosmological Microwave Background (CMB) is of prime importance for
cosmologists studying the birth of our universe. Unfortunately, most CMB
experiments such as COBE, WMAP or Planck do not provide a direct measure of the
cosmological signal; CMB is mixed up with galactic foregrounds and point
sources. For the sake of scientific exploitation, measuring the CMB requires
extracting several different astrophysical components (CMB, Sunyaev-Zel'dovich
clusters, galactic dust) from multi-wavelength observations. Mathematically
speaking, the problem of disentangling the CMB map from the galactic
foregrounds amounts to a component or source separation problem. In the field
of CMB studies, a very large range of source separation methods have been
applied which all differ from each other in the way they model the data and the
criteria they rely on to separate components. Two main difficulties are i) the
instrument's beam varies across frequencies and ii) the emission laws of most
astrophysical components vary across pixels. This paper aims at introducing a
very accurate modeling of CMB data, based on sparsity, accounting for beams
variability across frequencies as well as spatial variations of the components'
spectral characteristics. Based on this new sparse modeling of the data, a
sparsity-based component separation method coined Local-Generalized
Morphological Component Analysis (L-GMCA) is described. Extensive numerical
experiments have been carried out with simulated Planck data. These experiments
show the high efficiency of the proposed component separation methods to
estimate a clean CMB map with a very low foreground contamination, which makes
L-GMCA of prime interest for CMB studies.
Comment: submitted to A&
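The sparsity-driven separation idea can be sketched on a toy blind source
separation problem in the spirit of GMCA. This is a far cry from the full
L-GMCA pipeline, which works on multiscale coefficients with locally varying
mixing matrices; all sizes and thresholds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two sparse sources mixed into three channels, recovered by alternating
# hard thresholding of the sources and least-squares updates of the mixing
# matrix.
n_src, n_chan, n_samp = 2, 3, 2000
S_true = rng.normal(size=(n_src, n_samp)) * (rng.random((n_src, n_samp)) < 0.05)
A_true = rng.normal(size=(n_chan, n_src))
X = A_true @ S_true                                # noiseless toy mixtures

A = rng.normal(size=(n_chan, n_src))
k = int(0.1 * n_samp)                              # keep the 10% largest entries
for _ in range(50):
    S = np.linalg.pinv(A) @ X                      # least-squares source update
    for i in range(n_src):
        t = np.sort(np.abs(S[i]))[-k]
        S[i, np.abs(S[i]) < t] = 0.0               # hard thresholding: sparsity
    A = X @ np.linalg.pinv(S)                      # least-squares mixing update
    A /= np.linalg.norm(A, axis=0, keepdims=True)  # fix the scale ambiguity

rel_residual = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

Thresholding is what breaks the degeneracy of the factorization: without it,
any invertible recombination of the sources fits the data equally well.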
The curvelet transform for image denoising
We describe approximate digital implementations of two new mathematical
transforms, namely, the ridgelet transform and the curvelet transform. Our
implementations offer exact reconstruction, stability against perturbations,
ease of implementation, and low computational complexity. A central tool is
Fourier-domain computation of an approximate digital Radon transform. We
introduce a very simple interpolation in the Fourier space which takes
Cartesian samples and yields samples on a rectopolar grid, which is a
pseudo-polar sampling set based on a concentric squares geometry. Despite the
crudeness of our interpolation, the visual performance is surprisingly good.
Our ridgelet transform applies to the Radon transform a special overcomplete
wavelet pyramid whose wavelets have compact support in the frequency domain.
Our curvelet transform uses our ridgelet transform as a component step, and
implements curvelet subbands using a filter bank of à trous wavelet filters.
Our philosophy throughout is that transforms should be overcomplete, rather
than critically sampled. We apply these digital transforms to the denoising
of some standard images embedded in white noise. In the tests reported here,
simple thresholding of the curvelet coefficients is very competitive with
"state of the art" techniques based on wavelets, including thresholding of
decimated or undecimated wavelet transforms and also including tree-based
Bayesian posterior mean methods. Moreover, the curvelet reconstructions
exhibit higher perceptual quality than wavelet-based reconstructions,
offering visually sharper images and, in particular, higher quality recovery
of edges and of faint linear and curvilinear features. Existing theory for
curvelet and ridgelet transforms suggests that these new approaches can
outperform wavelet methods in certain image reconstruction problems. The
empirical results reported here are in encouraging agreement.
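The transform-threshold-invert pipeline the abstract describes can be
sketched with a one-level 2-D Haar wavelet standing in for the curvelet
transform; the image, noise level, and threshold below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
x = np.zeros((n, n)); x[32:96, 32:96] = 1.0        # toy "image": a square
y = x + 0.2 * rng.normal(size=(n, n))              # white Gaussian noise

def haar2(a):
    # one level of an orthonormal 2-D Haar transform
    s = (a[0::2, :] + a[1::2, :]) / np.sqrt(2); d = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
    ll = (s[:, 0::2] + s[:, 1::2]) / np.sqrt(2); lh = (s[:, 0::2] - s[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2); hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # exact inverse of haar2
    s = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(s)
    s[:, 0::2] = (ll + lh) / np.sqrt(2); s[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    a = np.empty((2 * s.shape[0], s.shape[1]))
    a[0::2, :] = (s + d) / np.sqrt(2); a[1::2, :] = (s - d) / np.sqrt(2)
    return a

ll, lh, hl, hh = haar2(y)
t = 3 * 0.2                                        # threshold ~ 3 sigma
for band in (lh, hl, hh):
    band[np.abs(band) < t] = 0.0                   # simple hard thresholding
x_hat = ihaar2(ll, lh, hl, hh)
```

Curvelets refine this recipe with directional, multiscale subbands, which is
what buys the better recovery of edges and curvilinear features.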
A proximal iteration for deconvolving Poisson noisy images using sparse representations
We propose an image deconvolution algorithm when the data is contaminated by
Poisson noise. The image to restore is assumed to be sparsely represented in a
dictionary of waveforms such as the wavelet or curvelet transforms. Our key
contributions are: First, we handle the Poisson noise properly by using the
Anscombe variance stabilizing transform leading to a {\it non-linear}
degradation equation with additive Gaussian noise. Second, the deconvolution
problem is formulated as the minimization of a convex functional with a
data-fidelity term reflecting the noise properties, and non-smooth
sparsity-promoting penalties over the image representation coefficients
(e.g. the $\ell_1$-norm). Third, a fast iterative backward-forward splitting
algorithm is
proposed to solve the minimization problem. We derive existence and uniqueness
conditions of the solution, and establish convergence of the iterative
algorithm. Finally, a GCV-based model selection procedure is proposed to
objectively select the regularization parameter. Experimental results are
carried out to show the striking benefits gained from taking into account the
Poisson statistics of the noise. These results also suggest that using
sparse-domain regularization may be tractable in many deconvolution
applications with Poisson noise such as astronomy and microscopy.
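Two ingredients of the approach can be checked in isolation: the Anscombe
transform's variance stabilization, and soft thresholding, which is the
proximal ("backward") step of the splitting iteration for an $\ell_1$
penalty. The means and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# The Anscombe transform z = 2*sqrt(y + 3/8) maps Poisson counts to data
# with roughly unit-variance Gaussian noise, whatever the intensity:
stds = []
for mean in (5.0, 20.0, 100.0):
    y = rng.poisson(mean, size=200_000)
    stds.append(float((2.0 * np.sqrt(y + 3.0 / 8.0)).std()))   # each ~ 1

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1: shrink every entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

In the full algorithm the forward step is a gradient step on the (now
Gaussian) data-fidelity term, and soft thresholding is applied to the
dictionary coefficients at every iteration.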