Data augmentation for galaxy density map reconstruction
The matter density is a key quantity in modern cosmology, as many
phenomena are linked to matter fluctuations. However, this density is not
directly observable; it is estimated through lensing maps or galaxy surveys. In
this article, we focus on galaxy surveys, which are incomplete and noisy
observations of the galaxy density: incomplete because part of the sky is
unobserved or unreliable, and noisy because they are count maps degraded by
Poisson noise. Using a data augmentation method, we propose a two-step
procedure for recovering the density map, one step inferring the missing data
and the other estimating the density. The results show that the missing areas
are efficiently inferred and that the statistical properties of the maps are
very well preserved.
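
As a rough illustration of the two-step idea, the Python sketch below alternates between drawing plausible counts for the masked region from the current density estimate and re-estimating the density from the completed Poisson count map. The Gaussian smoothing prior, the fixed mean count, and the simple fixed-point iteration are assumptions made for this toy example, not the estimator used in the article.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def recover_density(counts, mask, mean_counts, n_iter=50, smooth_sigma=2.0):
    """counts: observed galaxy counts; mask: True where the sky is observed."""
    density = np.ones_like(counts, dtype=float)   # relative density (1 = mean)
    for _ in range(n_iter):
        # Step 1 (augmentation): draw plausible counts for the unobserved area
        # from the current density estimate.
        simulated = rng.poisson(mean_counts * density)
        completed = np.where(mask, counts, simulated)
        # Step 2 (estimation): update the density from the completed count map,
        # here by simple smoothing of the normalized counts.
        density = gaussian_filter(completed / mean_counts, smooth_sigma)
    return density

# Toy usage: a 64x64 field with about 30% of the sky unobserved.
truth = 1.0 + 0.3 * gaussian_filter(rng.standard_normal((64, 64)), 4)
mask = rng.random((64, 64)) > 0.3
observed_counts = rng.poisson(20.0 * truth)
estimate = recover_density(observed_counts, mask, mean_counts=20.0)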
A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging
Conventional LIDAR systems require hundreds or thousands of photon detections
to form accurate depth and reflectivity images. Recent photon-efficient
computational imaging methods are remarkably effective with only 1.0 to 3.0
detected photons per pixel, but they are not demonstrated at
signal-to-background ratio (SBR) below 1.0 because their imaging accuracies
degrade significantly in the presence of high background noise. We introduce a
new approach to depth and reflectivity estimation that focuses on unmixing
contributions from signal and noise sources. At each pixel in an image,
short-duration range gates are adaptively determined and applied to remove
detections likely to be due to noise. For pixels with too few detections to
perform this censoring accurately, we borrow data from neighboring pixels to
improve depth estimates, where the neighborhood formation is also adaptive to
scene content. Algorithm performance is demonstrated on experimental data at
varying levels of noise. Results show improved performance of both reflectivity
and depth estimates over state-of-the-art methods, especially at low
signal-to-background ratios. In particular, accurate imaging is demonstrated
with SBR as low as 0.04. This validation of a photon-efficient, noise-tolerant
method demonstrates the viability of rapid, long-range, and low-power LIDAR
imaging.
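
The per-pixel censoring step can be pictured with the Python sketch below, which keeps only the detections falling inside the densest short range gate and borrows detections from neighbors when too few survive. The gate width, the minimum signal count, the median depth estimate, and the neighbor-borrowing rule are illustrative assumptions, not the exact algorithm of the paper.

import numpy as np

def gate_detections(times, gate_width):
    """Return the detections inside the densest window of length gate_width."""
    t = np.sort(np.asarray(times, dtype=float))
    if t.size == 0:
        return t
    # For each detection i, count detections falling in [t[i], t[i] + gate_width].
    right = np.searchsorted(t, t + gate_width, side="right")
    counts = right - np.arange(t.size)
    i = int(np.argmax(counts))
    return t[i:right[i]]

def estimate_depth(pixel_times, neighbor_times, gate_width=2e-9,
                   min_signal=3, c=3e8):
    """Gate this pixel; if too few detections survive, borrow from neighbors."""
    gated = gate_detections(pixel_times, gate_width)
    if gated.size < min_signal:
        gated = gate_detections(np.concatenate([pixel_times, neighbor_times]),
                                gate_width)
    if gated.size == 0:
        return np.nan
    return 0.5 * c * np.median(gated)   # time of flight converted to depth

# Toy usage: 4 signal photons around 40 ns among 12 uniform background photons.
rng = np.random.default_rng(1)
signal = 40e-9 + 0.3e-9 * rng.standard_normal(4)
background = rng.uniform(0.0, 100e-9, 12)
print(estimate_depth(np.concatenate([signal, background]),
                     neighbor_times=np.array([])))   # roughly 6 m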
A hybrid approach to CMB lensing reconstruction on all-sky intensity maps
Based on realistic simulations, we propose a hybrid method to reconstruct
the lensing potential power spectrum directly on PLANCK-like CMB frequency
maps. This entails using a large galactic mask and dealing with strongly
inhomogeneous noise. For l < 100, we show that a full-sky inpainting method,
already described in a previous work, still allows a minimum-variance
reconstruction, with a bias that must be accounted for by a Monte Carlo method
but that does not couple to the deflection field. For l > 100, we develop a
method based on tiling the cut sky with local 10x10 degree overlapping tangent
planes (referred to in the following as "patches"). This requires solving
various issues concerning their size and position, non-periodic boundaries, and
irregularly sampled data after the sphere-to-plane projection. We show how the
leading noise term of the quadratic lensing estimator applied to an apodized
patch can still be taken directly from the data. To avoid losing spatial
accuracy, we develop a tool that allows the fast determination of the complex
Fourier series coefficients of a bi-dimensional irregularly sampled dataset
without performing an interpolation. We show that the multi-patch approach
allows the lensing power spectrum to be reconstructed with a very small bias,
thanks to avoiding the galactic mask and lowering the noise inhomogeneities,
while retaining an almost minimal variance. The data quality can be assessed at
each stage and simple bi-dimensional spectra built, which allows the control of
local systematic errors.
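
The Python sketch below illustrates the idea of evaluating the Fourier series coefficients of an irregularly sampled patch directly, without first interpolating onto a regular grid. The direct O(N_samples x N_modes) summation, the patch size, and the mode range are illustrative assumptions; the article develops a fast dedicated tool for this computation.

import numpy as np

def fourier_coeffs(x, y, values, L, n_modes):
    """x, y in [0, L); returns coefficients for modes -n_modes..n_modes."""
    ks = np.arange(-n_modes, n_modes + 1)
    coeffs = np.zeros((ks.size, ks.size), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            phase = np.exp(-2j * np.pi * (kx * x + ky * y) / L)
            coeffs[i, j] = np.mean(values * phase)   # Monte Carlo quadrature
    return ks, coeffs

# Toy usage: recover a single plane wave from scattered samples on a 10x10 patch.
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 2000)
y = rng.uniform(0.0, 10.0, 2000)
values = np.cos(2.0 * np.pi * (3.0 * x - 2.0 * y) / 10.0)
ks, coeffs = fourier_coeffs(x, y, values, L=10.0, n_modes=5)
print(np.round(np.abs(coeffs[ks == 3, ks == -2]), 2))   # close to 0.5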
Monte Carlo guided Diffusion for Bayesian linear inverse problems
Ill-posed linear inverse problems that combine knowledge of the forward
measurement model with prior models arise frequently in various applications,
from computational photography to medical imaging. Recent research has focused
on solving these problems with score-based generative models (SGMs) that
produce perceptually plausible images, especially in inpainting problems. In
this study, we exploit the particular structure of the prior defined in the SGM
to formulate recovery in a Bayesian framework as a Feynman--Kac model adapted
from the forward diffusion model used to construct score-based diffusion. To
solve this Feynman--Kac problem, we propose the use of Sequential Monte Carlo
methods. The proposed algorithm, MCGdiff, is shown to be theoretically grounded
and we provide numerical simulations showing that it outperforms competing
baselines when dealing with ill-posed inverse problems.
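
A toy Python sketch of the general structure, a particle approximation built on the reverse diffusion with likelihood-based reweighting and resampling, is given below. The Gaussian data prior with known score, the tempered likelihood potentials, and the multinomial resampling at every step are simplifying assumptions and do not reproduce the MCGdiff algorithm.

import numpy as np

rng = np.random.default_rng(3)
d, n_particles, n_steps = 2, 500, 200
A = np.array([[1.0, 0.0]])                  # observe only the first coordinate
sigma_y = 0.1
y = np.array([0.8])

betas = np.linspace(1e-4, 0.05, n_steps)    # forward noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def score_prior(x, t):
    # A standard-normal data prior stays standard normal under this forward
    # process, so its diffused score is simply -x at every step.
    return -x

def log_potential(x, t):
    # Tempered likelihood of y given a particle at diffusion step t.
    pred = (x / np.sqrt(alphas_bar[t])) @ A.T
    var = sigma_y**2 + (1.0 - alphas_bar[t]) / alphas_bar[t]
    return -0.5 * np.sum((y - pred) ** 2, axis=1) / var

x = rng.standard_normal((n_particles, d))   # particles at the final noise level
prev = log_potential(x, n_steps - 1)
for t in reversed(range(n_steps)):
    # Reverse-diffusion move for every particle.
    mean = (x + betas[t] * score_prior(x, t)) / np.sqrt(1.0 - betas[t])
    x = mean + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    # Reweight by the change in tempered likelihood, then resample.
    cur = log_potential(x, max(t - 1, 0))
    logw = cur - prev
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    x, prev = x[idx], cur[idx]

print(x.mean(axis=0))   # first coordinate close to y = 0.8, second close to 0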
Efficient Variational Inference for Hierarchical Models of Images, Text, and Networks
Variational inference provides a general optimization framework to approximate the posterior distributions of latent variables in probabilistic models. Although effective in simple scenarios, variational inference may be inaccurate or infeasible when the data is high-dimensional, the model structure is complicated, or variable relationships are non-conjugate. We propose solutions to these problems through the smart design and leverage of model structures, the rigorous derivation of variational bounds, and the creation of flexible algorithms for various models with rich, non-conjugate dependencies.
Concretely, we first design an interpretable generative model for natural images, in which the hundreds of thousands of pixels per image are split into small patches represented by Gaussian mixture models. Through structured variational inference, the evidence lower bound of this model automatically recovers the popular expected patch log-likelihood method for image processing. A nonparametric extension using hierarchical Dirichlet processes further enables self-similarities to be captured and image-specific clusters created during inference, boosting image denoising and inpainting accuracy.
Then we move on to text data, and design hierarchical topic graphs that generalize the bipartite noisy-OR models previously used for medical diagnosis. We derive auxiliary bounds to overcome the non-conjugacy of noisy-OR conditionals, and use stochastic variational inference to efficiently train on datasets with hundreds of thousands of documents. We dramatically increase the algorithm speed through a constrained family of variational bounds, so that only the ancestors of the sparse observed tokens of each document need to be considered.
Finally, we propose a general-purpose Monte Carlo variational inference strategy that is directly applicable to any model with discrete variables. Compared to REINFORCE-style stochastic gradient updates, our coordinate-ascent updates have lower variance and converge much faster. Compared to auxiliary-variable bounds crafted for each individual model, our algorithm is simpler to derive and may be easily integrated into probabilistic programming languages for broader use. By avoiding auxiliary variables, we also tighten likelihood bounds and increase robustness to local optima. Extensive experiments on real-world models of images, text, and networks illustrate these appealing advantages.
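
As a minimal illustration of the last contribution, the Python sketch below performs Monte Carlo coordinate-ascent variational inference on a tiny chain of discrete variables: each mean-field factor is updated in closed form, with the required expectation over the remaining variables estimated by sampling from their current factors. The toy chain model, the coupling strength, and the sample sizes are assumptions and are not one of the image, text, or network models studied in the dissertation.

import numpy as np

rng = np.random.default_rng(4)
n_sites, K = 8, 3
coupling = 1.5                                  # reward for neighbors agreeing
obs_loglik = rng.standard_normal((n_sites, K))  # toy log p(x_i | z_i = k)

def log_joint(z):
    pairwise = coupling * np.sum(z[:-1] == z[1:])
    return pairwise + obs_loglik[np.arange(n_sites), z].sum()

phi = np.full((n_sites, K), 1.0 / K)            # mean-field factors q(z_i)
n_samples = 30
for sweep in range(20):                         # coordinate-ascent sweeps
    for i in range(n_sites):
        # Monte Carlo estimate of E_q[log p(x, z)] for each candidate z_i = k.
        scores = np.zeros(K)
        for _ in range(n_samples):
            z = np.array([rng.choice(K, p=phi[j]) for j in range(n_sites)])
            for k in range(K):
                z[i] = k
                scores[k] += log_joint(z)
        scores /= n_samples
        phi[i] = np.exp(scores - scores.max())
        phi[i] /= phi[i].sum()

print(phi.argmax(axis=1))                       # most probable label per site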
Image processing techniques for high-speed atomic force microscopy
Atomic force microscopy (AFM) is a powerful tool for imaging topography or other characteristics of sample surfaces at nanometer-scale spatial resolution by recording the interaction of a sharp probe with the surface. Despite its excellent spatial resolution, one of the enduring challenges in AFM imaging is its poor temporal resolution relative to the rate of dynamics in many systems of interest. This has led to a large research effort on the development of high-speed AFM (HS-AFM). Most of these efforts focus on mechanical improvements and control algorithm design. This dissertation investigates a complementary HS-AFM approach based on the idea of undersampling, which aims to increase the imaging rate of the instrument by reducing the number of pixels on the sample surface that must be acquired to create a high-quality image.
The first part of this work focuses on the reconstruction of images sub-sampled according to a scheme known as μ-path patterns. These patterns consist of randomly placed, short, disjoint scans and are designed specifically for fast, efficient, and consistent data acquisition in AFM. We compare compressive sensing (CS) reconstruction methods with inpainting methods on recovering μ-path undersampled images. The results illustrate that the reconstruction quality depends on the choice of reconstruction method and the sample under study, with CS generally producing a superior result for samples with sparse frequency content and inpainting performing better for samples with information limited to low frequencies. Motivated by the comparison, a basis pursuit vertical variation (BPVV) method, combining CS and inpainting, is proposed. Building on the single-image reconstruction results, we also extend our analysis to sequences of AFM frames, in which higher overall video reconstruction quality is achieved by sharing pixels among different frames.
The second part of the thesis considers patterns for sub-sampling in AFM. The allocation of measurements plays an important role in producing accurate reconstructions of the sample surface. We analyze the expected image reconstruction error using a greedy CS algorithm of our design, termed simplified matching pursuit (SMP), and propose a Monte Carlo-based strategy to create μ-path patterns that minimize the expected error. Because these μ-path patterns involve a collection of disjoint scan paths, they require the tip of the instrument to be repeatedly lifted from and re-engaged to the surface. In many cases, the re-engagements make up a significant portion of the total data acquisition time. We therefore extend our Monte Carlo design strategy to find continuous scan patterns that minimize the reconstruction error without requiring the tip to be lifted from the surface.
For the final part of the work, we provide a hardware demonstration on a commercial AFM. We describe the hardware implementation details and image a calibration grating using the proposed μ-path and continuous scan patterns. The sample surface is reconstructed from the acquired data using CS and inpainting methods. The recovered image quality and achievable imaging rate are compared to full raster scans of the sample. The experimental results show that the proposed scan patterns, combined with the reconstruction methods, can produce higher image quality with less imaging time.
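
For intuition about the reconstruction step, the Python sketch below recovers an undersampled surface by iterative soft thresholding that enforces consistency with the scanned pixels and sparsity in the 2D DCT domain. The sampling pattern, threshold, and sparsity basis are illustrative assumptions, and the sketch does not reproduce the BPVV or SMP algorithms of the dissertation.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(5)

def cs_reconstruct(samples, mask, n_iter=200, thresh=0.02):
    """samples: values at the scanned pixels; mask: True where a pixel was scanned."""
    img = np.zeros(mask.shape)
    for _ in range(n_iter):
        img[mask] = samples                       # enforce the measured pixels
        coeffs = dctn(img, norm="ortho")
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
        img = idctn(coeffs, norm="ortho")         # sparsify, then transform back
    img[mask] = samples
    return img

# Toy usage: scan a smooth 64x64 "surface" along 100 short horizontal segments
# (loosely mimicking mu-path sub-sampling), then reconstruct the full image.
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
surface = np.sin(6.0 * xx) * np.cos(4.0 * yy)
mask = np.zeros((64, 64), dtype=bool)
for _ in range(100):
    r, c = rng.integers(0, 64), rng.integers(0, 54)
    mask[r, c:c + 10] = True                      # one short scan of 10 pixels
recon = cs_reconstruct(surface[mask], mask)
print(np.sqrt(np.mean((recon - surface) ** 2)))   # reconstruction RMSE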
Adaptive weighting of Bayesian physics informed neural networks for multitask and multiscale forward and inverse problems
In this paper, we present a novel methodology for automatic adaptive
weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we
demonstrate that this makes it possible to robustly address multi-objective and
multi-scale problems. BPINNs are a popular framework for data assimilation,
combining Uncertainty Quantification (UQ) with the constraints of Partial
Differential Equations (PDEs). The relative weights of the BPINN target
distribution terms are directly related to the inherent uncertainty in the
respective learning tasks. Yet, they are usually set manually and a priori,
which can lead to pathological behavior, stability concerns, and conflicts
between tasks; these obstacles have deterred the use of BPINNs for inverse
problems with multi-scale dynamics. The present weighting strategy
automatically tunes the weights by considering the multi-task nature of the
target posterior distribution. We show that this remedies the failure modes of
BPINNs and provides efficient exploration of the optimal Pareto front. This
leads to better convergence and stability of BPINN training while reducing
sampling bias. The determined weights moreover carry information about task
uncertainties, reflecting the noise levels in the data and the adequacy of the
PDE model. We demonstrate this in numerical experiments on Sobolev training,
where we compare against analytically optimal baselines, and in a multi-scale
Lotka-Volterra inverse problem. We eventually apply this framework to an
inpainting task and an inverse problem involving latent field recovery for
incompressible flow in complex geometries.
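
As a rough, non-Bayesian illustration of adaptive weighting, the Python sketch below balances the two loss terms of a toy multi-task objective by weighting each task according to the spread of its parameter gradients, so that neither term dominates the updates. The toy model, the gradient-variance heuristic, and the weight clipping are assumptions and do not reproduce the BPINN scheme of the paper.

import numpy as np

rng = np.random.default_rng(6)
theta = rng.standard_normal(3)                    # parameters (a, b, c)
x = np.linspace(0.0, 1.0, 50)
data = 2.0 * x + 0.5 + 0.05 * rng.standard_normal(50)

def task_losses(theta):
    a, b, c = theta
    data_loss = np.mean((a * x + b - data) ** 2)  # observation (data) task
    pde_loss = (a - 2.0 * c) ** 2                 # toy "PDE" constraint a = 2c
    return np.array([data_loss, pde_loss])

def grad(f, theta, eps=1e-5):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        g[i] = (f(theta + step) - f(theta - step)) / (2.0 * eps)
    return g

weights, lr = np.ones(2), 0.005
for it in range(5000):
    grads = np.stack([grad(lambda t: task_losses(t)[k], theta) for k in range(2)])
    if it % 250 == 0:
        # Re-balance: tasks whose gradients vary little get larger weights,
        # clipped so the combined update stays stable.
        spread = grads.var(axis=1) + 1e-12
        weights = np.clip(np.sqrt(spread.max() / spread), 1.0, 10.0)
    theta -= lr * (weights @ grads)

print(theta)   # approaches the joint optimum near [2.0, 0.5, 1.0]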