1,602 research outputs found
Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization
We address the inverse problem of cosmic large-scale structure reconstruction
from a Bayesian perspective. For a linear data model, a number of known and
novel reconstruction schemes, which differ in terms of the underlying signal
prior, data likelihood, and numerical inverse extra-regularization scheme, are
derived and classified. The Bayesian methodology presented in this paper aims
to unify and extend the following methods: Wiener-filtering, Tikhonov
regularization, Ridge regression, Maximum Entropy, and inverse regularization
techniques. The inverse techniques considered here are the asymptotic
regularization, the Jacobi, Steepest Descent, Newton-Raphson,
Landweber-Fridman, and both linear and non-linear Krylov methods based on
Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel Conjugate Gradients. The
structures of the up-to-date highest-performing algorithms are presented, based
on an operator scheme, which permits one to exploit the power of fast Fourier
transforms. Using such an implementation of the generalized Wiener-filter in
the novel ARGO-software package, the different numerical schemes are
benchmarked with 1-, 2-, and 3-dimensional problems including structured white
and Poissonian noise, data windowing and blurring effects. A novel numerical
Krylov scheme is shown to be superior in terms of performance and fidelity.
These fast inverse methods ultimately will enable the application of sampling
techniques to explore complex joint posterior distributions. We outline how the
space of the dark-matter density field, the peculiar velocity field, and the
power spectrum can jointly be investigated by a Gibbs-sampling process. Such a
method can be applied to correct for redshift distortions in the observed
galaxies and for time-reversal reconstructions of the initial density field.
Comment: 40 pages, 11 figures
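The operator formulation the abstract mentions can be illustrated with a minimal FFT-based Wiener filter: for statistically homogeneous fields, the signal and noise covariances are diagonal in Fourier space, so the filter reduces to a per-mode weight. The flat (white) toy spectra below are illustrative assumptions, not the paper's data model.

```python
# Minimal sketch of a diagonal Wiener filter applied in Fourier space,
# assuming homogeneous signal and noise so both covariances are diagonal
# after an FFT. Flat spectra are toy choices for illustration.
import numpy as np

def wiener_filter(data, signal_power, noise_power):
    """Reconstruct the signal as  s = S (S + N)^{-1} d  per Fourier mode."""
    d_k = np.fft.fftn(data)
    w = signal_power / (signal_power + noise_power)  # per-mode filter weight
    return np.real(np.fft.ifftn(w * d_k))

rng = np.random.default_rng(1)
signal = rng.standard_normal(64)                 # white toy signal, unit variance
data = signal + 0.5 * rng.standard_normal(64)    # white noise, variance 0.25
rec = wiener_filter(data, signal_power=1.0, noise_power=0.25)
# with flat spectra the filter is a uniform shrinkage by 1/(1 + 0.25) = 0.8;
# a k-dependent power spectrum would make the weight vary per mode
```

With structured (mode-dependent) spectra, the same two FFTs and one pointwise multiply still suffice, which is what makes the operator scheme fast.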
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
In this paper, we introduce a new image representation based on a multilayer
kernel machine. Unlike traditional kernel methods where data representation is
decoupled from the prediction task, we learn how to shape the kernel with
supervision. We proceed by first proposing improvements of the
recently-introduced convolutional kernel networks (CKNs) in the context of
unsupervised learning; then, we derive backpropagation rules to take advantage
of labeled training data. The resulting model is a new type of convolutional
neural network, where optimizing the filters at each layer is equivalent to
learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We
show that our method achieves reasonably competitive performance for image
classification on some standard "deep learning" datasets such as CIFAR-10 and
SVHN, and also for image super-resolution, demonstrating the applicability of
our approach to a large variety of image-related tasks.
Comment: to appear in Advances in Neural Information Processing Systems (NIPS)
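The basic CKN building block (extracting image patches and mapping them through a patch kernel evaluated against a small set of learned anchor patches) can be sketched as follows. The patch size, anchor count, and Gaussian kernel feature map are illustrative assumptions, not the paper's exact construction.

```python
# Toy sketch of one convolutional kernel layer: map each normalized patch to
# its Gaussian-kernel similarities with a few anchor patches. Patch size,
# anchors, and sigma are hypothetical choices for illustration.
import numpy as np

def extract_patches(img, size=3):
    """Collect all size x size patches of a 2-D image as rows."""
    H, W = img.shape
    out = []
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            out.append(img[i:i + size, j:j + size].ravel())
    return np.array(out)

def ckn_layer(img, anchors, sigma=1.0):
    """Kernel feature map of each patch against learned anchor patches."""
    P = extract_patches(img)
    # normalize patches (the kernel acts on the sphere, as in CKNs)
    P = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-8)
    d2 = ((P[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))  # (n_patches, n_anchors)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
anchors = rng.standard_normal((4, 9))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
feats = ckn_layer(img, anchors)           # shape (36, 4)
```

In the supervised variant described in the abstract, the anchors (filters) would be trained by backpropagation rather than fixed.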
Matrix probing: a randomized preconditioner for the wave-equation Hessian
This paper considers the problem of approximating the inverse of the
wave-equation Hessian, also called normal operator, in seismology and other
types of wave-based imaging. An expansion scheme for the pseudodifferential
symbol of the inverse Hessian is set up. The coefficients in this expansion are
found via least-squares fitting from a certain number of applications of the
normal operator on adequate randomized trial functions built in curvelet space.
It is found that the number of parameters that can be fitted increases with the
amount of information present in the trial functions, with high probability.
Once an approximate inverse Hessian is available, application to an image of
the model can be done in very low complexity. Numerical experiments show that
randomized operator fitting offers a compelling preconditioner for the
linearized seismic inversion problem.
Comment: 21 pages, 6 figures
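The probing idea (fit an expansion of the inverse operator by least squares from a few applications of the forward operator to random vectors) can be sketched with a toy diagonal operator and a polynomial basis standing in for the curvelet-based symbol expansion. The operator, basis, and problem sizes below are assumptions for illustration only.

```python
# Hedged sketch of matrix probing: approximate the inverse of an operator A
# as sum_j c_j B_j, fitting c by least squares from applications of A to
# random probes. Here A is a toy diagonal SPD operator and B_j = A^j is a
# polynomial basis, not the paper's curvelet construction.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 30, 4, 6                       # dimension, basis size, num probes
A = np.diag(np.linspace(1.0, 2.0, n))    # toy stand-in "normal operator"
basis = [np.linalg.matrix_power(A, j) for j in range(p)]

X = rng.standard_normal((n, m))          # random probe vectors
Y = A @ X                                # applications of the operator
# least squares: choose c so that (sum_j c_j B_j) Y ~ X
M = np.stack([(B @ Y).ravel() for B in basis], axis=1)   # (n*m, p)
c, *_ = np.linalg.lstsq(M, X.ravel(), rcond=None)
A_inv_approx = sum(cj * Bj for cj, Bj in zip(c, basis))

err = np.linalg.norm(A_inv_approx @ A - np.eye(n)) / np.sqrt(n)
print(f"relative preconditioning error: {err:.4f}")
```

As the abstract notes, more informative probes allow more coefficients to be fitted reliably; here a handful of probes already pins down the four polynomial coefficients.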
First principles studies of band offsets at heterojunctions and of surface reconstruction using Gaussian dual-space density functional theory
The use of localized Gaussian basis functions for large scale first principles density functional calculations with periodic boundary conditions (PBC) in two and three dimensions has been made possible by using a dual space approach. This new method is applied to the study of electronic properties of II–VI (II=Zn, Cd, Hg; VI=S, Se, Te, Po) and III–V (III=Al, Ga; V=As, N) semiconductors. Valence band offsets of heterojunctions are calculated including both bulk contributions and interfacial contributions. The results agree very well with available experimental data. The p(2 × 1) cation terminated surface reconstructions of CdTe and HgTe (100) are calculated using the local density approximation (LDA) with two-dimensional PBC and also using the ab initio Hartree–Fock (HF) method with a finite cluster. The LDA and HF results do not agree very well.
Fast minimum variance wavefront reconstruction for extremely large telescopes
We present a new algorithm, FRiM (FRactal Iterative Method), aiming at the
reconstruction of the optical wavefront from measurements provided by a
wavefront sensor. As our application is adaptive optics on extremely large
telescopes, the algorithm was designed with both speed and reconstruction quality in mind. The
latter is achieved thanks to a regularization which enforces prior statistics.
To solve the regularized problem, we use the conjugate gradient method which
takes advantage of the sparsity of the wavefront sensor model matrix and avoids
the storage and inversion of a huge matrix. The prior covariance matrix is
however non-sparse and we derive a fractal approximation to the Karhunen-Loeve
basis thanks to which the regularization by Kolmogorov statistics can be
computed in O(N) operations, N being the number of phase samples to estimate.
Finally, we propose an effective preconditioning which also scales as O(N) and
yields the solution in 5-10 conjugate gradient iterations for any N. The
resulting algorithm is therefore O(N). As an example, for a 128 x 128
Shack-Hartmann wavefront sensor, FRiM appears to be more than 100 times faster
than the classical vector-matrix multiplication method.
Comment: to appear in the Journal of the Optical Society of America
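The matrix-free strategy the abstract relies on (conjugate gradient touching the operators only through matrix-vector products, never storing or inverting a full matrix) can be sketched as below. The dense toy sensor matrix and identity regularizer are stand-ins for the sparse Shack-Hartmann model and the fractal Karhunen-Loeve prior.

```python
# Hedged sketch: matrix-free conjugate gradient for regularized normal
# equations (S^T S + mu R) w = S^T d. The toy S and R = I here are
# illustrative assumptions, not the paper's wavefront-sensor model.
import numpy as np

def cg(matvec, b, tol=1e-8, maxiter=200):
    """Plain conjugate gradient; the matrix is only used via matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
S = rng.standard_normal((60, 50))   # toy sensor model
d = rng.standard_normal(60)
mu = 0.1
matvec = lambda w: S.T @ (S @ w) + mu * w   # never forms S^T S explicitly
w = cg(matvec, S.T @ d)
res = np.linalg.norm(matvec(w) - S.T @ d)
print(f"residual: {res:.2e}")
```

When every matvec costs O(N), as with the fractal prior approximation and a good preconditioner keeping the iteration count bounded, the whole solve is O(N).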