A Primer on Reproducing Kernel Hilbert Spaces
Reproducing kernel Hilbert spaces are elucidated without assuming prior
familiarity with Hilbert spaces. Compared with extant pedagogic material,
greater care is placed on motivating the definition of reproducing kernel
Hilbert spaces and explaining when and why these spaces are efficacious. The
novel viewpoint is that reproducing kernel Hilbert space theory studies
extrinsic geometry, associating with each geometric configuration a canonical
overdetermined coordinate system. This coordinate system varies continuously
with changing geometric configurations, making it well-suited for studying
problems whose solutions also vary continuously with changing geometry. This
primer can also serve as an introduction to infinite-dimensional linear algebra
because reproducing kernel Hilbert spaces have more properties in common with
Euclidean spaces than do more general Hilbert spaces.

Comment: Revised version submitted to Foundations and Trends in Signal Processing
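The "canonical overdetermined coordinate system" the primer refers to can be made concrete with a short sketch of kernel interpolation, where the kernel sections k(·, x_i) play the role of coordinate functionals. The Gaussian kernel and the sample points and values below are illustrative assumptions, not taken from the primer:

```python
import numpy as np

# Minimal sketch of reproducing-kernel interpolation, assuming a Gaussian
# kernel; the points and values below are illustrative only.
def k(x, y):
    return np.exp(-0.5 * (x - y) ** 2)

x = np.array([-1.0, 0.0, 0.5, 2.0])   # sample points
y = np.array([0.3, -0.7, 1.2, 0.1])   # target values

K = k(x[:, None], x[None, :])          # Gram matrix; symmetric positive definite
alpha = np.linalg.solve(K, y)          # coefficients of the interpolant

def f(t):
    # Minimum-norm RKHS interpolant: f = sum_i alpha_i * k(., x_i).
    return k(np.atleast_1d(t)[:, None], x[None, :]) @ alpha

# Evaluating f at the sample points recovers y exactly, by construction:
# the interpolant's coordinates in the kernel system are the data values.
print(np.allclose(f(x), y))
```

Because the Gram matrix of a Gaussian kernel at distinct points is positive definite, the solve is well-posed; perturbing the points perturbs K, and hence f, continuously, which is the continuity property the abstract emphasizes.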
Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization
We address the inverse problem of cosmic large-scale structure reconstruction
from a Bayesian perspective. For a linear data model, a number of known and
novel reconstruction schemes, which differ in the underlying signal prior,
data likelihood, and numerical inverse extra-regularization, are derived and
classified. The Bayesian methodology presented in this paper tries
to unify and extend the following methods: Wiener-filtering, Tikhonov
regularization, Ridge regression, Maximum Entropy, and inverse regularization
techniques. The inverse techniques considered here are the asymptotic
regularization, the Jacobi, Steepest Descent, Newton-Raphson,
Landweber-Fridman, and both linear and non-linear Krylov methods based on
Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel Conjugate Gradients. The
structures of the up-to-date highest-performing algorithms are presented, based
on an operator scheme, which permits one to exploit the power of fast Fourier
transforms. Using such an implementation of the generalized Wiener-filter in
the novel ARGO-software package, the different numerical schemes are
benchmarked with 1-, 2-, and 3-dimensional problems including structured white
and Poissonian noise, data windowing and blurring effects. A novel numerical
Krylov scheme is shown to be superior in terms of performance and fidelity.
These fast inverse methods ultimately will enable the application of sampling
techniques to explore complex joint posterior distributions. We outline how the
space of the dark-matter density field, the peculiar velocity field, and the
power spectrum can jointly be investigated by a Gibbs-sampling process. Such a
method can be applied for the redshift distortions correction of the observed
galaxies and for time-reversal reconstructions of the initial density field.

Comment: 40 pages, 11 figures
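For a statistically homogeneous signal, the Wiener filter the abstract builds on acts diagonally in Fourier space, which is why an FFT-based operator scheme is fast. The sketch below is a minimal 1-D illustration under assumed spectra (a power-law signal and white noise); it is not the ARGO implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 512
kfreq = np.fft.fftfreq(npix)
S = 1.0 / (np.abs(kfreq) + 1.0 / npix) ** 2   # assumed power-law signal spectrum
N = 50.0                                       # assumed white-noise power

# Draw a Gaussian random signal with spectrum ~S and add white noise.
s = np.fft.ifft(np.sqrt(S / 2) * (rng.standard_normal(npix)
                                  + 1j * rng.standard_normal(npix))).real
d = s + np.sqrt(N / npix) * rng.standard_normal(npix)

# Wiener filter, applied mode by mode in Fourier space: s_wf(k) = S/(S+N) d(k).
# One forward and one inverse FFT suffice for the whole map.
s_wf = np.fft.ifft(S / (S + N) * np.fft.fft(d)).real

err_data = np.mean((d - s) ** 2)   # error of the raw data
err_wf = np.mean((s_wf - s) ** 2)  # error of the Wiener reconstruction
print(err_wf < err_data)
```

The filter keeps signal-dominated low-frequency modes nearly untouched and suppresses noise-dominated high-frequency modes, so the reconstruction error falls below the raw data error; for non-diagonal data models this matrix-free structure is what the Krylov iterations in the paper exploit.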
Locally adaptive smoothing with Markov random fields and shrinkage priors
We present a locally adaptive nonparametric curve fitting method that
operates within a fully Bayesian framework. This method uses shrinkage priors
to induce sparsity in order-k differences in the latent trend function,
providing a combination of local adaptation and global control. Using a scale
mixture of normals representation of shrinkage priors, we make explicit
connections between our method and kth order Gaussian Markov random field
smoothing. We call the resulting processes shrinkage prior Markov random fields
(SPMRFs). We use Hamiltonian Monte Carlo to approximate the posterior
distribution of model parameters because this method provides superior
performance in the presence of the high dimensionality and strong parameter
correlations exhibited by our models. We compare the performance of three prior
formulations using simulated data and find that the horseshoe prior provides the
best compromise between bias and precision. We apply SPMRF models to two
benchmark data examples frequently used to test nonparametric methods. We find
that this method is flexible enough to accommodate a variety of data generating
models and offers the adaptive properties and computational tractability to
make it a useful addition to the Bayesian nonparametric toolbox.

Comment: 38 pages, to appear in Bayesian Analysis
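The Gaussian special case the abstract connects to is instructive: with a normal (rather than horseshoe) prior on order-k differences of the latent trend, the smoother is a k-th order Gaussian Markov random field and the posterior mean is available in closed form. The sketch below uses assumed, illustrative settings and is not the SPMRF sampler itself:

```python
import numpy as np

def gmrf_smooth(y, k=2, lam=50.0):
    """Posterior mean under y ~ N(theta, I) with order-k differences of
    theta ~ N(0, 1/lam): solve (I + lam * D'D) theta = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=k, axis=0)   # order-k difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)
truth = np.sin(2 * np.pi * x)             # assumed smooth latent trend
y = truth + 0.3 * rng.standard_normal(100)

theta = gmrf_smooth(y)
print(np.mean((theta - truth) ** 2) < np.mean((y - truth) ** 2))
```

A single global penalty lam smooths everywhere equally; replacing the normal prior on the differences with a horseshoe scale mixture, as the paper does, lets the penalty adapt locally while keeping global control.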
Linear State Models for Volatility Estimation and Prediction
This report covers the important topic of stochastic volatility modelling with an emphasis on linear state models. The approach taken focuses on comparing models based on their ability to fit the data and their forecasting performance. To this end, several parsimonious stochastic volatility models are estimated using realised volatility, a volatility proxy computed from high-frequency stock price data. The results indicate that a hidden state space model performs the best among the realised volatility-based models under consideration. For the state space model, different sampling intervals are compared based on in-sample prediction performance. The comparisons are partly based on the multi-period prediction results that are derived in this report.
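A linear state model of the kind the report emphasizes can be sketched as an AR(1) latent log-volatility observed with noise, filtered by a scalar Kalman recursion whose multi-period prediction decays geometrically toward the mean. The parameters below are illustrative assumptions, not estimates from the report:

```python
import numpy as np

def kalman_filter(y, phi=0.95, q=0.05, r=0.2, mu=0.0):
    """Filter y_t = h_t + eps_t, h_t = mu + phi*(h_{t-1} - mu) + eta_t,
    with Var(eta) = q and Var(eps) = r; scalar Kalman recursion."""
    m, p = mu, q / (1 - phi ** 2)        # stationary initial state
    means = []
    for obs in y:
        m = mu + phi * (m - mu)          # predict mean
        p = phi ** 2 * p + q             # predict variance
        gain = p / (p + r)               # Kalman gain
        m = m + gain * (obs - m)         # update mean
        p = (1 - gain) * p               # update variance
        means.append(m)
    return np.array(means), m

def predict(m_last, k, phi=0.95, mu=0.0):
    # Multi-period prediction: the k-step-ahead mean decays toward mu.
    return mu + phi ** k * (m_last - mu)

# Simulate from the assumed model and check the filter tracks the state
# better than the raw noisy observations do.
rng = np.random.default_rng(3)
T, phi, q, r = 500, 0.95, 0.05, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + np.sqrt(q) * rng.standard_normal()
y = h + np.sqrt(r) * rng.standard_normal(T)

means, m_last = kalman_filter(y, phi, q, r)
print(np.mean((means - h) ** 2) < np.mean((y - h) ** 2))
print(predict(m_last, k=5))              # 5-step-ahead state mean
```

Here the realised-volatility proxy would play the role of y; comparing sampling intervals, as the report does, amounts to re-estimating (phi, q, r) at each interval and scoring the resulting multi-period predictions.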