Probabilistic Auto-Encoder
We introduce the Probabilistic Auto-Encoder (PAE), a generative model with a
lower dimensional latent space that is based on an Auto-Encoder which is
interpreted probabilistically after training using a Normalizing Flow. The PAE
combines the advantages of an Auto-Encoder, i.e. it is fast and easy to train
and achieves small reconstruction error, with the desired properties of a
generative model, such as high sample quality and good performance in
downstream tasks. Compared to a VAE and its common variants, the PAE trains
faster, reaches lower reconstruction error, and achieves state-of-the-art
samples without parameter fine-tuning or annealing schemes. We demonstrate that
the PAE is further a powerful model for performing the downstream tasks of
outlier detection and probabilistic image reconstruction: 1) Starting from the
Laplace approximation to the marginal likelihood, we identify a PAE-based
outlier detection metric which achieves state-of-the-art results in
out-of-distribution detection, outperforming other likelihood-based estimators.
2) Using posterior analysis in the PAE latent space we perform high dimensional
data inpainting and denoising with uncertainty quantification.
Comment: 11 pages, 6 figures. Code available at
https://github.com/VMBoehm/PAE. Updated version with additional references
and appendix.
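The two-stage recipe described in the abstract (train an ordinary auto-encoder, then fit a density model to its latent codes) can be illustrated schematically. The sketch below is not the authors' implementation (see the linked repository for that): a PCA-style linear map stands in for the trained auto-encoder, and a multivariate Gaussian fit stands in for the normalizing flow over the latent space, but the pipeline shape — encode, fit a latent density, then sample and score — is the same.

```python
import numpy as np

# Schematic PAE-style pipeline (NOT the authors' code; see
# https://github.com/VMBoehm/PAE). A linear PCA map stands in for the
# auto-encoder; a Gaussian fit stands in for the normalizing flow.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16)) @ rng.normal(size=(16, 16))  # toy "data"

# --- Stage 1: train the auto-encoder (here: PCA as a linear stand-in) ---
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 4                                   # latent dimension
encode = lambda x: (x - mu) @ Vt[:k].T  # "encoder"
decode = lambda z: z @ Vt[:k] + mu      # "decoder"

# --- Stage 2: fit a density model to the latent codes (flow stand-in) ---
Z = encode(X)
z_mean = Z.mean(axis=0)
z_cov = np.cov(Z, rowvar=False)
z_cov_inv = np.linalg.inv(z_cov)

def log_density(z):
    """Latent log-density; a normalizing flow would replace this Gaussian."""
    d = z - z_mean
    sign, logdet = np.linalg.slogdet(z_cov)
    return -0.5 * (np.einsum('...i,ij,...j->...', d, z_cov_inv, d)
                   + logdet + k * np.log(2 * np.pi))

# Generative sampling: draw in latent space, then decode.
z_samples = rng.multivariate_normal(z_mean, z_cov, size=5)
x_samples = decode(z_samples)

# Outlier score: low latent log-density flags out-of-distribution inputs.
scores = log_density(encode(X[:5]))
```

Because the density model is trained after the reconstruction objective, the auto-encoder's speed and reconstruction quality are untouched; the flow only has to Gaussianize a low-dimensional latent distribution.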
RSD measurements from BOSS galaxy power spectrum using the halo perturbation theory model
We present growth of structure constraints from the cosmological analysis of
the power spectrum multipoles of SDSS-III BOSS DR12 galaxies. We use the galaxy
power spectrum model of Hand et al. (2017), which decomposes the galaxies into
halo mass bins, each of which is modeled separately using the relations between
halo biases and halo mass. The model combines Eulerian perturbation theory with
a halo model calibrated on $N$-body simulations to model the halo clustering. In
this work, we also generate the covariance matrix by combining the analytic
this work, we also generate the covariance matrix by combining the analytic
disconnected part with the empirical connected part: we smooth the connected
component by selecting a few principal components and show that it achieves
good agreement with the mock covariance. Our analysis differs from recent
analyses in that we constrain a single parameter, $f\sigma_8$, fixing everything
else to a Planck+BAO prior, thereby reducing the effects of prior volume and
mismodeling. We find tight constraints on $f\sigma_8$ at
$k_{\mathrm{max}} = 0.2\ h\,\mathrm{Mpc}^{-1}$ and, when including $P_4(k)$,
at $k_{\mathrm{max}} = 0.4\ h\,\mathrm{Mpc}^{-1}$. Determining the
$k_{\mathrm{max}}$ up to which the model performs consistently and reliably
remains the main challenge of RSD analysis methods.
Comment: 21 pages, 13 figures.
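The hybrid covariance construction described in the abstract — an analytic disconnected part plus an empirical connected part smoothed by keeping a few principal components — can be sketched as follows. This is a toy illustration, not the paper's pipeline: a diagonal matrix stands in for the analytic Gaussian term, and random draws stand in for the mock measurements.

```python
import numpy as np

# Schematic sketch (not the paper's pipeline) of a hybrid covariance:
# analytic disconnected part + PCA-smoothed empirical connected part.

rng = np.random.default_rng(1)
n_k, n_mocks = 20, 500

# Analytic disconnected (Gaussian) part: diagonal in this toy example.
C_disc = np.diag(1.0 / (1.0 + np.arange(n_k)))

# "Mock" measurements with extra correlated (connected) scatter.
L = rng.normal(scale=0.05, size=(n_k, 3))
mocks = rng.multivariate_normal(np.zeros(n_k), C_disc + L @ L.T, size=n_mocks)

# Empirical connected part = mock covariance minus the analytic part.
C_mock = np.cov(mocks, rowvar=False)
C_conn = C_mock - C_disc

# Smooth the connected part by keeping only a few principal components,
# suppressing sampling noise from the finite number of mocks.
n_pc = 3
w, v = np.linalg.eigh(C_conn)
idx = np.argsort(np.abs(w))[::-1][:n_pc]      # largest-magnitude eigenmodes
C_conn_smooth = (v[:, idx] * w[idx]) @ v[:, idx].T

C_total = C_disc + C_conn_smooth
```

Truncating the eigenmode expansion acts as a low-rank filter: the retained components carry the genuine connected correlations, while the discarded ones are dominated by the noise floor set by the number of mocks.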