pySPT: a package dedicated to the source position transformation
Modern time-delay cosmography aims to infer cosmological parameters with
competitive precision from observations of multiply imaged quasars. The
success of this technique relies upon robust modeling of the lens mass
distribution. Unfortunately, strong degeneracies between density profiles that
lead to almost the same lensing observables may bias precise estimates of the
Hubble constant. The source position transformation (SPT), which covers the
well-known mass sheet transformation (MST) as a special case, defines a new
framework to investigate these degeneracies. In this paper, we present pySPT, a
Python package dedicated to the SPT. We describe how it can be used to evaluate
the impact of the SPT on lensing observables. We review most of its
capabilities and elaborate on key features that we used in a companion paper
regarding SPT and time delays. pySPT also comes with a sub-package dedicated to
simple lens modeling. It can be used to generate lensing related quantities for
a wide variety of lens models, independently from any SPT analysis. As a first
practical application, we present a correction to the first estimate of the
impact of the SPT on time delays, which was experimentally found in
Schneider and Sluse (2013) between a softened power-law lens and a composite
(baryons + dark matter) lens. We find that the large deviations predicted in
Schneider and Sluse (2014) have been overestimated due to a minor bug (now
fixed) in the public lens modeling code lensmodel (v1.99). We conclude that the
predictions for the Hubble constant deviate by \%, first and foremost
caused by an MST. The latest version of pySPT is available at
https://github.com/owertz/pySPT. We also provide tutorials that describe in
detail how to make the best use of pySPT at
https://github.com/owertz/pySPT_tutorials.
Comment: 9 pages, 5 figures
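The mass-sheet transformation (MST) that the SPT generalizes can be sketched numerically in a few lines. This is generic NumPy code illustrating the well-known MST relation and its effect on the inferred Hubble constant, not the pySPT API; the function names are hypothetical:

```python
import numpy as np

def mass_sheet_transform(kappa, lam):
    """Mass-sheet transformation of a convergence profile:
    kappa_lam(theta) = lam * kappa(theta) + (1 - lam).
    It leaves image positions and flux ratios unchanged."""
    return lam * np.asarray(kappa) + (1.0 - lam)

def biased_hubble(h0_true, lam):
    """Under an MST, time delays rescale as Delta_t -> lam * Delta_t.
    Since Delta_t is proportional to 1/H0, fitting observed delays with the
    transformed profile biases the inferred H0 by the same factor lam."""
    return lam * h0_true

# The critical convergence kappa = 1 is a fixed point of the MST:
print(mass_sheet_transform(1.0, 0.8))   # -> 1.0
# A 5% mass-sheet rescaling (lam = 0.95) shifts H0 downward by 5%:
print(biased_hubble(70.0, 0.95))        # approximately 66.5
```

This toy calculation shows why an MST dominates the H0 deviations quoted in the abstract: the rescaled profile reproduces the same image observables while the time delays, and hence H0, shift by the factor lam.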
On color image quality assessment using natural image statistics
Color distortion can significantly degrade perceived visual quality; however,
most existing reduced-reference quality measures are designed for grayscale
images. In this paper, we consider a basic extension of well-known
image-statistics-based quality assessment measures to color images. In order
to evaluate the impact of color information on the measures' efficiency, two
color spaces are investigated: RGB and CIELAB. Results of an extensive
evaluation using the TID2013 benchmark demonstrate that a significant
improvement can be achieved for a great number of distortion types when the
CIELAB color representation is used.
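The channel-wise extension described here can be illustrated with a minimal reduced-reference sketch: compute natural-image statistics per color channel, store only those features for the reference, and score a distorted image by feature distance. This is illustrative code under assumed design choices (which statistics to use, Euclidean distance), not the paper's exact measure:

```python
import numpy as np

def channel_stats(img):
    """Per-channel image statistics: mean, variance, skewness, kurtosis.
    Works on any (H, W, C) array, e.g. RGB or CIELAB channels."""
    feats = []
    for c in range(img.shape[-1]):
        x = img[..., c].ravel().astype(float)
        mu, sigma = x.mean(), x.std()
        z = (x - mu) / (sigma + 1e-12)   # standardized values
        feats.extend([mu, sigma**2, (z**3).mean(), (z**4).mean()])
    return np.array(feats)

def rr_quality_score(ref_feats, dist_img):
    """Reduced-reference score: distance between the stored reference
    features and the distorted image's features (lower = more similar).
    Only ref_feats, not the full reference image, is needed."""
    return float(np.linalg.norm(ref_feats - channel_stats(dist_img)))

rng = np.random.default_rng(0)
ref = rng.random((32, 32, 3))
ref_feats = channel_stats(ref)
print(rr_quality_score(ref_feats, ref))   # identical image -> 0.0
noisy = np.clip(ref + rng.normal(0.0, 0.2, ref.shape), 0.0, 1.0)
print(rr_quality_score(ref_feats, noisy) > 0.0)  # distortion -> positive score
```

Running the same pipeline on CIELAB channels instead of RGB only requires a color-space conversion before `channel_stats`; the reduced-reference machinery is unchanged.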
Adaptive Density Estimation for Generative Models
Unsupervised learning of generative models has seen tremendous progress over
recent years, in particular due to generative adversarial networks (GANs),
variational autoencoders, and flow-based models. GANs have dramatically
improved sample quality, but suffer from two drawbacks: (i) they mode-drop,
i.e., do not cover the full support of the train data, and (ii) they do not
allow for likelihood evaluations on held-out data. In contrast,
likelihood-based training encourages models to cover the full support of the
train data, but yields poorer samples. These mutual shortcomings can in
principle be addressed by training generative latent variable models in a
hybrid adversarial-likelihood manner. However, we show that commonly made
parametric assumptions create a conflict between them, making successful hybrid
models non-trivial. As a solution, we propose to use deep invertible
transformations in the latent variable decoder. This approach allows for
likelihood computations in image space, is more efficient than fully invertible
models, and can take full advantage of adversarial training. We show that our
model significantly improves over existing hybrid models: offering GAN-like
samples, IS and FID scores that are competitive with fully adversarial models,
and improved likelihood scores.
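The likelihood computation enabled by an invertible decoder rests on the change-of-variables formula: log p(x) = log p_z(z(x)) + log |det dz/dx|. A minimal one-dimensional sketch with a single affine invertible map (illustrative only, far from the deep transformations used in the paper) verifies the formula against the analytic density:

```python
import numpy as np

def log_normal_pdf(x, mu=0.0, sigma=1.0):
    """Log density of a univariate normal distribution."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

# Invertible affine "decoder": x = exp(s) * z + b, with latent z ~ N(0, 1).
s, b = 0.5, 2.0

def log_likelihood_x(x):
    """Change of variables: log p(x) = log p_z(z(x)) - log|dx/dz|,
    where z(x) = (x - b) * exp(-s) and log|dx/dz| = s."""
    z = (x - b) * np.exp(-s)
    return log_normal_pdf(z) - s

# Under this map, x is distributed as N(b, exp(s)^2), so the
# change-of-variables likelihood must match the analytic density:
x0 = 1.3
analytic = log_normal_pdf(x0, mu=b, sigma=np.exp(s))
print(np.isclose(log_likelihood_x(x0), analytic))  # -> True
```

Because the Jacobian term is tractable for invertible maps, the same identity gives exact held-out likelihoods in image space, which is what lets a hybrid model be trained and evaluated with both adversarial and likelihood objectives.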
Mass-sheet degeneracy, power-law models and external convergence: Impact on the determination of the Hubble constant from gravitational lensing
The light travel time differences in strong gravitational lensing systems
allow an independent determination of the Hubble constant. This method has
been successfully applied to several lens systems. The formally most precise
measurements are, however, in tension with the recent determination of H_0
from the Planck satellite for a spatially flat six-parameter
cosmology. We reconsider the uncertainties of the method, concerning the mass
profile of the lens galaxies, and show that the formal precision relies on the
assumption that the mass profile is a perfect power law. Simple analytical
arguments and numerical experiments reveal that mass-sheet like transformations
yield significant freedom in choosing the mass profile, even when exquisite
Einstein rings are observed. Furthermore, characterization of the lens
environment does not break this degeneracy, because it is not physically
linked to extrinsic convergence. We present an illustrative example where the
multiple imaging properties of a composite (baryons + dark matter) lens can be
extremely well reproduced by a power-law model having the same velocity
dispersion, but with predictions for the Hubble constant that deviate by
. Hence we conclude that the impact of degeneracies between parametrized
models has been underestimated in current measurements from lensing, and
needs to be carefully reconsidered.
Comment: Accepted for publication in Astronomy and Astrophysics. Discussion
expanded (MSD and velocity dispersion, MSD and free form lens models, MSD and
multiple source redshifts).
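The distinction drawn here between the mass-sheet degeneracy and genuine external convergence can be made concrete with the standard correction applied in time-delay cosmography: the time-delay distance scales as D_dt -> D_dt / (1 - kappa_ext), so an ignored external convergence rescales the inferred H0 by (1 - kappa_ext). A minimal sketch with illustrative numbers:

```python
def h0_corrected(h0_model, kappa_ext):
    """Standard external-convergence correction in time-delay cosmography:
    H0_true = (1 - kappa_ext) * H0_model.
    Crucially, an MSD-like internal rescaling of the lens profile mimics
    exactly this factor, which is why characterizing the environment alone
    (constraining kappa_ext) cannot break the degeneracy."""
    return (1.0 - kappa_ext) * h0_model

# Illustrative values: a 5% external convergence lowers the inferred H0 by 5%.
print(h0_corrected(74.0, 0.05))  # roughly 70.3
```

An internal profile change and an external mass sheet thus produce identical shifts in the predicted time delays, consistent with the abstract's conclusion that the degeneracy is not physically linked to extrinsic convergence.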