6,149 research outputs found
Quantum Generative Adversarial Networks for Learning and Loading Random Distributions
Quantum algorithms have the potential to outperform their classical
counterparts in a variety of tasks. The realization of the advantage often
requires the ability to load classical data efficiently into quantum states.
However, the best known methods require O(2^n) gates to
load an exact representation of a generic data structure into an n-qubit
state. This scaling can easily predominate the complexity of a quantum
algorithm and, thereby, impair potential quantum advantage. Our work presents a
hybrid quantum-classical algorithm for efficient, approximate quantum state
loading. More precisely, we use quantum Generative Adversarial Networks (qGANs)
to facilitate efficient learning and loading of generic probability
distributions -- implicitly given by data samples -- into quantum states.
Through the interplay of a quantum channel, such as a variational quantum
circuit, and a classical neural network, the qGAN can learn a representation of
the probability distribution underlying the data samples and load it into a
quantum state. The loading requires O(poly(n))
gates and can, thus, enable the
use of potentially advantageous quantum algorithms, such as Quantum Amplitude
Estimation. We implement the qGAN distribution learning and loading method with
Qiskit and test it using a quantum simulation as well as actual quantum
processors provided by the IBM Q Experience. Furthermore, we employ quantum
simulation to demonstrate the use of the trained quantum channel in a quantum
finance application. Comment: 14 pages, 13 figures
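The adversarial interplay described above can be sketched classically. The following toy is not the paper's Qiskit implementation: a parameterized softmax distribution merely stands in for the variational quantum circuit, a logistic model plays the discriminator, and all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 8                               # basis states of a 3-"qubit" register
target = rng.dirichlet(np.ones(n_states))  # distribution implicitly given by samples

theta = np.zeros(n_states)                 # generator parameters (stand-in for circuit angles)
w, b = np.zeros(n_states), 0.0             # discriminator: logistic model on one-hot outcomes

def gen_probs(theta):
    """Probability distribution produced by the toy softmax generator."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def disc(X, w, b):
    """Discriminator's probability that each one-hot row is a 'real' sample."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

eye = np.eye(n_states)
lr, batch = 0.05, 64
for _ in range(1000):
    real = rng.choice(n_states, size=batch, p=target)
    p = gen_probs(theta)
    fake = rng.choice(n_states, size=batch, p=p)
    X_real, X_fake = eye[real], eye[fake]

    # Discriminator ascent on  mean log D(real) + mean log(1 - D(fake))
    d_real, d_fake = disc(X_real, w, b), disc(X_fake, w, b)
    w += lr * (X_real.T @ (1 - d_real) - X_fake.T @ d_fake) / batch
    b += lr * ((1 - d_real).mean() - d_fake.mean())

    # Generator ascent on E[log D(fake)] via a score-function (REINFORCE) gradient:
    # grad_theta log p(x) = onehot(x) - p  for a softmax generator
    reward = np.log(disc(eye, w, b) + 1e-12)
    g = ((eye[fake] - p) * reward[fake][:, None]).sum(axis=0) / batch
    theta += lr * g

learned = gen_probs(theta)
```

The sampled-distribution training mirrors the qGAN setup in spirit: the generator only ever exposes samples, never the target density itself.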
Between War and Peace: Humanitarian Assistance in Violent Urban Settings
Cities are fast becoming new territories of violence. The humanitarian consequences of many criminally violent urban settings are comparable to those of more traditional wars, yet despite the intensity of the needs, humanitarian aid to such settings is limited. The way in which humanitarian needs are typically defined fails to address the problems of these contexts, the suffering they produce and the populations affected. Distinctions between formal armed conflicts, regulated by international humanitarian law, and other violent settings, as well as those between emergency and developmental assistance, can lead to the neglect of populations in distress. It can take considerable time and effort to access vulnerable communities and implement programmes in urban settings, but experience shows that it is possible to provide humanitarian assistance with a significant focus on the direct and indirect health consequences of violence outside a traditional conflict setting. This paper considers the situation of Port-au-Prince (Haiti), Rio de Janeiro (Brazil) and Guatemala City (Guatemala).
Variance Reduced Stochastic Gradient Descent with Neighbors
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction
techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points, or on keeping per data point
corrections in memory. Therefore speed-ups relative to SGD may need a minimal
number of epochs in order to materialize. This paper investigates algorithms
that can exploit neighborhood structure in the training data to share and
re-use information about past stochastic gradients across data points, which
offers advantages in the transient optimization phase. As a side-product we
provide a unified convergence analysis for a family of variance reduction
algorithms, which we call memorization algorithms. We provide experimental
results supporting our theory. Comment: Appears in: Advances in Neural Information Processing Systems 28
(NIPS 2015). 13 pages
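As a point of reference for the variance-reduction idea discussed above, here is a minimal SVRG sketch on a noiseless least-squares problem: the full gradient is recomputed at a pivot point each epoch, and each stochastic step is corrected by the pivot's per-point gradient. The problem setup and step sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true                     # noiseless problem: f_i(w) = 0.5 * (a_i . w - b_i)^2

def grad_i(w, i):
    """Gradient of the i-th component function."""
    return A[i] * (A[i] @ w - b[i])

def svrg(epochs=50, eta=0.02):
    w = np.zeros(d)
    for _ in range(epochs):
        w_pivot = w.copy()
        mu = A.T @ (A @ w_pivot - b) / n   # full gradient at the pivot point
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced update: stochastic gradient minus its value
            # at the pivot, plus the full pivot gradient (unbiased, low variance
            # once w is close to w_pivot)
            w -= eta * (grad_i(w, i) - grad_i(w_pivot, i) + mu)
    return w

w_hat = svrg()
```

The memorization algorithms the abstract analyzes (e.g., SAGA-style per-point corrections) replace the pivot gradients with stored per-data-point gradients; the neighborhood-sharing idea lets nearby points reuse each other's stored corrections.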
Fast Point Spread Function Modeling with Deep Learning
Modeling the Point Spread Function (PSF) of wide-field surveys is vital for
many astrophysical applications and cosmological probes including weak
gravitational lensing. The PSF smears the image of any recorded object and
therefore needs to be taken into account when inferring properties of galaxies
from astronomical images. In the case of cosmic shear, the PSF is one of the
dominant sources of systematic errors and must be treated carefully to avoid
biases in cosmological parameters. Recently, forward modeling approaches to
calibrate shear measurements within the Monte-Carlo Control Loops (MCCL)
framework have been developed. These methods typically require simulating a
large number of wide-field images; thus, the simulations need to be very fast
yet have realistic properties in key features such as the PSF pattern. Hence,
such forward modeling approaches require a very flexible PSF model, which is
quick to evaluate and whose parameters can be estimated reliably from survey
data. We present a PSF model that meets these requirements based on a fast
deep-learning method to estimate its free parameters. We demonstrate our
approach on publicly available SDSS data. We extract the most important
features of the SDSS sample via principal component analysis. Next, we
construct our model based on perturbations of a fixed base profile, ensuring
that it captures these features. We then train a Convolutional Neural Network
to estimate the free parameters of the model from noisy images of the PSF. This
allows us to render a model image of each star, which we compare to the SDSS
stars to evaluate the performance of our method. We find that our approach is
able to accurately reproduce the SDSS PSF at the pixel level, which, due to the
speed of both the model evaluation and the parameter estimation, offers good
prospects for incorporating our method into the MCCL framework. Comment: 25 pages, 8 figures, 1 table
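The feature-extraction step described above (principal component analysis of star cutouts) can be sketched as follows. The toy "stars", built from a fixed base profile plus low-rank perturbations, are an illustrative stand-in for the SDSS sample, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy star cutouts: a fixed Gaussian base profile plus three perturbation
# modes, echoing the perturbed-base-profile construction described above.
size = 15
yy, xx = np.mgrid[:size, :size] - size // 2
base = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
modes = np.stack([xx * base, yy * base, (xx**2 - yy**2) * base])
coeffs = rng.normal(scale=0.02, size=(300, 3))
stars = base + np.einsum('nk,kij->nij', coeffs, modes)

# PCA of the centered, flattened cutouts via SVD
X = stars.reshape(len(stars), -1)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance fraction captured by each component
```

Because the toy data are exactly rank three after centering, the first three components capture essentially all the variance; on real survey cutouts the spectrum instead guides how many perturbation modes the model needs.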
Cosmological constraints from noisy convergence maps through deep learning
Deep learning is a powerful analysis technique that has recently been
proposed as a method to constrain cosmological parameters from weak lensing
mass maps. Due to its ability to learn relevant features from the data, it is
able to extract more information from the mass maps than the commonly used
power spectrum, and thus achieve better precision for cosmological parameter
measurement. We explore the advantage of Convolutional Neural Networks (CNN)
over the power spectrum for varying levels of shape noise and different
smoothing scales applied to the maps. We compare the cosmological constraints
from the two methods in the (Ω_m, σ_8) plane for sets of 400 deg²
convergence maps. We find that, for a shape noise level corresponding to 8.53
galaxies/arcmin² at the intermediate of the smoothing scales considered, the
network is able to generate 45% tighter constraints. For the smaller smoothing
scale the improvement is larger still, while for the larger smoothing scale
the improvement decreases to 19%.
The advantage generally decreases when the noise level and smoothing scales
increase. We present a new training strategy to train the neural network with
noisy data, as well as considerations for practical applications of the deep
learning approach. Comment: 17 pages, 12 figures
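For contrast with the CNN, the baseline power-spectrum summary can be computed from a map with a short FFT routine. The white-noise map, grid size, and binning below are illustrative stand-ins, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "convergence map": unit-variance white noise on a 128x128 grid
npix = 128
kappa = rng.normal(size=(npix, npix))

def power_spectrum(field, nbins=20):
    """Azimuthally averaged power spectrum: |FFT|^2 binned by wavenumber."""
    fk = np.fft.fftn(field)
    p2d = np.abs(fk)**2 / field.size          # per-mode power (Parseval-normalized)
    k = np.fft.fftfreq(field.shape[0])        # cycles per pixel
    kx, ky = np.meshgrid(k, k, indexing='ij')
    kr = np.hypot(kx, ky).ravel()
    bins = np.linspace(0.0, kr.max(), nbins + 1)
    idx = np.clip(np.digitize(kr, bins) - 1, 0, nbins - 1)
    power = np.bincount(idx, weights=p2d.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return power / np.maximum(counts, 1)      # mean power per radial bin

ps = power_spectrum(kappa)
```

For white noise the binned spectrum is flat at the field variance, which makes the routine easy to sanity-check; a CNN, by contrast, can also respond to non-Gaussian features of the maps that this summary discards.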