Determination of alpha_s from scaling violations of truncated moments of structure functions
We determine the strong coupling alpha_s(M_Z) from scaling violations of
truncated moments of the nonsinglet deep inelastic structure function F_2.
Truncated moments are determined from BCDMS and NMC data using a neural network
parametrization which retains the full experimental information on errors and
correlations. Our method minimizes all sources of theoretical uncertainty and
bias which characterize extractions of alpha_s from scaling violations. We
obtain alpha_s(M_Z) = 0.124 +0.004/-0.007 (exp.) +0.003/-0.004 (th.).
Comment: 24 pages, 4 figures, latex with epsfig; neural network
parametrization available from http://sophia.ecm.ub.es/f2neura
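To make the central object concrete, here is a minimal numerical sketch of a truncated moment, M_n(x0) = integral over [x0, 1] of x^(n-2) F_2(x) dx (the standard definition, with the Q^2 dependence suppressed). The toy F_2 shape below is purely illustrative and is not the neural-network parametrization used in the paper.

```python
import numpy as np

def truncated_moment(n, x0, f2, num=2000):
    """Truncated moment M_n(x0): integrate x**(n-2) * F2(x) over [x0, 1].

    Q^2 dependence is suppressed for brevity; f2 is any callable F_2(x).
    """
    x = np.linspace(x0, 1.0, num)
    y = x**(n - 2) * f2(x)
    # Composite trapezoid rule.
    return float(0.5 * ((y[1:] + y[:-1]) * np.diff(x)).sum())

# Toy nonsinglet-like shape; illustrative only, NOT the paper's
# neural-network parametrization of F_2.
f2_toy = lambda x: x**0.5 * (1.0 - x)**3

m2 = truncated_moment(2, 0.1, f2_toy)
m2_lower = truncated_moment(2, 0.05, f2_toy)
```

Lowering the truncation point x0 enlarges the integration region, so for a positive integrand the moment can only grow.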
Input Prioritization for Testing Neural Networks
Deep neural networks (DNNs) are increasingly being adopted for sensing and
control functions in a variety of safety and mission-critical systems such as
self-driving cars, autonomous air vehicles, medical diagnostics, and industrial
robotics. Failures of such systems can lead to loss of life or property, which
necessitates stringent verification and validation for providing high
assurance. Though formal verification approaches are being investigated,
testing remains the primary technique for assessing the dependability of such
systems. Due to the nature of the tasks handled by DNNs, the cost of obtaining
test oracle data---the expected output, a.k.a. label, for a given input---is
high, which significantly impacts the amount and quality of testing that can be
performed. Thus, prioritizing input data for testing DNNs in meaningful ways to
reduce the cost of labeling can go a long way in increasing testing efficacy.
This paper proposes using gauges of the DNN's sentiment derived from the
computation performed by the model, as a means to identify inputs that are
likely to reveal weaknesses. We empirically assessed the efficacy of three such
sentiment measures for prioritization---confidence, uncertainty, and
surprise---and compared them in terms of fault-revealing capability and
retraining effectiveness. The results indicate that sentiment
measures can effectively flag inputs that expose unacceptable DNN behavior. For
MNIST models, the average percentage of inputs correctly flagged ranged from
88% to 94.8%.
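As a concrete illustration of confidence-based prioritization, here is a minimal sketch: rank unlabeled inputs by their maximum softmax probability, so the inputs the model is least sure about are sent for labeling first. The function names and the toy logits are our own assumptions, not artifacts from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def prioritize_by_confidence(logits):
    """Order test inputs for labeling: least confident first.

    Confidence is taken to be the maximum softmax probability; inputs
    the model is least sure about are the most likely to expose faults.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    confidence = probs.max(axis=-1)
    return np.argsort(confidence)  # ascending: lowest confidence first

# Three inputs: very confident, borderline, moderately confident.
logits = [[9.0, 0.1, 0.2],
          [1.0, 1.1, 0.9],
          [3.0, 0.5, 0.1]]
order = prioritize_by_confidence(logits)  # borderline input comes first
```

The same ranking skeleton works for the other two sentiment measures: swap the confidence score for a predictive-entropy (uncertainty) or surprise-style score and sort descending instead.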
A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
An explosion of high-throughput DNA sequencing in the past decade has led to
a surge of interest in population-scale inference with whole-genome data.
Recent work in population genetics has centered on designing inference methods
for relatively simple model classes, and few scalable general-purpose inference
techniques exist for more realistic, complex models. To build such techniques, two
inferential challenges need to be addressed: (1) population data are
exchangeable, calling for methods that efficiently exploit the symmetries of
the data, and (2) computing likelihoods is intractable as it requires
integrating over a set of correlated, extremely high-dimensional latent
variables. These challenges are traditionally tackled by likelihood-free
methods that use scientific simulators to generate datasets and reduce them to
hand-designed, permutation-invariant summary statistics, often leading to
inaccurate inference. In this work, we develop an exchangeable neural network
that performs summary statistic-free, likelihood-free inference. Our framework
can be applied in a black-box fashion across a variety of simulation-based
tasks, both within and outside biology. We demonstrate the power of our
approach on the recombination hotspot testing problem, outperforming the
state-of-the-art.
Comment: 9 pages, 8 figures
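A minimal sketch of the permutation-invariance idea, in the spirit of Deep Sets: apply a shared per-row embedding, pool the rows with a symmetric function, then map the pooled summary to the output. The layer sizes and mean pooling below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ExchangeableNet:
    """Permutation-invariant network sketch: phi -> symmetric pool -> rho."""

    def __init__(self, d_in, d_hidden, d_out):
        # Random untrained weights; sizes are illustrative only.
        self.W1 = rng.normal(size=(d_in, d_hidden)) * 0.1
        self.W2 = rng.normal(size=(d_hidden, d_out)) * 0.1

    def __call__(self, X):
        # X: (n_individuals, d_features); rows are exchangeable.
        h = relu(X @ self.W1)    # phi: shared per-row embedding
        pooled = h.mean(axis=0)  # symmetric pooling over rows
        return pooled @ self.W2  # rho: map pooled summary to output

net = ExchangeableNet(d_in=4, d_hidden=8, d_out=2)
X = rng.normal(size=(10, 4))
out1 = net(X)
out2 = net(X[::-1])  # same rows, permuted order
# out1 == out2 because mean pooling is invariant to row order.
```

Because the pooling is symmetric, the network's output cannot depend on the ordering of individuals, which is exactly the exchangeability property the population data demand.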
Galaxy shape measurement with convolutional neural networks
We present our results from training and evaluating a convolutional neural
network (CNN) to predict galaxy shapes from wide-field survey images of the
first data release of the Dark Energy Survey (DES DR1). We use conventional
shape measurements as ground truth from an overlapping, deeper survey with less
sky coverage, the Canada-France Hawaii Telescope Lensing Survey (CFHTLenS). We
demonstrate that CNN predictions from single band DES images reproduce the
results of CFHTLenS at bright magnitudes and show higher correlation with
CFHTLenS at fainter magnitudes than maximum likelihood model fitting estimates
in the DES Y1 im3shape catalogue. Prediction of shape parameters with a CNN is
also extremely fast: it takes only 0.2 milliseconds per galaxy, an improvement
of more than 4 orders of magnitude over forward model fitting. The CNN can also
accurately predict shapes when using multiple images of the same galaxy, even
in different color bands, with no additional computational overhead. The CNN is
again more precise for faint objects, and the advantage of the CNN is more
pronounced for blue galaxies than red ones when compared to the DES Y1
metacalibration catalogue, which fits a single Gaussian profile using riz band
images. We demonstrate that CNN shape predictions within the metacalibration
self-calibrating framework yield shear estimates with negligible multiplicative
bias and no significant PSF leakage. Our proposed setup is
applicable to current and next generation weak lensing surveys where higher
quality ground truth shapes can be measured in dedicated deep fields.
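To make the multiplicative-bias notion concrete, a toy sketch: in weak lensing, measured shear is commonly modelled as g_obs = (1 + m) * g_true + c, and the bias parameters m and c are estimated by a linear fit of measured against true shear on simulated galaxies. The numbers below are synthetic and purely illustrative, not DES results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" shears and noisy "measured" shears with a small
# injected multiplicative bias m and additive bias c.
g_true = rng.uniform(-0.05, 0.05, size=1000)
m_true, c_true = 0.002, 1e-4
g_obs = (1 + m_true) * g_true + c_true + rng.normal(0.0, 1e-4, size=g_true.size)

# Least-squares fit of g_obs = slope * g_true + c; then m = slope - 1.
A = np.vstack([g_true, np.ones_like(g_true)]).T
slope, c_hat = np.linalg.lstsq(A, g_obs, rcond=None)[0]
m_hat = slope - 1.0
```

A "negligible" multiplicative bias means the recovered m is consistent with zero at the precision required by the survey's shear calibration budget.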