Learning Probabilistic Termination Proofs
We present the first machine learning approach to the termination analysis
of probabilistic programs. Ranking supermartingales (RSMs) prove that
probabilistic programs halt, in expectation, within a finite number of steps.
While previously RSMs were directly synthesised from source code,
our method learns them from sampled execution traces.
We introduce the neural ranking supermartingale:
we let a neural network fit
an RSM over execution traces and then we
verify it over the source code using satisfiability modulo theories (SMT);
if the latter step produces a counterexample, we generate from it
new sample traces and repeat learning in a
counterexample-guided inductive synthesis loop,
until the SMT solver confirms the validity of the RSM.
The result is thus a sound witness of probabilistic termination.
Our learning strategy is agnostic to the source code and its
verification counterpart supports the widest range of probabilistic
single-loop programs that any existing tool can handle to date.
We demonstrate the efficacy of our method over a range of benchmarks that
include linear and polynomial programs with discrete, continuous, state-dependent,
multi-variate, hierarchical distributions, and distributions with
undefined moments.
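The counterexample-guided loop described in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation: a linear candidate V(x) = a·x stands in for the neural RSM, an exhaustive grid check stands in for the SMT verifier, and the toy program (x decreases by 0, 1, or 2 with equal probability each step) is a hypothetical stand-in.

```python
import random

def post_expectation(V, x):
    # E[V(x')] for one step of the toy program: x' = x - d, d uniform over {0, 1, 2}
    return sum(V(x - d) for d in (0, 1, 2)) / 3

def learn(samples, eps=0.5):
    # "Learner": grow the slope of V(x) = a*x until the expected-decrease
    # condition E[V(x')] <= V(x) - eps holds on all sampled states.
    a = 0.0
    while any(post_expectation(lambda x: a * x, s) > a * s - eps for s in samples):
        a += 0.1
    return a

def verify(a, eps=0.5, domain=range(1, 101)):
    # "Verifier": stand-in for the SMT check; returns a state violating
    # expected decrease or non-negativity, if any.
    V = lambda x: a * x
    for x in domain:
        if post_expectation(V, x) > V(x) - eps or V(x) < 0:
            return x
    return None

def cegis(max_iters=50):
    # counterexample-guided inductive synthesis loop
    samples = [random.randint(1, 100) for _ in range(5)]
    for _ in range(max_iters):
        a = learn(samples)
        cex = verify(a)
        if cex is None:
            return a  # validated RSM slope
        samples.append(cex)
    raise RuntimeError("no RSM found")
```

Here the synthesised V(x) = 0.5·x decreases by at least 0.5 in expectation on every iteration while the loop guard holds, which is exactly the witness of expected termination that the verifier confirms.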
On the Trade-off Between Efficiency and Precision of Neural Abstraction
Neural abstractions have been recently introduced as formal approximations of
complex, nonlinear dynamical models. They comprise a neural ODE and a certified
upper bound on the error between the abstract neural network and the concrete
dynamical model. So far neural abstractions have exclusively been obtained as
neural networks consisting entirely of ReLU activation functions, resulting
in neural ODE models that have piecewise affine dynamics, and which can be
equivalently interpreted as linear hybrid automata. In this work, we observe
that the utility of an abstraction depends on its use: some scenarios might
require coarse abstractions that are easier to analyse, whereas others might
require more complex, refined abstractions. We therefore consider neural
abstractions of alternative shapes, namely either piecewise constant or
nonlinear non-polynomial (specifically, obtained via sigmoidal activations). We
employ formal inductive synthesis procedures to generate neural abstractions
that result in dynamical models with these semantics. Empirically, we
demonstrate the trade-off that these different neural abstraction templates
have vis-a-vis their precision and synthesis time, as well as the time required
for their safety verification (done via reachability computation). We improve
existing synthesis techniques to enable abstraction of higher-dimensional
models, and additionally discuss the abstraction of complex neural ODEs to
improve the efficiency of reachability analysis for these models.
Comment: To appear at QEST 202
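The precision/efficiency trade-off can be illustrated with a sketch that is not the paper's synthesis procedure: a piecewise-constant abstraction of a hypothetical one-dimensional nonlinear vector field at two resolutions, where dense sampling stands in for the certified error bound.

```python
def f(x):
    # concrete nonlinear dynamics (hypothetical example): dx/dt = x - x^3
    return x - x ** 3

def pwc_abstraction(lo, hi, n):
    # Piecewise-constant template: one constant per cell, the coarsest of
    # the abstraction shapes discussed in the abstract.
    width = (hi - lo) / n
    cells = []
    for i in range(n):
        a, b = lo + i * width, lo + (i + 1) * width
        mid = (a + b) / 2
        # per-cell error bound: dense sampling stands in for the formally
        # certified bound obtained via inductive synthesis
        err = max(abs(f(a + k * width / 100) - f(mid)) for k in range(101))
        cells.append((a, b, f(mid), err))
    return cells

def error_bound(cells):
    # worst-case deviation between abstraction and concrete model
    return max(c[3] for c in cells)
```

Refining the partition (larger `n`) tightens the error bound but produces more modes to analyse during reachability computation, which is the trade-off the abstract describes: `error_bound(pwc_abstraction(-1, 1, 10))` is smaller than `error_bound(pwc_abstraction(-1, 1, 5))`.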
Formal Synthesis of Lyapunov Neural Networks
We propose an automatic and formally sound method for synthesising Lyapunov
functions for the asymptotic stability of autonomous non-linear systems.
Traditional methods are either analytical, requiring manual effort, or
numerical, lacking formal soundness. Symbolic computational methods for
Lyapunov functions, which are in between, give formal guarantees but are
typically semi-automatic because they rely on the user to provide appropriate
function templates. We propose a method that finds Lyapunov functions fully
automatically, using machine learning, while also providing formal
guarantees, using satisfiability modulo theories (SMT). We employ a
counterexample-guided approach where a numerical learner and a symbolic
verifier interact to construct provably correct Lyapunov neural networks
(LNNs). The learner trains a neural network that satisfies the Lyapunov
criteria for asymptotic stability over a samples set; the verifier proves via
SMT solving that the criteria are satisfied over the whole domain or augments
the samples set with counterexamples. Our method supports neural networks with
polynomial activation functions and of multiple depths and widths, which display
wide learning capabilities. We demonstrate our method over several non-trivial
benchmarks and compare it favourably against a numerical optimisation-based
approach, a symbolic template-based approach, and a cognate LNN-based approach.
Our method synthesises Lyapunov functions faster and over wider spatial domains
than the alternatives, while providing stronger or equal guarantees.
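The learner/verifier interaction can be sketched minimally, assuming the toy stable system dx/dt = -x and a quadratic candidate V(x) = a·x² in place of the neural network; a grid scan stands in for the SMT verifier, so this is an illustration of the loop, not the paper's tool.

```python
def f(x):
    # autonomous dynamics (hypothetical example): dx/dt = -x
    return -x

def violates(a, x):
    # Lyapunov conditions away from the origin for V(x) = a*x^2:
    # positivity V(x) > 0 and decrease dV/dt = 2*a*x*f(x) < 0
    return x != 0 and (a * x * x <= 0 or 2 * a * x * f(x) >= 0)

def verify(a, grid):
    # stand-in for the SMT verifier: scan a grid for a counterexample
    for x in grid:
        if violates(a, x):
            return x
    return None

def cegis():
    samples = [1.0]
    a = -1.0  # deliberately bad initial guess
    grid = [i / 10 for i in range(-20, 21)]
    while True:
        cex = verify(a, grid)
        if cex is None:
            return a  # no violation found on the grid
        samples.append(cex)
        # "learner": nudge the coefficient until the conditions
        # hold on the accumulated sample set
        while any(violates(a, x) for x in samples):
            a += 0.5
```

Each counterexample returned by the verifier enriches the sample set, steering the learner towards a coefficient (here a = 0.5) for which both Lyapunov conditions hold over the whole checked domain.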
Quantitative Verification with Neural Networks
We present a data-driven approach to the quantitative verification of
probabilistic programs and stochastic dynamical models. Our approach leverages
neural networks to compute tight and sound bounds for the probability that a
stochastic process hits a target condition within finite time. This problem
subsumes a variety of quantitative verification questions, from the
reachability and safety analysis of discrete-time stochastic dynamical models,
to the study of assertion-violation and termination analysis of probabilistic
programs. We rely on neural networks to represent supermartingale certificates
that yield such probability bounds, which we compute using a
counterexample-guided inductive synthesis loop: we train the neural certificate
while tightening the probability bound over samples of the state space using
stochastic optimisation, and then we formally check the certificate's validity
over every possible state using satisfiability modulo theories; if we receive a
counterexample, we add it to our set of samples and repeat the loop until
validity is confirmed. We demonstrate on a diverse set of benchmarks that,
thanks to the expressive power of neural networks, our method yields smaller or
comparable probability bounds than existing symbolic methods in all cases, and
that our approach succeeds on models that are entirely beyond the reach of such
alternative techniques.
Comment: The conference version of this manuscript appeared at CONCUR 202
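To make the certificate idea concrete, here is an illustrative sketch (not the paper's method) for a hypothetical biased random walk that steps +1 with probability 1/3 and -1 otherwise: a nonnegative supermartingale that equals 1 on the target yields a sound upper bound on the hitting probability, which we can compare against the gambler's-ruin closed form.

```python
def certificate(x, target, r=2.0):
    # Candidate supermartingale V(x) = r^(x - target): equals 1 on the
    # target, is nonnegative, and is non-increasing in expectation.
    return r ** (x - target)

def expected_next(V, x, p=1.0 / 3.0):
    # one step of the walk: x+1 with probability p, x-1 with probability 1-p
    return p * V(x + 1) + (1 - p) * V(x - 1)

def check_supermartingale(target=10, p=1.0 / 3.0):
    # stand-in for the SMT validity check: E[V(x')] <= V(x) on a grid
    V = lambda x: certificate(x, target)
    return all(expected_next(V, x, p) <= V(x) + 1e-12 for x in range(1, target))

def exact_hit_prob(x, target, p=1.0 / 3.0):
    # probability of reaching `target` before 0 (gambler's-ruin closed form)
    rho = (1 - p) / p
    return (rho ** x - 1) / (rho ** target - 1)
```

Once the supermartingale property is confirmed, V(x₀) upper-bounds the probability of ever hitting the target from x₀; in this toy case the bound `certificate(3, 10)` = 2⁻⁷ ≈ 0.0078 soundly dominates the exact probability 7/1023 ≈ 0.0068.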
SIOUX project: a simultaneous multiband camera for exoplanet atmospheres studies
The exoplanet revolution is well underway. The last decade has seen
order-of-magnitude increases in the number of known planets beyond the Solar
system. Detailed characterization of exoplanetary atmospheres provide the best
means for distinguishing the makeup of their outer layers, and the only hope
for understanding the interplay between initial composition chemistry,
temperature-pressure atmospheric profiles, dynamics and circulation. While
pioneering work on the observational side has produced the first important
detections of atmospheric molecules for the class of transiting exoplanets,
important limitations are still present due to the lack of systematic,
repeated measurements with optimized instrumentation at both visible (VIS) and
near-infrared (NIR) wavelengths. It is thus of fundamental importance to
explore quantitatively possible avenues for improvements. In this paper we
report initial results of a feasibility study for the prototype of a versatile
multi-band imaging system for very high-precision differential photometry that
exploits the choice of specifically selected narrow-band filters and novel
ideas for the execution of simultaneous VIS and NIR measurements. Starting from
the fundamental system requirements driven by the science case at hand, we
describe a set of three opto-mechanical solutions for the instrument prototype:
1) a radial distribution of the optical flux using dichroic filters for the
wavelength separation and narrow-band filters or liquid crystal filters for the
observations; 2) a tree distribution of the optical flux (implying 2 separate
foci), with the same technique used for the beam separation and filtering; 3)
an exotic solution consisting of the study of a complete optical system (i.e. a
brand new telescope) that exploits the chromatic errors of a reflecting surface
for directing the different wavelengths at different foci.
A low-mass planet candidate orbiting Proxima Centauri at a distance of 1.5 AU
Our nearest neighbor, Proxima Centauri, hosts a temperate terrestrial planet. We detected in radial velocities evidence of a possible second planet with minimum mass m_c sin i_c = 5.8 ± 1.9 M_⊕ and orbital period P_c = 5.21 (+0.26/−0.22) years. The analysis of photometric data and spectroscopic activity diagnostics does not explain the signal in terms of a stellar activity cycle, but follow-up is required in the coming years for confirming its planetary origin. We show that the existence of the planet can be ascertained, and its true mass determined with high accuracy, by combining Gaia astrometry and radial velocities. Proxima c could become a prime target for follow-up and characterization with next-generation direct imaging instrumentation, due to the large maximum angular separation of ~1 arcsecond from the parent star. The candidate planet represents a challenge for the models of super-Earth formation and evolution.
New Variable Stars Discovered by the APACHE Survey. II. Results After the Second Observing Season
Routinely operating since July 2012, the APACHE survey has celebrated its
second birthday. While the main goal of the Project is the detection of
transiting planets around a large sample of bright, nearby M dwarfs in the
northern hemisphere, the APACHE large photometric database for hundreds of
different fields represents a relevant resource to search for and provide a
first characterization of new variable stars. We celebrate here the conclusion
of the second year of observations by reporting the discovery of 14 new
variables.
Comment: 25 pages, accepted for publication in The Journal of the American
Association of Variable Star Observers (JAAVSO)
Validation of an Automated System for the Extraction of a Wide Dataset for Clinical Studies Aimed at Improving the Early Diagnosis of Candidemia
There is increasing interest in assessing whether machine learning (ML) techniques could further improve the early diagnosis of candidemia among patients with a consistent clinical picture. The objective of the present study is to validate the accuracy of a system for the automated extraction, from a hospital laboratory software, of a large number of features from candidemia and/or bacteremia episodes, as the first phase of the AUTO-CAND project. The manual validation was performed on a representative, randomly extracted subset of 381 episodes of candidemia and/or bacteremia; the automated organization of laboratory and microbiological data into structured features resulted in ≥99% correct extractions (with confidence interval < ±1%) for all variables. The final automatically extracted dataset consisted of 1338 episodes of candidemia (8%), 14,112 episodes of bacteremia (90%), and 302 episodes of mixed candidemia/bacteremia (2%). This dataset will serve to assess the performance of different ML models for the early diagnosis of candidemia in the second phase of the AUTO-CAND project.
Analysis of charmonium production at fixed-target experiments in the NRQCD approach
We present an analysis of the existing data on charmonium hadro-production
based on non-relativistic QCD (NRQCD) calculations at the next-to-leading order
(NLO). All the data on J/psi and psi' production in fixed-target experiments
and on pp collisions at low energy are included. We find that the amount of
color octet contribution needed to describe the data is about 1/10 of that
found at the Tevatron.