Formal analysis techniques for gossiping protocols
We give a survey of formal verification techniques that can be used to corroborate existing experimental results for gossiping protocols in a rigorous manner. We present properties of interest for gossiping protocols and discuss how various formal evaluation techniques can be employed to predict them.
Certified Reinforcement Learning with Logic Guidance
This paper proposes the first model-free Reinforcement Learning (RL) framework to synthesise policies for unknown, continuous-state Markov Decision Processes (MDPs) such that a given linear temporal property is satisfied. We convert the given property into a Limit Deterministic Büchi Automaton (LDBA), namely a finite-state machine expressing the property. Exploiting the structure of the LDBA, we shape a synchronous reward function on the fly, so that an RL algorithm can synthesise a policy resulting in traces that probabilistically satisfy the linear temporal property. When the state space of the MDP is finite, this probability (certificate) is calculated in parallel with policy learning; as such, the RL algorithm produces a policy that is certified with respect to the property. Under the assumption of a finite state space, theoretical guarantees are provided on the convergence of the RL algorithm to an optimal policy maximising the above probability. We also show that our method produces "best available" control policies when the logical property cannot be satisfied. In the general case of a continuous state space, we propose a neural network architecture for RL and empirically show that the algorithm finds satisfying policies, if such policies exist. The performance of the proposed framework is evaluated on a set of numerical examples and benchmarks, where we observe an order-of-magnitude improvement in the number of iterations required for policy synthesis, compared to existing approaches whenever available.
Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782
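As a toy illustration of the reward-shaping idea (our own construction, not the paper's algorithm, which handles general LDBAs), the sketch below runs tabular Q-learning on the product of a four-state chain MDP with a one-state automaton for the reachability property "eventually g", paying reward only on the accepting transition:

```python
import random

# Toy sketch: Q-learning on the product of a 4-state chain MDP with a
# two-mode automaton for "eventually g" (g holds at state 3).
# All names and numbers here are illustrative, not from the paper.
random.seed(0)

STATES = [0, 1, 2, 3]          # chain 0-1-2-3; state 3 carries label "g"
ACTIONS = [-1, +1]             # move left / right (clipped at the ends)

def step(s, a):
    return min(max(s + a, 0), 3)

def automaton_step(q, s):
    # mode 0 = waiting, mode 1 = accepting sink reached once g is seen
    return 1 if (q == 1 or s == 3) else 0

Q = {((s, q), a): 0.0 for s in STATES for q in (0, 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s, q = 0, 0
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda b: Q[((s, q), b)])
        s2 = step(s, a)
        q2 = automaton_step(q, s2)
        r = 1.0 if (q == 0 and q2 == 1) else 0.0  # reward only on acceptance
        best = max(Q[((s2, q2), b)] for b in ACTIONS)
        Q[((s, q), a)] += alpha * (r + gamma * best - Q[((s, q), a)])
        s, q = s2, q2

# The greedy policy in the waiting mode should move right, toward g.
policy = {s: max(ACTIONS, key=lambda b: Q[((s, 0), b)]) for s in STATES[:3]}
print(policy)
```

The synchronous product state (s, q) is exactly what makes a memoryless reward sufficient: the automaton mode records whether the property has been fulfilled so far.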
Pathwise Accuracy and Ergodicity of Metropolized Integrators for SDEs
Metropolized integrators for ergodic stochastic differential equations (SDEs) are proposed which (i) are ergodic with respect to the (known) equilibrium distribution of the SDE and (ii) approximate pathwise the solutions of the SDE on finite time intervals. Both of these properties are demonstrated in the paper, and precise strong error estimates are obtained. It is also shown that the Metropolized integrator retains these properties even in situations where the drift of the SDE is non-globally Lipschitz, and vanilla explicit integrators for SDEs typically become unstable and fail to be ergodic.
Comment: 46 pages, 5 figures
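The best-known instance of this construction is the Metropolis-adjusted Langevin algorithm (MALA): an Euler-Maruyama proposal for the overdamped Langevin SDE corrected by a Metropolis accept/reject step, which makes the chain exactly ergodic for the known equilibrium. A minimal sketch, with an illustrative Gaussian target and step size of our choosing:

```python
import math
import random

# MALA sketch for dX = -U'(X) dt + sqrt(2) dW with U(x) = x^2 / 2,
# whose equilibrium is the standard normal. Step size h is illustrative.
random.seed(1)

def U(x):  return 0.5 * x * x   # potential; exp(-U) is the target density
def dU(x): return x

def mala_chain(n, h=0.5, x0=0.0):
    x, out = x0, []
    for _ in range(n):
        # Euler-Maruyama proposal from N(x - h U'(x), 2h)
        y = x - h * dU(x) + math.sqrt(2.0 * h) * random.gauss(0, 1)
        # Metropolis correction: log [ pi(y) q(x|y) / (pi(x) q(y|x)) ]
        log_fwd = -((y - x + h * dU(x)) ** 2) / (4.0 * h)
        log_bwd = -((x - y + h * dU(y)) ** 2) / (4.0 * h)
        log_a = (U(x) - U(y)) + (log_bwd - log_fwd)
        if math.log(random.random()) < log_a:
            x = y               # accept; rejection leaves x unchanged
        out.append(x)
    return out

xs = mala_chain(20000)
mean = sum(xs) / len(xs)
var = sum(v * v for v in xs) / len(xs)
print(round(mean, 2), round(var, 2))
```

The rejection step is what restores exact ergodicity: even when the unadjusted Euler scheme would drift away from equilibrium, rejected moves keep the chain at its current state, so the target density is preserved exactly.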
A Nonparametric Adaptive Nonlinear Statistical Filter
We use statistical learning methods to construct an adaptive state estimator for nonlinear stochastic systems. Optimal state estimation, in the form of a Kalman filter, requires knowledge of the system's process and measurement uncertainty. We propose that these uncertainties can be estimated from (conditioned on) past observed data, without making any assumptions about the system's prior distribution. The system's prior distribution at each time step is instead constructed from an ensemble of least-squares estimates on sub-sampled sets of the data via jackknife sampling. As new data are acquired, the state estimates, process uncertainty, and measurement uncertainty are updated accordingly, as described in this manuscript.
Comment: Accepted at the 2014 IEEE Conference on Decision and Control
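A hypothetical scalar sketch of the jackknife ingredient (our simplification, not the paper's filter): fit an ensemble of leave-one-out least-squares estimates, then read a data-driven measurement-noise variance off the residuals, with no prior noise model assumed:

```python
import random

# Illustrative data: a scalar linear trend y = 2 t + noise; both the
# model and the numbers below are our own toy choices.
random.seed(2)
true_slope, noise_std = 2.0, 0.5
data = [(t, true_slope * t + random.gauss(0, noise_std)) for t in range(30)]

def lsq_slope(pts):
    # least-squares slope through the origin: sum(t*y) / sum(t*t)
    return sum(t * y for t, y in pts) / sum(t * t for t, _ in pts)

# Jackknife ensemble: one least-squares fit per left-out observation.
slopes = [lsq_slope(data[:i] + data[i + 1:]) for i in range(len(data))]
slope_hat = sum(slopes) / len(slopes)

# Residuals of the ensemble-mean fit give the noise-variance estimate.
residuals = [y - slope_hat * t for t, y in data]
var_hat = sum(r * r for r in residuals) / (len(residuals) - 1)
print(round(slope_hat, 2), round(var_hat, 2))
```

In a filtering loop, re-running this ensemble as each new observation arrives would play the role of the adaptive uncertainty update described in the abstract.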
Adaptive importance sampling technique for Markov chains using stochastic approximation
For a discrete-time finite-state Markov chain, we develop an adaptive importance sampling scheme to estimate the expected total cost before hitting a set of terminal states. This scheme updates the change of measure at every transition using constant or decreasing step-size stochastic approximation. The updates are shown to concentrate asymptotically in a neighborhood of the desired zero-variance estimator. Through simulation experiments on simple Markovian queues, we observe that the proposed technique performs very well in estimating performance measures related to rare events associated with queue lengths exceeding prescribed thresholds. We include performance comparisons of the proposed algorithm with existing adaptive importance sampling algorithms on some examples. We also discuss the extension of the technique to estimate the infinite-horizon expected discounted cost and the expected average cost.
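A minimal sketch of such a scheme, with every detail (the chain, step sizes, clipping) chosen by us for illustration: a gambler's-ruin chain on {0, ..., 5}, where value estimates learned by likelihood-ratio-weighted stochastic approximation steer the simulated transitions toward the zero-variance change of measure:

```python
import random

# Estimate P(hit 5 before 0 | start at 1) for a chain with up-probability
# p = 0.3. Value estimates v[s] are learned online; the tilted up-probability
# q(s) ~ p v[s+1] / (p v[s+1] + (1-p) v[s-1]) approaches the zero-variance
# measure as v improves. All constants here are illustrative.
random.seed(3)
p, N = 0.3, 5
v = [0.5] * (N + 1)
v[0], v[N] = 1e-6, 1.0          # boundary values: failure ~ 0, success = 1

estimates = []
for n in range(1, 20001):
    s, lr = 1, 1.0
    while 0 < s < N:
        q = p * v[s + 1] / (p * v[s + 1] + (1 - p) * v[s - 1])
        q = min(max(q, 0.05), 0.95)             # keep the tilt non-degenerate
        up = random.random() < q
        w = (p / q) if up else ((1 - p) / (1 - q))
        lr *= w                                  # running likelihood ratio
        s_next = s + 1 if up else s - 1
        # decreasing-step stochastic approximation; the weight w makes the
        # fixed point v[s] = p v[s+1] + (1-p) v[s-1], the true hitting prob.
        a = 1.0 / (1 + n ** 0.7)
        v[s] += a * (w * v[s_next] - v[s])
        v[s] = min(max(v[s], 1e-6), 1.0)
        s = s_next
    estimates.append(lr if s == N else 0.0)

est = sum(estimates) / len(estimates)
print(round(est, 4))   # exact value is (1 - 7/3) / (1 - (7/3)**5), ~ 0.0196
```

The estimator stays unbiased at every stage because each transition's likelihood-ratio factor uses the tilted probability actually sampled from; adaptation only reduces variance.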
Mean-field description and propagation of chaos in recurrent multipopulation networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons
We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law, depending only on the population they belong to, we prove that a propagation-of-chaos phenomenon takes place: in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is the solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations, or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of these equations, i.e. the existence and uniqueness of a solution. We also show the results of some preliminary numerical experiments indicating that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.
Comment: 55 pages, 9 figures
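A toy numerical check of the mean-field limit, using a linear interaction as a stand-in for the Hodgkin-Huxley/FitzHugh-Nagumo dynamics (our simplification): in a large fully connected network the empirical mean should track the solution of the limiting deterministic ODE:

```python
import math
import random

# N fully connected particles dX_i = (-X_i + J * Xbar) dt + sigma dW_i,
# with i.i.d. initial conditions. As N -> infinity, the empirical mean
# Xbar(t) solves the deterministic ODE m' = (J - 1) m. All constants
# below are illustrative choices.
random.seed(4)
N, J, sigma, dt, T = 2000, 0.5, 0.3, 0.01, 1.0
x = [1.0] * N                   # identical initial conditions, m(0) = 1

t = 0.0
while t < T - 1e-9:
    xbar = sum(x) / N           # mean-field coupling term
    x = [xi + (-xi + J * xbar) * dt
         + sigma * math.sqrt(dt) * random.gauss(0, 1)
         for xi in x]
    t += dt

empirical_mean = sum(x) / N
mean_field = math.exp((J - 1.0) * T)   # m(T) for m' = (J-1) m, m(0) = 1
print(round(empirical_mean, 3), round(mean_field, 3))
```

The same experiment with the correlation between two fixed particles would illustrate the propagation-of-chaos statement itself: pairwise dependence vanishes at rate 1/N as the network grows.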