Disjunctive Probabilistic Modal Logic is Enough for Bisimilarity on Reactive Probabilistic Systems
Larsen and Skou characterized probabilistic bisimilarity over reactive
probabilistic systems with a logic including true, negation, conjunction, and a
diamond modality decorated with a probabilistic lower bound. Later on,
Desharnais, Edalat, and Panangaden showed that negation is not necessary to
characterize the same equivalence. In this paper, we prove that the logical
characterization also holds when conjunction is replaced by disjunction, with
negation still unnecessary. To this end, we introduce reactive
probabilistic trees, a fully abstract model for reactive probabilistic systems
that allows us to demonstrate the expressiveness of the disjunctive
probabilistic modal logic, as well as that of the previously mentioned logics, by means of a
compactness argument.

Comment: Aligned content with version accepted at ICTCS 2016: fixed minor
typos, added reference, improved definitions in Section 3. Still 10 pages in
sigplanconf format.
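The bounded diamond modality is easy to make concrete. Below is a minimal, hypothetical sketch (the encoding and example system are ours, not the paper's): a reactive probabilistic system as a map from (state, action) pairs to distributions, and a satisfaction checker for formulas built from true, conjunction, disjunction, and the diamond ⟨a⟩_p φ, which holds when the a-successors satisfying φ carry total probability at least p.

```python
# A toy reactive probabilistic system: each (state, action) pair has at most
# one probability distribution over successor states.
lts = {
    ("s", "a"): {"u": 0.5, "v": 0.5},
    ("t", "a"): {"u": 0.3, "v": 0.7},
    ("u", "b"): {"u": 1.0},
}

def sat(state, formula):
    """Check whether `state` satisfies `formula` (tuple-encoded)."""
    kind = formula[0]
    if kind == "true":
        return True
    if kind == "and":
        return all(sat(state, f) for f in formula[1:])
    if kind == "or":
        return any(sat(state, f) for f in formula[1:])
    if kind == "dia":  # ("dia", action, p, subformula): <action>_p subformula
        _, action, p, sub = formula
        dist = lts.get((state, action), {})
        return sum(q for s2, q in dist.items() if sat(s2, sub)) >= p
    raise ValueError(kind)

# <a>_{0.5} <b>_1 true distinguishes s from t: from s, mass 0.5 reaches u
# (which has a b-step), but from t only mass 0.3 does.
phi = ("dia", "a", 0.5, ("dia", "b", 1.0, ("true",)))
print(sat("s", phi))  # True
print(sat("t", phi))  # False
```

Note that `phi` uses neither conjunction nor negation, illustrating the kind of distinguishing power the paper establishes for the disjunctive fragment.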
The Structure of Differential Invariants and Differential Cut Elimination
The biggest challenge in hybrid systems verification is the handling of
differential equations. Because computable closed-form solutions only exist for
very simple differential equations, proof certificates have been proposed for
more scalable verification. Search procedures for these proof certificates are
still rather ad hoc, though, because the problem structure is only poorly
understood. We investigate differential invariants, which define an induction
principle for differential equations and which can be checked for invariance
along a differential equation just by using their differential structure,
without having to solve them. We study the structural properties of
differential invariants. To analyze trade-offs for proof search complexity, we
identify more than a dozen relations between several classes of differential
invariants and compare their deductive power. As our main results, we analyze
the deductive power of differential cuts and the deductive power of
differential invariants with auxiliary differential variables. We refute the
differential cut elimination hypothesis and show that, unlike standard cuts,
differential cuts are fundamental proof principles that strictly increase the
deductive power. We also prove that the deductive power increases further when
adding auxiliary differential variables to the dynamics.
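The induction principle can be illustrated with a small sketch (ours, not the paper's formal calculus): for the rotational dynamics v' = w, w' = -v, the candidate invariant I(v, w) = v² + w² is certified by checking that its Lie derivative along the vector field vanishes, using only the differential structure and never solving the ODE. For polynomial data we check the (here exactly zero) derivative at sample points.

```python
import random

def rhs(v, w):
    return (w, -v)            # the vector field: v' = w, w' = -v

def grad_I(v, w):
    return (2 * v, 2 * w)     # gradient of the candidate invariant I = v^2 + w^2

def lie_derivative(v, w):
    # dI/dv * v' + dI/dw * w': the rate of change of I along solutions,
    # computed purely from the differential structure.
    dv, dw = rhs(v, w)
    gv, gw = grad_I(v, w)
    return gv * dv + gw * dw  # = 2vw - 2wv = 0 identically

random.seed(0)
samples = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(100)]
print(all(abs(lie_derivative(v, w)) < 1e-9 for v, w in samples))  # True
```

Since the Lie derivative is identically zero, I is constant along every solution, so each circle v² + w² = c is invariant, without ever computing a closed-form solution.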
An Axiomatic Approach to Liveness for Differential Equations
This paper presents an approach for deductive liveness verification for
ordinary differential equations (ODEs) with differential dynamic logic.
Numerous subtleties complicate the generalization of well-known discrete
liveness verification techniques, such as loop variants, to the continuous
setting. For example, ODE solutions may blow up in finite time or their
progress towards the goal may converge to zero. Our approach handles these
subtleties by successively refining ODE liveness properties using ODE
invariance properties which have a well-understood deductive proof theory. This
approach is widely applicable: we survey several liveness arguments in the
literature and derive them all as special instances of our axiomatic refinement
approach. We also correct several soundness errors in the surveyed arguments,
which further highlights the subtlety of ODE liveness reasoning and the utility
of our deductive approach. The library of common refinement steps identified
through our approach enables both the sound development and justification of
new ODE liveness proof rules from our axioms.

Comment: FM 2019: 23rd International Symposium on Formal Methods, Porto,
Portugal, October 9-11, 2019.
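One of the subtleties above can be seen numerically. The sketch below (ours, purely illustrative) integrates x' = x² with x(0) = 1 by forward Euler; the true solution x(t) = 1/(1 - t) blows up at the finite time t = 1, so a liveness argument of the form "x eventually exceeds B" must account for solutions that do not exist for all time.

```python
def euler_blowup(x0=1.0, h=1e-4, bound=1e6, t_max=2.0):
    """Forward-Euler integration of x' = x^2 until x exceeds `bound`."""
    x, t = x0, 0.0
    while x <= bound and t < t_max:
        x += h * x * x  # one Euler step of x' = x^2
        t += h
    return x, t

x, t = euler_blowup()
# The numeric solution explodes near the true blow-up time t = 1, long
# before t_max: solutions of this ODE are not global in time.
print(x > 1e6, t < 1.2)  # True True
```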
Learning Tuple Probabilities
Learning the parameters of complex probabilistic-relational models from
labeled training data is a standard technique in machine learning, which has
been intensively studied in the subfield of Statistical Relational Learning
(SRL), but it remains an under-investigated topic in the context
of Probabilistic Databases (PDBs). In this paper, we focus on learning the
probability values of base tuples in a PDB from labeled lineage formulas. The
resulting learning problem can be viewed as the inverse problem to confidence
computations in PDBs: given a set of labeled query answers, learn the
probability values of the base tuples, such that the marginal probabilities of
the query answers match the assigned probability labels. We analyze
the learning problem from a theoretical perspective, cast it into an
optimization problem, and provide an algorithm based on stochastic gradient
descent. Finally, we conclude with an experimental evaluation on three
real-world datasets and one synthetic dataset, comparing our approach to
various techniques from SRL, reasoning in information extraction, and
optimization.
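The inverse problem can be sketched in miniature (a hypothetical toy instance, not the paper's algorithm): given a single labeled lineage formula t1 OR t2 with target marginal 0.9, and assuming tuple independence, plain gradient descent on the squared error recovers tuple probabilities whose marginal matches the label.

```python
def marginal_or(p1, p2):
    # P(t1 or t2) for independent tuples with probabilities p1, p2.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

target = 0.9          # the probability label on the lineage formula t1 OR t2
p1, p2 = 0.5, 0.5     # initial guesses for the base-tuple probabilities
lr = 0.5
for _ in range(2000):
    err = marginal_or(p1, p2) - target
    # Descend along the partial derivatives of the marginal w.r.t. each tuple.
    p1 -= lr * err * (1.0 - p2)
    p2 -= lr * err * (1.0 - p1)
    p1 = min(max(p1, 0.0), 1.0)  # keep the parameters valid probabilities
    p2 = min(max(p2, 0.0), 1.0)

print(round(marginal_or(p1, p2), 3))  # 0.9
```

With many labeled formulas, sampling one per update turns this into the stochastic gradient scheme the abstract describes; the solution is generally non-unique (here any p1, p2 with marginal 0.9 fits).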
Regularity of the Einstein Equations at Future Null Infinity
When Einstein's equations for an asymptotically flat, vacuum spacetime are
reexpressed in terms of an appropriate conformal metric that is regular at
(future) null infinity, they develop apparently singular terms in the
associated conformal factor and thus appear to be ill-behaved at this
(exterior) boundary. In this article however we show, through an enforcement of
the Hamiltonian and momentum constraints to the needed order in a Taylor
expansion, that these apparently singular terms are not only regular at the
boundary but can in fact be explicitly evaluated there in terms of conformally
regular geometric data. Though we employ a rather rigidly constrained and gauge
fixed formulation of the field equations, we discuss the extent to which we
expect our results to have a more 'universal' significance and, in particular,
to be applicable, after minor modifications, to alternative formulations.

Comment: 43 pages, no figures, AMS-TeX. Minor revisions, updated to agree with
published version.
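For orientation, the standard Penrose-style conformal completion underlying such analyses (generic notation, not necessarily the paper's gauge-fixed setup) rescales the physical metric by a conformal factor that vanishes, with non-degenerate gradient, at future null infinity:

```latex
% \tilde{g} is the conformal metric, regular at \mathscr{I}^{+};
% \Omega is the conformal factor.
\tilde{g}_{\mu\nu} = \Omega^{2}\, g_{\mu\nu},
\qquad
\Omega\big|_{\mathscr{I}^{+}} = 0,
\qquad
\mathrm{d}\Omega\big|_{\mathscr{I}^{+}} \neq 0 .
```

Rewriting Einstein's equations in terms of the regular metric introduces terms carrying inverse powers of the conformal factor; this is the apparently singular behavior at the boundary that the paper shows is in fact regular once the constraints are enforced.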