Reachability analysis of linear hybrid systems via block decomposition
Reachability analysis aims at identifying states reachable by a system within
a given time horizon. This task is known to be computationally expensive for
linear hybrid systems. Reachability analysis works by iteratively applying
continuous and discrete post operators to compute states reachable according to
continuous and discrete dynamics, respectively. In this paper, we enhance both
of these operators and make sure that most of the involved computations are
performed in low-dimensional state space. In particular, we improve the
continuous-post operator by performing computations in high-dimensional state
space only for time intervals relevant for the subsequent application of the
discrete-post operator. Furthermore, the new discrete-post operator performs
low-dimensional computations by leveraging the structure of the guard and
assignment of a considered transition. We illustrate the potential of our
approach on a number of challenging benchmarks.
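To make the decomposition idea concrete, the following is a minimal sketch, not the paper's algorithm: it propagates a box overapproximation of the reach set of a linear system x' = Ax with interval arithmetic and inspects only the guard-relevant dimension at each step. The system, step size, and guard are hypothetical.

    # Sketch of decomposition-flavored reachability for x' = Ax (illustrative only).
    import numpy as np
    from scipy.linalg import expm

    def step_box(Phi, lo, hi):
        """Propagate the box [lo, hi] through x -> Phi @ x via interval arithmetic."""
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        new_center = Phi @ center
        new_radius = np.abs(Phi) @ radius   # bounding box of the image
        return new_center - new_radius, new_center + new_radius

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator (hypothetical)
    Phi = expm(A * 0.05)                      # one-step flow map, delta = 0.05
    lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])

    guard_dim, guard_bound = 0, 0.0           # guard x_0 <= 0 (hypothetical)
    for _ in range(200):
        lo, hi = step_box(Phi, lo, hi)
        # Only the guard-relevant dimension is inspected per step; in a block
        # decomposition, full-dimensional work is deferred to exactly such checks.
        if lo[guard_dim] <= guard_bound:
            print("guard possibly enabled; discrete-post would fire here")
            break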
Solving linear programs without breaking abstractions
We show that the ellipsoid method for solving linear programs can be implemented in a way that respects the symmetry of the program being solved. That is to say, there is an algorithmic implementation of the method that does not distinguish, or make choices, between variables or constraints in the program unless they are distinguished by properties definable from the program. In particular, we demonstrate that the solvability of linear programs can be expressed in fixed-point logic with counting (FPC) as long as the program is given by a separation oracle that is itself definable in FPC. We use this to show that the size of a maximum matching in a graph is definable in FPC. This settles an open problem first posed by Blass, Gurevich and Shelah [Blass et al. 1999]. On the way to defining a suitable separation oracle for the maximum matching program, we provide FPC formulas defining canonical maximum flows and minimum cuts in undirected capacitated graphs.
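The separation-oracle interface at the heart of this result is easy to illustrate. Below is a compact sketch of the standard central-cut ellipsoid method for feasibility, not the symmetry-respecting variant the paper constructs; the example polytope is made up.

    # Central-cut ellipsoid method for feasibility, given a separation oracle.
    import numpy as np

    def ellipsoid_feasibility(oracle, n, R=10.0, max_iter=1000, tol=1e-9):
        """oracle(x) returns None if x is feasible, else a separating normal a
        with a @ z <= a @ x for every feasible z."""
        x = np.zeros(n)
        P = (R ** 2) * np.eye(n)        # ellipsoid {z : (z-x)' P^-1 (z-x) <= 1}
        for _ in range(max_iter):
            a = oracle(x)
            if a is None:
                return x                # feasible point found
            Pa = P @ a
            denom = np.sqrt(a @ Pa)
            if denom < tol:
                return None             # ellipsoid degenerated: deemed infeasible
            g = Pa / denom
            x = x - g / (n + 1)         # standard central-cut update
            P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
        return None

    def make_oracle(C, d):
        """Separation oracle for the polytope {x : C @ x <= d}."""
        def oracle(x):
            viol = C @ x - d
            i = int(np.argmax(viol))
            return C[i] if viol[i] > 0 else None
        return oracle

    # Hypothetical polytope: x0 + x1 <= 1, x0 >= 0.2, x1 >= 0.2.
    C = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    d = np.array([1.0, -0.2, -0.2])
    print(ellipsoid_feasibility(make_oracle(C, d), n=2))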
Top-k Querying of Unknown Values under Order Constraints
Many practical scenarios make it necessary to evaluate top-k queries over data items with partially unknown values. This paper considers a setting where the values are taken from a numerical domain, and where some partial order constraints are given over known and unknown values: under these constraints, we assume that all possible worlds are equally likely.
Our work is the first to propose a principled scheme to derive the value distributions and expected values of unknown items in this setting, with the goal of computing estimated top-k results by interpolating the unknown values from the known ones. We study the complexity of this general task, and show tight complexity bounds, proving that the problem is intractable, but
can be tractably approximated. We then consider the case of tree-shaped partial orders, where we show a constructive PTIME solution. We also compare our problem setting to other top-k definitions on uncertain data.
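As a concrete, if naive, illustration of the problem setting, the sketch below estimates expected values by rejection sampling: unknown values are drawn uniformly, worlds violating the order constraints are discarded, and survivors are averaged. The items and constraints are hypothetical, and the paper's contribution is precisely to replace such brute-force sampling with principled algorithms (PTIME for tree-shaped orders).

    # Rejection-sampling estimate of expected values under order constraints.
    import random

    known = {"a": 0.9, "d": 0.2}                   # hypothetical known values
    unknown = ["b", "c"]
    order = [("d", "c"), ("c", "b"), ("b", "a")]   # constraints x <= y

    def sample_world():
        world = dict(known)
        for item in unknown:
            world[item] = random.random()          # uniform prior on [0, 1]
        return world if all(world[x] <= world[y] for x, y in order) else None

    sums = {item: 0.0 for item in unknown}
    accepted = 0
    while accepted < 10_000:
        w = sample_world()
        if w is not None:
            for item in unknown:
                sums[item] += w[item]
            accepted += 1

    expected = {item: s / accepted for item, s in sums.items()}
    merged = known | expected                      # requires Python 3.9+
    print(sorted(merged, key=merged.get, reverse=True)[:2])  # estimated top-2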
Open-ended Learning in Symmetric Zero-sum Games
Zero-sum games such as chess and poker are, abstractly, functions that
evaluate pairs of agents, for example labeling them 'winner' and 'loser'. If
the game is approximately transitive, then self-play generates sequences of
agents of increasing strength. However, nontransitive games, such as
rock-paper-scissors, can exhibit strategic cycles, and there is no longer a
clear objective -- we want agents to increase in strength, but against whom is
unclear. In this paper, we introduce a geometric framework for formulating
agent objectives in zero-sum games, in order to construct adaptive sequences of
objectives that yield open-ended learning. The framework allows us to reason
about population performance in nontransitive games, and enables the
development of a new algorithm (rectified Nash response, PSRO_rN) that uses
game-theoretic niching to construct diverse populations of effective agents,
producing a stronger set of agents than existing algorithms. We apply PSRO_rN
to two highly nontransitive resource allocation games and find that PSRO_rN
consistently outperforms the existing alternatives.
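The rectified-Nash step is simple to sketch. The code below approximates a Nash equilibrium of an empirical, antisymmetric payoff matrix by fictitious play and then computes, for each agent in the Nash support, a training distribution over only the opponents it beats. The expensive part of PSRO_rN, training a best response against that distribution, is omitted, and rock-paper-scissors stands in for the evaluation matrix.

    # Schematic of the rectified-Nash niching step of PSRO_rN.
    import numpy as np

    def fictitious_play(M, iters=20_000):
        """Approximate a symmetric Nash of the zero-sum game with payoffs M."""
        counts = np.zeros(len(M))
        counts[0] = 1.0
        for _ in range(iters):
            avg = counts / counts.sum()
            counts[np.argmax(M @ avg)] += 1.0   # best response to the average
        return counts / counts.sum()

    def rectified_nash_weights(M, nash):
        """For each agent in the support, weight only the opponents it beats."""
        weights = []
        for i in range(len(M)):
            if nash[i] <= 1e-6:
                weights.append(None)             # not in support: no objective
                continue
            w = nash * np.maximum(M[i], 0.0)     # rectify: keep wins only
            weights.append(w / w.sum() if w.sum() > 0 else None)
        return weights

    # Rock-paper-scissors payoffs: M[i, j] > 0 means agent i beats agent j.
    M = np.array([[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
    nash = fictitious_play(M)
    print(nash)                                  # ~ (1/3, 1/3, 1/3)
    print(rectified_nash_weights(M, nash))       # each agent trains vs. who it beats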
Quantum mechanics as a theory of probability
We develop and defend the thesis that the Hilbert space formalism of quantum
mechanics is a new theory of probability. The theory, like its classical
counterpart, consists of an algebra of events, and the probability measures
defined on it. The construction proceeds in the following steps: (a) Axioms for
the algebra of events are introduced following Birkhoff and von Neumann. All
axioms, except the one that expresses the uncertainty principle, are shared
with the classical event space. The only models for the set of axioms are
lattices of subspaces of inner product spaces over a field K. (b) Another axiom
due to Soler forces K to be the field of real, or complex numbers, or the
quaternions. We suggest a probabilistic reading of Soler's axiom. (c) Gleason's
theorem fully characterizes the probability measures on the algebra of events,
so that Born's rule is derived. (d) Gleason's theorem is equivalent to the
existence of a certain finite set of rays, with a particular orthogonality
graph (Wondergraph). Consequently, all aspects of quantum probability can be
derived from rational probability assignments to finite "quantum gambles". We
apply the approach to the analysis of entanglement, Bell inequalities, and the
quantum theory of macroscopic objects. We also discuss the relation of the
present approach to quantum logic, realism and truth, and the measurement
problem.
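For reference, step (c) invokes Gleason's theorem in its standard form (for Hilbert spaces of dimension at least three): every countably additive probability measure on the lattice of projections arises from a density operator, which yields Born's rule.

    \[
      \mu(P) \;=\; \operatorname{Tr}(\rho P)
      \qquad \text{for every projection } P,
      \quad \rho \succeq 0,\ \operatorname{Tr}\rho = 1.
    \]
    % Born's rule is the pure-state case $\rho = |\psi\rangle\langle\psi|$:
    \[
      \mu\bigl(|\phi\rangle\langle\phi|\bigr) \;=\; |\langle \phi \mid \psi \rangle|^{2}.
    \]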
Iterative Schedule Optimization for Parallelization in the Polyhedron Model
In high-performance computing, one primary objective is to exploit to the fullest the performance that the given target hardware can deliver. Compilers that can automatically optimize programs for a specific target hardware are highly useful in this context. Iterative (or search-based) compilation requires little or no prior knowledge and can adapt more easily to concrete programs and target hardware than static cost models and heuristics. Iterative compilation thus helps in situations in which static heuristics do not reflect the combination of input program and target hardware well. Moreover, iterative compilation may enable the derivation of more accurate cost models and heuristics for optimizing compilers. In this context, the polyhedron model is helpful: it provides not only a mathematical representation of programs but, more importantly, a uniform representation of complex sequences of program transformations as schedule functions. The latter facilitates the systematic exploration of the set of legal transformations of a given program.
Early approaches to purely iterative schedule optimization in the polyhedron model do not limit their search to schedules that preserve program semantics and therefore suffer from the need to explore large numbers of illegal schedules. More recent research ensures the legality of program transformations but presumes a sequential rather than a parallel execution of the transformed program. Other approaches do not perform a purely iterative optimization.
We propose an approach to iterative schedule optimization for parallelization and tiling in the polyhedron model. Our approach targets loop programs that profit from data locality optimization and coarse-grained loop parallelization. The schedule search space can be explored either randomly or by means of a genetic algorithm.
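To illustrate the search loop, here is a self-contained toy of genetic exploration over schedule parameters. A "schedule" is reduced to a (tile size, loop order) pair, and legality checking and benchmarking are replaced by a synthetic noisy cost function so the sketch runs standalone; a real implementation would build on a polyhedral library and measure actual execution times.

    # Toy genetic search over schedule parameters (stand-in for the real loop).
    import random

    PERMS = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (2, 0, 1)]  # "legal" loop orders (toy)
    TILES = [16, 32, 64, 128]

    def random_schedule():
        return (random.choice(TILES), random.choice(PERMS))

    def cost(schedule):
        # Stand-in for benchmarking: pretend tile 64 and order (0, 2, 1) win,
        # plus measurement noise. Real fitness is measured execution time.
        tile, perm = schedule
        return (abs(tile - 64) / 64.0
                + (0.0 if perm == (0, 2, 1) else 1.0)
                + random.gauss(0, 0.05))

    def mutate(schedule):
        tile, perm = schedule
        if random.random() < 0.5:
            tile = random.choice(TILES)
        else:
            perm = random.choice(PERMS)
        return (tile, perm)

    population = [random_schedule() for _ in range(16)]
    for generation in range(30):
        population.sort(key=cost)
        survivors = population[:8]               # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in range(8)]

    print("best schedule:", min(population, key=cost))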
To determine a schedule's profitability, we rely primarily on measuring the transformed code's execution time. While benchmarking is accurate, it increases the time and resource consumption of program optimization tremendously and can even make it impractical. We address this limitation by learning surrogate models from schedules generated and evaluated in previous runs of the iterative optimization, and by replacing benchmarking with performance prediction to the extent possible.
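A sketch of the surrogate idea, with hypothetical schedule features and made-up training data: fit a regressor on (schedule features, measured runtime) pairs collected in earlier optimization runs, then rank new candidates by predicted runtime and benchmark only the most promising ones.

    # Surrogate performance model replacing most benchmarking (illustrative).
    from sklearn.ensemble import RandomForestRegressor

    def features(schedule):
        tile, perm = schedule
        return [tile, *perm]          # simplistic feature encoding (hypothetical)

    # (schedule, runtime) pairs from previous runs; values here are made up.
    past_schedules = [(16, (0, 1, 2)), (64, (0, 2, 1)), (128, (1, 0, 2)),
                      (32, (0, 2, 1)), (64, (2, 0, 1)), (16, (0, 2, 1))]
    past_runtimes = [2.1, 0.9, 2.8, 1.2, 1.9, 1.7]   # seconds

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit([features(s) for s in past_schedules], past_runtimes)

    candidates = [(64, (0, 1, 2)), (64, (0, 2, 1)), (128, (0, 2, 1))]
    predicted = model.predict([features(s) for s in candidates])
    ranked = sorted(zip(predicted, candidates))
    print("benchmark only the predicted best:", ranked[0])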
Our evaluation on the PolyBench 4.1 benchmark set reveals that, in a given setting, iterative schedule optimization yields significantly higher speedups in the execution of the program to be optimized. Surrogate performance models learned from training data generated during previous iterative optimizations can reduce the benchmarking effort without strongly impairing the optimization result. A prerequisite for this approach is a sufficient similarity between the training programs and the program to be optimized.