Lazy model checking for recursive state machines
Recursive state machines (RSMs) are state-based models for procedural programs with wide-ranging applications in program verification and interprocedural analysis. Model-checking algorithms for RSMs and related formalisms have been studied intensively in the literature. In this article, we devise a new model-checking algorithm for RSMs and requirements in computation tree logic (CTL) that exploits the compositional structure of RSMs through ternary model checking combined with a lazy evaluation scheme. Specifically, a procedural component is analyzed only in those cases in which it might influence the satisfaction of the CTL requirement. We implemented our model-checking algorithms and evaluated them on randomized scalability benchmarks and on an interprocedural data-flow analysis of Java programs, showing both practical applicability and significant speedups compared to state-of-the-art model-checking tools for procedural programs.
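To make the combination of ternary model checking and lazy evaluation concrete, the following Python sketch illustrates the core idea under our own assumptions; it is not the paper's algorithm, and the names `T3`, `kleene_and`, and `analyze_component` are hypothetical. A three-valued conjunction forces the analysis of a component only when the operand that is already known cannot decide the result:

```python
# Minimal sketch (not the paper's algorithm): three-valued (ternary) truth
# values plus lazy evaluation, where a subcomponent is analyzed only if it
# can still change the verdict.
from enum import Enum

class T3(Enum):
    FALSE = 0
    UNKNOWN = 1
    TRUE = 2

def kleene_and(lhs: T3, rhs_thunk):
    """Kleene three-valued AND; `rhs_thunk` (standing in for the analysis of
    a procedural component) is only forced if `lhs` does not already decide
    the result."""
    if lhs is T3.FALSE:
        return T3.FALSE          # short-circuit: the component is never analyzed
    rhs = rhs_thunk()            # lazy: analyze the component on demand
    if lhs is T3.TRUE:
        return rhs               # TRUE AND x = x
    return T3.FALSE if rhs is T3.FALSE else T3.UNKNOWN  # UNKNOWN AND x

# Usage: the expensive analysis is skipped when the left operand is FALSE.
expensive_calls = 0
def analyze_component():
    global expensive_calls
    expensive_calls += 1
    return T3.TRUE

print(kleene_and(T3.FALSE, analyze_component), expensive_calls)    # T3.FALSE 0
print(kleene_and(T3.UNKNOWN, analyze_component), expensive_calls)  # T3.UNKNOWN 1
```

The same short-circuiting idea, lifted to whole procedural components and full CTL, is what allows a lazy scheme to skip components that cannot influence the satisfaction of the requirement.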
More for Less: Safe Policy Improvement With Stronger Performance Guarantees
In an offline reinforcement learning setting, the safe policy improvement (SPI) problem aims to improve the performance of the behavior policy according to which the sample data has been generated. State-of-the-art approaches to SPI require a large number of samples to provide practical probabilistic guarantees on the improved policy's performance. We present a novel approach to the SPI problem that requires less data for such guarantees. Specifically, to prove the correctness of these guarantees, we devise implicit transformations on the data set and the underlying environment model that serve as theoretical foundations for deriving tighter improvement bounds for SPI. Our empirical evaluation, using the well-established SPI with baseline bootstrapping (SPIBB) algorithm on standard benchmarks, shows that our method indeed significantly reduces the sample complexity of the SPIBB algorithm.
Comment: Accepted at IJCAI 202