COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1
This report presents the activities of the first working group of the COST
Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide
an overview of some of the core aspects involved in Runtime Verification.
Runtime Verification is the field of research dedicated to the analysis of
system executions. It is often seen as a discipline that studies how a system
run satisfies or violates correctness properties. The report exposes a taxonomy
of Runtime Verification (RV) presenting the terminology involved with the main
concepts of the field. The report also develops the concept of instrumentation,
the various ways to instrument systems, and the fundamental role of
instrumentation in designing an RV framework. We also discuss how RV interacts
with other verification techniques such as model-checking, deductive
verification, model learning, testing, and runtime assertion checking. Finally,
we propose challenges in monitoring quantitative and statistical data beyond
detecting property violations.
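To make the monitoring view concrete, here is a minimal runtime-monitor sketch in Python (our own illustration, not taken from the report; the event names are invented). It checks one correctness property, "every open is eventually followed by a close", over a recorded execution trace:

# Minimal runtime-monitor sketch (illustrative; event names are invented).
# Property: every "open" event is eventually matched by a later "close".
def monitor(trace):
    pending = 0  # opens still awaiting a matching close
    for event in trace:
        if event == "open":
            pending += 1
        elif event == "close" and pending > 0:
            pending -= 1
    return "ok" if pending == 0 else "violation"

print(monitor(["open", "write", "close"]))          # ok
print(monitor(["open", "write", "open", "close"]))  # violation

An instrumented system would emit such events during execution; the choice of instrumentation determines which events the monitor can observe at all.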
Learning Linear Temporal Properties
We present two novel algorithms for learning formulas in Linear Temporal
Logic (LTL) from examples. The first learning algorithm reduces the learning
task to a series of satisfiability problems in propositional Boolean logic and
produces a smallest LTL formula (in terms of the number of subformulas) that is
consistent with the given data. Our second learning algorithm, on the other
hand, combines the SAT-based learning algorithm with classical algorithms for
learning decision trees. The result is a learning algorithm that scales to
real-world scenarios with hundreds of examples, but can no longer guarantee to
produce minimal consistent LTL formulas. We compare both learning algorithms
and demonstrate their performance on a wide range of synthetic benchmarks.
Additionally, we illustrate their usefulness on the task of understanding
executions of a leader election protocol.
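To illustrate what "consistent with the given data" means, here is a sketch under our own conventions (not the paper's SAT encoding): a Python evaluator for a small fragment of LTL on finite traces, plus the consistency test a learner must satisfy.

# Illustrative sketch of the learning task (not the paper's SAT encoding).
# Formulas are nested tuples: ("AP", p), ("not", f), ("and", f, g),
# ("X", f), ("F", f), ("G", f); traces are lists of sets of atomic props.
def holds(f, trace, i=0):
    op = f[0]
    if op == "AP":
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":  # strong next on finite traces
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

def consistent(f, positives, negatives):
    # The condition both learning algorithms enforce on a candidate formula.
    return all(holds(f, t) for t in positives) and not any(holds(f, t) for t in negatives)

pos = [[{"req"}, {"elected"}]]   # toy traces from, e.g., a leader election run
neg = [[{"req"}, {"req"}]]
print(consistent(("F", ("AP", "elected")), pos, neg))  # True

The SAT-based learner then searches for a smallest such formula by encoding, for increasing formula sizes, the existence of a consistent formula as a propositional satisfiability problem.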
Towards Bridging the Gap between Control and Self-Adaptive System Properties
Two of the main paradigms used to build adaptive software employ different
types of properties to capture relevant aspects of the system's run-time
behavior. On the one hand, control systems consider properties that concern
static aspects like stability, as well as dynamic properties that capture the
transient evolution of variables such as settling time. On the other hand,
self-adaptive systems consider mostly non-functional properties that capture
concerns such as performance, reliability, and cost. In general, it is not easy
to reconcile these two types of properties or identify under which conditions
they constitute a good fit to provide run-time guarantees. There is a need to
identify the key properties in the areas of control and self-adaptation, as
well as to characterize and map them to better understand how they relate
and possibly complement each other. In this paper, we take a first step to
tackle this problem by: (1) identifying a set of key properties in control
theory, (2) illustrating the formalization of some of these properties
employing temporal logic languages commonly used to engineer self-adaptive
software systems, and (3) illustrating how to map key properties that
characterize self-adaptive software systems into control properties, leveraging
their formalization in temporal logics. We illustrate the different steps of
the mapping on an exemplar case in the cloud computing domain and conclude by
identifying open challenges in the area.
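As a hedged illustration of step (2) (our own example; the symbols $y$, $y_{\mathrm{ref}}$, $T_s$, and $\varepsilon$ are assumed, not taken from the paper), a control property such as settling time can be written in Signal Temporal Logic as

$\varphi_{\mathrm{settle}} \;=\; \mathbf{G}_{[T_s,\, \infty)} \bigl(\, |y(t) - y_{\mathrm{ref}}| \le \varepsilon \,\bigr),$

i.e., from the settling time $T_s$ onward, the controlled output $y$ must remain within an $\varepsilon$-band around its reference value $y_{\mathrm{ref}}$.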
Modular Subset Sum, Dynamic Strings, and Zero-Sum Sets
The modular subset sum problem consists of deciding, given a modulus $m$, a
multiset $S$ of $n$ integers in $\{0, \dots, m-1\}$, and a target integer $t$, whether
there exists a subset of $S$ with elements summing to $t$ modulo $m$, and to
report such a set if it exists. We give a simple $O(m \log m)$-time with high
probability (w.h.p.) algorithm for the modular subset sum problem. This builds
on and improves on a previous $O(m \log^2 m)$ w.h.p. algorithm from Axiotis,
Backurs, Jin, Tzamos, and Wu (SODA 19). Our method utilizes the ADT of the
dynamic strings structure of Gawrychowski et al. (SODA 18). However, as this
structure is rather complicated, we present a much simpler alternative which we
call the Data Dependent Tree. As an application, we consider the computational
version of a fundamental theorem in zero-sum Ramsey theory. The
Erdős-Ginzburg-Ziv Theorem states that a multiset of $2n - 1$ integers
always contains a subset of cardinality exactly $n$ whose values sum to a
multiple of $n$. We give an algorithm for finding such a subset in $O(n \log n)$ time w.h.p., which improves on an algorithm due to Del Lungo,
Marini, and Mori (Disc. Math. 09).
Comment: To appear at the SIAM Symposium on Simplicity in Algorithms (SOSA21).
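For readers who want to see the problem rather than the data structure, here is a minimal dynamic-programming sketch in Python (an O(nm) baseline of our own, far slower than the paper's algorithm):

# Naive O(n*m) dynamic program for modular subset sum (illustration only).
def modular_subset_sum(S, m, t):
    parent = {0: None}  # reachable residue -> (previous residue, element used)
    for x in S:
        new = {}
        for r in list(parent):
            s = (r + x) % m
            if s not in parent and s not in new:
                new[s] = (r, x)
        parent.update(new)
        if t in parent:
            break
    if t not in parent:
        return None
    subset, r = [], t
    while parent[r] is not None:  # walk back to residue 0 (empty subset answers t = 0)
        r, x = parent[r]
        subset.append(x)
    return subset

print(modular_subset_sum([3, 5, 7], 10, 2))  # [7, 5], since 5 + 7 = 12 ≡ 2 (mod 10)

The faster algorithms instead maintain the set of attainable residues across insertions and locate the new sums created by each element in roughly polylogarithmic time per new sum, which is where the dynamic-strings machinery, or the simpler Data Dependent Tree, comes in.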
Learning Augmented Online Facility Location
Following the research agenda initiated by Munoz & Vassilvitskii [1] and
Lykouris & Vassilvitskii [2] on learning-augmented online algorithms for
classical online optimization problems, in this work, we consider the Online
Facility Location problem under this framework. In Online Facility Location
(OFL), demands arrive one-by-one in a metric space and must be (irrevocably)
assigned to an open facility upon arrival, without any knowledge about future
demands.
We present an online algorithm for OFL that exploits potentially imperfect
predictions on the locations of the optimal facilities. We prove that the
competitive ratio decreases smoothly from sublogarithmic in the number of
demands to constant, as the error, i.e., the total distance of the predicted
locations to the optimal facility locations, decreases towards zero. We
complement our analysis with a matching lower bound establishing that the
dependence of the algorithm's competitive ratio on the error is optimal, up to
constant factors. Finally, we evaluate our algorithm on real-world data and
compare our learning-augmented approach with the current best online algorithm
for the problem.
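For context, the classical randomized online algorithm of Meyerson for OFL (not necessarily the exact baseline used in the paper) is simple enough to sketch; here in Python on a one-dimensional metric with uniform facility cost f:

import random

# Sketch of Meyerson's classical randomized online facility location algorithm.
def online_facility_location(demands, f):
    facilities, total_cost = [], 0.0
    for x in demands:
        if not facilities:
            facilities.append(x)  # the first demand always opens a facility
            total_cost += f
            continue
        d = min(abs(x - c) for c in facilities)  # distance to nearest open facility
        if random.random() < min(d / f, 1.0):    # open here with probability d/f
            facilities.append(x)
            total_cost += f
        else:
            total_cost += d  # irrevocably assign x to its nearest open facility
    return facilities, total_cost

facs, cost = online_facility_location([0.1, 0.2, 5.0, 5.1, 5.2], f=1.0)
print(len(facs), round(cost, 2))

A learning-augmented variant can bias such opening decisions toward the predicted facility locations, paying little when the predictions are accurate and degrading gracefully toward the prediction-free guarantee as the prediction error grows.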
Dependences in Strategy Logic
Strategy Logic (SL) is a very expressive temporal logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express LTL properties of the resulting plays. Such a powerful framework has two drawbacks: first, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula, revisiting the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014]. We explain why elementary dependences, as defined by Mogavero et al., do not exactly capture the intended concept of behavioral strategies. We address this discrepancy by introducing timeline dependences, and exhibit a large fragment of SL for which model checking can be performed in 2-EXPTIME under this new semantics.
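To fix notation (a hedged example of ours; the agents $a$, $b$ and the proposition crash are invented), an SL formula such as

$\forall y.\; \exists x.\; (a, x)(b, y)\; \mathbf{G}\,\neg\mathit{crash}$

universally quantifies a strategy $y$, existentially quantifies a strategy $x$, binds them to agents $a$ and $b$, and asserts the LTL property $\mathbf{G}\,\neg\mathit{crash}$ of the unique play that results. The dependence question is, informally, whether the choice of $x$ may consult all of $y$, including its decisions at future or counterfactual histories, or only what $y$ has revealed along the current history; the timeline dependences introduced here adopt the latter, more behavioral reading.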
How Long It Takes for an Ordinary Node with an Ordinary ID to Output?
In the context of distributed synchronous computing, processors perform in
rounds, and the time-complexity of a distributed algorithm is classically
defined as the number of rounds before all computing nodes have output. Hence,
this complexity measure captures the running time of the slowest node(s). In
this paper, we are interested in the running time of the ordinary nodes, to be
compared with the running time of the slowest nodes. The node-averaged
time-complexity of a distributed algorithm on a given instance is defined as
the average, taken over every node of the instance, of the number of rounds
before that node outputs. We compare the node-averaged time-complexity with the
classical one in the standard LOCAL model for distributed network computing. We
show that there can be an exponential gap between the node-averaged
time-complexity and the classical time-complexity, as witnessed by, e.g.,
leader election. Our first main result is a positive one, stating that, in
fact, the two time-complexities behave the same for a large class of problems
on very sparse graphs. In particular, we show that, for LCL problems on cycles,
the node-averaged time-complexity is of the same order of magnitude as the
slowest-node time-complexity.
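As a toy numerical illustration of the two measures (made-up per-node output rounds, in the spirit of the leader election example):

# rounds[v] = number of rounds before node v outputs in one execution
rounds = [1, 1, 1, 1, 1, 1, 1, 64]        # one slow node, many fast ones

classical = max(rounds)                   # slowest-node time-complexity
node_averaged = sum(rounds) / len(rounds) # node-averaged time-complexity
print(classical, node_averaged)           # 64 vs 8.875

With many fast nodes and a single slow one, the node-averaged measure can be far smaller than the classical one; the paper shows the gap can even be exponential.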
In addition, in the LOCAL model, the time-complexity is computed as a worst
case over all possible identity assignments to the nodes of the network. In
this paper, we also investigate the ID-averaged time-complexity, when the
number of rounds is averaged over all possible identity assignments. Our second
main result is that the ID-averaged time-complexity is essentially the same as
the expected time-complexity of randomized algorithms (where the expectation is
taken over all possible random bits used by the nodes, and the number of rounds
is measured for the worst-case identity assignment).
Finally, we study the node-averaged ID-averaged time-complexity.
Comment: (Submitted) Journal version.