Efficient Large-scale Trace Checking Using MapReduce
The problem of checking a logged event trace against a temporal logic
specification arises in many practical cases. Unfortunately, known algorithms
for an expressive logic like MTL (Metric Temporal Logic) do not scale with
respect to two crucial dimensions: the length of the trace and the size of the
time interval for which logged events must be buffered to check satisfaction of
the specification. The former issue can be addressed by distributed and
parallel trace checking algorithms that can take advantage of modern cloud
computing and programming frameworks like MapReduce. Still, the latter issue
remains open with current state-of-the-art approaches.
In this paper we address this memory scalability issue by proposing a new
semantics for MTL, called lazy semantics, which can evaluate temporal formulae
and Boolean combinations of temporal-only formulae at arbitrary time instants.
We prove that lazy semantics is more expressive than standard
point-based semantics and that it can be used as a basis for a correct
parametric decomposition of any MTL formula into an equivalent one with
smaller, bounded time intervals. We use lazy semantics to extend our previous
distributed trace checking algorithm for MTL. We evaluate the proposed
algorithm in terms of memory scalability and time/memory tradeoffs.
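For orientation (our notation, not the paper's): over a timed trace w with timestamps \tau_i, a standard point-based semantics for the bounded "eventually" operator reads

\[ (w, i) \models \Diamond_{[a,b]}\,\varphi \;\iff\; \exists j \ge i .\; \tau_j - \tau_i \in [a, b] \,\wedge\, (w, j) \models \varphi. \]

If formulae can be evaluated at every time instant, a large interval can be split, e.g. \Diamond_{[0,b_1+b_2]}\varphi \equiv \Diamond_{[0,b_1]}\Diamond_{[0,b_2]}\varphi; under point-based semantics this equivalence can fail, since the intermediate instant need not coincide with a logged event. This is exactly the gap a semantics defined at arbitrary instants must close for interval decomposition to be sound.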
COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1
This report presents the activities of the first working group of the COST
Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide
an overview of some of the core aspects involved in Runtime Verification.
Runtime Verification is the field of research dedicated to the analysis of
system executions. It is often seen as a discipline that studies how a system
run satisfies or violates correctness properties. The report presents a taxonomy
of Runtime Verification (RV), introducing the terminology associated with the
main concepts of the field. The report also develops the concept of instrumentation,
the various ways to instrument systems, and the fundamental role of
instrumentation in designing an RV framework. We also discuss how RV interacts
with other verification techniques such as model-checking, deductive
verification, model learning, testing, and runtime assertion checking. Finally,
we propose challenges in monitoring quantitative and statistical data beyond
detecting property violations.
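As a concrete, minimal instance of the monitoring setting the report surveys (our own Python illustration; the Monitor class and its methods are hypothetical, not taken from the report): an online monitor consumes the event stream produced by instrumentation and reports whether the observed run satisfies a simple correctness property.

# Minimal online monitor: every "open" must be matched by a later "close".
class Monitor:
    def __init__(self):
        self.open_count = 0    # instrumented opens not yet closed
        self.violated = False

    def step(self, event):
        # step() would be invoked from instrumentation hooks in the system
        if event == "open":
            self.open_count += 1
        elif event == "close":
            if self.open_count == 0:
                self.violated = True   # close without a matching open
            else:
                self.open_count -= 1
        return not self.violated

    def end_of_trace(self):
        # at trace end, pending opens mean the property failed
        return not self.violated and self.open_count == 0

m = Monitor()
for e in ["open", "close", "open", "close"]:
    m.step(e)
print(m.end_of_trace())  # True: the property holds on this trace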
A Metric for Linear Temporal Logic
We propose a measure and a metric on the sets of infinite traces generated by
a set of atomic propositions. To compute these quantities, we first map
properties to subsets of the real numbers and then take the Lebesgue measure of
the resulting sets. We analyze how this measure is computed for Linear Temporal
Logic (LTL) formulas. An implementation for computing the measure of bounded
LTL properties is provided and explained. This implementation leverages SAT
model counting and performs independence checks on subexpressions to compute
the measure and metric compositionally.
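As a sketch of how such a measure can be computed in the bounded case (the notation n, k, and [\varphi]_k is ours, assumed rather than taken from the paper): a bounded LTL property over n atomic propositions and k steps unrolls into a propositional formula [\varphi]_k, each length-k trace is one assignment, and model counting yields

\[ \mu(\varphi) \;=\; \frac{\#\mathrm{SAT}([\varphi]_k)}{2^{nk}}, \qquad d(\varphi, \psi) \;=\; \mu\big((\varphi \wedge \neg\psi) \vee (\neg\varphi \wedge \psi)\big), \]

with the metric between two properties read as the measure of their symmetric difference.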
Streaming Video over HTTP with Consistent Quality
In conventional HTTP-based adaptive streaming (HAS), a video source is
encoded at multiple levels of constant bitrate representations, and a client
makes its representation selections according to the measured network
bandwidth. While greatly simplifying adaptation to the varying network
conditions, this strategy is not the best for optimizing the video quality
experienced by end users. Quality fluctuation can be reduced if the natural
variability of video content is taken into consideration. In this work, we
study the design of a client rate adaptation algorithm to yield consistent
video quality. We assume that clients have visibility into incoming video
within a finite horizon. We also take advantage of the client-side video
buffer, using it as breathing room for not only network bandwidth variability
but also video bitrate variability. The challenge, however, lies
in how to balance these two variabilities to yield consistent video quality
without risking a buffer underrun. We propose an optimization solution that
uses an online algorithm to adapt the video bitrate step-by-step, while
applying dynamic programming at each step. We incorporate our solution into
PANDA -- a practical rate adaptation algorithm designed for HAS deployment at
scale.
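The following minimal Python sketch illustrates the general shape of such a scheme, not PANDA itself: a finite lookahead over known future segment sizes is searched at each step (exhaustive search standing in for the dynamic program, for brevity), plans that risk buffer underrun are discarded, and only the first decision is committed before re-planning. All names and parameters are assumptions for illustration.

import itertools

SEG_DUR = 2.0  # seconds of playback per segment (assumed)

def next_quality(buffer_s, bandwidth, seg_bits, prev_q, horizon=3):
    # seg_bits[t][q]: size in bits of future segment t at quality level q
    n_q = len(seg_bits[0])
    best_cost, best_first = float("inf"), prev_q
    for seq in itertools.product(range(n_q), repeat=horizon):
        buf, q_prev, cost, feasible = buffer_s, prev_q, 0.0, True
        for t, q in enumerate(seq):
            buf -= seg_bits[t][q] / bandwidth   # downloading drains the buffer
            if buf <= 0:                        # underrun: discard this plan
                feasible = False
                break
            buf += SEG_DUR                      # a new segment refills it
            cost += (q - q_prev) ** 2 - 0.1 * q # penalize switches, reward quality
            q_prev = q
        if feasible and cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first   # commit only the first step, then re-plan

# usage: 3 lookahead segments, 3 quality levels, sizes in bits
sizes = [[2e6, 4e6, 8e6]] * 3
print(next_quality(buffer_s=10.0, bandwidth=3e6, seg_bits=sizes, prev_q=1))

The cost term trades quality switches against average quality, while the buffer constraint is what absorbs both bandwidth and bitrate variability.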
Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis
Even with impressive advances in automated formal methods, certain problems
in system verification and synthesis remain challenging. Examples include the
verification of quantitative properties of software involving constraints on
timing and energy consumption, and the automatic synthesis of systems from
specifications. The major challenges include environment modeling,
incompleteness in specifications, and the complexity of underlying decision
problems.
This position paper proposes sciduction, an approach to tackle these
challenges by integrating inductive inference, deductive reasoning, and
structure hypotheses. Deductive reasoning, which leads from general rules or
concepts to conclusions about specific problem instances, includes techniques
such as logical inference and constraint solving. Inductive inference, which
generalizes from specific instances to yield a concept, includes algorithmic
learning from examples. Structure hypotheses are used to define the class of
artifacts, such as invariants or program fragments, generated during
verification or synthesis. Sciduction constrains inductive and deductive
reasoning using structure hypotheses, and actively combines inductive and
deductive reasoning: for instance, deductive techniques generate examples for
learning, and inductive reasoning is used to guide the deductive engines.
We illustrate this approach with three applications: (i) timing analysis of
software; (ii) synthesis of loop-free programs; and (iii) controller synthesis
for hybrid systems. Some future applications are also discussed.
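The interplay described above has the shape of counterexample-guided loops; the Python sketch below is our own illustration under assumed interfaces (verify and synthesize are hypothetical stand-ins, not the paper's algorithms), where a deductive oracle supplies counterexamples and an inductive learner generalizes from them over a fixed candidate class, the structure hypothesis.

def verify(cand, spec, domain=range(-50, 51)):
    # stand-in "deductive" oracle: exhaustive check over a bounded domain
    for x in domain:
        if cand(x) != spec(x):
            return x          # counterexample found
    return None               # candidate correct on the whole domain

def synthesize(spec, candidates):
    # inductive side: keep only candidates consistent with the examples
    examples = []
    for cand in candidates:   # candidates = the structure hypothesis
        if all(cand(x) == spec(x) for x in examples):
            cex = verify(cand, spec)      # deductive step
            if cex is None:
                return cand               # verified
            examples.append(cex)          # counterexample feeds induction
    return None

spec = lambda x: abs(x)
candidates = [lambda x: x, lambda x: -x, lambda x: x if x >= 0 else -x]
print(synthesize(spec, candidates)(-7))   # 7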
A Semantic Hierarchy for Erasure Policies
We consider the problem of logical data erasure, contrasting with physical
erasure in the same way that end-to-end information flow control contrasts with
access control. We present a semantic hierarchy for erasure policies, using a
possibilistic knowledge-based semantics to define policy satisfaction such that
there is an intuitively clear upper bound on what information an erasure policy
permits to be retained. Our hierarchy allows a rich class of erasure policies
to be expressed, taking account of the power of the attacker, how much
information may be retained, and under what conditions it may be retained.
While our main aim is to specify erasure policies, the semantic framework
allows quite general information-flow policies to be formulated for a variety
of semantic notions of secrecy.
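In a possibilistic knowledge-based semantics of this kind (our sketch; S, obs, and K are assumed notation, not the paper's), the attacker's knowledge after observing the outputs o of a run is the set of secrets consistent with that observation,

\[ K(o) \;=\; \{\, s \in S \mid \mathrm{obs}(s) = o \,\}, \]

and an erasure policy is satisfied when, once its erasure condition holds, any two secrets the policy requires to be erased induce the same observations and so remain in the same knowledge set: the larger K(o), the less information is retained. This is the sense in which such a semantics yields an upper bound on permitted retention.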