Concurrent Kleene Algebra: Free Model and Completeness
Concurrent Kleene Algebra (CKA) was introduced by Hoare, Moeller, Struth and
Wehrman in 2009 as a framework to reason about concurrent programs. We prove
that the axioms for CKA with bounded parallelism are complete for the semantics
proposed in the original paper; consequently, these semantics are the free
model for this fragment. This result settles a conjecture of Hoare and
collaborators. Moreover, the techniques developed along the way are reusable;
in particular, they allow us to establish pomset automata as an operational
model for CKA.
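As a rough illustration of the kind of law CKA axiomatises, the sketch below checks the exchange law (a || b) ; (c || d) ≤ (a ; c) || (b ; d) on a toy interleaving-trace interpretation. This semantics is an assumption made purely for illustration; the paper's actual free model is based on pomsets, not traces.

```python
from itertools import product

# A term is an action name (str), ("seq", l, r), or ("par", l, r).
# Simplifying assumption: terms are interpreted by their interleaving
# traces, not by the pomset semantics the paper actually uses.

def shuffle(a, b):
    """All interleavings of the tuples a and b."""
    if not a:
        return {b}
    if not b:
        return {a}
    return {(a[0],) + s for s in shuffle(a[1:], b)} | \
           {(b[0],) + s for s in shuffle(a, b[1:])}

def traces(t):
    if isinstance(t, str):
        return {(t,)}
    op, l, r = t
    if op == "seq":
        return {x + y for x in traces(l) for y in traces(r)}
    # "par": every interleaving of a left trace with a right trace
    return {s for x, y in product(traces(l), traces(r)) for s in shuffle(x, y)}

# The exchange law (a || b) ; (c || d) <= (a ; c) || (b ; d): every
# behaviour of the left-hand side is a behaviour of the right-hand side.
lhs = ("seq", ("par", "a", "b"), ("par", "c", "d"))
rhs = ("par", ("seq", "a", "c"), ("seq", "b", "d"))
print(traces(lhs) <= traces(rhs))   # True
```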
Quantitative Modeling and Verification of Evolving Software
Software plays an innovative role in many different domains, such as the car industry, autonomous
and smart systems, and communication. Hence, the quality of the software
is of utmost importance and needs to be properly addressed during software evolution.
Several approaches have been developed to evaluate systems’ quality attributes, such
as reliability, safety, and performance. Due to the dynamic nature of modern software systems, the probabilistic models that represent software quality change over time: their transition probabilities fluctuate, which must be accounted for to obtain correct evaluations of quantitative properties. Probabilistic models therefore need to be continually updated at run-time to
solve this issue. However, continuous re-evaluation of complex probabilistic models is
expensive. Recently, incremental approaches have been found to be promising for the
verification of evolving and self-adaptive systems. Nevertheless, substantial improvements
have not yet been achieved for evaluating structural changes in the model.
Probabilistic systems, such as Markov models, are usually represented in matrix form to solve the equations based on states and transition probabilities. Evolutionary changes, on the other hand, can affect these models in various ways and force re-verification of the whole system. Run-time models, such as matrices or graph representations, lack the expressiveness to identify the effect of a change on the model.
In this thesis, we develop a framework using stochastic regular expression trees,
which are modular, with action-based probabilistic logic in the model checking context.
Such a modular framework enables us to develop change operations for the incremental
computation of local changes that can occur in the model. Furthermore, we describe
probabilistic change patterns to apply efficient incremental quantitative verification using
stochastic regular expression trees, and evaluate our results.
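The abstract's idea that stochastic regular expression trees make local changes cheap to re-evaluate can be sketched as follows. The class names and the distribution-based semantics are assumptions for illustration, not the thesis' actual framework.

```python
# A minimal sketch (names and structure are assumptions, not the thesis'
# API): stochastic regular expression trees with probabilistic choice.
# Each node computes the probability mass of the strings it generates,
# so a structural change is local to one subtree.

class Act:
    def __init__(self, a): self.a = a
    def dist(self):  # distribution over generated strings
        return {self.a: 1.0}

class Seq:
    def __init__(self, l, r): self.l, self.r = l, r
    def dist(self):
        return {x + y: p * q for x, p in self.l.dist().items()
                             for y, q in self.r.dist().items()}

class Choice:
    def __init__(self, p, l, r): self.p, self.l, self.r = p, l, r
    def dist(self):
        d = {}
        for x, q in self.l.dist().items():
            d[x] = d.get(x, 0.0) + self.p * q
        for x, q in self.r.dist().items():
            d[x] = d.get(x, 0.0) + (1 - self.p) * q
        return d

# P(system emits "ab") for the expression a ; (b +0.7 c)
expr = Seq(Act("a"), Choice(0.7, Act("b"), Act("c")))
print(expr.dist()["ab"])   # 0.7
# A change is local: updating the choice probability touches only this
# node; an incremental scheme would re-evaluate just its ancestors.
expr.r.p = 0.4
print(expr.dist()["ab"])   # 0.4
```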
Search-Based Regular Expression Inference on a GPU
Regular expression inference (REI) is a supervised machine learning and
program synthesis problem that takes a cost metric for regular expressions, and
positive and negative examples of strings as input. It outputs a regular
expression that is precise (i.e., accepts all positive and rejects all negative
examples), and minimal with respect to the cost metric. We present a novel algorithm
for REI over arbitrary alphabets that is enumerative and trades off time for
space. Our main algorithmic idea is to implement the search space of regular
expressions succinctly as a contiguous matrix of bitvectors. Collectively, the
bitvectors represent, as characteristic sequences, all sub-languages of the
infix-closure of the union of positive and negative examples. Mathematically,
this is a semiring of (a variant of) formal power series. Infix-closure enables
bottom-up compositional construction of larger from smaller regular expressions
using the operations of our semiring. This minimises data movement and
data-dependent branching, hence maximises data-parallelism. In addition, the
infix-closure remains unchanged during the search, hence search can be staged:
first pre-compute various expensive operations, and then run the
compute-intensive search process. We provide two C++ implementations, one for general
purpose CPUs and one for Nvidia GPUs (using CUDA). We benchmark both on Google
Colab Pro: the GPU implementation is on average over 1000x faster than the CPU
implementation on the hardest benchmarks.
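The bitvector idea can be sketched roughly like this (a much-simplified, CPU-only illustration; the helper names are assumptions, and the real implementation stages precomputed operation tables for the GPU search):

```python
# Sketch (assumed simplification of the paper's idea): the infix-closure
# of the examples fixes a finite universe of strings, and every candidate
# sub-language is a characteristic bitvector over that universe.

def infix_closure(strings):
    infixes = set()
    for s in strings:
        for i in range(len(s) + 1):
            for j in range(i, len(s) + 1):
                infixes.add(s[i:j])   # includes the empty string
    return sorted(infixes, key=lambda w: (len(w), w))

pos, neg = ["ab", "aab"], ["ba"]
universe = infix_closure(pos + neg)
index = {w: i for i, w in enumerate(universe)}

def bits(lang):
    """Characteristic bitvector of a sub-language of the universe."""
    v = 0
    for w in lang:
        v |= 1 << index[w]
    return v

# Union of languages is bitwise OR on their bitvectors.
assert bits({"a"}) | bits({"ab"}) == bits({"a", "ab"})

# A candidate is precise iff it covers all positives and no negatives.
target_pos, target_neg = bits(set(pos)), bits(set(neg))
candidate = bits({"ab", "aab"})
precise = (candidate & target_pos) == target_pos and (candidate & target_neg) == 0
print(precise)   # True
```

Because the universe is fixed before the search, set operations on candidate languages reduce to branch-free bitwise operations, which is what makes the representation GPU-friendly.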
ZStream: A cost-based query processor for adaptively detecting composite events
Composite (or Complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. Recently, they have gained more attention in a variety of areas due to their powerful and expressive query language and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP systems relate events to each other. In this paper, we present a CEP system called ZStream to efficiently process such sequential patterns. Besides simple sequential patterns, ZStream is also able to detect other patterns, including conjunction, disjunction, negation and Kleene closure.
Unlike most recently proposed CEP systems, which use non-deterministic finite automata (NFAs) to detect patterns, ZStream uses tree-based query plans for both the logical and physical representation of query patterns. By carefully designing the underlying infrastructure and algorithms, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene closure as variants of the join operator. Under this framework, a single pattern in ZStream may have several equivalent physical tree plans, with different evaluation costs. We propose a cost model to estimate the computation costs of a plan. We show that our cost model can accurately capture the actual runtime behavior of a plan, and that choosing the optimal plan can result in a factor of four or more speedup versus an NFA-based approach. Based on this cost model and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adaptively and seamlessly adjust the order in which it detects patterns on the fly. Finally, we describe a dynamic programming algorithm used in our cost model to efficiently search for an optimal query plan for a given pattern.
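The cost-based plan selection described above can be sketched, matrix-chain style, with dynamic programming over assumed arrival rates and a uniform selectivity (both hypothetical; ZStream's actual cost model is richer):

```python
import functools

rates = [100.0, 10.0, 50.0, 5.0]   # assumed arrival rates for events A, B, C, D
sel = 0.1                           # assumed uniform join selectivity

@functools.lru_cache(maxsize=None)
def best(i, j):
    """(cost, output_rate) of the cheapest tree plan covering events i..j."""
    if i == j:
        return (0.0, rates[i])
    options = []
    for k in range(i, j):               # try every split into two subtrees
        cl, rl = best(i, k)
        cr, rr = best(k + 1, j)
        options.append((cl + cr + rl * rr,   # join work ~ pairs examined
                        sel * rl * rr))      # matches flowing to the parent
    return min(options)

cost, rate = best(0, len(rates) - 1)
print(cost, rate)   # approximately 3000.0 and 250.0
```

The memoised recursion mirrors the idea that the optimal tree over a subsequence of events is independent of how the surrounding events are joined, which is what makes a dynamic programming search over plans feasible.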
Heap Abstractions for Static Analysis
Heap data is potentially unbounded and seemingly arbitrary. As a consequence,
unlike stack and static memory, heap memory cannot be abstracted directly in
terms of a fixed set of source variable names appearing in the program being
analysed. This makes it an interesting topic of study and there is an abundance
of literature employing heap abstractions. Although most studies have addressed
similar concerns, their formulations and formalisms often seem dissimilar and
sometimes even unrelated. Thus, the insights gained in one description of heap
abstraction may not directly carry over to some other description. This survey
is a result of our quest for a unifying theme in the existing descriptions of
heap abstractions. In particular, our interest lies in the abstractions and not
in the algorithms that construct them.
In our search for a unifying theme, we view a heap abstraction as consisting of
two features: a heap model to represent the heap memory and a summarization
technique for bounding the heap representation. We classify the models as
storeless, store-based, and hybrid. We describe various summarization
techniques based on k-limiting, allocation sites, patterns, variables, other
generic instrumentation predicates, and higher-order logics. This approach
allows us to compare the insights of a large number of seemingly dissimilar
heap abstractions and also paves the way for creating new abstractions by
mix-and-match of models and summarization techniques.
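As a toy illustration of one summarization technique the survey classifies — allocation-site-based summarization — the following sketch bounds an unbounded heap by one summary node per allocation site. The mini-language and helper names are assumptions, not taken from the survey.

```python
# Allocation-site summarization (illustrative sketch): all objects
# allocated at the same program point collapse into one summary node,
# so the heap model stays finite however many objects are created.

points_to = {}  # variable -> set of allocation sites
heap = {}       # (site, field) -> set of allocation sites

def alloc(var, site):           # var = new ... at program point `site`
    points_to[var] = {site}

def store(var, field, src):     # var.field = src
    for site in points_to[var]:
        heap.setdefault((site, field), set()).update(points_to[src])

def load(dst, var, field):      # dst = var.field
    targets = set()
    for site in points_to[var]:
        targets |= heap.get((site, field), set())
    points_to[dst] = targets

# A loop allocating many list cells yields a bounded abstraction:
alloc("head", "L1")
for _ in range(1000):           # x = new at L2; x.next = x; head.next = x
    alloc("x", "L2")
    store("x", "next", "x")     # the summary node may point to itself
    store("head", "next", "x")
load("y", "head", "next")
print(points_to["y"])   # {'L2'} — unboundedly many cells, one summary node
```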
Entwurf funktionaler Implementierungen von Graphalgorithmen
Classic graph algorithms are usually presented and analysed in imperative programming languages. Imperative languages are well suited to describing a program flow in which the order of the operations matters; one common example of such a description is the successive, typically destructive modification of objects, a kind of iteration that often occurs in graph algorithms that deal with some form of optimisation. In functional programming, the order of execution is abstracted away and problem solutions are described as compositions of intermediate solutions. Additionally, functional programming languages are referentially transparent, and thus destructive updates of objects are discouraged.

The development of purely functional graph algorithms begins with the decomposition of a given problem into simpler problems. In many cases the solutions of these partial problems can be used to solve different problems as well. What is more, this compositionality allows exchanging functions for more efficient or more comprehensible versions with little effort.

An algebraic approach with a focus on relation algebra as defined by Tarski is used as an intermediate step in this dissertation. One advantage of this approach is the formality of the resulting specifications. Despite their formality, the resulting expressions are still readable, because the algebraic operations have intuitive interpretations. Another advantage is that a specification is executable once the necessary operations are implemented.

This dissertation presents the basics of the algebraic approach in the functional programming language Haskell. Using this foundation, some exemplary graph-theoretic problems are solved in the presented framework. Finally, optimisations of the presented implementations are discussed and pointers are provided to further problems
that can be solved using the above methods.
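The executable-specification idea can be sketched outside Haskell as well. Below is a minimal illustration in Python (an assumption for illustration, not the dissertation's code) of relation-algebraic reachability: relations as sets of pairs, with composition and a reflexive-transitive closure.

```python
# Relations as sets of pairs; the graph-theoretic notion of reachability
# becomes an executable algebraic specification.

def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def closure(r, verts):
    """Reflexive-transitive closure of r over the vertex set, by fixpoint."""
    result = {(v, v) for v in verts}       # identity: the reflexive part
    while True:
        step = result | compose(result, r)
        if step == result:
            return result
        result = step

edges = {(1, 2), (2, 3), (3, 1), (3, 4)}
reach = closure(edges, {1, 2, 3, 4})
print((1, 4) in reach, (4, 1) in reach)   # True False
```

Once `compose` and `closure` are implemented, the specification itself runs, which mirrors the executability advantage the abstract attributes to the relation-algebraic approach.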