An Automata-Theoretic Approach to the Verification of Distributed Algorithms
We introduce an automata-theoretic method for the verification of distributed
algorithms running on ring networks. In a distributed algorithm, an arbitrary
number of processes cooperate to achieve a common goal (e.g., elect a leader).
Processes have unique identifiers (pids) from an infinite, totally ordered
domain. An algorithm proceeds in synchronous rounds, each round allowing a
process to perform a bounded sequence of actions such as sending or receiving a
pid, storing it in some register, or comparing register contents with respect
to the associated
total order. An algorithm is supposed to be correct independently of the number
of processes. To specify correctness properties, we introduce a logic that can
reason about processes and pids. Referring to leader election, it may say that,
at the end of an execution, each process stores the maximum pid in some
dedicated register. Since the verification of distributed algorithms is
undecidable, we propose an underapproximation technique, which bounds the
number of rounds. This is an appealing approach, as the number of rounds needed
by a distributed algorithm to conclude is often exponentially smaller than the
number of processes. We provide an automata-theoretic solution, reducing model
checking to emptiness for alternating two-way automata on words. Overall, we
show that round-bounded verification of distributed algorithms over rings is
PSPACE-complete.
Comment: 26 pages, 6 figures
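To make the round-by-round model concrete, here is a minimal Python sketch of the max-propagation pattern that underlies leader election on a ring; it is purely illustrative and is not the paper's automata-theoretic construction (all names are ours).

```python
# Illustrative sketch of synchronous-round leader election on a ring:
# each process keeps, in a dedicated register, the largest pid seen so
# far and forwards it to its neighbour each round. This is a toy model,
# not the paper's automata-theoretic construction.

def ring_leader_election(pids, rounds=None):
    """Run synchronous max-propagation on a unidirectional ring."""
    n = len(pids)
    rounds = n if rounds is None else rounds  # n rounds suffice on a ring of size n
    reg = list(pids)                          # one register per process
    for _ in range(rounds):
        # Each process simultaneously receives from its left neighbour and
        # keeps the larger of the received pid and its own register content.
        reg = [max(reg[i], reg[(i - 1) % n]) for i in range(n)]
    return reg

pids = [7, 3, 42, 19, 5]
print(ring_leader_election(pids))  # every register ends up holding 42
```

In this toy setting the correctness property from the abstract, that every process eventually stores the maximum pid, is exactly the claim that every entry of the returned list equals max(pids).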
Certifying cost annotations in compilers
We discuss the problem of building a compiler which can lift, in a provably
correct way, information on the execution cost of the object code to cost
annotations on the source code. To this end, we need a clear and flexible
picture of: (i) the meaning of cost annotations, (ii) the method to prove them
sound and precise, and (iii) the way such proofs can be composed. We propose a
so-called labelling approach to these three questions. As a first step, we
examine its application to a toy compiler. This formal study suggests that the
labelling approach has good compositionality and scalability properties. In
order to provide further evidence for this claim, we report our successful
experience in implementing and testing the labelling approach on top of a
prototype compiler written in OCaml for (a large fragment of) the C language.
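As a toy illustration of the labelling idea (hypothetical names, not the paper's compiler): labels attached to source statements survive compilation on the emitted instructions, so per-label object-code costs can be summed and lifted back onto the source.

```python
# Toy illustration of the labelling approach (hypothetical names, not the
# paper's compiler): source statements carry labels, compilation keeps each
# emitted instruction tagged with its source label, and per-label object-code
# costs are summed and lifted back onto the source as annotations.

SOURCE = [("l1", "x = a + b"), ("l2", "y = x * x"), ("l3", "return y")]

# Pretend compilation output: labelled instructions with known cycle costs.
OBJECT = [("l1", "load a", 1), ("l1", "add b", 1), ("l1", "store x", 1),
          ("l2", "load x", 1), ("l2", "mul x", 3), ("l2", "store y", 1),
          ("l3", "ret", 2)]

def lift_costs(obj):
    """Sum object-code costs per label; these become source annotations."""
    costs = {}
    for label, _instr, cost in obj:
        costs[label] = costs.get(label, 0) + cost
    return costs

costs = lift_costs(OBJECT)
for label, stmt in SOURCE:
    print(f"/* cost = {costs[label]} cycles */ {stmt}")
```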
Description and Optimization of Abstract Machines in a Dialect of Prolog
In order to achieve competitive performance, abstract machines for Prolog and
related languages end up being large and intricate, and incorporate
sophisticated optimizations, both at the design and at the implementation
levels. At the same time, efficiency considerations make it necessary to use
low-level languages in their implementation. This makes them laborious to code,
optimize, and, especially, maintain and extend. Writing the abstract machine
(and ancillary code) in a higher-level language can help tame this inherent
complexity. We show how the semantics of most basic components of an efficient
virtual machine for Prolog can be described using (a variant of) Prolog. These
descriptions are then compiled to C and assembled to build a complete bytecode
emulator. Thanks to the high level of the language used and its closeness to
Prolog, the abstract machine description can be manipulated using standard
Prolog compilation and optimization techniques with relative ease. We also show
how, by applying program transformations selectively, we obtain abstract
machine implementations whose performance can match and even exceed that of
state-of-the-art, highly-tuned, hand-crafted emulators.
Comment: 56 pages, 46 figures, 5 tables, to appear in Theory and Practice of Logic Programming (TPLP)
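For flavour, the sketch below shows a bytecode-emulator dispatch loop for an invented three-instruction set, written in Python; the paper instead describes such per-instruction semantics in a Prolog dialect and compiles them to C, so this is only an analogy.

```python
# Minimal bytecode-emulator sketch (invented instruction set, illustrative
# only): each opcode's semantics is a small handler in the dispatch loop,
# mirroring the kind of per-instruction description that the paper writes
# in a Prolog dialect and compiles down to C.

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op, *args = bytecode[pc]
        if op == "push":      # push an immediate operand
            stack.append(args[0])
        elif op == "add":     # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "halt":    # stop and return the top of the stack
            return stack.pop()
        pc += 1

program = [("push", 2), ("push", 3), ("add",), ("halt",)]
print(run(program))  # 5
```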
A study of systems implementation languages for the POCCNET system
This paper presents the results of a study of systems implementation languages for the Payload Operations Control Center Network (POCCNET). Criteria for evaluating the languages are developed, and fifteen existing languages are evaluated against these criteria.
Energy-Efficient Algorithms
We initiate the systematic study of the energy complexity of algorithms (in
addition to time and space complexity) based on Landauer's Principle in
physics, which gives a lower bound on the amount of energy a system must
dissipate if it destroys information. We propose energy-aware variations of
three standard models of computation: circuit RAM, word RAM, and
transdichotomous RAM. On top of these models, we build familiar high-level
primitives such as control logic, memory allocation, and garbage collection
with zero energy complexity and only constant-factor overheads in space and
time complexity, enabling simple expression of energy-efficient algorithms. We
analyze several classic algorithms in our models and develop low-energy
variations: comparison sort, insertion sort, counting sort, breadth-first
search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL
trees, binary heaps, and dynamic arrays. We explore the time/space/energy
trade-off and develop several general techniques for analyzing algorithms and
reducing their energy complexity. These results lay a theoretical foundation
for a new field of semi-reversible computing and provide a new framework for
the investigation of algorithms.
Comment: 40 pages, 8 PDF figures, full version of work published in ITCS 2016
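Landauer's Principle is quantitative: irreversibly erasing one bit at temperature T dissipates at least k_B T ln 2 joules. A back-of-envelope sketch (standard physics; the code is ours, not the paper's):

```python
# Back-of-envelope Landauer bound: erasing n bits at temperature T
# dissipates at least n * k_B * T * ln(2) joules. Destructive writes
# are what an energy-complexity analysis has to charge for.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_energy(bits_erased, temperature_kelvin=300.0):
    """Minimum dissipation for irreversibly erasing `bits_erased` bits."""
    return bits_erased * K_B * temperature_kelvin * math.log(2)

# Erasing 1 GiB of information at room temperature:
print(landauer_energy(8 * 2**30))  # ~2.5e-11 J: tiny, but a hard lower bound
```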
A Complexity Preserving Transformation from Jinja Bytecode to Rewrite Systems
We revisit known transformations from Jinja bytecode to rewrite systems from
the viewpoint of runtime complexity. Suitably generalising the constructions
proposed in the literature, we define an alternative representation of Jinja
bytecode (JBC) executions as "computation graphs" from which we obtain a novel
representation of JBC executions as "constrained rewrite systems". We prove
that the transformation preserves non-termination and complexity. We restrict
to well-formed JBC programs that only make use of non-recursive methods and
expect tree-shaped objects as input. Our approach allows for simplified
correctness proofs and provides a framework for the combination of the
computation graph method with standard techniques from static program analysis,
such as "reachability analysis".
Comment: 36 pages
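As a toy picture of what representing an execution as a rewrite sequence means (invented rules, far simpler than the paper's constrained rewrite systems derived from computation graphs), where runtime complexity corresponds to the number of rewrite steps:

```python
# Toy term-rewriting sketch (invented rules, much simpler than the paper's
# constrained rewrite systems): a program state is a term, each rule rewrites
# one state to the next, and runtime complexity is the number of rewrite steps.

RULES = {
    # count(s(N), Acc) -> count(N, s(Acc)); count(zero, Acc) -> done(Acc)
    "count": lambda n, acc: ("count", n[1], ("s", acc)) if n[0] == "s"
             else ("done", acc),
}

def rewrite_to_normal_form(term):
    steps = 0
    while term[0] in RULES:       # rewrite until no rule applies
        term = RULES[term[0]](*term[1:])
        steps += 1
    return term, steps

def num(k):  # build the Peano numeral s(s(...s(zero)...)) with k successors
    t = ("zero",)
    for _ in range(k):
        t = ("s", t)
    return t

result, steps = rewrite_to_normal_form(("count", num(3), ("zero",)))
print(result, steps)  # ('done', s(s(s(zero)))) reached in 4 steps
```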
Fault tolerance in super-scalar and VLIW processors
In this paper, we present a method for utilizing the spare capacity in super-scalar and very long instruction word (VLIW) processors to tolerate functional unit failures. Unlike previous work, which was primarily interested in the detection of transient faults, we are concerned with more permanent and/or intermittent faults that necessitate processor reconfiguration. Our method utilizes the VLIW compiler or the super-scalar scheduler to insert redundant operations whenever idle functional units exist. The results of these redundant operations are used to detect and diagnose functional unit failures. For super-scalar processors, the scheduler can then utilize this information to ensure that operations are performed only on non-faulty units. In VLIW processors, this is equivalent to recompiling the code to run on the remaining non-faulty functional units. Since recompilation may not be possible in certain applications, we consider two alternative reconfiguration strategies for VLIW processors; these strategies sacrifice storage space and execution time, respectively, in order to reconfigure without recompiling. We present Markov models that describe the behavior of processors using these different approaches, and we evaluate their reliabilities. The results show that, while super-scalar and VLIW with recompilation provide the highest reliability, all proposed strategies significantly increase reliability over that of an unprotected processor.
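A hypothetical sketch of the core detection idea, re-issuing an operation on an otherwise idle functional unit and comparing the two results; the unit names and fault model are invented for illustration:

```python
# Illustrative sketch (hypothetical model, not the paper's scheduler): when a
# functional unit would otherwise be idle in a cycle, re-issue another unit's
# operation on it and compare the two results. A mismatch reveals that one of
# the two units is faulty (a third result would be needed to diagnose which).

import operator

def healthy(op, a, b):
    return op(a, b)

def stuck_at_zero(op, a, b):
    return 0  # crude permanent fault: the unit always outputs 0

def issue_with_redundancy(units, primary, spare, op, a, b):
    """Run `op` on both units; raise on disagreement."""
    r1, r2 = units[primary](op, a, b), units[spare](op, a, b)
    if r1 != r2:
        raise RuntimeError(f"mismatch between units {primary} and {spare}")
    return r1

units = {0: healthy, 1: healthy, 2: stuck_at_zero}
print(issue_with_redundancy(units, 0, 1, operator.add, 2, 3))  # 5
try:
    issue_with_redundancy(units, 0, 2, operator.add, 2, 3)
except RuntimeError as e:
    print(e)  # mismatch between units 0 and 2
```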