Synchrony vs. Causality in Asynchronous Petri Nets
Given a synchronous system, we study the question whether the behaviour of
that system can be exhibited by a (non-trivially) distributed and hence
asynchronous implementation. In this paper we show, by counterexample, that
synchronous systems cannot in general be implemented in an asynchronous fashion
without either introducing an infinite implementation or changing the causal
structure of the system behaviour. Comment: In Proceedings EXPRESS 2011, arXiv:1108.407
Synchrony versus causality in distributed systems
This publication is freely accessible with the permission of the rights owner, due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation), respectively. Given a synchronous system, we study the question whether – or, under which conditions – the behaviour of that system can be realized by a (non-trivially) distributed and hence asynchronous implementation. In this paper, we partially answer this question by examining the role of causality for the implementation of synchrony in two fundamentally different formalisms of concurrency, Petri nets and the π-calculus. For both formalisms it turns out that each ‘good’ encoding of synchronous interactions using just asynchronous interactions introduces causal dependencies in the translation.
Synchrony vs Causality in the Asynchronous Pi-Calculus
We study the relation between process calculi that differ in their either
synchronous or asynchronous interaction mechanism. Concretely, we are
interested in the conditions under which synchronous interaction can be
implemented using just asynchronous interactions in the pi-calculus. We assume
a number of minimal conditions referring to the work of Gorla: a "good"
encoding must be compositional and preserve and reflect computations,
deadlocks, divergence, and success. Under these conditions, we show that it is
not possible to encode synchronous interactions without introducing additional
causal dependencies in the translation. Comment: In Proceedings EXPRESS 2011, arXiv:1108.407
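The impossibility result above concerns encoding synchronous rendezvous using only asynchronous message passing. A minimal sketch of the general idea (not from the paper itself): emulating one synchronous send with a data message plus an acknowledgement, where waiting for the acknowledgement is precisely the kind of additional causal dependency the result identifies. All names (`data_ch`, `ack_ch`, `sender`, `receiver`) are illustrative assumptions.

```python
import queue
import threading

# A synchronous (rendezvous) send emulated over asynchronous channels:
# the payload goes one way, an acknowledgement comes back, and the
# sender blocks on the ack. That blocking wait is the extra causal
# dependency introduced by the encoding.

data_ch = queue.Queue()  # asynchronous channel carrying the payload
ack_ch = queue.Queue()   # asynchronous channel carrying the ack

log = []

def sender():
    data_ch.put("hello")       # asynchronous output of the payload
    ack_ch.get()               # block until the receiver acknowledges
    log.append("sender done")  # only reachable after the ack arrives

def receiver():
    msg = data_ch.get()        # consume the payload
    log.append(f"received {msg}")
    ack_ch.put(None)           # acknowledge, releasing the sender

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(log)  # "received hello" always precedes "sender done"
```

The order of the two log entries is deterministic despite the concurrency: the sender cannot proceed before the receiver has acted, which is exactly the causal structure a truly synchronous interaction would not impose on independent continuations.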
An Operational Petri Net Semantics for the Join-Calculus
We present a concurrent operational Petri net semantics for the
join-calculus, a process calculus for specifying concurrent and distributed
systems. There is often a gap between system specifications and actual implementations, caused by synchrony assumptions on the specification side and asynchronously interacting components on the implementation side. The join-calculus is
promising to reduce this gap by providing an abstract specification language
which is asynchronously distributable. Classical process semantics impose an implicit order on actually independent actions by means of interleaving, and so does the semantics of the join-calculus. To capture such independent actions,
step-based semantics, e.g., as defined on Petri nets, are employed. Our Petri
net semantics for the join-calculus induces step-behavior in a natural way. We
prove our semantics behaviorally equivalent to the original join-calculus
semantics by means of a bisimulation. We discuss how join specific assumptions
influence an existing notion of distributability based on Petri nets. Comment: In Proceedings EXPRESS/SOS 2012, arXiv:1208.244
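To illustrate the step semantics the abstract refers to, here is a minimal, assumed toy Petri net (not one from the paper) in which two transitions with disjoint input places fire together in a single step: the true-concurrency behaviour that an interleaving semantics would serialize.

```python
# Minimal sketch of Petri-net step firing. A step (a set of
# transitions) is enabled if the marking covers the combined input
# demand of all transitions in the step; firing consumes input
# tokens and produces output tokens for every transition at once.

def enabled(marking, transitions, step):
    """Check whether the whole step can fire from `marking`."""
    demand = {}
    for t in step:
        for p in transitions[t]["in"]:
            demand[p] = demand.get(p, 0) + 1
    return all(marking.get(p, 0) >= n for p, n in demand.items())

def fire(marking, transitions, step):
    """Fire every transition in `step` simultaneously."""
    m = dict(marking)
    for t in step:
        for p in transitions[t]["in"]:
            m[p] -= 1
        for p in transitions[t]["out"]:
            m[p] = m.get(p, 0) + 1
    return m

# Two independent transitions: t1 and t2 share no places.
net = {
    "t1": {"in": ["p1"], "out": ["p3"]},
    "t2": {"in": ["p2"], "out": ["p4"]},
}
m0 = {"p1": 1, "p2": 1}

assert enabled(m0, net, {"t1", "t2"})  # both may fire in one step
print(fire(m0, net, {"t1", "t2"}))     # tokens move to p3 and p4
```

An interleaving semantics would only report the sequences t1;t2 and t2;t1, losing the information that the two firings are causally independent.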
08371 Abstracts Collection -- Fault-Tolerant Distributed Algorithms on VLSI Chips
In September 2008, the Dagstuhl Seminar 08371 ``Fault-Tolerant
Distributed Algorithms on VLSI Chips'' was held in Schloss
Dagstuhl – Leibniz Center for Informatics. The seminar was devoted to
exploring whether the wealth of existing fault-tolerant distributed
algorithms research can be utilized to meet the challenges of
future-generation VLSI chips. During the seminar, several participants
from both the VLSI and distributed algorithms disciplines presented
their current research, and ongoing work and possibilities for
collaboration were discussed. Abstracts of the presentations given
during the seminar, as well as abstracts of seminar results and ideas,
are put together in this paper. The first section describes the
seminar topics and goals in general. Links to extended abstracts or
full papers are provided, if available.
Modelling Concurrency with Comtraces and Generalized Comtraces
Comtraces (combined traces) are extensions of Mazurkiewicz traces that can
model the "not later than" relationship. In this paper, we first introduce the
novel notion of generalized comtraces, extensions of comtraces that can
additionally model the "non-simultaneously" relationship. Then we study some
basic algebraic properties and canonical representations of comtraces and
generalized comtraces. Finally we analyze the relationship between generalized
comtraces and generalized stratified order structures. The major technical
contribution of this paper is a proof showing that generalized comtraces can be
represented by generalized stratified order structures. Comment: 49 pages
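Since comtraces extend Mazurkiewicz traces, a minimal sketch of the base formalism may help: the Mazurkiewicz trace of a word is its equivalence class under swapping adjacent independent letters. The independence relation below is an assumed toy example, not one from the paper.

```python
from collections import deque

def trace_class(word, independent):
    """All words reachable from `word` by repeatedly swapping
    adjacent independent letters, i.e. the Mazurkiewicz trace
    of `word` (computed as a breadth-first closure)."""
    seen = {word}
    todo = deque([word])
    while todo:
        w = todo.popleft()
        for i in range(len(w) - 1):
            a, b = w[i], w[i + 1]
            if (a, b) in independent or (b, a) in independent:
                swapped = w[:i] + b + a + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    todo.append(swapped)
    return seen

# Toy alphabet: a and c are independent (may commute);
# b depends on both, so it blocks any reordering across it.
ind = {("a", "c")}
print(sorted(trace_class("abc", ind)))  # ['abc']  (b blocks the swap)
print(sorted(trace_class("acb", ind)))  # ['acb', 'cab']
```

Comtraces refine this picture by additionally distinguishing "not later than" (simultaneous or earlier) from plain independence, which plain swapping of adjacent letters cannot express.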
Desynchronization: Synthesis of asynchronous circuits from synchronous specifications
Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm for automating the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for desynchronization and formally prove their correctness, using techniques originally developed for distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micro-pipelines. We then propose a new controller which exhibits provably maximal concurrency, and analyze the performance of desynchronized circuits with respect to the original synchronous optimized implementation. We finally prove the feasibility and effectiveness of our approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
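The four-phase (return-to-zero) handshake mentioned in the taxonomy alternates four signal events per data transfer. A minimal sketch, with generic signal names rather than any specific latch controller from the paper:

```python
# Four-phase handshake: each transfer is req+, ack+, req-, ack-
# in strict alternation, returning both wires to zero before the
# next transfer can begin. Event names are generic illustrations.

def four_phase(transfers):
    """Generate the event sequence for `transfers` handshakes."""
    events = []
    req = ack = 0
    for _ in range(transfers):
        req = 1; events.append("req+")  # sender raises request
        ack = 1; events.append("ack+")  # receiver latches data, acks
        req = 0; events.append("req-")  # sender returns to zero
        ack = 0; events.append("ack-")  # receiver returns to zero
    assert req == 0 and ack == 0        # lines idle between transfers
    return events

print(four_phase(2))
# ['req+', 'ack+', 'req-', 'ack-', 'req+', 'ack+', 'req-', 'ack-']
```

The return-to-zero phase (req-, ack-) carries no data, which is why latch-controller designs in the literature try to overlap it with the next transfer; the controller with provably maximal concurrency proposed in the paper addresses exactly this kind of overlap.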