On the Expressive Power of 2-Stack Visibly Pushdown Automata
Visibly pushdown automata are input-driven pushdown automata that recognize
some non-regular context-free languages while preserving the nice closure and
decidability properties of finite automata. Visibly pushdown automata with
multiple stacks have been considered recently by La Torre, Madhusudan, and
Parlato, who exploit the concept of visibility further to obtain a rich
automata class that can even express properties beyond the class of
context-free languages. At the same time, their automata are closed under
boolean operations, have a decidable emptiness and inclusion problem, and enjoy
a logical characterization in terms of a monadic second-order logic over words
with an additional nesting structure. These results require a restricted
version of visibly pushdown automata with multiple stacks whose behavior can be
split up into a fixed number of phases. In this paper, we consider 2-stack
visibly pushdown automata (i.e., visibly pushdown automata with two stacks) in
their unrestricted form. We show that they are expressively equivalent to the
existential fragment of monadic second-order logic. Furthermore, it turns out
that monadic second-order quantifier alternation forms an infinite hierarchy
wrt words with multiple nestings. Combining these results, we conclude that
2-stack visibly pushdown automata are not closed under complementation.
Finally, we discuss the expressive power of Büchi 2-stack visibly pushdown automata running on infinite (nested) words. Extending the logic by an infinity quantifier, we can likewise establish equivalence to existential monadic second-order logic.
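For orientation, the sketch below illustrates the single-stack case of the input-driven discipline mentioned above: the partition of the alphabet into call, return, and internal symbols by itself determines whether the automaton pushes, pops, or leaves the stack untouched. The concrete alphabet and the well-nesting language are invented for the illustration, and control states and stack symbols are collapsed, so this is a minimal sketch rather than the automaton model studied in the paper.

    # Illustrative only: a collapsed single-stack visibly pushdown automaton.
    CALLS, RETURNS, INTERNALS = {"<"}, {">"}, {"a"}   # invented visible alphabet

    def vpa_accepts(word):
        stack = []
        for sym in word:
            if sym in CALLS:            # call symbol: the automaton must push
                stack.append(sym)
            elif sym in RETURNS:        # return symbol: the automaton must pop
                if not stack:
                    return False        # unmatched return
                stack.pop()
            elif sym in INTERNALS:      # internal symbol: stack left untouched
                pass
            else:
                raise ValueError("symbol outside the visible alphabet: %r" % sym)
        return not stack                # accept iff every call has a matching return

    print(vpa_accepts("<a<a>>"))        # True
    print(vpa_accepts("<a>>"))          # False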
Coherency of Shared Memory in Ad-Hoc Networks
Memory coherence is a commonly accepted correctness criterion for distributed shared-memory computing platforms. Coherence is formulated assuming a static architecture in which all processors can communicate with one another. In this paper, we argue that the classical notion is not appropriate for ad-hoc networks consisting of mobile devices with a constantly changing communication topology. We introduce and formalize a new correctness criterion, called group coherence, as a suitable abstract specification for shared-memory computing architectures over ad-hoc networks. We show that two existing systems, the Coda file system and Lampson’s global naming scheme, satisfy our definition. Finally, we propose a timestamp-based extension of the popular Snoopy cache coherence protocol for caching in ad-hoc networks, and show it to be group coherent.
A proof of strong normalisation using domain theory
Ulrich Berger presented a powerful proof of strong normalisation using domains; in particular, it significantly simplifies Tait's proof of strong normalisation of Spector's bar recursion. The main contribution of this paper is to show that, using ideas from intersection types and Martin-Löf's domain interpretation of type theory, one can in turn simplify U. Berger's argument further. We build a domain model for an untyped programming language, whereas U. Berger has an interpretation only for typed terms, or alternatively an interpretation for untyped terms but needs an extra condition to deduce strong normalisation. As a main application, we show that Martin-Löf dependent type theory extended with a program for Spector's double negation shift is strongly normalising.
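For background, the double negation shift realized by Spector's bar recursion is the standard schema below; it is stated here only for orientation and is not quoted from the paper.

    \[
      \mathrm{DNS}\colon\quad \forall n\,\neg\neg A(n) \;\rightarrow\; \neg\neg\,\forall n\, A(n)
    \]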
Shared Variables Interaction Diagrams
Scenario-based specifications offer an intuitive and visual way of describing design requirements of distributed software systems. For the communication paradigm based on messages, message sequence charts (MSCs) offer a standardized and formal notation amenable to formal analysis. In this paper, we define shared variables interaction diagrams (SVIDs) as the counterpart of MSCs when processes communicate via shared variables. After formally defining SVIDs, we develop an intuitive as well as a formal definition of refinement for SVIDs. This notion provides a basis for systematically adding details to SVID requirements.
Regular Specifications of Resource Requirements for Embedded Control Software
For embedded control systems, a schedule for the allocation of resources to a software component can be described by an infinite word whose i-th symbol models the resources used in the i-th sampling interval. The dependency of performance on schedules can be formally modeled by an automaton (an ω-regular language) that captures all the schedules that keep the system within its performance requirements. We show how such an automaton is constructed for linear control designs and exponential stability or settling time performance requirements. Then, we explore the use of the automaton for online scheduling and for schedulability analysis. As a case study, we examine how this approach can be applied to LQG control design. We demonstrate, by examples, that online schedulers can be used to guarantee performance under worst-case conditions together with good performance under normal conditions. We also provide examples of schedulability analysis.
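As a toy illustration of such a specification (invented here, not one of the paper's case studies), the checker below encodes the requirement that the control task is denied the resource in at most one consecutive sampling interval. The requirement is a safety property, so monitoring finite prefixes of the infinite schedule word suffices.

    # Illustrative only: a two-state safety automaton over schedule symbols,
    # where "1" means the resource is granted in a sampling interval and "0"
    # means it is denied.
    ALLOC, SKIP = "1", "0"

    def prefix_ok(schedule_prefix):
        state = "fresh"                    # last interval granted the resource
        for sym in schedule_prefix:
            if sym == ALLOC:
                state = "fresh"
            elif sym == SKIP:
                if state == "starved":     # second consecutive denial: violation
                    return False
                state = "starved"
            else:
                raise ValueError("unexpected schedule symbol: %r" % sym)
        return True

    print(prefix_ok("110101101"))          # True
    print(prefix_ok("110011"))             # False (two consecutive denials)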
Symbolic Analysis of GSMP Models With One Stateful Clock
We consider the problem of verifying reachability properties of stochastic real-time systems modeled as generalized semi-Markov processes (GSMPs). The standard simulation-based techniques for GSMPs are not adequate for solving verification problems, and existing symbolic techniques either require memoryless distributions for firing times, or approximate the problem using discrete time or a bounded horizon. In this paper, we present a symbolic solution for the case where firing times are random variables over a rich class of distributions, but only one event is allowed to retain its firing time when a discrete change occurs. The solution allows us to compute the probability that such a GSMP satisfies a property of the form “can the system reach a target while staying within a set of safe states?”. We report on illustrative examples and their analysis using our procedure.
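A toy operational reading of the one-stateful-clock restriction is sketched below; the event names and firing-time distributions are invented, and the sketch is a plain simulation step, not the symbolic procedure developed in the paper. After each discrete transition, only the designated stateful event keeps (ages) its remaining firing time, while every other event draws a fresh sample.

    # Illustrative only: one simulation step of a GSMP restricted so that a
    # single designated event retains its firing time across discrete changes.
    import random

    def resample(event):
        # placeholder distributions, invented for the illustration
        return random.uniform(1.0, 5.0) if event == "timeout" else random.expovariate(1.0)

    def fire_next(clocks, stateful):
        winner = min(clocks, key=clocks.get)     # event with the smallest remaining time
        elapsed = clocks[winner]
        updated = {}
        for event, remaining in clocks.items():
            if event == winner:
                updated[event] = resample(event)         # just fired: fresh sample
            elif event == stateful:
                updated[event] = remaining - elapsed     # retains (ages) its firing time
            else:
                updated[event] = resample(event)         # memory discarded
        return winner, updated

    clocks = {"timeout": 3.2, "arrival": 1.4, "repair": 2.7}
    fired, clocks = fire_next(clocks, stateful="timeout")
    print(fired, clocks)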
Contention-Free Complexity of Shared Memory Algorithms
Worst-case time complexity is a measure of the maximum time needed to solve a problem over all runs. Contention-free time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in well-designed systems, it is important to design algorithms which perform well in the absence of contention. We study the contention-free time complexity of shared memory algorithms using two measures: step complexity, which counts the number of accesses to shared registers; and register complexity, which measures the number of different registers accessed. Depending on the system architecture, one of the two measures more accurately reflects the elapsed time. We provide lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step. We also present bounds on the worst-case and contention-free step and register complexities of solving the naming problem. These bounds illustrate that the proposed complexity measures are useful in differentiating among the computational powers of different primitives.
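To make the two measures concrete, the sketch below instruments a simplified version of the fast path of Lamport's fast mutual exclusion algorithm (Lamport, 1987) and runs it without contention. The per-process announcement flag consulted by the slow path is omitted, and the instrumentation is an illustration added here; the paper's bounds are not derived from it.

    # Illustrative only: counting register accesses on a contention-free entry.
    class CountingRegister:
        def __init__(self, name, value=0):
            self.name, self.value = name, value
        def read(self, trace):
            trace.append(self.name)
            return self.value
        def write(self, value, trace):
            trace.append(self.name)
            self.value = value

    def fast_path_entry(pid, x, y, trace):
        # entry attempt with no competing writers
        x.write(pid, trace)
        if y.read(trace) != 0:
            return False               # would fall back to the slow path
        y.write(pid, trace)
        if x.read(trace) != pid:
            return False               # would fall back to the slow path
        return True                    # critical section entered

    x, y = CountingRegister("x"), CountingRegister("y")
    trace = []
    assert fast_path_entry(pid=1, x=x, y=y, trace=trace)
    print("contention-free step complexity:", len(trace))           # 4 accesses
    print("contention-free register complexity:", len(set(trace)))  # 2 distinct registers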