Refinement for Probabilistic Systems with Nondeterminism
Before we combine actions and probabilities two very obvious questions should
be asked. Firstly, what does "the probability of an action" mean? Secondly, how
does probability interact with nondeterminism? Neither question has a single
universally agreed-upon answer, but by considering these questions at the outset
we build a novel and hopefully intuitive probabilistic event-based formalism.
In previous work we have characterised refinement via the notion of testing.
Basically, if one system passes all the tests that another system passes (and
maybe more), we say the first system is a refinement of the second. This is,
in our view, an important way of characterising refinement, since it directly
answers the question "what sort of refinement should I be using?"
We use testing in this paper as the basis for our refinement. We develop
tests for probabilistic systems by analogy with the tests developed for
non-probabilistic systems. We make sure that our probabilistic tests, when
performed on non-probabilistic automata, give us refinement relations which
agree with the existing relations for those non-probabilistic automata. We
formalise this property as
a vertical refinement. (Comment: In Proceedings Refine 2011, arXiv:1106.348)
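As a reading aid (the notation below is ours, not taken from the paper), the testing characterisation can be stated as: C refines A exactly when C passes every test that A passes, and possibly more:

    A \sqsubseteq C \;\iff\; \forall t \in \mathit{Tests}.\; \big( A \text{ passes } t \;\Rightarrow\; C \text{ passes } t \big)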
Computing with cells: membrane systems - some complexity issues.
Membrane computing is a branch of natural computing which abstracts computing models from the structure and the functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, which consists of a hierarchical arrangement of membranes which delimit compartments where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By using the rules in a nondeterministic/deterministic maximally parallel manner, transitions between the system configurations can be obtained. A sequence of transitions is a computation showing how the system evolves. Various ways of controlling the transfer of objects from one membrane to another and of applying the rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems once future biotechnology gives way to a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues such as universality vs. nonuniversality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
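As an informal illustration of the "maximally parallel" rule application described above, here is a minimal sketch for a single compartment (the object names, rules and random tie-breaking are our own examples, not the paper's formalism; object transfer between membranes and membrane dissolution are omitted):

import random
from collections import Counter

# A rule rewrites a multiset of objects into another multiset, e.g. {a:2} -> {b:1, c:1}.
rules = [
    (Counter({'a': 2}), Counter({'b': 1, 'c': 1})),
    (Counter({'b': 1}), Counter({'a': 1})),
]

def applicable(lhs, objects):
    return all(objects[o] >= n for o, n in lhs.items())

def max_parallel_step(objects, rules):
    """One maximally parallel step: keep assigning objects to rule instances
    until no rule applies, then add all products at once."""
    objects = Counter(objects)                 # work on a copy of the configuration
    produced = Counter()
    while True:
        candidates = [(lhs, rhs) for lhs, rhs in rules if applicable(lhs, objects)]
        if not candidates:
            break
        lhs, rhs = random.choice(candidates)   # nondeterministic choice of a rule instance
        objects -= lhs                         # consume the left-hand side now ...
        produced += rhs                        # ... but make products visible only after the step
    return objects + produced

config = Counter({'a': 5, 'b': 1})
print(max_parallel_step(config, rules))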
A uniform framework for modelling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences
Labeled transition systems are typically used as behavioral models of concurrent processes, and the labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label. The state reachability distribution becomes a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and of nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing single-step or multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models.
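A rough sketch of the idea as we read it (state and label names are our own examples): a transition maps a (state, label) pair to a reachability distribution, i.e. a function from target states to values in a preordered set whose minimum means "unreachable". Instantiating the value set recovers the familiar fully nondeterministic, fully probabilistic and fully stochastic models:

# Fully nondeterministic (LTS-like): values are booleans, False = unreachable.
lts = {
    ('s0', 'a'): {'s1': True, 's2': True, 's3': False},
}

# Fully probabilistic (DTMC-like): values in [0,1], 0 = unreachable,
# each distribution summing to 1.
dtmc = {
    ('s0', 'a'): {'s1': 0.3, 's2': 0.7},
}

# Fully stochastic (CTMC-like): values are rates in [0, inf), 0 = unreachable.
ctmc = {
    ('s0', 'a'): {'s1': 2.5, 's2': 0.5},
}

def reachable(model, state, label):
    """Target states whose value lies above the bottom of the preorder."""
    dist = model.get((state, label), {})
    return {t for t, v in dist.items() if v}   # bool(False) and bool(0.0) are both falsy

print(sorted(reachable(lts, 's0', 'a')))    # ['s1', 's2']
print(sorted(reachable(dtmc, 's0', 'a')))   # ['s1', 's2']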
Minimizing finite automata is computationally hard
It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states. The minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism to be used. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in computationally intractable minimization problems.
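For contrast with the hardness results, the efficient DFA case can be sketched with a simple partition-refinement pass (our own illustration of Moore-style minimization, not the cited algorithm; Hopcroft's algorithm is the faster O(n log n) variant). The example automaton is ours:

def minimize_dfa(states, alphabet, delta, accepting):
    """Return the partition of `states` into the state classes of the minimal DFA."""
    # Start by separating accepting from non-accepting states.
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    while True:
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for block in partition:
            # Split a block by the signature of blocks its states move to.
            groups = {}
            for s in block:
                sig = tuple(block_of(delta[s][a]) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):   # no block was split: done
            return new_partition
        partition = new_partition

# Example: q0 is non-accepting, q1 and q2 are accepting sinks, hence equivalent.
delta = {
    'q0': {'0': 'q1', '1': 'q2'},
    'q1': {'0': 'q1', '1': 'q1'},
    'q2': {'0': 'q2', '1': 'q2'},
}
print(minimize_dfa(['q0', 'q1', 'q2'], ['0', '1'], delta, {'q1', 'q2'}))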
Partially ordered distributed computations on asynchronous point-to-point networks
Asynchronous executions of a distributed algorithm differ from each other due
to the nondeterminism in the order in which the messages exchanged are handled.
In many situations of interest, the asynchronous executions induced by
restricting nondeterminism are more efficient, in an application-specific
sense, than the others. In this work, we define partially ordered executions of
a distributed algorithm as the executions satisfying some restricted orders of
their actions in two different frameworks, those of the so-called event- and
pulse-driven computations. The aim of these restrictions is to characterize
asynchronous executions that are likely to be more efficient for some important
classes of applications. Also, an asynchronous algorithm that ensures the
occurrence of partially ordered executions is given for each case. Two of the
applications that we believe may benefit from the restricted nondeterminism are
backtrack search, in the event-driven case, and iterative algorithms for
systems of linear equations, in the pulse-driven case.
Quantum Weakly Nondeterministic Communication Complexity
We study the weakest model of quantum nondeterminism in which a classical
proof has to be checked with probability one by a quantum protocol. We show the
first separation between classical nondeterministic communication complexity
and this model of quantum nondeterministic communication complexity for a total
function. This separation is quadratic. (Comment: 12 pages. v3: minor correction)
The Spectrum of Strong Behavioral Equivalences for Nondeterministic and Probabilistic Processes
We present a spectrum of trace-based, testing, and bisimulation equivalences
for nondeterministic and probabilistic processes whose activities are all
observable. For every equivalence under study, we examine the discriminating
power of three variants stemming from three approaches that differ in the way
probabilities of events are compared when nondeterministic choices are resolved
via deterministic schedulers. We show that the first approach - which compares
two resolutions relative to the probability distributions of all considered
events - results in a fragment of the spectrum compatible with the spectrum of
behavioral equivalences for fully probabilistic processes. In contrast, the
second approach - which compares the probabilities of the events of a
resolution with the probabilities of the same events in possibly different
resolutions - gives rise to another fragment composed of coarser equivalences
that exhibits several analogies with the spectrum of behavioral equivalences
for fully nondeterministic processes. Finally, the third approach - which only
compares the extremal probabilities of each event stemming from the different
resolutions - yields even coarser equivalences that, however, give rise to a
hierarchy similar to that stemming from the second approach. (Comment: In Proceedings QAPL 2013, arXiv:1306.241)
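The three comparison criteria can be illustrated roughly as follows (our own notation and simplification: each process is reduced to a list of resolutions, each resolution mapping events to probabilities; how resolutions arise from deterministic schedulers is ignored):

def eq_matching_resolutions(p, q, events):
    """Approach 1: every resolution of one process is matched by a resolution of
    the other that agrees on the probabilities of *all* considered events."""
    def covered(rs1, rs2):
        return all(any(all(r1.get(e, 0) == r2.get(e, 0) for e in events) for r2 in rs2)
                   for r1 in rs1)
    return covered(p, q) and covered(q, p)

def eq_eventwise(p, q, events):
    """Approach 2: each event probability reached in some resolution of one process
    is reached, for that same event, in some (possibly different) resolution of the other."""
    def covered(rs1, rs2):
        return all(any(r1.get(e, 0) == r2.get(e, 0) for r2 in rs2)
                   for r1 in rs1 for e in events)
    return covered(p, q) and covered(q, p)

def eq_extremal(p, q, events):
    """Approach 3: only the minimum and maximum probability of each event,
    taken over all resolutions, have to coincide."""
    def extrema(rs, e):
        vals = [r.get(e, 0) for r in rs]
        return (min(vals), max(vals))
    return all(extrema(p, e) == extrema(q, e) for e in events)

p = [{'a': 1.0}, {'a': 0.0}]
q = [{'a': 1.0}, {'a': 0.5}, {'a': 0.0}]
print(eq_matching_resolutions(p, q, ['a']))  # False: q's 0.5 resolution is unmatched
print(eq_extremal(p, q, ['a']))              # True: both range over [0.0, 1.0] for 'a'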
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
The difficulty of developing reliable parallel software is generating
interest in deterministic environments, where a given program and input can
yield only one possible result. Languages or type systems can enforce
determinism in new code, and runtime systems can impose synthetic schedules on
legacy parallel code. To parallelize existing serial code, however, we would
like a programming model that is naturally deterministic without language
restrictions or artificial scheduling. We propose "deterministic consistency",
a parallel programming model as easy to understand as the "parallel assignment"
construct in sequential languages such as Perl and JavaScript, where concurrent
threads always read their inputs before writing shared outputs. DC supports
common data- and task-parallel synchronization abstractions such as fork/join
and barriers, as well as non-hierarchical structures such as producer/consumer
pipelines and futures. A preliminary prototype suggests that software-only
implementations of DC can run applications written for popular parallel
environments such as OpenMP with low (<10%) overhead for some applications. (Comment: 7 pages, 3 figures)
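A hypothetical sketch (not the paper's implementation) of the read-inputs-before-writing-outputs discipline: forked tasks work on snapshots of the shared state and their writes are merged deterministically at the join, so the interleaving of the tasks cannot influence the result.

import copy

def fork_join(shared, tasks):
    """Run each task on its own snapshot of `shared`; merge the writes at join.
    The tasks below write to disjoint keys, so the merge order cannot matter."""
    snapshots = [copy.deepcopy(shared) for _ in tasks]   # every task reads its inputs first
    writes = [task(snap) for task, snap in zip(tasks, snapshots)]
    merged = dict(shared)
    for w in writes:                                     # deterministic merge at the join
        merged.update(w)
    return merged

# Two tasks read the same input x but write different outputs.
state = {'x': 10}
result = fork_join(state, [
    lambda s: {'double': s['x'] * 2},
    lambda s: {'square': s['x'] ** 2},
])
print(result)   # {'x': 10, 'double': 20, 'square': 100}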