Model Checking CTL is Almost Always Inherently Sequential
The model checking problem for CTL is known to be P-complete (Clarke, Emerson, and Sistla (1986), see Schnoebelen (2002)). We consider fragments of CTL obtained by restricting the use of temporal modalities or the use of negations—restrictions already studied for LTL by Sistla and Clarke (1985) and Markey (2004). For all these fragments, except for the trivial case without any temporal operator, we systematically prove model checking to be either inherently sequential (P-complete) or very efficiently parallelizable (LOGCFL-complete). For most fragments, however, model checking for CTL is already P-complete. Hence our results indicate that, in cases where the combined complexity is of relevance, approaching CTL model checking by parallelism cannot be expected to result in any significant speedup. We also completely determine the complexity of the model checking problem for all fragments of the extensions ECTL, CTL+, and ECTL+.
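The inherent sequentiality referred to above can be seen in the classic labelling algorithm for CTL, whose global fixpoint iteration propagates information through the whole state space step by step. Below is a minimal illustrative sketch (not the paper's construction) for the EF modality, with the Kripke structure represented as a dict of successor lists:

```python
# Illustrative sketch of the standard CTL labelling algorithm for EF phi:
# compute the least fixpoint of X = sat(phi) ∪ pre(X). The iterated global
# propagation is the intuition behind P-completeness of CTL model checking.

def sat_ef(states, succ, sat_phi):
    """Return the set of states satisfying EF phi."""
    result = set(sat_phi)                 # states already satisfying phi
    changed = True
    while changed:                        # iterate until the fixpoint is reached
        changed = False
        for s in states:
            if s not in result and any(t in result for t in succ[s]):
                result.add(s)             # s has a successor satisfying EF phi
                changed = True
    return result

# Tiny example structure: 0 -> 1 -> 2, with a self-loop on 2; phi holds in {2}
states = [0, 1, 2]
succ = {0: [1], 1: [2], 2: [2]}
print(sorted(sat_ef(states, succ, {2})))  # prints [0, 1, 2]: all states reach 2
```

Each round of the while loop corresponds to extending witnesses by one step, and in the worst case the number of rounds grows with the size of the structure, which is why this computation does not obviously parallelize.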
Complexity, parallel computation and statistical physics
The intuition that a long history is required for the emergence of complexity
in natural systems is formalized using the notion of depth. The depth of a
system is defined in terms of the number of parallel computational steps needed
to simulate it. Depth provides an objective, irreducible measure of history
applicable to systems of the kind studied in statistical physics. It is argued
that physical complexity cannot occur in the absence of substantial depth and
that depth is a useful proxy for physical complexity. The ideas are illustrated
for a variety of systems in statistical physics.
Parallel quantum computing: from theory to practice
The term quantum parallelism is commonly used to refer to a property of quantum
computations where an algorithm can act simultaneously on a superposition of states.
However, this is not the only aspect of parallelism in quantum computing. Analogously
to the classical computing model, every algorithm consists of elementary quantum
operations, and their application can itself be parallelised. This kind of
parallelism is explored in this thesis in the one-way quantum computing (1WQC)
model and the quantum circuit model.
In the quantum circuit model we explore arithmetic circuits and circuit complexity
theory. Two new arithmetic circuits for quantum computers are introduced in this
work: an adder and a multiply-adder. The latter is especially interesting because its
depth (i.e. the number of parallel steps required to finish the computation) is smaller
than for any known classical circuit when applied sequentially. From the complexity
theoretical perspective we concentrate on the classes QAC0 and QAC0[2], the quantum
counterparts of AC0 and AC0[2]. The class AC0 consists of constant-depth circuits with
unbounded fan-in AND and OR gates and AC0[2] is obtained when unbounded fan-in
parity gates are added to AC0 circuits. We prove that QAC0 circuits with two layers
of multi-qubit gates cannot compute parity exactly. This is a step towards proving
QAC0 ≠ QAC0[2], a relation known to hold for AC0 and AC0[2].
In 1WQC, computation is done through measurements on an entangled state called
the resource state. Two well known parallelisation methods exist in this model:
signal shifting and finding the maximally delayed general flow. The first one uses
the measurement calculus formalism to rewrite the dependencies of an existing
computation, whereas the second technique exploits the geometry of the resource state
to find the optimal ordering of measurements. We prove that the aforementioned
methods result in computations of the same depth when the input and output sizes
are equal. Through showing this equivalence we reveal new properties of 1WQC
computations and design a new algorithm for the above-mentioned parallelisations.
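The depth notion at work in both parallelisation methods can be made concrete: the number of parallel measurement rounds of a pattern equals the longest chain in its measurement-dependency DAG. The following sketch (with a hypothetical dependency structure, not one from the thesis) computes that depth by memoised longest-path:

```python
# Hedged sketch: the depth (number of parallel measurement rounds) of a
# 1WQC pattern equals the longest chain in its dependency DAG. The dependency
# data below is a made-up example, not taken from the thesis.
from functools import lru_cache

def pattern_depth(deps):
    """deps[q] = tuple of qubits whose measurement outcomes q depends on."""
    @lru_cache(maxsize=None)
    def round_of(q):
        # a qubit is measured one round after the latest qubit it depends on
        return 1 + max((round_of(p) for p in deps[q]), default=0)
    return max(round_of(q) for q in deps)

# Dependency chain a <- b <- c, plus an independent qubit d: three rounds
deps = {"a": (), "b": ("a",), "c": ("b",), "d": ()}
print(pattern_depth(deps))  # prints 3
```

Both signal shifting and maximally delayed flow can be read as rewriting `deps` so that this longest chain becomes as short as the geometry of the resource state allows.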
The parallel complexity of TSP heuristics
We consider eight heuristics for constructing approximate solutions to the traveling salesman problem and analyze their complexity in a model of parallel computation. The problems of finding a tour by the nearest neighbor, nearest merger, nearest insertion, cheapest insertion, and farthest insertion heuristics are shown to be P-complete. Hence, it is unlikely that such tours can be obtained in polylogarithmic work space on a sequential computer or in polylogarithmic time on a computer with unbounded parallelism. The double minimum spanning tree and nearest addition heuristics can be implemented to run in polylogarithmic time on a polynomial number of processors. For the Christofides heuristic, we give a randomized polylogarithmic approximation scheme requiring a polynomial number of processors.
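The sequential character of the P-complete heuristics is easy to see in the nearest neighbor rule: every step depends on the entire partial tour built so far. A minimal illustrative implementation (distance matrix and tie-breaking are our own example choices, not the paper's):

```python
# Illustrative nearest-neighbor TSP tour construction. Each greedy step
# consults the whole tour built so far, which is the intuition behind the
# P-completeness result. The distance matrix below is a made-up example.

def nearest_neighbor_tour(dist, start=0):
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # greedily pick the closest unvisited city (ties broken by index)
        nxt = min(unvisited, key=lambda c: (dist[last][c], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbor_tour(dist))  # prints [0, 1, 3, 2]
```

By contrast, the heuristics shown to parallelize well (double minimum spanning tree, nearest addition) rest on subproblems such as minimum spanning trees that are known to admit polylogarithmic-time parallel algorithms.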