A taxonomy of problems with fast parallel algorithms
The class NC consists of problems solvable very fast (in time polynomial in log n) in parallel with a feasible (polynomial) number of processors. Many natural problems in NC are known; in this paper an attempt is made to identify important subclasses of NC and give interesting examples in each subclass. The notion of NC1-reducibility is introduced and used throughout (problem R is NC1-reducible to problem S if R can be solved with uniform log-depth circuits using oracles for S). Problems complete with respect to this reducibility are given for many of the subclasses of NC. A general technique, the “parallel greedy algorithm,” is identified and used to show that finding a minimum spanning forest of a graph is reducible to the graph accessibility problem and hence is in NC2 (solvable by uniform Boolean circuits of depth O(log2 n) and polynomial size). The class LOGCFL is given a new characterization in terms of circuit families. The class DET of problems reducible to integer determinants is defined and many examples are given. A new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph. This paper is a revised version of S. A. Cook (1983, in “Proceedings 1983 Intl. Found. Comput. Sci. Conf.,” Lecture Notes in Computer Science Vol. 158, pp. 78–93, Springer-Verlag, Berlin/New York).
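The P-complete problem named at the end of the abstract has a simple sequential greedy algorithm; the point of the completeness result is that this greedy process appears inherently sequential. A minimal sketch in Python, with an illustrative adjacency-set graph encoding:

```python
def lex_first_maximal_clique(adj):
    """Return the lexicographically first maximal clique.

    adj: dict mapping each vertex to the set of its neighbours.
    Scan vertices in increasing order, adding a vertex whenever it is
    adjacent to every vertex already chosen; the result is maximal
    by construction.
    """
    clique = []
    for v in sorted(adj):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# Example: a 4-cycle 0-1-2-3-0 with chord (1, 3)
g = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(lex_first_maximal_clique(g))  # [0, 1, 3]
```

Each greedy step depends on all earlier decisions, which is the intuition behind the P-completeness (hardness for NC under the paper's reducibility) of computing this particular clique.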
Quantum Simulation Logic, Oracles, and the Quantum Advantage
Query complexity is a common tool for comparing quantum and classical
computation, and it has produced many examples of how quantum algorithms differ
from classical ones. Here we investigate in detail the role that oracles play
for the advantage of quantum algorithms. We do so by using a simulation
framework, Quantum Simulation Logic (QSL), to construct oracles and algorithms
that solve some problems with the same success probability and number of
queries as the quantum algorithms. The framework can be simulated using only
classical resources at a constant overhead as compared to the quantum resources
used in quantum computation. Our results clarify the assumptions made and the
conditions needed when using quantum oracles. Using the same assumptions on
oracles within the simulation framework we show that for some specific
algorithms, like the Deutsch-Jozsa and Simon's algorithms, there simply is no
advantage in terms of query complexity. This does not detract from the fact
that quantum query complexity provides examples of how a quantum computer can
be expected to behave, which in turn has proved useful for finding new quantum
algorithms outside of the oracle paradigm, where the most prominent example is
Shor's algorithm for integer factorization.
Comment: 48 pages, 46 figures
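The query-complexity setting the abstract discusses can be made concrete on the classical side. Below is a minimal sketch (standard material, not the paper's QSL construction) of a deterministic classical tester for the Deutsch-Jozsa promise problem, where f on n-bit inputs is promised to be constant or balanced, together with a wrapper that counts oracle queries; the quantum algorithm decides the same promise with a single query.

```python
def make_counting_oracle(f):
    """Wrap f so that the number of queries made to it is recorded."""
    calls = {"n": 0}
    def oracle(x):
        calls["n"] += 1
        return f(x)
    return oracle, calls

def classical_deutsch_jozsa(oracle, n):
    """Deterministic classical decision for the constant-vs-balanced promise.

    2**(n-1) + 1 queries always suffice: a constant function cannot agree
    with a balanced one on more than half of the 2**n inputs.
    """
    first = oracle(0)
    for x in range(1, 2 ** (n - 1) + 1):
        if oracle(x) != first:
            return "balanced"
    return "constant"

n = 4
balanced = lambda x: x & 1  # lowest input bit: balanced on {0, ..., 15}
oracle, calls = make_counting_oracle(balanced)
print(classical_deutsch_jozsa(oracle, n), calls["n"])  # balanced 2
```

A constant function forces the full 2**(n-1) + 1 queries in this deterministic setting, which is the exponential classical-vs-quantum gap whose oracle assumptions the paper scrutinizes.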
An Introduction to Quantum Computing for Non-Physicists
Richard Feynman's observation that quantum mechanical effects could not be
simulated efficiently on a computer led to speculation that computation in
general could be done more efficiently if it used quantum effects. This
speculation appeared justified when Peter Shor described a polynomial time
quantum algorithm for factoring integers.
In quantum systems, the computational space increases exponentially with the
size of the system which enables exponential parallelism. This parallelism
could lead to exponentially faster quantum algorithms than possible
classically. The catch is that accessing the results, which requires
measurement, proves tricky and requires new non-traditional programming
techniques.
The aim of this paper is to guide computer scientists and other
non-physicists through the conceptual and notational barriers that separate
quantum computing from conventional computing. We introduce basic principles of
quantum mechanics to explain where the power of quantum computers comes from
and why it is difficult to harness. We describe quantum cryptography,
teleportation, and dense coding. Various approaches to harnessing the power of
quantum parallelism are explained, including Shor's algorithm, Grover's
algorithm, and Hogg's algorithms. We conclude with a discussion of quantum
error correction.
Comment: 45 pages. To appear in ACM Computing Surveys. LATEX file. Exposition improved throughout thanks to reviewers' comments
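Grover's algorithm, one of the approaches the survey covers, can be sketched with a plain statevector simulation over real amplitudes. The toy code below is illustrative, not taken from the survey: it phase-flips the marked amplitude (the oracle) and then inverts all amplitudes about their mean (the diffusion step), repeating about (pi/4)·sqrt(N) times.

```python
import math

def grover(n, marked):
    """Simulate Grover search over N = 2**n items with one marked index."""
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    rounds = int(math.pi / 4 * math.sqrt(N))
    for _ in range(rounds):
        amp[marked] = -amp[marked]         # oracle: flip the marked phase
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]  # diffusion: inversion about mean
    return [a * a for a in amp]            # measurement probabilities

probs = grover(3, marked=5)
print(round(probs[5], 3))  # 0.945
```

With N = 8 and two rounds, measuring yields the marked item with probability about 0.945, illustrating the quadratic speedup over the expected N/2 classical trials.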
Minimisation of Multiplicity Tree Automata
We consider the problem of minimising the number of states in a multiplicity
tree automaton over the field of rational numbers. We give a minimisation
algorithm that runs in polynomial time assuming unit-cost arithmetic. We also
show that a polynomial bound in the standard Turing model would require a
breakthrough in the complexity of polynomial identity testing by proving that
the latter problem is logspace equivalent to the decision version of
minimisation. The developed techniques also improve the state of the art in
multiplicity word automata: we give an NC algorithm for minimising multiplicity
word automata. Finally, we consider the minimal consistency problem: does there
exist an automaton with a given number of states that is consistent with a given finite
sample of weight-labelled words or trees? We show that this decision problem is
complete for the existential theory of the rationals, both for words and for
trees of a fixed alphabet rank.
Comment: Paper to be published in Logical Methods in Computer Science. Minor editing changes from previous version
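The object being minimised can be sketched concretely: a multiplicity word automaton over the rationals assigns each word the weight alpha · M(w1) · … · M(wk) · eta, where alpha is an initial row vector, M(a) is a transition matrix for each letter a, and eta is a final column vector. The two-state automaton below is an illustrative toy, not from the paper; it computes the number of occurrences of 'b' in a word over {a, b}.

```python
from fractions import Fraction as F

def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def weight(alpha, trans, eta, word):
    """Weight of a word: alpha . M(w1) ... M(wk) . eta."""
    v = eta
    for a in reversed(word):   # fold transition matrices onto the final vector
        v = mat_vec(trans[a], v)
    return sum(x * y for x, y in zip(alpha, v))

# 2-state automaton over {a, b} whose weight on w is the number of b's in w
alpha = [F(1), F(0)]
eta = [F(0), F(1)]
trans = {
    "a": [[F(1), F(0)], [F(0), F(1)]],   # identity
    "b": [[F(1), F(1)], [F(0), F(1)]],   # upper-triangular "increment"
}
print(weight(alpha, trans, eta, "abba"))  # 2
```

Minimisation asks for the smallest number of states (matrix dimension) realising the same word-to-weight function, which is where the linear algebra over the rationals, and hence the connection to polynomial identity testing, enters.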
Delegating computation reliably : paradigms and constructions
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 285-297).
In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:
* A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation, and efficient zero-knowledge proofs.
* A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to be constant.
Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:
* A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication, inversion, determinant and rank, as well as graph functions such as connectivity, perfect matching and bounded-degree graph isomorphism.
* A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally-list decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.
by Guy N. Rothblum. Ph.D.
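A classic instance of the program-checking idea described above is Freivalds' randomized check for matrix multiplication, one of the specific functions the abstract mentions: the checker verifies a claimed product C = A·B with O(n²) work per round instead of redoing the O(n³) multiplication. The sketch below is the textbook checker, shown for illustration; it is not the thesis's protocol.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Randomized check that C == A @ B for square integer matrices.

    Each round picks a random 0/1 vector r and compares A(Br) with Cr,
    three matrix-vector products instead of one matrix-matrix product.
    A wrong C passes a round with probability at most 1/2, so it survives
    all rounds with probability at most 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # caught a wrong product
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]      # the true product A @ B
bad = [[19, 22], [43, 51]]      # one corrupted entry
print(freivalds_check(A, B, good), freivalds_check(A, B, bad))
```

The checker is strictly cheaper than the computation it verifies, which is the asymmetry the delegation paradigm exploits.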