Quantum Kolmogorov Complexity Based on Classical Descriptions
We develop a theory of the algorithmic information in bits contained in an
individual pure quantum state. This extends classical Kolmogorov complexity to
the quantum domain while retaining classical descriptions. Quantum Kolmogorov
complexity coincides with the classical Kolmogorov complexity on the classical
domain. Quantum Kolmogorov complexity is upper bounded and can be effectively
approximated from above under certain conditions. With high probability a
quantum object is incompressible. Upper- and lower bounds of the quantum
complexity of multiple copies of individual pure quantum states are derived and
may shed some light on the no-cloning properties of quantum states. In the
quantum situation complexity is not sub-additive. We discuss some relations
with "no-cloning" and "approximate cloning" properties.
Comment: 17 pages, LaTeX, final and extended version of quant-ph/9907035, with corrections to the published journal version (the two displayed equations in the right-hand column on page 2466 had the left-hand sides of the displayed formulas erroneously interchanged).
Unbounded-error quantum computation with small space bounds
We prove the following facts about the language recognition power of quantum
Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more
powerful than probabilistic Turing machines for any common space bound s
satisfying s(n) = o(log log n). For "one-way" Turing machines, where the
input tape head is not allowed to move left, the above result holds for
s(n) = o(log n). We also give a characterization for the class of languages
recognized with unbounded error by real-time quantum finite automata (QFAs)
with restricted measurements. It turns out that these automata are equal in
power to their probabilistic counterparts, and this fact does not change when
the QFA model is augmented to allow general measurements and mixed states.
Unlike the case with classical finite automata, when the QFA tape head is
allowed to remain stationary in some steps, more languages become recognizable.
We define and use a QTM model that generalizes the other variants introduced
earlier in the study of quantum space complexity.
Comment: A preliminary version of this paper appeared in the Proceedings of the Fourth International Computer Science Symposium in Russia, pages 356--367, 2009.
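A real-time QFA of the restricted kind mentioned above applies one unitary per input symbol and measures once at the end. A toy single-qubit example (a hypothetical automaton, not one from the paper) shows how unbounded-error recognition with a cutpoint works:

```python
import numpy as np

# Toy measure-once QFA over the unary alphabet {a}: one qubit,
# rotated by THETA radians per input symbol; we accept on measuring
# |0> at the end of the input.
THETA = 1.0  # an irrational multiple of pi, so k*THETA never repeats mod pi

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def accept_probability(word: str) -> float:
    state = np.array([1.0, 0.0])          # start in |0>
    for symbol in word:
        assert symbol == 'a'
        state = rotation(THETA) @ state   # one unitary step per symbol
    return state[0] ** 2                  # probability of measuring |0>

# Unbounded-error acceptance with cutpoint 1/2: the word a^k is
# accepted iff cos(k * THETA)**2 > 1/2, however close to 1/2 the
# probability gets.
for k in range(6):
    p = accept_probability('a' * k)
    assert abs(p - np.cos(k * THETA) ** 2) < 1e-12
```

The acceptance probability can approach the cutpoint arbitrarily closely, which is exactly what the "unbounded error" setting permits.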
Measurement-Based Quantum Turing Machines and their Universality
Quantum measurement is universal for quantum computation. This universality
allows alternative schemes to the traditional three-step organisation of
quantum computation: initial state preparation, unitary transformation,
measurement. In order to formalize these other forms of computation, while
pointing out the role and the necessity of classical control in
measurement-based computation, and for establishing a new upper bound on the
minimal resources needed for quantum universality, a formal model is introduced
by means of Measurement-based Quantum Turing Machines.
Comment: 13 pages, based upon quant-ph/0402156 with significant improvement.
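The necessity of classical control can be checked numerically on the standard one-qubit measurement pattern (a textbook teleportation-style construction, used here as an illustration rather than the paper's QTM formalism): after entangling the input with an ancilla and measuring in a rotated basis, one of the two outcomes requires a classically controlled Pauli-X correction.

```python
import numpy as np

# One-step measurement-based computation of J(theta) = H * P(theta).
# Input qubit |psi>, ancilla |+>, entangle with CZ, measure qubit 1 in
# the basis {(|0> +- e^{-i*theta}|1>)/sqrt(2)}. Outcome s leaves qubit 2
# in X^s J(theta)|psi>, so the correction depends on the classical
# measurement record.
theta = 0.7
a, b = 0.6, 0.8j                       # input amplitudes, |a|^2 + |b|^2 = 1
psi = np.array([a, b])

plus = np.array([1, 1]) / np.sqrt(2)
state = np.kron(psi, plus)             # qubit 1 = input, qubit 2 = ancilla
CZ = np.diag([1, 1, 1, -1])
state = CZ @ state

X = np.array([[0, 1], [1, 0]])
J = np.array([[1, np.exp(1j * theta)],
              [1, -np.exp(1j * theta)]]) / np.sqrt(2)

for s in (0, 1):                       # examine both measurement outcomes
    sign = 1 if s == 0 else -1
    basis = np.array([1, sign * np.exp(-1j * theta)]) / np.sqrt(2)
    # project qubit 1 onto the basis vector, keeping qubit 2
    out = basis.conj() @ state.reshape(2, 2)
    out = out / np.linalg.norm(out)    # each outcome occurs with prob 1/2
    corrected = np.linalg.matrix_power(X, s) @ out
    assert np.allclose(corrected, J @ psi)
```

Without the X correction the two outcomes yield different states, which is why purely quantum evolution plus measurement does not suffice: a classical controller must read the outcome and act on it.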
Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity
In this paper, the space complexity of nonuniform quantum computations is
investigated. The model chosen for this are quantum branching programs, which
provide a graphic description of sequential quantum algorithms. In the first
part of the paper, simulations between quantum branching programs and
nonuniform quantum Turing machines are presented that make it possible to transfer lower
and upper bound results between the two models. In the second part of the
paper, different variants of quantum OBDDs are compared with their
deterministic and randomized counterparts. In the third part, quantum branching
programs are considered where the performed unitary operation may depend on the
result of a previous measurement. For this model a simulation of randomized
OBDDs and exponential lower bounds are presented.
Comment: 45 pages, 3 Postscript figures. Proofs rearranged, typos corrected.
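A quantum OBDD reads the variables in a fixed order and applies a unitary per variable. A minimal width-2 example (a standard illustration, not a construction from the paper) computes PARITY exactly with a single qubit:

```python
import numpy as np

# Toy width-2 quantum OBDD for PARITY: read x1..xn in a fixed order;
# on a 1, rotate the qubit by 90 degrees; on a 0, do nothing. After
# the last variable the qubit is rotated by (#ones * 90) degrees, so
# a final measurement answers PARITY with certainty.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # rotation by pi/2
I = np.eye(2)

def quantum_obdd_parity(bits):
    state = np.array([1.0, 0.0])       # start in |0>
    for x in bits:                     # one unitary per variable
        state = (R if x else I) @ state
    return int(round(state[1] ** 2))   # P(measure |1>) is exactly 0 or 1

assert quantum_obdd_parity([1, 0, 1, 1]) == 1
assert quantum_obdd_parity([1, 1, 0, 0]) == 0
```

The interesting separations in the paper concern functions where rotation angles let a small quantum width track a counter that would force large deterministic width; PARITY is merely the simplest case of the state-as-rotation idea.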
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize the main results, and point to relevant references in the literature.
Temperature 1 Self-Assembly: Deterministic Assembly in 3D and Probabilistic Assembly in 2D
We investigate the power of the Wang tile self-assembly model at temperature
1, a threshold value that permits attachment between any two tiles that share
even a single bond. When restricted to deterministic assembly in the plane, no
temperature 1 assembly system has been shown to build a shape with a tile
complexity smaller than the diameter of the shape. In contrast, we show that
temperature 1 self-assembly in 3 dimensions, even when growth is restricted to
at most 1 step into the third dimension, is capable of simulating a large class
of temperature 2 systems, in turn permitting the simulation of arbitrary Turing
machines and the assembly of squares in near optimal
tile complexity. Further, we consider temperature 1 probabilistic assembly in
2D, and show that with a logarithmic scale up of tile complexity and shape
scale, the same general class of temperature 2 systems can be simulated
with high probability, yielding Turing machine simulation and
assembly of squares with high probability. Our results show a sharp
contrast in achievable tile complexity at temperature 1 if either growth into
the third dimension or a small probability of error is permitted. Motivated by
applications in nanotechnology and molecular computing, and the plausibility of
implementing 3 dimensional self-assembly systems, our techniques may provide
the needed power of temperature 2 systems, while at the same time avoiding the
experimental challenges faced by those systems.
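The temperature-1 attachment rule referred to above is easy to state operationally: a tile may attach as soon as a single glue matches a neighbour. A minimal sketch of that rule in the abstract Tile Assembly Model, with a hypothetical three-tile set (not one from the paper):

```python
# Minimal abstract Tile Assembly Model at temperature 1. Each tile
# has one glue label per side; a tile attaches at a position if at
# least one glue matches an adjacent placed tile -- the single-bond
# threshold the abstract is about.

# sides: (north, east, south, west); None = null glue
TILES = {
    'seed': (None, 'a', None, None),
    'mid':  (None, 'a', None, 'a'),
    'cap':  (None, None, None, 'a'),
}

def attachable(assembly, pos, tile):
    """True if the tile binds with total strength >= 1 (temperature 1)."""
    x, y = pos
    n, e, s, w = TILES[tile]
    # (neighbour position, my glue on that side, neighbour's facing side)
    neighbours = [((x, y + 1), n, 2), ((x + 1, y), e, 3),
                  ((x, y - 1), s, 0), ((x - 1, y), w, 1)]
    bonds = 0
    for npos, glue, opposite in neighbours:
        if npos in assembly and glue is not None:
            if TILES[assembly[npos]][opposite] == glue:
                bonds += 1
    return bonds >= 1

# Grow a 1 x 4 row deterministically from the seed.
assembly = {(0, 0): 'seed'}
for x, tile in [(1, 'mid'), (2, 'mid'), (3, 'cap')]:
    assert attachable(assembly, (x, 0), tile)
    assembly[(x, 0)] = tile

assert len(assembly) == 4
```

At temperature 2 the threshold in `attachable` would be `bonds >= 2`, which lets a tile wait for two cooperating neighbours; the paper's point is how much of that cooperation can be recovered at threshold 1 using a third dimension or randomness.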
Alternation-Trading Proofs, Linear Programming, and Lower Bounds
A fertile area of recent research has demonstrated concrete polynomial time
lower bounds for solving natural hard problems on restricted computational
models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path,
Mod6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs
of these lower bounds follow a certain proof-by-contradiction strategy that we
call alternation-trading. An important open problem is to determine how
powerful such proofs can possibly be.
We propose a methodology for studying these proofs that makes them amenable
to both formal analysis and automated theorem proving. We prove that the search
for better lower bounds can often be turned into a problem of solving a large
series of linear programming instances. Implementing a small-scale theorem
prover based on this result, we extract new human-readable time lower bounds
for several problems. This framework can also be used to prove concrete
limitations on the current techniques.
Comment: To appear in STACS 2010, 12 pages.
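The reduction described above repeatedly solves small linear programs over the exponents appearing in a candidate proof. A toy two-variable LP solver by vertex enumeration (the constraints here are hypothetical, not the paper's actual instances) shows the kind of subroutine such a search invokes:

```python
from itertools import combinations

# Tiny 2-variable LP: maximize c.x subject to A[i].x <= b[i] for all i.
# Solved by enumerating pairwise constraint intersections (candidate
# vertices) and keeping the feasible one with the best objective.
def solve_lp_2d(A, b, c):
    EPS = 1e-9
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < EPS:
            continue                   # parallel pair: no unique vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + EPS for ai, bi in zip(A, b)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best                        # None means the region is empty

# Example: maximize x + y over the unit square [0,1]^2.
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]
b = [1, 1, 0, 0]
value, point = solve_lp_2d(A, b, (1, 1))
assert abs(value - 2.0) < 1e-6
```

In the paper's framework each candidate alternation-trading proof annotation generates one such feasibility question, and a prover iterates over a large series of them; a general-purpose LP solver replaces the vertex enumeration at scale.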