A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize known results, and
point to relevant references in the literature.
Non-classical computing: feasible versus infeasible
Physics sets certain limits on what is and is not computable. These limits are very far from having been reached by current technologies. Whilst proposals for hypercomputation are almost certainly infeasible, there are a number of non-classical approaches that do hold considerable promise. There is a range of possible architectures that could be implemented on silicon that are distinctly different from the von Neumann model. Beyond this, quantum simulators, which are the quantum equivalent of analogue computers, may be constructable in the near future.
Complexity, parallel computation and statistical physics
The intuition that a long history is required for the emergence of complexity
in natural systems is formalized using the notion of depth. The depth of a
system is defined in terms of the number of parallel computational steps needed
to simulate it. Depth provides an objective, irreducible measure of history
applicable to systems of the kind studied in statistical physics. It is argued
that physical complexity cannot occur in the absence of substantial depth and
that depth is a useful proxy for physical complexity. The ideas are illustrated
for a variety of systems in statistical physics.
Comment: 21 pages, 7 figures
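As a toy illustration of the depth notion described in this abstract (the example is ours, not the paper's): summing n numbers by repeated pairwise combination needs only about log2(n) parallel steps, so its depth is logarithmic even though the total work is linear.

```python
import math

def parallel_sum_depth(values):
    """Count the parallel steps a pairwise-reduction sum would take.

    Each step combines disjoint pairs simultaneously, halving the
    number of operands; the step count is the depth of the computation.
    """
    steps = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return steps

# 1024 operands collapse in ceil(log2(1024)) = 10 parallel steps
print(parallel_sum_depth(list(range(1024))))  # -> 10
```

A system is "deep" in the paper's sense when no such shortcut exists and the number of parallel steps needed to simulate it cannot be reduced much below its actual history length.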
Are there new models of computation? Reply to Wegner and Eberbach
Wegner and Eberbach [Weg04b] have argued that there are fundamental limitations
to Turing Machines as a foundation of computability and that these can be overcome
by so-called superTuring models such as interaction machines, the π-calculus and the
$-calculus. In this paper we contest Wegner and Eberbach's claims.
Strictly contractive quantum channels and physically realizable quantum computers
We study the robustness of quantum computers under the influence of errors
modelled by strictly contractive channels. A channel $T$ is defined to be
strictly contractive if, for any pair of density operators $\rho, \sigma$ in its
domain, $\| T\rho - T\sigma \|_1 \le k \| \rho - \sigma \|_1$ for some $k < 1$
(here $\| \cdot \|_1$ denotes the trace norm). In other words, strictly
contractive channels render the states of the computer less distinguishable in
the sense of quantum detection theory. Starting from the premise that all
experimental procedures can be carried out with finite precision, we argue that
there exists a physically meaningful connection between strictly contractive
channels and errors in physically realizable quantum computers. We show that,
in the absence of error correction, sensitivity of quantum memories and
computers to strictly contractive errors grows exponentially with storage time
and computation time, respectively, and depends only on the contractivity constant $k$ and the
measurement precision. We prove that strict contractivity rules out the
possibility of perfect error correction, and give an argument that approximate
error correction, which covers previous work on fault-tolerant quantum
computation as a special case, is possible.
Comment: 14 pages; revtex, amsfonts, amssymb; made some changes (recommended by Phys. Rev. A), updated the references