Multiple Particle Interference and Quantum Error Correction
The concept of multiple particle interference is discussed, using insights
provided by the classical theory of error correcting codes. This leads to a
discussion of error correction in a quantum communication channel or a quantum
computer. Methods of error correction in the quantum regime are presented, and
their limitations assessed. A quantum channel can recover from arbitrary
decoherence of x qubits if K bits of quantum information are encoded using n
quantum bits, where K/n can be greater than 1-2 H(2x/n), but must be less than
1 - 2 H(x/n). This implies exponential reduction of decoherence with only a
polynomial increase in the computing resources required. Therefore quantum
computation can be made free of errors in the presence of physically realistic
levels of decoherence. The methods also allow isolation of quantum
communication from noise and eavesdropping (quantum privacy amplification).
Comment: Submitted to Proc. Roy. Soc. Lond. A. in November 1995, accepted May
1996. 39 pages, 6 figures. This is now the final version. The changes are
some added references, changed final figure, and a more precise use of the
word `decoherence'. I would like to propose the word `defection' for a
general unknown error of a single qubit (rotation and/or entanglement). It is
useful because it captures the nature of the error process, and has a verb
form `to defect'. Random unitary changes (rotations) of a qubit are caused by
defects in the quantum computer; to entangle randomly with the environment is
to form a treacherous alliance with an enemy of successful quantum computation.
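The rate bounds quoted in the abstract can be evaluated numerically. Below is a minimal sketch (function names are my own, not the paper's) that computes the achievable lower bound 1 - 2H(2x/n) and the upper bound 1 - 2H(x/n) on the rate K/n, using the binary entropy function H.

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Binary entropy H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate_bounds(x: int, n: int) -> tuple[float, float]:
    """Bounds on the rate K/n for recovering from arbitrary
    decoherence of x qubits out of n (per the abstract)."""
    lower = 1 - 2 * binary_entropy(2 * x / n)  # K/n can exceed this
    upper = 1 - 2 * binary_entropy(x / n)      # K/n must stay below this
    return lower, upper

# Example: 1% of the qubits decohere
lo, hi = rate_bounds(1, 100)
```

For x/n = 0.01 the achievable window is roughly 0.72 to 0.84 encoded qubits per physical qubit, which illustrates the abstract's point that the overhead is modest.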
An evolutionary model with Turing machines
The development of a large non-coding fraction in eukaryotic DNA and the
phenomenon of the code-bloat in the field of evolutionary computations show a
striking similarity. This seems to suggest that (in the presence of mechanisms
of code growth) the evolution of a complex code can't be attained without
maintaining a large inactive fraction. To test this hypothesis we performed
computer simulations of an evolutionary toy model for Turing machines, studying
the relations among fitness and coding/non-coding ratio while varying mutation
and code growth rates. The results suggest that, in our model, having a large
reservoir of non-coding states constitutes a great (long term) evolutionary
advantage.
Comment: 16 pages, 7 figures
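The hypothesis can be illustrated with a much cruder toy than the paper's Turing-machine model: if only a fixed coding prefix of the genome determines fitness and point mutations land uniformly on the whole genome, a larger non-coding reservoir lowers the chance that any given mutation disrupts the coding part. The names and setup below are invented for illustration, not taken from the paper.

```python
import random

def mutation_hits_coding(genome_len: int, coding_len: int,
                         trials: int = 100_000, seed: int = 0) -> float:
    """Fraction of uniformly placed point mutations that land in the
    coding prefix (a toy stand-in for deleterious mutations)."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(genome_len) < coding_len for _ in range(trials))
    return hits / trials

# Same coding length, growing non-coding reservoir:
small = mutation_hits_coding(genome_len=120, coding_len=100)
large = mutation_hits_coding(genome_len=1000, coding_len=100)
```

With a small reservoir roughly 83% of mutations hit coding states; with a large one only about 10% do, which is the buffering effect the abstract's simulations probe in a far richer setting.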
Two-Bit Messages are Sufficient to Implement Atomic Read/Write Registers in Crash-prone Systems
Atomic registers are certainly the most basic objects of computing science.
Their implementation on top of an n-process asynchronous message-passing system
has received a lot of attention. It has been shown that t < n/2
(where t is the maximal number of processes that may crash) is a necessary and
sufficient requirement to build an atomic register on top of a crash-prone
asynchronous message-passing system. Considering such a context, this paper
presents an algorithm which implements a single-writer multi-reader atomic
register with four message types only, and where no message needs to carry
control information in addition to its type. Hence, two bits are sufficient to
capture all the control information carried by all the implementation messages.
Moreover, the messages of two types need to carry a data value while the
messages of the two other types carry no value at all. As far as we know, this
algorithm is the first with such an optimality property on the size of control
information carried by messages. It is also particularly efficient from a time
complexity point of view.
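The two-bit claim is easy to make concrete: with four message types, the type field fits in two bits, and per the abstract only two of the four types carry a data value. The type names and encoding below are placeholders of my own, not the ones used in the paper.

```python
from enum import IntEnum
from typing import Optional

class MsgType(IntEnum):
    """Four message types -> a 2-bit type field suffices."""
    WRITE      = 0b00  # carries a data value (placeholder name)
    WRITE_ACK  = 0b01  # no value
    READ       = 0b10  # no value
    READ_REPLY = 0b11  # carries a data value

CARRIES_VALUE = {MsgType.WRITE, MsgType.READ_REPLY}

def encode(msg_type: MsgType, value: Optional[int] = None) -> bytes:
    """Pack the type (plus a value only when the type requires one);
    no other control information travels with the message."""
    if (value is not None) != (msg_type in CARRIES_VALUE):
        raise ValueError("value presence must match message type")
    payload = b"" if value is None else value.to_bytes(8, "big")
    return bytes([msg_type]) + payload
```

The point mirrored here is that the type itself is the only control information: every implementation message is fully described by two bits plus, for two of the types, the data value.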
Turing's three philosophical lessons and the philosophy of information
In this article, I outline the three main philosophical lessons that we may learn from Turing's work, and how they lead to a new philosophy of information. After a brief introduction, I discuss his work on the method of levels of abstraction (LoA), and his insistence that questions could be meaningfully asked only by specifying the correct LoA. I then look at his second lesson, about the sort of philosophical questions that seem to be most pressing today. Finally, I focus on the third lesson, concerning the new philosophical anthropology that owes so much to Turing's work. I then show how the lessons are learned by the philosophy of information. In the conclusion, I draw a general synthesis of the points made, in view of the development of the philosophy of information itself as a continuation of Turing's work. This journal is © 2012 The Royal Society.
Diffusion-induced spontaneous pattern formation on gelation surfaces
Although pattern formation on polymer gels has been attributed to the
mechanical instability caused by the volume phase transition, we found a
macroscopic surface pattern formation that is not caused by mechanical
instability. It develops on gelation surfaces, and we consider that
reaction-diffusion dynamics mainly induces a surface instability during
polymerization. Random and straight stripe patterns were observed, depending on
gelation conditions. We found the scaling relation between the characteristic
wavelength and the gelation time. This scaling is consistent with the
reaction-diffusion dynamics and would be a first step toward revealing the
dynamics of gelation pattern formation.
Comment: 7 pages, 4 figures
Formation of regular spatial patterns in ratio-dependent predator-prey model driven by spatial colored-noise
Results are reported concerning the formation of spatial patterns in the
two-species ratio-dependent predator-prey model driven by spatial
colored-noise. The results show that there is a critical value with respect to
the intensity of spatial noise for this system when the parameters are in the
Turing space, above which the regular spatial patterns appear in two
dimensions, but under which there are not regular spatial patterns produced. In
particular, we investigate in two-dimensional space the formation of regular
spatial patterns with the spatial noise added in the side and the center of the
simulation domain, respectively.Comment: 4 pages and 3 figure
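One common way to generate spatially colored noise of the kind described is to filter white noise with a Gaussian kernel, which imposes a finite spatial correlation length. The sketch below does this in Fourier space with NumPy; the correlation length and intensity are illustrative parameters of mine, not values from the paper.

```python
import numpy as np

def spatial_colored_noise(shape, corr_len=4.0, intensity=1.0, seed=0):
    """White noise smoothed by a Gaussian filter of width corr_len
    (in grid units), rescaled to the requested noise intensity."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    # Gaussian low-pass filter applied in Fourier space
    ky = np.fft.fftfreq(shape[0])
    kx = np.fft.fftfreq(shape[1])
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = np.exp(-2 * (np.pi * corr_len) ** 2 * k2)
    colored = np.fft.ifft2(np.fft.fft2(white) * filt).real
    return intensity * colored / colored.std()

eta = spatial_colored_noise((128, 128), corr_len=4.0, intensity=0.1)
```

Adding such a field to the reaction terms of the predator-prey equations at each step gives a noise source whose intensity can be swept to look for the critical value the abstract describes.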
Differentiation and Replication of Spots in a Reaction Diffusion System with Many Chemicals
The replication and differentiation of spots in reaction diffusion equations
are studied by extending the Gray-Scott model with self-replicating spots to
include many degrees of freedom needed to model systems with many chemicals. By
examining many possible reaction networks, the behavior of this model is
categorized into three types: replication of homogeneous fixed spots,
replication of oscillatory spots, and differentiation from `multipotent
spots'. These multipotent spots either replicate or differentiate into other
types of spots with different fixed-point dynamics, and as a result, an
inhomogeneous pattern of spots is formed. This differentiation process of spots
is analyzed in terms of the loss of chemical diversity and decrease of the
local Kolmogorov-Sinai entropy. The relevance of the results to developmental
cell biology and stem cells is also discussed.
Comment: 8 pages, 12 figures. Submitted to EP
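For readers unfamiliar with the base model being extended, a minimal two-chemical Gray-Scott update looks like the following. The feed/kill parameters are standard spot-forming values from the general literature, not taken from this paper, and the grid setup is my own illustration.

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott model:
       dU/dt = Du lap(U) - U V^2 + F (1 - U)
       dV/dt = Dv lap(V) + U V^2 - (F + k) V
    using a 5-point Laplacian with periodic boundaries."""
    def lap(Z):
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    UVV = U * V * V
    U_new = U + dt * (Du * lap(U) - UVV + F * (1 - U))
    V_new = V + dt * (Dv * lap(V) + UVV - (F + k) * V)
    return U_new, V_new

# Seed a small square of V in a field of U = 1 and iterate.
n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
V[28:36, 28:36] = 0.5
for _ in range(100):
    U, V = gray_scott_step(U, V)
```

The paper's extension replaces the single pair (U, V) with many chemical species and a reaction network among them; the diffusion and explicit update scheme carry over unchanged.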
Computation in Physical Systems: A Normative Mapping Account
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact to the matter, and I advance the 'anti-realist' conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a 'conventional' rather than a 'natural' kind
AIML and sequence-to-sequence models to build artificial intelligence chatbots: insights from a comparative analysis
A chatbot is software that can autonomously communicate with a human being through text, and owing to its usefulness, an increasing number of businesses are implementing such tools in order to provide timely communication to their clients. While past literature has focused on implementing innovative chatbots and on evaluating such tools, few studies have critically compared such conversational systems. In order to address this gap, this study critically compares the Artificial Intelligence Mark-up Language (AIML) and Sequence-to-Sequence models for building chatbots. In this endeavor, two chatbots were developed, one for each model, and were evaluated using a mixture of glass-box and black-box evaluation based on 3 metrics, namely user satisfaction, the information retrieval rate, and the task completion rate of each chatbot. Results showed that the AIML chatbot achieved better user satisfaction and task completion rate, while the Sequence-to-Sequence model had a better information retrieval rate.
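As an illustration of the AIML side of the comparison, a minimal AIML category maps a normalized user pattern to a canned reply. This toy category is my own example, not one from the chatbots evaluated in the study:

```xml
<aiml version="2.0">
  <category>
    <!-- PATTERN is matched against the normalized user input -->
    <pattern>WHAT IS A CHATBOT</pattern>
    <template>A chatbot is software that converses with a human in natural language.</template>
  </category>
</aiml>
```

A Sequence-to-Sequence chatbot, by contrast, learns its responses from dialogue data rather than from hand-written rules of this kind, which is the trade-off the study's metrics probe.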
The finite tiling problem is undecidable in the hyperbolic plane
In this paper, we consider the finite tiling problem which was proved
undecidable in the Euclidean plane by Jarkko Kari in 1994. Here, we prove that
the same problem for the hyperbolic plane is also undecidable.
- …