A Toom rule that increases the thickness of sets
Toom's north-east-self voting cellular automaton rule R is known to suppress
small minorities. A variant, which we call R^+, is also known to turn an
arbitrary initial configuration into a homogeneous one (without changing the
configurations that were homogeneous to start with). Here we show that R^+
always increases a certain property of sets called thickness. This result is
intended as a step towards a proof of the fast convergence towards consensus
under R^+. The latter is observable experimentally, even in the presence of
some noise.
Comment: 16 pages, 8 figures
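Rule R itself is simple enough to simulate directly. The sketch below is our own minimal Python illustration (function names and the periodic boundary are our choices; the variant R^+ and the thickness property are not modelled): each cell takes the majority vote of itself, its northern neighbour, and its eastern neighbour, which erodes any finite island of the minority state.

```python
import random

def toom_step(grid):
    """One synchronous update of Toom's north-east-self majority rule:
    each cell becomes the majority of itself, its northern neighbour,
    and its eastern neighbour (periodic boundaries for simplicity)."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            votes = grid[i][j] + grid[(i - 1) % n][j] + grid[i][(j + 1) % n]
            new[i][j] = 1 if votes >= 2 else 0
    return new

def run(n=32, p=0.2, steps=100, seed=0):
    """Start from i.i.d. Bernoulli(p) noise; a small minority of 1s
    is typically erased after a number of sweeps."""
    random.seed(seed)
    grid = [[1 if random.random() < p else 0 for _ in range(n)]
            for _ in range(n)]
    for _ in range(steps):
        grid = toom_step(grid)
    return sum(map(sum, grid))
```

For instance, a 3x3 island of 1s in a sea of 0s is eaten away from its north-east corner and vanishes after a handful of steps, while homogeneous configurations are fixed points.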
Density classification on infinite lattices and trees
Consider an infinite graph with nodes initially labeled by independent
Bernoulli random variables of parameter p. We address the density
classification problem, that is, we want to design a (probabilistic or
deterministic) cellular automaton or a finite-range interacting particle system
that evolves on this graph and decides whether p is smaller or larger than 1/2.
Precisely, the trajectories should converge to the uniform configuration with
only 0's if p < 1/2, and with only 1's if p > 1/2. We present solutions to that
problem on the d-dimensional lattice, for any d > 1, and on the regular
infinite trees. For Z, we propose some candidates that we back up with
numerical simulations.
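For one-dimensional experiments of this kind, a classical candidate rule (not necessarily one of the paper's candidates) is the Gács-Kurdyumov-Levin (GKL) automaton: a 0-cell takes the majority of itself and its neighbours at distances 1 and 3 to its left, a 1-cell looks to its right. A minimal sketch with our own function names:

```python
import random

def gkl_step(s):
    """One synchronous GKL step on a ring: a 0-cell takes the majority
    of itself and the cells at distances 1 and 3 to its left; a 1-cell
    uses distances 1 and 3 to its right."""
    n = len(s)
    out = [0] * n
    for i in range(n):
        if s[i] == 0:
            trio = s[i] + s[(i - 1) % n] + s[(i - 3) % n]
        else:
            trio = s[i] + s[(i + 1) % n] + s[(i + 3) % n]
        out[i] = 1 if trio >= 2 else 0
    return out

def classify(p, n=149, steps=300, seed=1):
    """Iterate GKL on an i.i.d. Bernoulli(p) ring; returning 0 means
    the rule voted 'density < 1/2', returning n means '> 1/2'."""
    random.seed(seed)
    s = [1 if random.random() < p else 0 for _ in range(n)]
    for _ in range(steps):
        s = gkl_step(s)
    return sum(s)
```

Isolated minority cells die in a single step (an isolated 1 sees two 0s to its right), which is the mechanism behind the rule's good, though imperfect, classification performance on finite rings.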
Mean-field critical behaviour and ergodicity break in a nonequilibrium one-dimensional RSOS growth model
We investigate the nonequilibrium roughening transition of a one-dimensional
restricted solid-on-solid model by directly sampling the stationary probability
density of a suitable order parameter as the surface adsorption rate varies.
The shapes of the probability density histograms suggest a typical
Ginzburg-Landau scenario for the phase transition of the model, and estimates
of the "magnetic" exponent seem to confirm its mean-field critical behaviour.
We also found that the flipping times between the metastable phases of the
model scale exponentially with the system size, signaling the breaking of
ergodicity in the thermodynamic limit. Incidentally, we discovered that a
closely related model not considered before also displays a phase transition
with the same critical behaviour as the original model. Our results support the
usefulness of off-critical histogram techniques in the investigation of
nonequilibrium phase transitions. We also briefly discuss in an appendix a good
and simple pseudo-random number generator used in our simulations.
Comment: LaTeX2e, 15 pages (large fonts and spacings), 5 figures. Accepted for
publication in the Int. J. Mod. Phys.
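To illustrate the kind of dynamics involved, here is a generic single-step RSOS sketch of our own; the deposition probability p, the hard wall at h = 0, and the wall-occupation order parameter are simplifying assumptions for illustration, not the paper's exact rates or order parameter:

```python
import random

def rsos_sweep(h, p, rng):
    """One Monte Carlo sweep of a 1D RSOS interface with a hard wall at
    h = 0: each attempt deposits (h -> h+1) with probability p and
    otherwise evaporates (h -> h-1), rejecting any move that violates
    the RSOS constraint |h[i] - h[i+/-1]| <= 1 or digs below the wall."""
    n = len(h)
    for _ in range(n):
        i = rng.randrange(n)
        dh = 1 if rng.random() < p else -1
        new = h[i] + dh
        if new < 0:
            continue
        if abs(new - h[(i - 1) % n]) <= 1 and abs(new - h[(i + 1) % n]) <= 1:
            h[i] = new

def order_parameter(h):
    """Fraction of sites pinned at the wall: finite in the smooth
    (pinned) phase, vanishing once the interface detaches and grows."""
    return sum(1 for x in h if x == 0) / len(h)
```

Sampling `order_parameter` in the stationary state as p varies, and histogramming it, is the off-critical histogram idea the abstract refers to.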
Prediction and Generation of Binary Markov Processes: Can a Finite-State Fox Catch a Markov Mouse?
Understanding the generative mechanism of a natural system is a vital
component of the scientific method. Here, we investigate one of the fundamental
steps toward this goal by presenting the minimal generator of an arbitrary
binary Markov process. This is a class of processes whose predictive model is
well known. Surprisingly, the generative model requires three distinct
topologies for different regions of parameter space. We show that a previously
proposed generator for a particular set of binary Markov processes is, in fact,
not minimal. Our results shed the first quantitative light on the relative
(minimal) costs of prediction and generation. We find, for instance, that the
difference between prediction and generation is maximized when the process is
approximately independent and identically distributed.
Comment: 12 pages, 12 figures;
http://csc.ucdavis.edu/~cmg/compmech/pubs/gmc.ht
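The parameter space in question can be set up concretely. Below is a minimal sketch (function names ours; the minimal generative topologies themselves are beyond this sketch) of a binary Markov process with flip probabilities P(1|0) = p and P(0|1) = q, together with its entropy rate; the process is i.i.d. exactly when p + q = 1.

```python
import math
import random

def h(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def entropy_rate(p, q):
    """Entropy rate of the binary Markov chain with P(1|0)=p, P(0|1)=q:
    each state's transition entropy weighted by the stationary
    distribution pi = (q, p) / (p + q)."""
    pi0 = q / (p + q)
    return pi0 * h(p) + (1 - pi0) * h(q)

def sample(p, q, n, seed=0):
    """Generate n symbols of the chain, starting from state 0."""
    rng = random.Random(seed)
    x, out = 0, []
    for _ in range(n):
        flip = rng.random() < (p if x == 0 else q)
        x = 1 - x if flip else x
        out.append(x)
    return out
```

On the i.i.d. line p + q = 1 the stationary distribution is (q, p) and the entropy rate collapses to the single-symbol entropy h(p).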
Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory
Decision theory formally solves the problem of rational agents in uncertain
worlds if the true environmental probability distribution is known.
Solomonoff's theory of universal induction formally solves the problem of
sequence prediction for an unknown distribution. We unify both theories and
give strong arguments that the resulting universal AIXI model behaves optimally
in any computable environment. The major drawback of the AIXI model is that it
is uncomputable. To overcome this problem, we construct a modified algorithm
AIXI^tl, which is still superior to any other time-t and space-l bounded agent.
The computation time of AIXI^tl is of the order t x 2^l.
Comment: 8 two-column pages, latex2e, 1 figure, submitted to ijca
MDL Convergence Speed for Bernoulli Sequences
The Minimum Description Length principle for online sequence
estimation/prediction in a proper learning setup is studied. If the underlying
model class is discrete, then the total expected square loss is a particularly
interesting performance measure: (a) this quantity is finitely bounded,
implying convergence with probability one, and (b) it additionally specifies
the convergence speed. For MDL, in general one can only have loss bounds which
are finite but exponentially larger than those for Bayes mixtures. We show that
this is even the case if the model class contains only Bernoulli distributions.
We derive a new upper bound on the prediction error for countable Bernoulli
classes. This implies a small bound (comparable to the one for Bayes mixtures)
for certain important model classes. We discuss the application to Machine
Learning tasks such as classification and hypothesis testing, and
generalization to countable classes of i.i.d. models.
Comment: 28 pages
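A two-part MDL estimator over a discrete Bernoulli class can be sketched as follows. This is our own simplified illustration: a finite parameter grid avoiding 0 and 1, a uniform code for the parameter, and squared error against the true parameter as a stand-in for the paper's expected square loss.

```python
import math

def mdl_predict(xs, thetas):
    """Two-part MDL over a finite Bernoulli class: pick the parameter
    minimising -log P_theta(xs) + log|thetas|. The second term is a
    uniform code for theta and is constant, so it drops out of the
    argmin; the chosen theta is the predicted P(next bit = 1).
    thetas must avoid 0 and 1 so the log-likelihood is finite."""
    n1 = sum(xs)
    n0 = len(xs) - n1
    def nll(t):
        return -(n1 * math.log(t) + n0 * math.log(1 - t))
    return min(thetas, key=nll)

def cumulative_sq_loss(bits, theta_true, thetas):
    """Sum of squared prediction errors along the sequence; the paper
    studies this quantity in expectation."""
    loss = 0.0
    for t in range(len(bits)):
        pred = mdl_predict(bits[:t], thetas)
        loss += (pred - theta_true) ** 2
    return loss
```

On an all-ones sequence with theta_true in the grid, the estimator locks onto theta_true after the first symbol, so the cumulative loss stays bounded as the sequence grows; it is this kind of boundedness the abstract's loss bounds quantify.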
Families of Small Regular Graphs of Girth 5
In this paper we obtain regular graphs of girth 5 with fewer vertices than
previously known ones, for suitable degrees and for any prime, by performing
operations of reduction and amalgam on the Levi graph of an elliptic semiplane.
We also obtain a 13-regular graph of girth 5 on 236 vertices using the same
technique.
Computability of the Radon-Nikodym derivative
We study the computational content of the Radon-Nikodym theorem from measure
theory in the framework of the representation approach to computable analysis.
We define computable measurable spaces and canonical representations of the
measures and the integrable functions on such spaces. For functions f,g on
represented sets, f is W-reducible to g if f can be computed by applying the
function g at most once. Let RN be the Radon-Nikodym operator on the space
under consideration and let EC be the non-computable operator mapping every
enumeration of a set of natural numbers to its characteristic function. We
prove that for every computable measurable space, RN is W-reducible to EC, and
we construct a computable measurable space for which EC is W-reducible to RN.
Entropy and Quantum Kolmogorov Complexity: A Quantum Brudno's Theorem
In classical information theory, entropy rate and Kolmogorov complexity per
symbol are related by a theorem of Brudno. In this paper, we prove a quantum
version of this theorem, connecting the von Neumann entropy rate and two
notions of quantum Kolmogorov complexity, both based on the shortest qubit
descriptions of qubit strings that, run by a universal quantum Turing machine,
reproduce them as outputs.
Comment: 26 pages, no figures. Reference to publication added: published in
the Communications in Mathematical Physics
(http://www.springerlink.com/content/1432-0916/