From Knowledge, Knowability and the Search for Objective Randomness to a New Vision of Complexity
Herein we consider various concepts of entropy as measures of the complexity
of phenomena and in so doing encounter a fundamental problem in physics that
affects how we understand the nature of reality. In essence, the difficulty
lies in how physical theory accounts for randomness, irreversibility, and
unpredictability, and these issues in turn undermine our certainty about what
we can and cannot know about complex phenomena in general. The sources of
complexity examined herein appear to be channels for
the amplification of naturally occurring randomness in the physical world. Our
analysis suggests that when the conditions for the renormalization group apply,
this spontaneous randomness, which is not a reflection of our limited
knowledge but a genuine property of nature, does not realize the conventional
thermodynamic state, and a new condition, intermediate between the dynamic and
the thermodynamic state, emerges. We argue that with this vision of complexity,
life, which from the standpoint of ordinary statistical mechanics seems
foreign to physics,
becomes a natural consequence of dynamical processes.
Comment: Philosophica
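As a concrete anchor for the entropy-as-complexity discussion in this abstract, here is a minimal sketch (ours, not from the paper; all names are illustrative) computing Shannon entropy, the most familiar of the entropy concepts surveyed:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (bits per symbol) of the empirical symbol distribution."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("ababababab"))  # 1.0: two symbols, maximally unpredictable
print(shannon_entropy("aaaaaaaaab"))  # ~0.47: heavily biased, hence more predictable
```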
On Universal Prediction and Bayesian Confirmation
The Bayesian framework is a well-studied and successful framework for
inductive reasoning, which includes hypothesis testing and confirmation,
parameter estimation, sequence prediction, classification, and regression. But
standard statistical guidelines for choosing the model class and prior are not
always available or fail, in particular in complex situations. Solomonoff
completed the Bayesian framework by providing a rigorous, unique, formal, and
universal choice for the model class and the prior. We discuss in breadth how
and in which sense universal (non-i.i.d.) sequence prediction solves various
(philosophical) problems of traditional Bayesian sequence prediction. We show
that Solomonoff's model possesses many desirable properties: it satisfies
strong total and weak instantaneous bounds; in contrast to most classical
continuous prior densities, it has no zero p(oste)rior problem, i.e. it can
confirm universal hypotheses; it is reparametrization and regrouping
invariant; and it avoids the old-evidence and updating problems. It even
performs well (actually better) in non-computable environments.
Comment: 24 pages
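To make the idea of a universal prior concrete, here is a toy sketch (ours, not from the paper): Solomonoff's mixture weights each program p by 2^-length(p), but the true mixture over all programs is incomputable, so this illustration substitutes repeating binary patterns for "programs" while keeping the 2^-length prior:

```python
from itertools import product

def hypotheses(max_len=8):
    """All repeating binary patterns up to max_len, each weighted 2^-(its length)."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits), 2.0 ** -length

def predict_next(observed, max_len=8):
    """Posterior-weighted probability that the next bit is '1'."""
    total = p_one = 0.0
    for pattern, weight in hypotheses(max_len):
        stream = pattern * (len(observed) // len(pattern) + 2)
        if stream.startswith(observed):  # hypothesis consistent with the data
            total += weight
            if stream[len(observed)] == "1":
                p_one += weight
    return p_one / total if total else 0.5

print(predict_next("010101"))  # ~0.04: the short pattern "01" dominates and predicts '0'
```

The design mirrors the Bayesian mixture: shorter (simpler) consistent hypotheses carry exponentially more weight, which is why the length-2 pattern dominates the prediction.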
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows
what intelligence is. The problem is especially acute when we need to consider
artificial systems that are significantly different from humans. In this paper
we approach this problem in the following way: We take a number of well known
informal definitions of human intelligence that have been given by experts, and
extract their essential features. These are then mathematically formalised to
produce a general measure of intelligence for arbitrary machines. We believe
that this equation formally captures the concept of machine intelligence in the
broadest reasonable sense. We then show how this formal definition is related
to the theory of universal optimal learning agents. Finally, we survey the many
other tests and definitions of intelligence that have been proposed for
machines.
Comment: 50 gentle pages
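For reference, the equation the abstract alludes to, as we recall it from Legg and Hutter's published paper, weights an agent's expected performance across all computable environments by their complexity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where \pi is the agent, E the set of computable reward-bounded environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} the agent's expected cumulative reward in \mu.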
Why Philosophers Should Care About Computational Complexity
One might think that, once we know something is computable, how efficiently
it can be computed is a practical question with little further philosophical
importance. In this essay, I offer a detailed case that one would be wrong. In
particular, I argue that computational complexity theory---the field that
studies the resources (such as time, space, and randomness) needed to solve
computational problems---leads to new perspectives on the nature of
mathematical knowledge, the strong AI debate, computationalism, the problem of
logical omniscience, Hume's problem of induction, Goodman's grue riddle, the
foundations of quantum mechanics, economic rationality, closed timelike curves,
and several other topics of philosophical interest. I end by discussing aspects
of complexity theory itself that could benefit from philosophical analysis.
Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and
beyond," MIT Press, 2012. Some minor clarifications and corrections; new
references added
Physical limits of inference
I show that physical devices that perform observation, prediction, or
recollection share an underlying mathematical structure. I call devices with
that structure "inference devices". I present a set of existence and
impossibility results concerning inference devices. These results hold
independent of the precise physical laws governing our universe. In a limited
sense, the impossibility results establish that Laplace was wrong to claim that
even in a classical, non-chaotic universe the future can be unerringly
predicted, given sufficient knowledge of the present. Alternatively, these
impossibility results can be viewed as a non-quantum mechanical "uncertainty
principle". Next I explore the close connections between the mathematics of
inference devices and of Turing machines. In particular, the impossibility
results for inference devices are similar to the halting theorem for TMs.
Furthermore, one can define an analog of universal TMs (UTMs) for inference
devices. I call those analogs "strong inference devices". I use strong
inference devices to define the "inference complexity" of an inference task,
which is the analog of the Kolmogorov complexity of computing a string.
However, no universe can contain more than one strong inference device. So
whereas the
Kolmogorov complexity of a string is arbitrary up to specification of the UTM,
there is no such arbitrariness in the inference complexity of an inference
task. I end by discussing the philosophical implications of these results,
e.g., for whether the universe "is" a computer.
Comment: 43 pages; updated version of the Physica D paper, which originally
appeared in the 2007 CNLS conference on unconventional computation
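The flavor of these impossibility results can be conveyed by a standard diagonalization toy (our illustration, not Wolpert's formal inference-device construction): any predictor embedded in the world it predicts can be defeated by a system that consults it and does the opposite:

```python
# A "predictor" is any function from the world's state to a 0/1 forecast;
# the world below consults that predictor and then negates it, so no
# predictor living inside this world can forecast its next bit correctly.

def world_next_bit(predictor, state):
    """The environment's next bit: always the negation of the forecast."""
    return 1 - predictor(state)

def some_predictor(state):
    return state % 2  # an arbitrary prediction rule; any rule fails equally

state = 7
forecast = some_predictor(state)
actual = world_next_bit(some_predictor, state)
print(forecast, actual, forecast == actual)  # 1 0 False -- the forecast always misses
```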
Computational and Biological Analogies for Understanding Fine-Tuned Parameters in Physics
In this philosophical paper, we explore computational and biological
analogies to address the fine-tuning problem in cosmology. We first clarify
what it means for physical constants or initial conditions to be fine-tuned. We
review important distinctions such as the dimensionless and dimensional
physical constants, and the classification of constants proposed by
Levy-Leblond. Then we explore how two great analogies, computational and
biological, can give new insight into our problem. This paper offers a
preliminary study of these two analogies. Importantly, analogies are both
useful and fundamental cognitive tools, but can also be misused or
misinterpreted. The idea that our universe might be modelled as a computational
entity is analysed, and we discuss the distinction between physical laws and
initial conditions using algorithmic information theory. Smolin introduced the
theory of "Cosmological Natural Selection" with a biological analogy in mind.
We examine an extension of this analogy involving intelligent life. We discuss
whether and how this extension could be legitimated.
Keywords: origin of the universe, fine-tuning, physical constants, initial
conditions, computational universe, biological universe, role of intelligent
life, cosmological natural selection, cosmological artificial selection,
artificial cosmogenesis.
Comment: 25 pages, Foundations of Science, in press
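One way to see the algorithmic-information distinction between laws and initial conditions mentioned in this abstract is to use compressed size as a crude upper-bound proxy for Kolmogorov complexity; a hedged sketch (our illustration, not the paper's method):

```python
import os
import zlib

# A "law-like" string generated by a tiny rule compresses well; generic
# "initial-condition-like" random bits admit no description shorter than
# themselves. Compressed length is only an upper bound on K, but the
# contrast is stark enough to illustrate the distinction.

def proxy_complexity(data: bytes) -> int:
    return len(zlib.compress(data, 9))

law_like = bytes(i % 7 for i in range(10_000))  # produced by a short rule
condition_like = os.urandom(10_000)             # incompressible randomness

print(proxy_complexity(law_like))        # tens of bytes: highly compressible
print(proxy_complexity(condition_like))  # ~10,000 bytes: essentially incompressible
```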