Algorithmic Randomness as Foundation of Inductive Reasoning and Artificial Intelligence
This article is a brief personal account of the past, present, and future of
algorithmic randomness, emphasizing its role in inductive inference and
artificial intelligence. It is written for a general audience interested in
science and philosophy. Intuitively, randomness is a lack of order or
predictability. If randomness is the opposite of determinism, then algorithmic
randomness is the opposite of computability. Besides many other things, these
concepts have been used to quantify Ockham's razor, solve the induction
problem, and define intelligence.
Comment: 9 LaTeX pages
Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results
We apply methods for estimating the algorithmic complexity of sequences to
behavioural sequences from three landmark studies of animal behaviour of
increasing sophistication: foraging communication by ants, flight patterns of
fruit flies, and tactical deception and competition strategies in rodents. In
each case, we demonstrate that approximations of Logical Depth and
Kolmogorov-Chaitin complexity capture and validate previously reported results,
in contrast to other measures such as Shannon entropy, compression, or ad hoc
measures.
Our method is practically useful when dealing with short sequences, such as
those often encountered in cognitive-behavioural research. Our analysis
supports and reveals non-random behaviour (LD and K complexity) in flies even in
the absence of external stimuli, and confirms the "stochastic" behaviour of
transgenic rats when faced with a competitor they cannot defeat by
counter-prediction. The
method constitutes a formal approach for testing hypotheses about the
mechanisms underlying animal behaviour.
Comment: 28 pages, 7 figures and 2 tables
Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory
Decision theory formally solves the problem of rational agents in uncertain
worlds if the true environmental probability distribution is known.
Solomonoff's theory of universal induction formally solves the problem of
sequence prediction for unknown distributions. We unify both theories and give
strong arguments that the resulting universal AIXI model behaves optimally in
any computable environment. The major drawback of the AIXI model is that it is
uncomputable. To overcome this problem, we construct a modified algorithm
AIXI^tl, which is still superior to any other time-t and space-l bounded agent.
The computation time of AIXI^tl is of the order t·2^l.
Comment: 8 two-column pages, latex2e, 1 figure, submitted to ijca
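The unification described above rests on Solomonoff's universal prior; the key quantity can be sketched as follows (notation follows standard presentations of this line of work, not necessarily the paper's exact formulation):

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

Here $U$ is a universal monotone Turing machine, the sum ranges over programs $p$ whose output begins with the string $x$, and $\ell(p)$ is the length of $p$ in bits. AIXI then acts as a decision-theoretic agent that maximizes expected future reward with $M$ substituted for the unknown true environment distribution, which is what makes the model universal but also uncomputable.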
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows
what intelligence is. The problem is especially acute when we need to consider
artificial systems which are significantly different to humans. In this paper
we approach this problem in the following way: We take a number of well known
informal definitions of human intelligence that have been given by experts, and
extract their essential features. These are then mathematically formalised to
produce a general measure of intelligence for arbitrary machines. We believe
that this equation formally captures the concept of machine intelligence in the
broadest reasonable sense. We then show how this formal definition is related
to the theory of universal optimal learning agents. Finally, we survey the many
other tests and definitions of intelligence that have been proposed for
machines.
Comment: 50 pages
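The measure alluded to — expected performance weighted by environmental simplicity — is commonly written as follows (notation assumed from standard presentations of universal intelligence measures, not quoted from the paper):

```latex
\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here $\pi$ is the agent, $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the agent's expected total reward in $\mu$. Simple environments thus dominate the score, which is how the definition encodes an Ockham's-razor bias while still summing over all computable environments.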