The Algorithmic Origins of Life
Although it has been notoriously difficult to pin down precisely what it is
that makes life so distinctive and remarkable, there is general agreement that
its informational aspect is one key property, perhaps the key property. The
unique informational narrative of living systems suggests that life may be
characterized by context-dependent causal influences, and in particular, that
top-down (or downward) causation -- where higher-levels influence and constrain
the dynamics of lower-levels in organizational hierarchies -- may be a major
contributor to the hierarchal structure of living systems. Here we propose that
the origin of life may correspond to a physical transition associated with a
shift in causal structure, where information gains direct and
context-dependent causal efficacy over the matter it is instantiated in. Such a
transition may be akin to more traditional physical transitions (e.g.
thermodynamic phase transitions), with the crucial distinction that determining
which phase (non-life or life) a given system is in requires dynamical
information and therefore can only be inferred by identifying causal
architecture. We discuss some potential novel research directions based on this
hypothesis, including potential measures of such a transition that may be
amenable to laboratory study, and how the proposed mechanism corresponds to the
onset of the unique mode of (algorithmic) information processing characteristic
of living systems. Comment: 13 pages, 1 table
Identifying Nonlinear 1-Step Causal Influences in Presence of Latent Variables
We propose an information-theoretic approach for learning the causal
structure in stochastic dynamical systems with a 1-step functional dependency
in the presence of latent variables. The approach recovers the causal
relations among the observed variables as long as the latent variables evolve
without exogenous noise. We further propose an efficient learning method based
on linear regression for the special sub-case when the dynamics are restricted
to be linear. We validate the performance of our approach via numerical
simulations.
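For the linear sub-case the abstract mentions, the regression-based idea can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the three-variable system, its coefficient matrix, and the edge threshold are all made up for the example. Regressing the state at time t on the state at t-1 recovers the transition matrix, whose nonzero entries indicate 1-step causal influences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear system x_t = A x_{t-1} + noise, with
# x0 -> x1 and x1 -> x2 as the only cross-variable influences.
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.5]])
T, n = 5000, 3
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(n)

# Least-squares estimate of A: regress X_t on X_{t-1}.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T

# Threshold small coefficients; surviving entries suggest 1-step influences.
edges = np.abs(A_hat) > 0.1
print(edges.astype(int))
```

With enough samples the thresholded matrix matches the true influence pattern; in practice the threshold (or a significance test) has to be chosen with care, and latent confounders can create spurious edges, which is the difficulty the paper addresses.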
When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents
An agent's actions can be influenced by external factors through the inputs
it receives from the environment, as well as internal factors, such as memories
or intrinsic preferences. The extent to which an agent's actions are "caused
from within", as opposed to being externally driven, should depend on its
sensor capacity as well as environmental demands for memory and
context-dependent behavior. Here, we test this hypothesis using simulated
agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve
to solve a perceptual-categorization task under conditions that vary with
regard to the agents' sensor capacity and task difficulty. Using a novel formalism
developed to identify and quantify the actual causes of occurrences ("what
caused what?") in complex networks, we evaluate the direct causes of the
animats' actions. In addition, we extend this framework to trace the causal
chain ("causes of causes") leading to an animat's actions back in time, and
compare the obtained spatio-temporal causal history across task conditions. We
found that measures quantifying the extent to which an animat's actions are
caused by internal factors (as opposed to being driven by the environment
through its sensors) varied consistently with defining aspects of the task
conditions they evolved to thrive in. Comment: Submitted and accepted to the
Alife 2019 conference. Revised version: edits include adding more references
to relevant work and clarifying minor points in response to reviewers.
The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications
The principal goal of computational mechanics is to define pattern and
structure so that the organization of complex systems can be detected and
quantified. Computational mechanics developed from efforts in the 1970s and
early 1980s to identify strange attractors as the mechanism driving weak fluid
turbulence via the method of reconstructing attractor geometry from measurement
time series and in the mid-1980s to estimate equations of motion directly from
complex time series. In providing a mathematical and operational definition of
structure it addressed weaknesses of these early approaches to discovering
patterns in natural systems.
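The attractor-reconstruction step mentioned above is typically done by delay embedding a scalar measurement series. A minimal sketch of the idea, using a hypothetical noiseless sine wave in place of real turbulence data (the dimension and lag here are illustrative choices, not values from the field's early papers):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay embedding: map a scalar series into R^dim using
    lagged coordinates (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative data: a pure oscillation. Its 2-D delay embedding
# traces a closed loop, the reconstructed "attractor" of the dynamics.
t = np.linspace(0.0, 20 * np.pi, 2000)
x = np.sin(t)
emb = delay_embed(x, dim=2, tau=25)
print(emb.shape)  # -> (1975, 2): one point per admissible time index
```

For chaotic data the same construction, with suitably chosen dimension and lag, yields a geometric object whose invariants (e.g. fractal dimension) can be estimated from measurements alone; computational mechanics grew out of asking what structural and computational properties such reconstructions reveal.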
Since then, computational mechanics has led to a range of results from
theoretical physics and nonlinear mathematics to diverse applications---from
closed-form analysis of Markov and non-Markov stochastic processes that are
ergodic or nonergodic and their measures of information and intrinsic
computation to complex materials and deterministic chaos and intelligence in
Maxwellian demons to quantum compression of classical processes and the
evolution of computation and language.
This brief review clarifies several misunderstandings and addresses concerns
recently raised regarding early works in the field (1980s). We show that
misguided evaluations of the contributions of computational mechanics are
groundless and stem from a lack of familiarity with its basic goals and from a
failure to consider its historical context. For all practical purposes, its
modern methods and results largely supersede the early works. This not only
renders recent criticism moot and shows the solid ground on which computational
mechanics stands but, most importantly, shows the significant progress achieved
over three decades and points to the many intriguing and outstanding challenges
in understanding the computational nature of complex dynamic systems. Comment:
11 pages, 123 citations;
http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht