What caused what? A quantitative account of actual causation using dynamical causal networks
Actual causation is concerned with the question "what caused what?" Consider
a transition between two states within a system of interacting elements, such
as an artificial neural network, or a biological brain circuit. Which
combination of synapses caused the neuron to fire? Which image features caused
the classifier to misinterpret the picture? Even detailed knowledge of the
system's causal network (its elements, their states, connectivity, and
dynamics) does not automatically provide a straightforward answer to the "what caused
what?" question. Counterfactual accounts of actual causation based on graphical
models, paired with system interventions, have demonstrated initial success in
addressing specific problem cases in line with intuitive causal judgments.
Here, we start from a set of basic requirements for causation (realization,
composition, information, integration, and exclusion) and develop a rigorous,
quantitative account of actual causation that is generally applicable to
discrete dynamical systems. We present a formal framework to evaluate these
causal requirements that is based on system interventions and partitions, and
considers all counterfactuals of a state transition. This framework is used to
provide a complete causal account of the transition by identifying and
quantifying the strength of all actual causes and effects linking the two
consecutive system states. Finally, we examine several exemplary cases and
paradoxes of causation and show that they can be illuminated by the proposed
framework for quantifying actual causation.
Comment: 43 pages, 16 figures, supplementary discussion, supplementary methods, supplementary proof
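To make the flavor of such interventionist measures concrete, here is a deliberately minimal toy sketch, not the paper's formalism: it scores candidate causes of a single OR gate's firing by the log-ratio of the firing probability under an intervention fixing the candidate versus fully unconstrained (uniform) interventions. The gate, the candidate sets, and the simplified measure are all illustrative assumptions.

```python
# Toy sketch (not the paper's full formalism): score candidate causes
# of "the OR gate fired" by an interventionist log-ratio, brute-forcing
# all counterfactual input states.
from itertools import product
from math import log2

def or_gate(a, b):
    return int(a or b)

def p_effect(candidate, effect=1):
    """P(gate output == effect | do(candidate)), averaging uniformly
    over the input variables the candidate leaves unconstrained."""
    outcomes = []
    for a, b in product([0, 1], repeat=2):
        assignment = {"a": a, "b": b}
        if all(assignment[k] == v for k, v in candidate.items()):
            outcomes.append(or_gate(a, b) == effect)
    return sum(outcomes) / len(outcomes)

baseline = p_effect({})  # fully unconstrained interventions: 3/4
for candidate in ({"a": 1}, {"b": 0}, {"a": 1, "b": 0}):
    print(candidate, round(log2(p_effect(candidate) / baseline), 3))
# {'a': 1} scores log2(1/0.75) ~ +0.415 bits: fixing a=1 guarantees the
# firing; {'b': 0} scores ~ -0.585 bits: b=0 alone makes firing less
# likely. The paper's full account layers composition, integration via
# partitions, and an exclusion principle on top of such comparisons.
```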
When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents
An agent's actions can be influenced by external factors through the inputs
it receives from the environment, as well as internal factors, such as memories
or intrinsic preferences. The extent to which an agent's actions are "caused
from within", as opposed to being externally driven, should depend on its
sensor capacity as well as environmental demands for memory and
context-dependent behavior. Here, we test this hypothesis using simulated
agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve
to solve a perceptual-categorization task under conditions that vary with
regard to the agents' sensor capacity and task difficulty. Using a novel formalism
developed to identify and quantify the actual causes of occurrences ("what
caused what?") in complex networks, we evaluate the direct causes of the
animats' actions. In addition, we extend this framework to trace the causal
chain ("causes of causes") leading to an animat's actions back in time, and
compare the obtained spatio-temporal causal history across task conditions. We
found that measures quantifying the extent to which an animat's actions are
caused by internal factors (as opposed to being driven by the environment
through its sensors) varied consistently with the defining aspects of the task
conditions under which the animats evolved to thrive.
Comment: Submitted and accepted to the ALIFE 2019 conference. Revised version: edits include adding more references to relevant work and clarifying minor points in response to reviewer comments.
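The backward-tracing step lends itself to a simple graph traversal. Below is a schematic sketch: the per-timestep actual-cause links are assumed as given input data (identifying them is what the formalism above provides), and we walk the chain backwards from a motor action to ask what fraction of the resulting causal history lies in internal nodes rather than sensors. The node names and the internal/external ratio are illustrative, not the paper's exact measures.

```python
# Schematic sketch (data structures assumed; not the paper's exact
# measure): trace the causal chain behind an action backwards in time
# and ask how much of it runs through internal nodes vs sensors.

# direct_causes[t][node] = nodes at t-1 identified as actual causes
direct_causes = {
    3: {"motor": {"hidden1", "sensor1"}},
    2: {"hidden1": {"hidden1", "hidden2"}, "sensor1": set()},
    1: {"hidden1": {"sensor1"}, "hidden2": {"hidden1"}},
}
SENSORS = {"sensor1", "sensor2"}

def causal_history(node, t):
    """All (node, time) occurrences in the causal chain behind node at t."""
    frontier, history = {(node, t)}, set()
    while frontier:
        n, s = frontier.pop()
        for cause in direct_causes.get(s, {}).get(n, set()):
            occurrence = (cause, s - 1)
            if occurrence not in history:
                history.add(occurrence)
                frontier.add(occurrence)
    return history

history = causal_history("motor", 3)
internal = sum(n not in SENSORS for n, _ in history)
print(f"{internal}/{len(history)} cause occurrences are internal")
```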
The Role of Conditional Independence in the Evolution of Intelligent Systems
Systems, regardless of their overall complexity, are typically made from simple
components. While the function of each part is easily understood, higher-order
functions are emergent properties and are notoriously difficult to explain. In
networked systems, both digital and biological, each component receives inputs,
performs a simple computation, and creates an output. When these components
have multiple outputs, we intuitively assume that the outputs are causally
dependent on the inputs but are themselves independent of each other given the
state of their shared input. However, this intuition can be violated for
components with probabilistic logic, as these typically cannot be decomposed
into separate logic gates with one output each. This violation of conditional
independence given the past system state is equivalent to instantaneous
interaction: some of the information shared between the outputs does not come
from the inputs and thus must have been created instantaneously. Here we
compare evolved artificial neural systems with and without instantaneous
interaction across several task environments. We show that systems without
instantaneous interactions evolve faster, to higher final levels of
performance, and require fewer logic components to create a densely connected
cognitive machinery.
Comment: Original abstract submitted to the GECCO 2017 conference, Berlin.
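A small worked example makes the violation concrete; the distribution below is ours, not taken from the paper. Consider a component with two binary outputs that, for a given shared input state, always agree: the joint output distribution cannot be factored into the product of its marginals, so the component cannot be decomposed into two independent one-output gates.

```python
# Illustrative check (example distribution is ours): a two-output
# probabilistic component whose outputs are correlated given the shared
# input violates conditional independence, i.e.
# P(y1, y2 | x) != P(y1 | x) * P(y2 | x).

# P(y1, y2 | x = 0): the two outputs always agree (perfectly correlated)
joint = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

def marginal(joint, axis):
    """Marginal distribution of one output, summing out the other."""
    p = {0: 0.0, 1: 0.0}
    for (y1, y2), pr in joint.items():
        p[(y1, y2)[axis]] += pr
    return p

p1, p2 = marginal(joint, 0), marginal(joint, 1)
for (y1, y2), pr in joint.items():
    print((y1, y2), "joint:", pr, "product:", p1[y1] * p2[y2])
# (0, 0): joint 0.5 vs product 0.25, etc. The mismatch is the
# "instantaneous interaction": information shared between the outputs
# that is not supplied by the input.
```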
Black-boxing and cause-effect power
Reductionism assumes that causation in the physical world occurs at the micro
level, excluding the emergence of macro-level causation. We challenge this
reductionist assumption by employing a principled, well-defined measure of
intrinsic cause-effect power, integrated information (Φ), and showing
that, according to this measure, it is possible for a macro level to "beat" the
micro level. Simple systems were evaluated for Φ across different spatial
and temporal scales by systematically considering all possible black boxes.
These are macro elements that consist of one or more micro elements over one or
more micro updates. Cause-effect power was evaluated based on the inputs and
outputs of the black boxes, ignoring the internal micro elements that support
their input-output function. We show how black-box elements can have more
common inputs and outputs than the corresponding micro elements, revealing the
emergence of high-order mechanisms and joint constraints that are not apparent
at the micro level. As a consequence, a macro, black-box system can have higher
Φ than its micro constituents by having more mechanisms (higher
composition) that are more interconnected (higher integration). We also show
that, for a given micro system, one can identify local maxima of Φ across
several spatiotemporal scales. The framework is demonstrated on a simple
biological system, the Boolean network model of the fission-yeast cell-cycle,
for which we identify stable local maxima during the course of its simulated
biological function. These local maxima correspond to macro levels of
organization at which emergent cause-effect properties of physical systems come
into focus, and provide a natural vantage point for scientific inquiries.
Comment: 45 pages (32 main text, 13 supplementary), 14 figures (9 main text, 5 supplementary)
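Full black-boxing, which hides elements over one or more micro updates, is involved; but its simpler relative, coarse-graining micro states into macro states, already shows how a macro description can gain causal specificity. The sketch below is our illustration of that simpler operation over an assumed noisy micro TPM; it is not the paper's black-box algorithm.

```python
# Simplified sketch: state coarse-graining, a simpler relative of
# black-boxing (which also hides elements across micro time steps).
# The micro TPM and the grouping are illustrative assumptions.
import numpy as np

# Micro system: 2 binary elements, state-by-state TPM (4 x 4);
# rows = current micro state, columns = next micro state.
micro_tpm = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.9],
    [0.0, 0.0, 0.9, 0.1],
])

# Macro mapping: micro states {0, 1} -> "low", {2, 3} -> "high"
groups = [[0, 1], [2, 3]]

def coarse_grain(tpm, groups):
    n = len(groups)
    macro = np.zeros((n, n))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            # sum over target micro states, average over source states
            macro[i, j] = tpm[np.ix_(gi, gj)].sum(axis=1).mean()
    return macro

print(coarse_grain(micro_tpm, groups))
# [[1. 0.]
#  [0. 1.]]  -- the macro dynamics are deterministic even though every
# individual micro transition is noisy: a toy case of the macro level
# "beating" the micro level.
```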
PyPhi: A toolbox for integrated information theory
Integrated information theory provides a mathematical framework to fully
characterize the cause-effect structure of a physical system. Here, we
introduce PyPhi, a Python software package that implements this framework for
causal analysis and unfolds the full cause-effect structure of discrete
dynamical systems of binary elements. The software allows users to easily study
these structures, serves as an up-to-date reference implementation of the
formalisms of integrated information theory, and has been applied in research
on complexity, emergence, and certain biological questions. We first provide an
overview of the main algorithm and demonstrate PyPhi's functionality in the
course of analyzing an example system, and then describe details of the
algorithm's design and implementation.
PyPhi can be installed with Python's package manager via the command 'pip
install pyphi' on Linux and macOS systems equipped with Python 3.4 or higher.
PyPhi is open-source and licensed under the GPLv3; the source code is hosted on
GitHub at https://github.com/wmayner/pyphi . Comprehensive and
continually-updated documentation is available at https://pyphi.readthedocs.io/
. The pyphi-users mailing list can be joined at
https://groups.google.com/forum/#!forum/pyphi-users . A web-based graphical
interface to the software is available at
http://integratedinformationtheory.org/calculate.html .
Comment: 22 pages, 4 figures, 6 pages of appendices. Supporting information "S1 Calculating Phi" can be found in the ancillary file
Only what exists can cause: An intrinsic view of free will
This essay addresses the implications of integrated information theory (IIT)
for free will. IIT is a theory of what consciousness is and what it takes to
have it. According to IIT, the presence of consciousness is accounted for by a
maximum of cause-effect power in the brain. Moreover, the way specific
experiences feel is accounted for by how that cause-effect power is structured.
If IIT is right, we do have free will in the fundamental sense: we have true
alternatives, we make true decisions, and we, not our neurons or atoms, are
the true cause of our willed actions and bear true responsibility for them.
IIT's argument for true free will hinges on the proper understanding of
consciousness as true existence, as captured by its intrinsic powers ontology:
what truly exists, in physical terms, are intrinsic entities, and only what
truly exists can cause.
Comment: 26 pages, 12 figures