41,530 research outputs found
Towards a Synergy-based Approach to Measuring Information Modification
Distributed computation in artificial life and complex systems is often
described in terms of component operations on information: information storage,
transfer and modification. Information modification remains poorly described,
however: to date, the popularly understood examples of glider and particle
collisions in cellular automata have been identified quantitatively only via a
heuristic (separable information) rather than a proper information-theoretic
measure. We outline how a recently introduced axiomatic
framework for measuring information redundancy and synergy, called partial
information decomposition, can be applied to a perspective of distributed
computation in order to quantify component operations on information. Using
this framework, we propose a new measure of information modification that
captures the intuitive understanding of information modification events as
those involving interactions between two or more information sources. We also
consider how the local dynamics of information modification in space and time
could be measured, and suggest a new axiom that redundancy measures would need
to meet in order to make such local measurements. Finally, we evaluate the
potential for existing redundancy measures to meet this localizability axiom.
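To make the synergy-based reading of information modification concrete, consider the canonical XOR example, where the target carries no information about either source alone yet one full bit about the pair. The sketch below (illustrative, not code from the paper) computes the whole-minus-sum "net synergy"; this interaction-information quantity conflates redundancy and synergy, which is exactly the ambiguity that partial information decomposition is designed to resolve:

```python
import numpy as np
from collections import defaultdict

# Joint distribution over (x1, x2, s) for s = XOR(x1, x2) with uniform inputs.
p = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

def mi(p, a, b):
    """Mutual information I(A;B) in bits between the index groups a and b."""
    pa, pb, pab = defaultdict(float), defaultdict(float), defaultdict(float)
    for k, v in p.items():
        ka, kb = tuple(k[i] for i in a), tuple(k[i] for i in b)
        pa[ka] += v
        pb[kb] += v
        pab[(ka, kb)] += v
    return sum(v * np.log2(v / (pa[ka] * pb[kb]))
               for (ka, kb), v in pab.items() if v > 0)

# Whole-minus-sum "net synergy": how much more the sources say jointly than alone.
net_synergy = mi(p, (0, 1), (2,)) - mi(p, (0,), (2,)) - mi(p, (1,), (2,))
print(net_synergy)  # 1.0 bit: the XOR output is a purely synergistic combination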
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Exploiting the theory of state-space models, we derive exact expressions for
the information transfer, as well as the redundant and synergistic transfer,
for coupled Gaussian processes observed at multiple temporal scales. All of the
terms constituting the frameworks known as interaction information
decomposition and partial information decomposition can thus be obtained
analytically, for different time scales, from the parameters of the vector
autoregressive (VAR) model that fits the processes. We first apply the proposed
methodology to benchmark Gaussian systems, showing that this class of systems may
generate patterns of information decomposition characterized by mainly
redundant or synergistic information transfer persisting across multiple time
scales, or even by an alternating prevalence of redundant and synergistic
source interaction depending on the time scale. Then, we apply our method to an
important topic in neuroscience, i.e., the detection of causal interactions in
human epilepsy networks, for which we show the relevance of partial information
decomposition for detecting multiscale information transfer spreading from the
seizure onset zone
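For jointly Gaussian variables with a univariate target, several proposed partial information decompositions reduce to the minimum-mutual-information (MMI) form, in which every atom follows from the three Gaussian mutual informations, each computable in closed form from covariance determinants. The sketch below illustrates this static, single-scale computation on a hypothetical covariance matrix; the multiscale, dynamic version described in the abstract would instead derive the covariances from the fitted state-space/VAR parameters at each time scale:

```python
import numpy as np

def gaussian_mi(cov, a, b):
    """I(A;B) in bits for jointly Gaussian variables, from the covariance matrix."""
    det = lambda idx: np.linalg.det(cov[np.ix_(idx, idx)])
    return 0.5 * np.log2(det(a) * det(b) / det(a + b))

# Example covariance for (x1, x2, y): a target driven by two correlated sources
# (hypothetical numbers, chosen only to be a valid covariance matrix).
cov = np.array([[1.0, 0.2, 0.6],
                [0.2, 1.0, 0.5],
                [0.6, 0.5, 1.0]])

i1, i2 = gaussian_mi(cov, [0], [2]), gaussian_mi(cov, [1], [2])
i12 = gaussian_mi(cov, [0, 1], [2])

# Minimum-mutual-information (MMI) PID: the redundancy is the smaller of the
# two single-source informations, and the remaining atoms follow by bookkeeping.
redundancy = min(i1, i2)
unique1, unique2 = i1 - redundancy, i2 - redundancy
synergy = i12 - max(i1, i2)
print(redundancy, unique1, unique2, synergy)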
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment uniquely, redundantly, or synergistically
together with others. Lastly, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems
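As a concrete illustration of local information dynamics, the sketch below estimates pointwise (local) transfer entropy for discrete series with naive plug-in probabilities; the function and test system are illustrative, and practical analyses would use bias-corrected estimators such as those provided by the JIDT toolkit:

```python
import numpy as np
from collections import Counter

def local_transfer_entropy(source, target, k=1):
    """Pointwise (local) transfer entropy source -> target, target history
    length k and one source lag, using naive plug-in probability estimates."""
    triples, pairs, past_s, past = Counter(), Counter(), Counter(), Counter()
    events = []
    for t in range(k, len(target)):
        tp, sp, tn = tuple(target[t - k:t]), source[t - 1], target[t]
        events.append((tp, sp, tn))
        triples[(tp, sp, tn)] += 1   # counts for p(next | past, source)
        pairs[(tp, tn)] += 1         # counts for p(next | past)
        past_s[(tp, sp)] += 1
        past[tp] += 1
    return np.array([np.log2((triples[(tp, sp, tn)] / past_s[(tp, sp)])
                             / (pairs[(tp, tn)] / past[tp]))
                     for tp, sp, tn in events])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)   # source: random bits
x = np.roll(y, 1)                # target copies the source one step later
lte = local_transfer_entropy(y, x)
print(lte.mean())                # close to 1 bit per time step on average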
Construction procurement systems: a linkage with project organisational models
This paper constitutes a literature review undertaken at the start of a two-and-a-half-year EPSRC-funded research project. As such, its purpose is to present the details of the 're-search' concerning construction procurement and project organizational design. The paper shows that the 'post-Latham' construction industry provides several new developments (client power, partnering, concurrent engineering, etc.) which are altering the construction project process, and which therefore prove worthy vehicles for investigation into project organizational structures
Partial Information Decomposition as a Unified Approach to the Specification of Neural Goal Functions
In many neural systems anatomical motifs are present repeatedly, but despite
their structural similarity they can serve very different tasks. A prime
example for such a motif is the canonical microcircuit of six-layered
neo-cortex, which is repeated across cortical areas, and is involved in a
number of different tasks (e.g. sensory, cognitive, or motor tasks). This
observation has spawned interest in finding a common underlying principle, a
'goal function', of information processing implemented in this structure. By
definition such a goal function, if universal, cannot be cast in
processing-domain specific language (e.g. 'edge filtering', 'working memory').
Thus, to formulate such a principle, we have to use a domain-independent
framework. Information theory offers such a framework. However, while the
classical framework of information theory focuses on the relation between one
input and one output (Shannon's mutual information), we argue that neural
information processing crucially depends on the combination of
multiple inputs to create the output of a processor. To account for
this, we use a very recent extension of Shannon information theory, called
partial information decomposition (PID). PID allows one to quantify the information
that several inputs provide individually (unique information), redundantly
(shared information) or only jointly (synergistic information) about the
output. First, we review the framework of PID. Then we apply it to reevaluate
and analyze several earlier proposals of information theoretic neural goal
functions (predictive coding, infomax, coherent infomax, efficient coding). We
find that PID allows one to compare these goal functions in a common framework, and
also provides a versatile approach to design new goal functions from first
principles. Building on this, we design and analyze a novel goal function,
called 'coding with synergy', which builds on combining external input and
prior knowledge in a synergistic manner. We suggest that this novel goal
function may be highly useful in neural information processing
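As an illustration of the PID atoms the abstract describes, the sketch below (not code from the paper) computes the original Williams-Beer redundancy I_min from specific information for an AND gate, then derives the unique and synergistic atoms; for AND, the two inputs share about 0.311 bits about the output, contribute nothing uniquely, and supply the remaining 0.5 bits synergistically:

```python
import numpy as np
from collections import defaultdict

def marginal(p, idx):
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return m

def specific_info(p, src, tgt, s):
    """I(S=s; X_src): information the source provides about the outcome s."""
    pt, ps, pst = marginal(p, tgt), marginal(p, src), marginal(p, src + tgt)
    total = 0.0
    for k, v in pst.items():
        if k[len(src):] != s or v == 0:
            continue
        p_x_given_s = v / pt[s]             # p(x | s)
        p_s_given_x = v / ps[k[:len(src)]]  # p(s | x)
        total += p_x_given_s * (np.log2(p_s_given_x) - np.log2(pt[s]))
    return total

def mi(p, src, tgt):
    pt = marginal(p, tgt)
    return sum(pt[s] * specific_info(p, src, tgt, s) for s in pt)

# AND gate: s = x1 AND x2, uniform binary inputs.
p = {(x1, x2, x1 & x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
sources, tgt = [(0,), (1,)], (2,)

# Williams-Beer redundancy I_min: expected minimum specific information.
pt = marginal(p, tgt)
redundancy = sum(pt[s] * min(specific_info(p, src, tgt, s) for src in sources)
                 for s in pt)
unique = [mi(p, src, tgt) - redundancy for src in sources]
synergy = mi(p, (0, 1), tgt) - redundancy - sum(unique)
print(redundancy, unique, synergy)  # ~0.311, [0.0, 0.0], 0.5 bits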
Intersection Information based on Common Randomness
The introduction of the partial information decomposition generated a flurry
of proposals for defining an intersection information that quantifies how much
of "the same information" two or more random variables specify about a target
random variable. As yet, none is wholly satisfactory. A palatable measure of
intersection information would provide a principled way to quantify slippery
concepts, such as synergy. Here, we introduce an intersection information
measure based on the Gács-Körner common random variable that is the first
to satisfy the coveted target monotonicity property. Our measure is imperfect,
too, and we suggest directions for improvement.
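For discrete variables, the Gács-Körner common random variable admits a simple characterization: it is the label of the connected component, in the bipartite support graph of the joint distribution, that both parties can identify without error, and its entropy is the common information. A minimal sketch under that characterization, with an illustrative two-block example:

```python
from collections import defaultdict
from math import log2

def gacs_korner_common(pxy):
    """Entropy (bits) of the Gács-Körner common random variable of X and Y,
    computed as the connected components of the bipartite support graph."""
    # Union-find over x-values and y-values that co-occur with p > 0.
    parent = {}
    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for (x, y), p in pxy.items():
        if p > 0:
            union(('x', x), ('y', y))
    comp_prob = defaultdict(float)
    for (x, y), p in pxy.items():
        comp_prob[find(('x', x))] += p
    return -sum(p * log2(p) for p in comp_prob.values() if p > 0)

# Two independent "blocks": within a block X and Y are coupled, across blocks not.
pxy = {(0, 0): 0.25, (0, 1): 0.25,   # block A: x = 0
       (1, 2): 0.25, (1, 3): 0.25}   # block B: x = 1
print(gacs_korner_common(pxy))  # 1.0 bit: both parties can agree on the block label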
Measuring multivariate redundant information with pointwise common change in surprisal
The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples
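The published measure evaluates its pointwise terms under the constrained maximum-entropy distribution described above; the simplified sketch below skips that step and sums sign-matched local co-information terms directly over the observed joint distribution, which is enough to reproduce the intended behaviour on two extreme cases (pure redundancy versus pure synergy). Names and example systems are illustrative:

```python
import numpy as np
from collections import defaultdict

def marginal(p, idx):
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return m

def local_mi(p, a, b, k):
    """Pointwise mutual information at event k between index groups a and b."""
    pa, pb, pab = marginal(p, a), marginal(p, b), marginal(p, a + b)
    ka, kb = tuple(k[i] for i in a), tuple(k[i] for i in b)
    return np.log2(pab[ka + kb] / (pa[ka] * pb[kb]))

def redundancy_ccs(p):
    """Sum of pointwise co-information terms whose sign pattern admits an
    unambiguous reading as a common change in surprisal."""
    total = 0.0
    for k, v in p.items():
        if v == 0:
            continue
        i1 = local_mi(p, (0,), (2,), k)
        i2 = local_mi(p, (1,), (2,), k)
        i12 = local_mi(p, (0, 1), (2,), k)
        c = i1 + i2 - i12   # local co-information at this event
        if np.sign(i1) == np.sign(i2) == np.sign(i12) == np.sign(c):
            total += v * c
    return total

# Redundant system: both sources are noiseless copies of the target bit.
p_copy = {(s, s, s): 0.5 for s in (0, 1)}
# Synergistic system: XOR target, where no event shows a common change in surprisal.
p_xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(redundancy_ccs(p_copy), redundancy_ccs(p_xor))  # 1.0, 0.0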
Intra-individual movement variability during skill transitions: A useful marker?
Applied research suggests athletes and coaches need to be challenged in knowing when and how much a movement should be consciously attended to. This is exacerbated when the skill is in transition between two more-stable states, such as when an already well-learnt skill is being refined. Using existing theory and research, this paper highlights the potential application of movement variability as a tool to inform a coach's decision-making process when implementing a systematic approach to technical refinement. Of particular interest is the structure of co-variability between mechanical degrees-of-freedom (e.g., joints) within the movement system's entirety when undergoing a skill transition. Exemplar data from golf are presented, demonstrating the link between movement variability and mental effort as an important feature of automaticity, and thus of intervention design throughout the different stages of refinement. Movement variability was shown to reduce when mental effort directed towards an individual aspect of the skill was high (target variable). The opposite pattern was apparent for variables unrelated to the technical refinement. Therefore, two related indicators, movement variability and mental effort, are offered as a basis through which the evaluation of automaticity during technical refinements may be made