64 research outputs found
A New Framework for Decomposing Multivariate Information
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. This thesis presents a new framework for information decomposition that is based upon the decomposition of pointwise mutual information rather than mutual information. The framework is derived in two separate ways. The first of these derivations is based upon a modified version of the original axiomatic approach taken by Williams and Beer. However, to overcome the difficulty associated with signed pointwise mutual information, the decomposition is applied separately to the unsigned entropic components of pointwise mutual information, which are referred to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Based upon an operational interpretation of redundancy, measures of redundant specificity and redundant ambiguity are defined, which enable one to evaluate the partial information atoms separately for each lattice. These separate atoms can then be recombined to yield the sought-after multivariate information decomposition. This framework is applied to canonical examples from the literature, and the results and various properties of the decomposition are discussed. In particular, the pointwise decomposition using specificity and ambiguity is shown to satisfy a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
The second approach begins by considering the distinct ways in which two marginal observers can share their information with a non-observing third party. Several novel measures of information content are introduced, namely the union, intersection and unique information contents. Next, the algebraic structure of these new measures of shared marginal information is explored, and it is shown that the structure of shared marginal information is that of a distributive lattice. Furthermore, by using the fundamental theorem of distributive lattices, it is shown that these new measures are isomorphic to a ring of sets. Finally, by combining this structure with the semi-lattice of joint information, the redundancy lattice from partial information decomposition is found to be embedded within this larger algebraic structure. However, since this structure considers information contents, it is actually equivalent to the specificity lattice from the first derivation of pointwise partial information decomposition. The thesis then closes with a discussion about whether or not one should combine the information contents from the specificity and ambiguity lattices.
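The specificity/ambiguity split described above can be illustrated numerically: each pointwise mutual information value i(x; y) = h(y) - h(y|x) separates into an unsigned specificity h(y) and an unsigned ambiguity h(y|x). A minimal sketch on a made-up joint distribution (the distribution and variable names are illustrative, not taken from the thesis):

```python
import math

# Hypothetical joint distribution p(x, y) over two binary variables,
# chosen only for illustration.
p_xy = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}

# Marginals p(x) and p(y).
p_x = {x: sum(p for (xi, _), p in p_xy.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in p_xy.items() if yi == y) for y in (0, 1)}

for (x, y), p in p_xy.items():
    p_y_given_x = p / p_x[x]
    specificity = -math.log2(p_y[y])      # h(y): surprisal of the target value
    ambiguity = -math.log2(p_y_given_x)   # h(y|x): residual surprisal given x
    pmi = specificity - ambiguity         # pointwise mutual information i(x; y)
    print(f"x={x} y={y}: spec={specificity:.3f} amb={ambiguity:.3f} i={pmi:.3f}")
```

Both components are non-negative surprisals, which is what lets each be decomposed over its own redundancy lattice even though their difference, the pointwise mutual information, can be negative.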
Unique Information and Secret Key Agreement
The partial information decomposition (PID) is a promising framework for
decomposing a joint random variable into the amount of influence each source
variable Xi has on a target variable Y, relative to the other sources. For two
sources, influence breaks down into the information that both X0 and X1
redundantly share with Y, what X0 uniquely shares with Y, what X1 uniquely
shares with Y, and finally what X0 and X1 synergistically share with Y.
Unfortunately, considerable disagreement has arisen as to how these four
components should be quantified. Drawing from cryptography, we consider the
secret key agreement rate as an operational method of quantifying unique
informations. Secret key agreement rate comes in several forms, depending upon
which parties are permitted to communicate. We demonstrate that three of these
four forms are inconsistent with the PID. The remaining form implies certain
interpretations as to the PID's meaning---interpretations not present in PID's
definition but that, we argue, need to be explicit. These reveal an
inconsistency between third-order connected information, two-way secret key
agreement rate, and synergy. Similar difficulties arise with a popular PID
measure in light of the results here, as well as from a maximum entropy viewpoint.
We close by reviewing the challenges facing the PID.
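The four-way split described in this abstract follows fixed bookkeeping once a redundancy measure is chosen: the two unique informations and the synergy are determined by redundancy together with the pairwise and joint mutual informations. The sketch below evaluates this on the standard AND-gate example using the original Williams-Beer redundancy measure I_min; the choice of I_min is ours for illustration only, not a measure this paper endorses.

```python
import math
from itertools import product

# Toy AND-gate distribution: X0, X1 are uniform independent bits, Y = X0 AND X1.
# Keys are (x0, x1, y) triples; a standard PID test case, used only to illustrate.
p = {(x0, x1, x0 & x1): 0.25 for x0, x1 in product((0, 1), repeat=2)}

def marginal(keep):
    """Marginal distribution over the variable indices in `keep`."""
    out = {}
    for xs, pr in p.items():
        k = tuple(xs[i] for i in keep)
        out[k] = out.get(k, 0.0) + pr
    return out

def mi(src):
    """Mutual information I(X_src; Y), with Y at index 2."""
    p_s, p_y, p_sy = marginal(src), marginal((2,)), marginal(src + (2,))
    return sum(pr * math.log2(pr / (p_s[k[:-1]] * p_y[k[-1:]]))
               for k, pr in p_sy.items())

def i_min():
    """Williams-Beer redundancy: expected minimum specific information over sources."""
    p_y = marginal((2,))
    total = 0.0
    for (y,), py in p_y.items():
        specs = []
        for src in ((0,), (1,)):
            p_s, p_sy = marginal(src), marginal(src + (2,))
            # Specific information I(Y=y; X_src) = sum_x p(x|y) log2(p(y|x)/p(y))
            spec = sum((pr / py) * math.log2((pr / p_s[k[:-1]]) / py)
                       for k, pr in p_sy.items() if k[-1] == y)
            specs.append(spec)
        total += py * min(specs)
    return total

R = i_min()
U0, U1 = mi((0,)) - R, mi((1,)) - R          # unique informations
S = mi((0, 1)) - R - U0 - U1                 # synergy closes the budget
print(f"redundancy={R:.3f} unique0={U0:.3f} unique1={U1:.3f} synergy={S:.3f}")
```

For the AND gate this yields the familiar values: redundancy about 0.311 bits, zero unique information for either source, and 0.5 bits of synergy. The disagreement the abstract describes is precisely over what should replace `i_min` here, not over the bookkeeping itself.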
Unique Information via Dependency Constraints
The partial information decomposition (PID) is perhaps the leading proposal
for resolving information shared between a set of sources and a target into
redundant, synergistic, and unique constituents. Unfortunately, the PID
framework has been hindered by a lack of a generally agreed-upon, multivariate
method of quantifying the constituents. Here, we take a step toward rectifying
this by developing a decomposition based on a new method that quantifies unique
information. We first develop a broadly applicable method---the dependency
decomposition---that delineates how statistical dependencies influence the
structure of a joint distribution. The dependency decomposition then allows us
to define a measure of the information about a target that can be uniquely
attributed to a particular source as the least amount by which the source-target
statistical dependency can influence the information shared between the sources
and the target. The result is the first measure that satisfies the core axioms
of the PID framework while not satisfying the Blackwell relation, which depends
on a particular interpretation of how the variables are related. This makes a
key step toward a practical PID.
A novel approach to multivariate redundancy and synergy
Consider a situation in which a set of "source" random variables X_1, ..., X_n
have information about some "target" random variable Y.
For example, in neuroscience Y might represent the state of an external
stimulus and X_1, ..., X_n the activity of different brain regions.
Recent work in information theory has considered how to decompose the
information that the sources provide about the target
into separate terms such as (1) the "redundant information" that is shared
among all of the sources, (2) the "unique information" that is provided only by a
single source, (3) the "synergistic information" that is provided by all
sources only when considered jointly, and (4) the "union information" that is
provided by at least one source. We propose a novel framework for deriving such a
decomposition that can be applied to any number of sources. Our measures are
motivated in three distinct ways: via a formal analogy to intersection and
union operators in set theory, via a decision-theoretic operationalization
based on Blackwell's theorem, and via an axiomatic derivation. A key aspect of
our approach is that we relax the assumption that measures of redundancy and
union information should be related by the inclusion-exclusion principle. We
discuss relations to previous proposals as well as possible generalizations.
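For two sources, the inclusion-exclusion assumption that this framework relaxes ties union information to redundancy by simple arithmetic. A toy sketch with made-up mutual information values shows the bookkeeping that would follow if the principle were imposed:

```python
# Hypothetical mutual information values (in bits) for a two-source example;
# the numbers are invented purely to illustrate the arithmetic.
i_joint = 1.0          # I(X1, X2; Y)
i_1, i_2 = 0.6, 0.5    # I(X1; Y), I(X2; Y)
redundancy = 0.3       # assumed output of some redundancy measure

# Under inclusion-exclusion, union information would be forced to equal:
union_iep = i_1 + i_2 - redundancy
# ...and the remaining atoms would follow:
synergy = i_joint - union_iep            # what only the joint observation adds
unique_1 = i_1 - redundancy
unique_2 = i_2 - redundancy
print(union_iep, synergy, unique_1, unique_2)
```

The four atoms then sum exactly to the joint mutual information. Dropping the inclusion-exclusion constraint, as the abstract proposes, means redundancy and union information can be measured independently rather than being locked together by the first equation above.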
Direct and Indirect Effects -- An Information Theoretic Perspective
Information theoretic (IT) approaches to quantifying causal influences have
experienced some popularity in the literature, in both theoretical and applied
(e.g. neuroscience and climate science) domains. While these causal measures
are desirable in that they are model agnostic and can capture non-linear
interactions, they are fundamentally different from common statistical notions
of causal influence in that they (1) compare distributions over the effect
rather than values of the effect and (2) are defined with respect to random
variables representing a cause rather than specific values of a cause. We here
present IT measures of direct, indirect, and total causal effects. The proposed
measures are unlike existing IT techniques in that they enable measuring causal
effects that are defined with respect to specific values of a cause while still
offering the flexibility and general applicability of IT techniques. We provide
an identifiability result and demonstrate application of the proposed measures
in estimating the causal effect of the El Niño-Southern Oscillation on
temperature anomalies in the North American Pacific Northwest.
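A minimal sketch of this flavor of measure: compare the effect distribution under a specific cause value against the marginal effect distribution using a KL divergence. This is an illustrative stand-in with made-up numbers, not the authors' proposed definition of direct or indirect effect.

```python
import math

# Hypothetical binary cause X and binary effect Y (all numbers invented).
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
p_x = {0: 0.5, 1: 0.5}
p_y = {y: sum(p_x[x] * p_y_given_x[x][y] for x in p_x) for y in (0, 1)}

def effect_of(x):
    """D_KL(p(y|x) || p(y)): how far fixing X=x shifts the effect distribution."""
    return sum(p_y_given_x[x][y] * math.log2(p_y_given_x[x][y] / p_y[y])
               for y in (0, 1))

for x in (0, 1):
    print(f"effect of X={x}: {effect_of(x):.3f} bits")
```

Note the two properties the abstract highlights: the measure compares whole distributions over the effect rather than effect values, yet it is defined for a specific cause value x; averaging it over p(x) recovers the mutual information I(X; Y), which is defined only at the level of the random variables.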