Quantifying Information Flow During Emergencies
Recent advances on human dynamics have focused on the normal patterns of human activities, while the quantitative understanding of human behavior under extreme events remains a crucial missing chapter. This has a wide array of potential applications, ranging from emergency response and detection to traffic control and management. Previous studies have shown that human communications are both temporally and spatially localized following the onset of emergencies, indicating that social propagation is a primary means of spreading situational awareness. We study real anomalous events using country-wide mobile phone data, finding that information flow during emergencies is dominated by repeated communications. We further demonstrate that the observed communication patterns cannot be explained by inherent reciprocity in social networks, and are universal across different demographics.
Quantifying Information Flow with Beliefs
To reason about information flow, a new model is developed that
describes how attacker beliefs change due to the attacker's observation of the execution of a probabilistic (or deterministic) program. The model enables compositional reasoning about information flow from attacks involving sequences of interactions. The model also supports a new metric for quantitative information flow that measures accuracy of an attacker's beliefs. Applying this new metric reveals inadequacies of traditional information flow metrics, which are based on reduction of uncertainty. However, the new metric is sufficiently general that it can be instantiated to measure either accuracy or uncertainty. The new metric can also be used to reason about misinformation; deterministic programs are shown to be incapable of producing misinformation. Additionally, programs in which nondeterministic choices are made by insiders, who collude with attackers, can be analyzed.
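The belief revision the abstract describes can be illustrated as a Bayesian update followed by an accuracy measurement. The sketch below is an illustrative assumption, not the paper's formalism: a one-digit secret, an `observe` helper that conditions the attacker's belief on an accept/reject observation, and an `accuracy_error` score measuring the surprise (in bits) of the true secret under the belief.

```python
from math import log2

# Hypothetical sketch: an attacker's belief over a one-digit secret,
# revised after observing the output of a guess check.
secrets = range(10)
belief = {s: 1 / 10 for s in secrets}       # uniform prior belief
true_secret = 7

def observe(guess, accepted):
    """Bayesian update of the belief after seeing accept/reject."""
    global belief
    consistent = {s: p for s, p in belief.items()
                  if (s == guess) == accepted}
    total = sum(consistent.values())
    belief = {s: p / total for s, p in consistent.items()}

def accuracy_error(belief, true_secret):
    """Surprise (in bits) of the true secret under the belief;
    0 bits means the belief is both certain and correct."""
    return -log2(belief.get(true_secret, 0) or 1e-300)

observe(3, accepted=False)                   # reject rules out 3
print(accuracy_error(belief, true_secret))   # log2(9), about 3.17 bits
```

An uncertainty-based metric would report the same entropy whether the remaining belief is centered on the true secret or not; the accuracy score above distinguishes the two, which is the inadequacy the abstract points at.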
Modes of Information Flow
Information flow between components of a system takes many forms and is key
to understanding the organization and functioning of large-scale, complex
systems. We demonstrate three modalities of information flow from time series X
to time series Y. Intrinsic information flow exists when the past of X is
individually predictive of the present of Y, independent of Y's past; this is
most commonly considered information flow. Shared information flow exists when
X's past is predictive of Y's present in the same manner as Y's past; this
occurs due to synchronization or common driving, for example. Finally,
synergistic information flow occurs when neither X's nor Y's pasts are
predictive of Y's present on their own, but taken together they are. The two
most broadly-employed information-theoretic methods of quantifying information
flow---time-delayed mutual information and transfer entropy---are both
sensitive to a pair of these modalities: time-delayed mutual information to
both intrinsic and shared flow, and transfer entropy to both intrinsic and
synergistic flow. To quantify each mode individually we introduce our
cryptographic flow ansatz, positing that intrinsic flow is synonymous with
secret key agreement between X and Y. Based on this, we employ an
easily-computed secret-key-agreement bound---intrinsic mutual
information---to quantify the three flow modalities in a variety of systems
including asymmetric flows and financial markets.
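The two measures named above can be estimated from counts on discrete series. A minimal sketch, assuming binary time series and plug-in (empirical) entropy estimates: time-delayed mutual information is I(X_{t-1}; Y_t), and transfer entropy is the conditional form I(X_{t-1}; Y_t | Y_{t-1}), expanded into joint entropies.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Plug-in Shannon entropy (bits) from a Counter of outcomes."""
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values() if c)

def mutual_info(pairs):
    """I(A;B) = H(A) + H(B) - H(A,B) from (a, b) samples."""
    ab = Counter(pairs)
    a = Counter(p[0] for p in pairs)
    b = Counter(p[1] for p in pairs)
    return entropy(a) + entropy(b) - entropy(ab)

def tdmi(x, y):
    """Time-delayed mutual information I(X_{t-1}; Y_t)."""
    return mutual_info(list(zip(x[:-1], y[1:])))

def transfer_entropy(x, y):
    """T_{X->Y} = I(X_{t-1}; Y_t | Y_{t-1})
                = H(Y_t,Y_{t-1}) + H(X_{t-1},Y_{t-1})
                  - H(Y_{t-1}) - H(X_{t-1},Y_t,Y_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    return (entropy(Counter(zip(yt, yp))) + entropy(Counter(zip(xp, yp)))
            - entropy(Counter(yp)) - entropy(Counter(zip(xp, yt, yp))))

# Y copies X with one step of delay: intrinsic flow, which both
# measures are sensitive to, so both come out positive.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(tdmi(x, y), transfer_entropy(x, y))
```

Shared flow (e.g. Y's past already predicts Y's present the same way X's past does) would raise the TDMI but not the transfer entropy, while purely synergistic flow does the reverse, which is the decomposition the abstract motivates.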
Quantifying Attention Flow in Transformers
In the Transformer model, "self-attention" combines information from attended
embeddings into the representation of the focal embedding in the next layer.
Thus, across layers of the Transformer, information originating from different
tokens gets increasingly mixed. This makes attention weights unreliable as
explanation probes. In this paper, we consider the problem of quantifying this
flow of information through self-attention. We propose two methods for
approximating the attention to input tokens given attention weights, attention
rollout and attention flow, as post hoc methods when we use attention weights
as the relative relevance of the input tokens. We show that these methods give
complementary views on the flow of information, and compared to raw attention,
both yield higher correlations with importance scores of input tokens obtained
using an ablation method and input gradients.
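The rollout computation can be sketched in a few lines: average the attention weights over heads, mix in the identity to account for residual connections, renormalize, and multiply the per-layer matrices from the input up to the top layer. The equal 0.5/0.5 residual weighting below is an assumption of this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def attention_rollout(attentions):
    """Roll attention up through the layers: for each layer, average
    over heads, add the identity for the residual connection,
    renormalize rows, and left-multiply onto the running product."""
    rollout = None
    for layer_att in attentions:                 # (heads, tokens, tokens)
        a = layer_att.mean(axis=0)               # average over heads
        a = 0.5 * a + 0.5 * np.eye(a.shape[-1])  # residual connection
        a = a / a.sum(axis=-1, keepdims=True)    # rows stay stochastic
        rollout = a if rollout is None else a @ rollout
    return rollout

# Toy check: 2 layers, 4 heads, 5 tokens of random attention.
rng = np.random.default_rng(0)
atts = rng.random((2, 4, 5, 5))
atts /= atts.sum(axis=-1, keepdims=True)         # rows are distributions
r = attention_rollout(atts)
print(np.allclose(r.sum(axis=-1), 1.0))          # True: still row-stochastic
```

Because each per-layer matrix is row-stochastic, the rolled-out matrix is too, so each row can be read as a distribution of the focal token's representation over the input tokens.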
A static analysis for quantifying information flow in a simple imperative language
We propose an approach to quantify interference in a simple imperative language that includes a looping construct. In this paper we focus on a particular case of this definition of interference: leakage of information from private variables to public ones via a Trojan Horse attack. We quantify leakage in terms of Shannon's information theory, and we motivate our definition by proving a result relating this definition of leakage to the classical notion of programming-language interference. The major contribution of the paper is a quantitative static analysis based on this definition for such a language. The analysis uses some non-trivial information-theoretic results, such as Fano's inequality and L1 inequalities, to provide reasonable bounds for conditional statements. While-loops are handled by integrating a qualitative flow-sensitive dependency analysis into the quantitative analysis.
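The Shannon-style notion of leakage underlying such analyses can be illustrated directly: for a deterministic program with a uniformly distributed private input, the mutual information between secret and public output equals the entropy of the output distribution. The two toy programs below (a threshold test and a modulus) are hypothetical examples, not the paper's language or analysis.

```python
from collections import Counter
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p)

def leakage(secrets, program):
    """Shannon leakage I(secret; output) for a deterministic
    `program` over a uniformly distributed private input; this
    reduces to H(output) since H(output | secret) = 0."""
    out = Counter(program(s) for s in secrets)
    dist = {o: c / len(secrets) for o, c in out.items()}
    return entropy(dist)

# A conditional that reveals whether a 4-bit secret exceeds a
# threshold leaks exactly 1 bit; copying the low two bits leaks 2.
print(leakage(range(16), lambda h: h > 7))   # 1.0
print(leakage(range(16), lambda h: h % 4))   # 2.0
```

A static analysis cannot in general enumerate executions like this; the point of bounds such as those from Fano's inequality is to over-approximate this quantity from the program text alone.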
Quantitative Analysis of Opacity in Cloud Computing Systems
Federated cloud systems increase reliability and reduce the cost of computational support.
The resulting combination of secure private clouds and less secure public clouds, together with the fact that resources need to be located within different clouds, strongly affects the information flow security of the entire system. In this paper, the clouds as well as entities of a federated cloud system are
assigned security levels, and a probabilistic flow sensitive security model for a federated cloud system is proposed. Then the notion of opacity --- a notion capturing the security of information flow ---
of a cloud computing system is introduced, and different variants of quantitative analysis of opacity are presented. As a result, one can track the information flow in a cloud system and analyze the impact of different resource allocation strategies by quantifying the corresponding opacity characteristics.
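One simple variant of quantitative opacity can be sketched as follows: model the system as a set of probabilistic runs, each with an observation and a flag saying whether the secret predicate holds, and measure the probability mass of runs whose observation occurs only on secret runs, i.e. the probability that an observer infers the secret. The run model and numbers below are illustrative assumptions, not the paper's definitions.

```python
from collections import defaultdict

# Hypothetical runs of a federated-cloud model:
# (probability, observation, secret_predicate_holds).
runs = [
    (0.4, "obs_a", True),    # secret run whose observation is covered
    (0.3, "obs_a", False),   # cover run with the same observation
    (0.2, "obs_b", True),    # secret run with a unique observation
    (0.1, "obs_c", False),
]

def leaking_probability(runs):
    """Probability that the observation uniquely identifies a
    secret run; 0 would mean the predicate is (fully) opaque."""
    by_obs = defaultdict(list)
    for p, obs, secret in runs:
        by_obs[obs].append((p, secret))
    return sum(p for group in by_obs.values()
               if all(s for _, s in group)
               for p, _ in group)

print(leaking_probability(runs))   # 0.2: only "obs_b" reveals the secret
```

Comparing this number across resource allocation strategies, each inducing a different set of runs, is the kind of analysis the abstract describes.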
Information flow through a model of the C. elegans klinotaxis circuit
Understanding how information about external stimuli is transformed into
behavior is one of the central goals of neuroscience. Here we characterize the
information flow through a complete sensorimotor circuit: from stimulus, to
sensory neurons, to interneurons, to motor neurons, to muscles, to motion.
Specifically, we apply a recently developed framework for quantifying
information flow to a previously published ensemble of models of salt
klinotaxis in the nematode worm C. elegans. The models are grounded in the
neuroanatomy and currently known neurophysiology of the worm. The unknown model
parameters were optimized to reproduce the worm's behavior. Information flow
analysis reveals several key principles underlying how the models operate: (1)
Interneuron class AIY is responsible for integrating information about positive
and negative changes in concentration, and exhibits a strong left/right
information asymmetry. (2) Gap junctions play a crucial role in the transfer of
information responsible for the information symmetry observed in interneuron
class AIZ. (3) Neck motor neuron class SMB implements an information gating
mechanism that underlies the circuit's state-dependent response. (4) The neck
carries a non-uniform distribution of information about changes in concentration. Thus, not all
directions of movement are equally informative. Each of these findings
corresponds to an experimental prediction that could be tested in the worm to
greatly refine our understanding of the neural circuit underlying klinotaxis.
Information flow analysis also allows us to explore how information flow
relates to underlying electrophysiology. Despite large variations in the neural
parameters of individual circuits, the overall information flow architecture
of the circuit is remarkably consistent across the ensemble, suggesting that
information flow analysis captures general principles of operation for the
klinotaxis circuit.
Information-Theoretic Meaning of Quantum Information Flow and Its Applications to Amplitude Amplification Algorithms
The advantages of quantum information processing are in many cases obtained
as consequences of quantum interactions, especially for computational tasks
where two-qubit interactions are essential. In this work, we establish the
framework of analyzing and quantifying loss or gain of information on a quantum
system when the system interacts with its environment. We show that the
information flow, the theoretical method of characterizing (non-)Markovianity
of quantum dynamics, corresponds to the rate of the minimum uncertainty about
the system given quantum side information. Thereafter, we analyze the
information exchange among subsystems that are under the performance of quantum
algorithms, in particular, the amplitude amplification algorithms where the
computational process relies fully on quantum evolution. Different realizations
of the algorithm are considered, such as i) quantum circuits, ii) analog
computation, and iii) adiabatic computation. It is shown that, in all the
cases, our formalism provides insights about the process of amplifying the
amplitude from the information flow or leakage on the subsystems.
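The circuit realization i) can be illustrated with a plain state-vector simulation of Grover-style amplitude amplification, tracking the amplitude of the marked state as it grows step by step; that evolving amplitude is the quantity the abstract connects to information flow between the subsystems. This is a generic textbook sketch, not the paper's formalism.

```python
import numpy as np

n, marked = 4, 5                        # 16 basis states, marked index 5
state = np.full(2**n, 1 / np.sqrt(2**n))  # uniform superposition

def grover_step(state, marked):
    """One amplitude-amplification iteration: oracle phase flip on
    the marked state, then reflection about the mean amplitude."""
    state = state.copy()
    state[marked] *= -1                 # oracle: flip marked phase
    mean = state.mean()
    return 2 * mean - state             # diffusion: reflect about mean

for _ in range(3):                      # about (pi/4)*sqrt(16) iterations
    state = grover_step(state, marked)
print(abs(state[marked])**2)            # success probability near 0.96
```

The marked amplitude follows sin((2k+1)θ) with sin θ = 1/√16, so three iterations bring the success probability from 1/16 to roughly 0.96 while the state stays normalized throughout.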