    Statistical pairwise interaction model of stock market

    Financial markets are a classical example of complex systems, as they comprise many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin-glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics; other approaches are agent-based models with rules designed to recover some empirical behaviours. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin-glass theory can be straightforwardly applied. Typical behaviours, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
    Comment: 11 pages, 8 figures
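
    As a minimal illustration of such a pairwise description, the sketch below binarizes daily returns and estimates couplings with the naive mean-field inversion J = -C^{-1}, one standard approximation for the inverse Ising problem; the array name `returns` and the use of this particular estimator are our assumptions, not necessarily the paper's procedure.

        import numpy as np

        def fit_pairwise_model(returns):
            """Naive mean-field inverse Ising fit to binarized returns.

            returns: (T, N) array of daily price returns; requires T >> N so
            that the correlation matrix is well conditioned, and |<s_i>| < 1.
            """
            s = np.sign(returns)                    # stock orientations s_i = +/-1
            s[s == 0] = 1.0                         # break zero-return ties
            m = s.mean(axis=0)                      # first moments <s_i>
            C = np.cov(s, rowvar=False)             # connected correlations
            J = -np.linalg.inv(C)                   # mean-field couplings
            np.fill_diagonal(J, 0.0)
            h = np.arctanh(m) - J @ m               # fields matching <s_i>
            return h, J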

    Redundant variables and Granger causality

    We discuss the use of multivariate Granger causality in the presence of redundant variables: the application of the standard analysis in this case leads to under-estimation of causalities. Using the un-normalized version of the causality index, we quantitatively develop the notions of redundancy and synergy in the frame of causality and propose two approaches to group redundant variables: (i) for a given target, the remaining variables are grouped so as to maximize the total causality, and (ii) the whole set of variables is partitioned to maximize the sum of the causalities between subsets. We show an application to a real neurological experiment, aiming at a deeper understanding of the physiological basis of abnormal neuronal oscillations in the migraine brain. Our approach reveals the change in the informational pattern due to repetitive transcranial magnetic stimulation.
    Comment: 4 pages, 5 figures. Accepted for publication in Physical Review
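
    For concreteness, here is a sketch of a conditional linear Granger causality index in its standard log-ratio form; the paper's un-normalized variant and the grouping strategies are built on top of such an index. Function and variable names are illustrative assumptions, a one-lag model is used for brevity, and means are assumed to have been removed.

        import numpy as np

        def _residual_var(y, X):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.var(y - X @ beta)

        def granger_causality(target, drivers, conditioning, lag=1):
            """Causality drivers -> target, given 'conditioning' series.

            All inputs have length T; drivers/conditioning may be (T, k).
            Returns log(var_restricted / var_full); > 0 means drivers help.
            """
            y = target[lag:]
            restricted = np.column_stack([target[:-lag], conditioning[:-lag]])
            full = np.column_stack([restricted, drivers[:-lag]])
            return np.log(_residual_var(y, restricted) / _residual_var(y, full))

    Grouping strategy (i) from the abstract would then amount to moving variables into `drivers` so as to maximize the total causality toward a fixed target.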

    Expanding the Transfer Entropy to Identify Information Subgraphs in Complex Systems

    We propose a formal expansion of the transfer entropy to highlight irreducible sets of variables which provide information about the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with informational circuits present in the system, with an informational character that can be read off from the sign of the contribution. To keep the computational complexity manageable, we adopt the assumption of Gaussianity and use the corresponding exact formula for the conditional mutual information. We report the application of the proposed methodology to two EEG data sets.
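
    Under the Gaussian assumption, the conditional mutual information entering each term of the expansion has a closed form in covariance determinants. A minimal sketch (the estimator layout and names are ours):

        import numpy as np

        def gaussian_cmi(x, y, z):
            """I(x; y | z) in nats for jointly Gaussian data.

            x, y: (T,) arrays; z: (T,) or (T, k) conditioning variables.
            """
            def logdet(*cols):
                S = np.atleast_2d(np.cov(np.column_stack(cols), rowvar=False))
                return np.linalg.slogdet(S)[1]
            return 0.5 * (logdet(x, z) + logdet(y, z)
                          - logdet(z) - logdet(x, y, z))

    A bivariate transfer entropy, for instance, is then gaussian_cmi(x[1:], y[:-1], x[:-1]): the information the driver's past adds about the target beyond the target's own past.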

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, through joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model: a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of the population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
    Comment: 11 pages, 7 figures
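
    The form of such a model can be made explicit in a few lines: each cell's field depends linearly on the stimulus through a filter, while pairwise couplings are stimulus-independent. The sketch below normalizes by brute-force enumeration, so it is only feasible for small populations; filter and coupling names are illustrative assumptions.

        import numpy as np
        from itertools import product

        def sdme_logprob(sigma, stimulus, K, J):
            """log P(sigma | stimulus) for an SDME-style model.

            sigma: (N,) codeword of +/-1; stimulus: (d,) vector;
            K: (N, d) linear filters; J: (N, N) symmetric couplings.
            """
            h = K @ stimulus                        # stimulus-dependent fields
            energy = lambda s: -(h @ s + 0.5 * s @ J @ s)
            logZ = np.log(sum(np.exp(-energy(np.array(w)))
                              for w in product([-1, 1], repeat=len(sigma))))
            return -energy(np.asarray(sigma, dtype=float)) - logZ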

    Inference of kinetic Ising model on sparse graphs

    Based on the dynamical cavity method, we propose an approach to the inference of the kinetic Ising model, which asks for the reconstruction of couplings and external fields from the time-dependent output of the original system. Our approach gives an exact result on tree graphs and a good approximation on sparse graphs; it can be seen as an extension of Belief Propagation inference from the static Ising model to the kinetic one. While existing mean-field methods for kinetic Ising inference (e.g., naive mean-field, the TAP equation, and simple mean-field) use approximations which calculate magnetizations and correlations at time t from statistics of the data at time t-1, the dynamical cavity method can use statistics of the data at times earlier than t-1 to capture more correlations across different time steps. Extensive numerical experiments show that our inference method is superior to existing mean-field approaches on diluted networks.
    Comment: 9 pages, 3 figures, comments are welcome
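
    As a point of reference, the mean-field baseline that cavity-type approaches are compared against can be written compactly: couplings follow from equal-time and one-step-delayed correlations. This is the standard naive mean-field reconstruction, not the dynamical cavity method itself; the array name S is an assumption.

        import numpy as np

        def kinetic_mf_couplings(S):
            """Naive mean-field kinetic Ising inference, J ~ A^{-1} D C^{-1}.

            S: (T, N) array of +/-1 spin histories from parallel dynamics.
            """
            m = S.mean(axis=0)
            dS = S - m
            C = dS.T @ dS / len(S)                  # equal-time correlations
            D = dS[1:].T @ dS[:-1] / (len(S) - 1)   # one-step delayed correlations
            A_inv = np.diag(1.0 / (1.0 - m**2))     # 1 - m_i^2 approximates A_ii
            return A_inv @ D @ np.linalg.inv(C)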

    Detection of subthreshold pulses in neurons with channel noise

    Neurons are subject to various kinds of noise. In addition to synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal-processing properties of the neuron. In this paper, we study the response of a stochastic Hodgkin-Huxley neuron to transient subthreshold input pulses. We find that the average response time decreases, but its variance increases, as the amplitude of the channel noise increases. In the case of single-pulse detection, we show that channel noise enables a single neuron to detect subthreshold signals, and that an optimal membrane area (or channel-noise intensity) exists at which a single neuron achieves its best performance. However, the detection ability of a single neuron is limited by large errors. We therefore test a simple neuronal network that can enhance the pulse-detection abilities of neurons and find that dozens of neurons can perfectly detect subthreshold pulses. The phenomenon of intrinsic stochastic resonance is found both at the level of single neurons and at the level of networks. At the network level, the detection ability can be optimized with respect to the number of neurons comprising the network.
    Comment: 14 pages, 9 figures
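
    The stochastic-resonance effect described here can be illustrated with a toy simulation: a leaky integrator stands in for the full stochastic Hodgkin-Huxley neuron, and channel noise is approximated by Gaussian current noise whose amplitude shrinks as one over the square root of the membrane area (the usual Langevin scaling). All numbers below are illustrative assumptions.

        import numpy as np

        def detection_rate(area, n_trials=500, seed=0):
            """Fraction of trials in which a subthreshold pulse triggers a spike."""
            rng = np.random.default_rng(seed)
            dt, tau, v_th = 0.1, 10.0, 1.0          # step (ms), time constant (ms), threshold
            t = np.arange(0.0, 50.0, dt)
            pulse = 0.2 * ((t > 10) & (t < 15))     # peak response ~0.79 < v_th: subthreshold
            sigma = 1.0 / np.sqrt(area)             # channel-noise amplitude
            hits = 0
            for _ in range(n_trials):
                v = 0.0
                for k in range(len(t)):
                    I = pulse[k] + sigma * rng.standard_normal() / np.sqrt(dt)
                    v += dt * (-v / tau + I)        # Euler-Maruyama step
                    if v >= v_th:                   # spike = detection
                        hits += 1
                        break
            return hits / n_trials

    Sweeping area traces a detection curve that peaks at an intermediate noise level, the single-neuron resonance reported above; false alarms in the absence of a pulse would need to be checked separately.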

    Beyond inverse Ising model: structure of the analytical solution for a class of inverse problems

    I consider the problem of deriving the couplings of a statistical model from measured correlations, a task which generalizes the well-known inverse Ising problem. After recalling that this problem can be mapped onto that of expressing the entropy of a system as a function of its corresponding observables, I show the conditions under which this can be done without resorting to iterative algorithms. I find that inverse problems are local (the inverse Fisher information is sparse) whenever the corresponding models have a factorized form, and the entropy can be split into a sum of small cluster contributions. I illustrate these ideas through two examples (the Ising model on a tree and the one-dimensional periodic chain with arbitrary-order interactions) and support the results with numerical simulations. The extension of these methods to more general scenarios is finally discussed.
    Comment: 15 pages, 6 figures
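
    The tree example admits a particularly compact closed form: on a tree with zero external fields (the zero-field restriction is ours, for brevity), neighbouring spins satisfy <s_i s_j> = tanh(J_ij), so each edge coupling follows from its measured correlation without any iteration.

        import numpy as np

        def tree_couplings(corr, edges):
            """Closed-form couplings for a zero-field Ising model on a tree.

            corr: matrix of measured raw correlations <s_i s_j>;
            edges: list of (i, j) pairs forming the tree.
            """
            return {(i, j): float(np.arctanh(corr[i, j])) for (i, j) in edges}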

    Network information and connected correlations

    Entropy and information provide natural measures of correlation among elements in a network. We construct here the information-theoretic analog of connected correlation functions: irreducible N-point correlation is measured by a decrease in entropy for the joint distribution of N variables relative to the maximum entropy allowed by all the observed (N-1)-variable distributions. We calculate these "connected information" terms for several examples, and show that the framework also enables the decomposition of the information that is carried by a population of elements about an outside source.
    Comment: 4 pages, 3 figures
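
    For three binary variables, the connected information of order three is the entropy drop of the true joint distribution below the maximum-entropy distribution matching all pairwise marginals; the latter can be fitted by iterative proportional fitting. A minimal sketch (array layout and names are ours):

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def connected_info_3(p):
            """I_C(3) = S[maxent matching pairwise marginals] - S[p].

            p: (2, 2, 2) float array, the true joint distribution.
            """
            q = np.full_like(p, 1.0 / p.size)       # start from the uniform distribution
            for _ in range(300):                    # IPF sweeps over the three pairs
                for ax in range(3):
                    target = p.sum(axis=ax, keepdims=True)
                    current = q.sum(axis=ax, keepdims=True)
                    q = q * np.divide(target, current,
                                      out=np.zeros_like(target), where=current > 0)
            return entropy(q.ravel()) - entropy(p.ravel())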

    Statistical mechanics of transcription-factor binding site discovery using Hidden Markov Models

    Hidden Markov Models (HMMs) are a commonly used tool for the inference of transcription factor (TF) binding sites from DNA sequence data. We exploit the mathematical equivalence between HMMs for TF binding and the "inverse" statistical mechanics of hard rods in a one-dimensional disordered potential to investigate learning in HMMs. We derive analytic expressions for the Fisher information, a commonly employed measure of confidence in learned parameters, in the biologically relevant limit where the density of binding sites is low. We then use techniques from statistical mechanics to derive a scaling principle relating the specificity (binding energy) of a TF to the minimum amount of training data necessary to learn it.
    Comment: 25 pages, 2 figures, 1 table. V2: typos fixed and new references added
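
    At the core of such HMM inference is the forward algorithm, which yields the sequence likelihood whose curvature (the Fisher information) the paper analyzes. The two-state toy below (background vs. a GC-rich 'site' state) is an illustrative assumption, not the paper's parametrization.

        import numpy as np

        def forward_loglik(seq, init, trans, emis):
            """Log-likelihood of an integer-encoded sequence (A,C,G,T -> 0..3)
            under an HMM, via the scaled forward algorithm."""
            alpha = init * emis[:, seq[0]]
            ll = np.log(alpha.sum())
            alpha = alpha / alpha.sum()
            for x in seq[1:]:
                alpha = (alpha @ trans) * emis[:, x]    # propagate, then emit
                ll += np.log(alpha.sum())
                alpha = alpha / alpha.sum()             # rescale to avoid underflow
            return ll

        init  = np.array([0.99, 0.01])                  # start mostly in background
        trans = np.array([[0.99, 0.01],                 # background <-> site switching
                          [0.10, 0.90]])
        emis  = np.array([[0.25, 0.25, 0.25, 0.25],     # background: uniform bases
                          [0.05, 0.45, 0.45, 0.05]])    # 'site': GC-rich
        seq = np.array([0, 1, 2, 2, 1, 3, 0])           # "ACGGCTA"
        print(forward_loglik(seq, init, trans, emis))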