
    On the Bias of Directed Information Estimators

    When estimating the directed information between two jointly stationary Markov processes, it is typically assumed that the recipient of the directed information is itself Markov of the same order as the joint process. While this assumption is often made explicit in the presentation of such estimators, a characterization of when it can be expected to hold has been lacking. Using the concept of d-separation from Bayesian networks, we present sufficient conditions under which this assumption holds. We further show that the set of parameters for which these conditions are not also necessary has Lebesgue measure zero. Given the strictness of these conditions, we introduce a notion of partial directed information, which can be used to bound the bias of directed information estimates when the recipient process is not itself Markov. Lastly, we estimate this bound in simulations across a variety of settings to assess the extent to which the bias should be cause for concern.
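    The Markov assumption at issue can be made concrete with a minimal plug-in estimator of the directed information rate. The sketch below (not the authors' code; all names, coupling, and parameter values are illustrative assumptions) treats binary processes and presumes the recipient Y is first-order Markov, so that the estimate reduces to H(Y_t | Y_{t-1}) − H(Y_t | X_{t-1}, Y_{t-1}); when Y is not marginally Markov, the first term is misspecified and the estimate is biased.

    ```python
    import numpy as np

    def cond_entropy(target, cond):
        """Empirical H(target | cond) in bits; target is a binary array,
        cond an integer encoding of the conditioning variables."""
        k = int(cond.max()) + 1
        joint = np.bincount(2 * cond + target, minlength=2 * k) / len(target)
        marg = np.bincount(cond, minlength=k) / len(cond)
        h = 0.0
        for c in range(k):
            for t in (0, 1):
                p = joint[2 * c + t]
                if p > 0:
                    h -= p * np.log2(p / marg[c])
        return h

    def plugin_di_rate(x, y):
        """Plug-in estimate of the DI rate I(X -> Y), valid only if Y is
        first-order Markov: H(Y_t | Y_{t-1}) - H(Y_t | X_{t-1}, Y_{t-1})."""
        yt, y1, x1 = y[1:], y[:-1], x[:-1]
        return cond_entropy(yt, y1) - cond_entropy(yt, 2 * y1 + x1)

    # Illustrative coupled binary pair: X is Markov, Y_t copies X_{t-1}
    # through a 10%-noise channel, so X causally drives Y.
    rng = np.random.default_rng(0)
    n = 100_000
    x = np.zeros(n, dtype=int)
    y = np.zeros(n, dtype=int)
    for t in range(1, n):
        x[t] = x[t - 1] ^ int(rng.random() < 0.3)
        y[t] = x[t - 1] ^ int(rng.random() < 0.1)
    print(plugin_di_rate(x, y))   # positive: X influences Y
    ```

    The empirical conditional entropy can only shrink under finer conditioning, so the estimate is nonnegative by construction; the bias the abstract refers to enters through the H(Y_t | Y_{t-1}) term whenever Y's own history of order one is not a sufficient statistic.
    
    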

    Measuring Sample Path Causal Influences with Relative Entropy

    We present a sample-path-dependent measure of causal influence between time series. The proposed causal measure is a random sequence, a realization of which enables identification of the specific patterns that give rise to high levels of causal influence. We show that these patterns cannot be identified by existing measures such as directed information (DI). We demonstrate how sequential prediction theory may be leveraged to estimate the proposed causal measure, and we introduce a notion of regret for assessing the performance of such estimators. We prove a finite-sample bound on this regret that is determined by the worst-case regret of the sequential predictors used in the estimator. Justification for the proposed measure is provided through a series of examples, simulations, and an application to stock market data. Within the context of estimating DI, we show that, because joint Markovicity of a pair of processes does not imply marginal Markovicity of the individual processes, commonly used plug-in estimators of DI will be biased for a large subset of jointly Markov processes. We introduce a notion of DI with "stale history", which can be combined with a plug-in estimator to upper- and lower-bound the DI when marginal Markovicity does not hold.
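    The fact that joint Markovicity does not imply marginal Markovicity can be checked numerically. In the hypothetical sketch below (names and parameter values are illustrative, not drawn from the paper), Y is a noisy reading of a sticky binary Markov chain X; if Y were first-order Markov on its own, conditioning on Y_{t-2} in addition to Y_{t-1} would leave its conditional entropy essentially unchanged, but a clear gap appears.

    ```python
    import numpy as np

    def cond_entropy(target, cond):
        """Empirical H(target | cond) in bits; target is a binary array,
        cond an integer encoding of the conditioning history."""
        k = int(cond.max()) + 1
        joint = np.bincount(2 * cond + target, minlength=2 * k) / len(target)
        marg = np.bincount(cond, minlength=k) / len(cond)
        h = 0.0
        for c in range(k):
            for t in (0, 1):
                p = joint[2 * c + t]
                if p > 0:
                    h -= p * np.log2(p / marg[c])
        return h

    rng = np.random.default_rng(1)
    n = 200_000
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        x[t] = x[t - 1] ^ int(rng.random() < 0.1)   # sticky Markov chain X
    y = x ^ (rng.random(n) < 0.2).astype(int)       # Y: noisy reading of X

    h1 = cond_entropy(y[2:], y[1:-1])               # H(Y_t | Y_{t-1})
    h2 = cond_entropy(y[2:], 2 * y[1:-1] + y[:-2])  # H(Y_t | Y_{t-1}, Y_{t-2})
    print(h1, h2)   # h2 is visibly smaller: Y is not first-order Markov
    ```

    The extra past sample Y_{t-2} sharpens the belief about the hidden state X_{t-1}, so it carries genuine predictive information about Y_t; a plug-in DI estimator that truncates Y's history at order one therefore misestimates H(Y_t | past), which is exactly the bias mechanism the abstract describes.
    
    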