
    A Linear Programming Approach to Sequential Hypothesis Testing

    Under some mild Markov assumptions, it is shown that the problem of designing optimal sequential tests for two simple hypotheses can be formulated as a linear program. The result is derived by investigating the Lagrangian dual of the sequential testing problem, which is an unconstrained optimal stopping problem depending on two unknown Lagrangian multipliers. It is shown that the derivative of the optimal cost function with respect to these multipliers coincides with the error probabilities of the corresponding sequential test. This property is used to formulate an optimization problem that is jointly linear in the cost function and the Lagrangian multipliers and can be solved for both with off-the-shelf algorithms. To illustrate the procedure, optimal sequential tests for Gaussian random sequences with different dependency structures are derived, including the Gaussian AR(1) process.
    Comment: 25 pages, 4 figures, accepted for publication in Sequential Analysis
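
    A minimal sketch of the idea, under simplifying assumptions: instead of the paper's Markov setting, take i.i.d. Bernoulli observations, discretize the posterior probability of H1 on a grid, and solve the resulting optimal stopping problem as a linear program. The parameters p0, p1 and all costs are illustrative choices, not values from the paper.

```python
# A sketch, not the paper's construction: Bayesian sequential testing of
# Bernoulli(p0) vs Bernoulli(p1) via linear programming.  The Bellman
# equation V(pi) = min(stopping cost, c + E[V(pi')]) is relaxed to the LP
# "maximize sum(V) s.t. V <= stopping cost, V <= c + P V", whose optimum
# is the value function.  All parameters below are illustrative.
import numpy as np
from scipy.optimize import linprog

p0, p1 = 0.4, 0.6           # Bernoulli parameters under H0 and H1
c0, c1, c = 1.0, 1.0, 0.01  # error costs and per-observation cost
grid = np.linspace(0.0, 1.0, 201)
n = len(grid)

# Transition matrix of the posterior under the marginal distribution of
# the next observation, snapped to the nearest grid point.
P = np.zeros((n, n))
for i, pi in enumerate(grid):
    for q0, q1 in [(1 - p0, 1 - p1), (p0, p1)]:   # outcomes x = 0, 1
        m = pi * q1 + (1 - pi) * q0               # marginal prob. of x
        if m > 0:
            j = np.abs(grid - pi * q1 / m).argmin()
            P[i, j] += m

stop = np.minimum(c1 * grid, c0 * (1 - grid))     # cost of stopping now

# LP: maximize sum(V) subject to V <= stop and (I - P) V <= c.
A = np.vstack([np.eye(n), np.eye(n) - P])
b = np.concatenate([stop, np.full(n, c)])
V = linprog(-np.ones(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n).x

# The test keeps sampling wherever continuing is strictly cheaper.
cont = c + P @ V < stop - 1e-9
print("continue for posterior in [%.3f, %.3f]" % (grid[cont].min(), grid[cont].max()))
```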

    When is a Network a Network? Multi-Order Graphical Model Selection in Pathways and Temporal Networks

    We introduce a framework for modeling sequential data that capture pathways of varying lengths observed in a network. Such data are important, e.g., when studying click streams in information networks, travel patterns in transportation systems, information cascades in social networks, biological pathways, or time-stamped social interactions. While it is common to apply graph analytics and network analysis to such data, recent works have shown that temporal correlations can invalidate the results of such methods. This raises a fundamental question: when is a network abstraction of sequential data justified? Addressing this open question, we propose a framework that combines Markov chains of multiple, higher orders into a multi-layer graphical model capturing temporal correlations in pathways at multiple length scales simultaneously. We develop a model selection technique to infer the optimal number of layers of such a model and show that it outperforms previously used Markov order detection techniques. An application to eight real-world data sets on pathways and temporal networks shows that our method infers graphical models capturing both the topological and the temporal characteristics of such data. Our work highlights fallacies of network abstractions and provides a principled answer to the open question of when they are justified. By generalizing network representations to multi-order graphical models, it opens perspectives for new data mining and knowledge discovery algorithms.
    Comment: 10 pages, 4 figures, 1 table; companion Python package pathpy available on GitHub
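
    Where the abstract above works with a full multi-layer model, a minimal sketch of the underlying order-detection step fits in a few lines: fit Markov chains of orders 1 and 2 to a set of pathways by maximum likelihood and compare them with a likelihood-ratio test. The synthetic paths and the second-order generating rule are invented for illustration; the paper's multi-order framework and its companion package pathpy go well beyond this.

```python
# A simplified stand-in for Markov order detection on pathway data: a
# likelihood-ratio test between first- and second-order Markov chains.
# The synthetic paths below have genuine second-order structure.
import random
from collections import Counter
from math import log
from scipy.stats import chi2

rng = random.Random(0)
paths = []
for _ in range(200):
    p = ['a', 'b']
    for _ in range(20):
        # second-order rule: repeat the symbol from two steps back w.p. 0.9
        p.append(p[-2] if rng.random() < 0.9 else rng.choice('abc'))
    paths.append(p)

def log_likelihood(paths, k, skip):
    """MLE log-likelihood of a k-th order chain; transitions are counted
    from position `skip` on so that different orders are comparable."""
    trans, ctx = Counter(), Counter()
    for p in paths:
        for i in range(skip, len(p)):
            c = tuple(p[i - k:i])
            trans[(c, p[i])] += 1
            ctx[c] += 1
    return sum(n * log(n / ctx[c]) for (c, s), n in trans.items()), len(trans), len(ctx)

ll1, t1, c1 = log_likelihood(paths, 1, 2)
ll2, t2, c2 = log_likelihood(paths, 2, 2)
df = (t2 - c2) - (t1 - c1)    # difference in free transition probabilities
lr = 2 * (ll2 - ll1)
p_val = chi2.sf(lr, df) if df > 0 else 1.0
print(f"LR = {lr:.1f}, df = {df}, p = {p_val:.2g} -> order {2 if p_val < 0.01 else 1}")
```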

    Distinguishing Hidden Markov Chains

    Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HMCs are called distinguishable if for every ε > 0 there is a distinguishing algorithm whose error probability is less than ε. We show that one can decide in polynomial time whether two HMCs are distinguishable. Further, we present and analyze two distinguishing algorithms for distinguishable HMCs. The first algorithm makes a decision after processing a fixed number of observations, and it exhibits two-sided error. The second algorithm processes an unbounded number of observations, but the algorithm has only one-sided error. The error probability, for both algorithms, decays exponentially with the number of processed observations. We also provide an algorithm for distinguishing multiple HMCs. Finally, we discuss an application in stochastic runtime verification.
    Comment: This is the full version of a LICS'16 paper
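
    A minimal sketch of the fixed-length strategy described above: run the forward algorithm under each HMC and decide for the one assigning the higher likelihood to the observed sequence. The two chains below are toy parameters; the paper's algorithms and their error analysis are more refined than this plain likelihood comparison.

```python
# Distinguish two HMCs by comparing forward-algorithm likelihoods of an
# observation sequence.  All transition/emission matrices are toy values.
import numpy as np

def log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMC with params (pi, A, B))."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

pi = np.array([0.5, 0.5])                    # common initial distribution
A1 = np.array([[0.9, 0.1], [0.1, 0.9]])      # sticky chain
B1 = np.array([[0.8, 0.2], [0.2, 0.8]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])      # memoryless chain
B2 = np.array([[0.5, 0.5], [0.5, 0.5]])

# Sample a length-200 observation sequence from the first HMC.
rng = np.random.default_rng(0)
s, obs = rng.choice(2, p=pi), []
for _ in range(200):
    obs.append(rng.choice(2, p=B1[s]))
    s = rng.choice(2, p=A1[s])

d = log_likelihood(pi, A1, B1, obs) - log_likelihood(pi, A2, B2, obs)
print("decide HMC", 1 if d > 0 else 2, f"(log-likelihood ratio {d:.1f})")
```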

    Characterization of Model-Based Detectors for CPS Sensor Faults/Attacks

    A vector-valued model-based cumulative sum (CUSUM) procedure is proposed for identifying faulty/falsified sensor measurements. First, given the system dynamics, we derive tools for tuning the CUSUM procedure in the fault/attack-free case to fulfill a desired detection performance (in terms of false alarm rate). We use the widely used chi-squared fault/attack detection procedure as a benchmark to compare the performance of the CUSUM. In particular, we characterize the state degradation that a class of attacks can induce in the system while ensuring that the detectors (CUSUM and chi-squared) do not raise alarms. In doing so, we find an upper bound on the state degradation that an undetected attacker can cause. We quantify the advantage of using a dynamic detector (CUSUM), which leverages the history of the state, over a static detector (chi-squared), which uses a single measurement at a time. Simulations of a chemical reactor with a heat exchanger are presented to illustrate the performance of our tools.
    Comment: Submitted to IEEE Transactions on Control Systems Technology
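
    A minimal sketch of the two residual-based detectors being compared: a static chi-squared test applied to each residual separately, and a CUSUM that accumulates the chi-squared distances over time. The residual model, the bias b, and both thresholds are illustrative values, not the tuned quantities derived in the paper.

```python
# Static chi-squared detector vs. dynamic CUSUM on sensor residuals.
# A small persistent bias (a stealthy fault/attack) is injected halfway.
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 2
Sigma = np.eye(m)                          # residual covariance, attack-free
r = rng.multivariate_normal(np.zeros(m), Sigma, size=n)
r[250:] += 0.8                             # persistent sensor bias from k=250

Sinv = np.linalg.inv(Sigma)
z = np.einsum('ki,ij,kj->k', r, Sinv, r)   # chi-squared distance per step

alpha = 10.6          # static threshold (chi2 with 2 dof, ~0.5% false alarms)
b, tau = 3.0, 25.0    # CUSUM drift bias and threshold (illustrative)

S, first_cusum_alarm = 0.0, None
for k in range(n):
    S = max(0.0, S + z[k] - b)             # CUSUM recursion
    if S > tau and first_cusum_alarm is None:
        first_cusum_alarm = k

# The small bias rarely trips the static test but makes the CUSUM drift up.
print("static chi-squared alarms:", np.count_nonzero(z > alpha), "of", n)
print("first CUSUM alarm at step:", first_cusum_alarm)
```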

    Detecting simultaneous variant intervals in aligned sequences

    Given a set of aligned sequences of independent noisy observations, we are concerned with detecting intervals where the mean values of the observations change simultaneously in a subset of the sequences. The intervals of changed means are typically short relative to the length of the sequences, the subset where the change occurs (the "carriers") can be relatively small, and the sizes of the changes can vary from one sequence to another. This problem is motivated by the scientific problem of detecting inherited copy number variants in aligned DNA samples. We suggest a statistic based on the assumption that for any given interval of changed means there is a given fraction of samples that carry the change. We derive an analytic approximation for the false positive error probability of a scan, which is shown by simulations to be reasonably accurate. We show that the new method usually improves on methods that analyze a single sample at a time and on our earlier multi-sample method, which is most efficient when the carriers form a large fraction of the set of sequences. The proposed procedure is also shown to be robust with respect to the assumed fraction of carriers of the changes.
    Comment: Published at http://dx.doi.org/10.1214/10-AOAS400 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
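
    A minimal sketch in the spirit of the statistic above, with invented data and parameters: per-sequence z-scores on each candidate interval are combined through a mixture likelihood ratio that assumes a fraction f of the sequences carry the change. The paper's exact statistic and its analytic false-positive approximation are not reproduced here.

```python
# A simplified multi-sample interval scan: score every short interval by a
# mixture likelihood ratio assuming a carrier fraction f.  Toy data: 4 of
# 20 sequences share an elevated mean on positions 120..140.
import numpy as np

rng = np.random.default_rng(2)
N, T = 20, 300
X = rng.standard_normal((N, T))
X[:4, 120:140] += 1.5                     # the carriers and their interval

f = 0.2                                   # assumed fraction of carriers
cs = np.concatenate([np.zeros((N, 1)), X.cumsum(axis=1)], axis=1)

best = (-np.inf, None)
for s in range(T):
    for e in range(s + 5, min(s + 51, T + 1)):   # interval lengths 5..50
        Z = (cs[:, e] - cs[:, s]) / np.sqrt(e - s)
        Zc = np.minimum(np.abs(Z), 6.0)          # cap to avoid overflow
        # exp(Z^2/2) is the per-sequence likelihood ratio maximized over
        # the shift size; mix it with carrier probability f.
        stat = np.sum(np.log(1 - f + f * np.exp(Zc ** 2 / 2)))
        if stat > best[0]:
            best = (stat, (s, e))

print("max scan statistic %.1f on interval %s" % best)
```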

    An Information-Theoretic Test for Dependence with an Application to the Temporal Structure of Stock Returns

    Information theory provides ideas for conceptualizing information and measuring relationships between objects. It has found wide application in the sciences, but economics and finance have made surprisingly little use of it. We show that time series data can usefully be studied as information: by noting the relationship between statistical redundancy and dependence, we are able to use the results of information theory to construct a test for joint dependence of random variables. The test is in the same spirit as those developed by Ryabko and Astola (2005, 2006b,a), but differs from these in that we add extra randomness to the original stochastic process. It uses data compression to estimate the entropy rate of a stochastic process, which allows it to measure dependence among sets of random variables, as opposed to the existing econometric literature that uses entropy and finds itself restricted to pairwise tests of dependence. We show how serial dependence may be detected in S&P500 and PSI20 stock returns over different sample periods and frequencies. We apply the test to synthetic data to judge its ability to recover known temporal dependence structures.
    Comment: 22 pages, 7 figures
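
    A crude sketch of the compression intuition, not the test itself: quantize a series into a small symbol alphabet and compare how well the original ordering compresses against random shuffles of the same symbols; serial dependence makes the original more compressible. The AR(1) series, the quartile quantization, and the use of bz2 are all illustrative choices.

```python
# Compression-based evidence of serial dependence: an AR(1) series,
# quantized to 4 symbols, compresses better in its original order than
# after shuffling (which destroys the temporal structure).
import bz2
import numpy as np

rng = np.random.default_rng(3)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()   # dependent series

# Quantize into 4 symbols by quartile.
sym = np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x).astype(np.uint8)

orig = len(bz2.compress(sym.tobytes()))
shuffled = []
for _ in range(10):
    rng.shuffle(sym)
    shuffled.append(len(bz2.compress(sym.tobytes())))

# A clearly smaller size for the original ordering indicates dependence.
print(f"original: {orig} bytes, shuffled mean: {np.mean(shuffled):.0f} bytes")
```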

    Marginal likelihoods in phylogenetics: a review of methods and applications

    By providing a framework for accounting for the shared ancestry inherent to all life, phylogenetics is becoming the statistical foundation of biology. The importance of model choice continues to grow as phylogenetic models increase in complexity to better capture micro- and macroevolutionary processes. In a Bayesian framework, the marginal likelihood is how data update our prior beliefs about models, which gives us an intuitive measure of comparing model fit that is grounded in probability theory. Given the rapid increase in the number and complexity of phylogenetic models, methods for approximating marginal likelihoods are increasingly important. Here we try to provide an intuitive description of marginal likelihoods and why they are important in Bayesian model testing. We also categorize and review methods for estimating marginal likelihoods of phylogenetic models, highlighting several recent methods that provide well-behaved estimates. Furthermore, we review some empirical studies that demonstrate how marginal likelihoods can be used to learn about models of evolution from biological data. We discuss promising alternatives that can complement marginal likelihoods for Bayesian model choice, including posterior-predictive methods. Using simulations, we find one alternative method based on approximate Bayesian computation (ABC) to be biased. We conclude by discussing the challenges of Bayesian model choice and future directions that promise to improve the approximation of marginal likelihoods and Bayesian phylogenetics as a whole.
    Comment: 33 pages, 3 figures
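
    A toy illustration of what a marginal likelihood is, far from phylogenetics but with the same structure: the marginal likelihood of coin-flip data under a Beta prior, computed exactly and by naive Monte Carlo averaging of the likelihood over prior draws. The data and prior are invented; the estimators reviewed above (e.g., stepping-stone sampling) exist precisely because this naive average behaves poorly for realistic models.

```python
# Marginal likelihood of binomial data under a Beta prior: exact
# (beta-binomial) vs. a naive Monte Carlo average over the prior.
import numpy as np
from scipy.special import betaln, gammaln

n, k = 50, 36                  # toy data: 36 heads in 50 flips
a, b = 1.0, 1.0                # Beta(1, 1) prior on the heads probability

log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
log_exact = log_choose + betaln(a + k, b + n - k) - betaln(a, b)

rng = np.random.default_rng(4)
theta = rng.beta(a, b, size=200_000)                 # draws from the prior
log_like = log_choose + k * np.log(theta) + (n - k) * np.log1p(-theta)
mc = np.log(np.mean(np.exp(log_like)))               # naive estimator

print(f"exact log marginal likelihood: {log_exact:.4f}, Monte Carlo: {mc:.4f}")
```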