
    Switching Regression Models and Causal Inference in the Presence of Discrete Latent Variables

    Given a response $Y$ and a vector $X = (X^1, \dots, X^d)$ of $d$ predictors, we investigate the problem of inferring direct causes of $Y$ among the vector $X$. Models for $Y$ that use all of its causal covariates as predictors enjoy the property of being invariant across different environments or interventional settings. Given data from such environments, this property has been exploited for causal discovery. Here, we extend this inference principle to situations in which some (discrete-valued) direct causes of $Y$ are unobserved. Such cases naturally give rise to switching regression models. We provide sufficient conditions for the existence, consistency and asymptotic normality of the MLE in linear switching regression models with Gaussian noise, and construct a test for the equality of such models. These results allow us to prove that the proposed causal discovery method obtains asymptotic false discovery control under mild conditions. We provide an algorithm, make available code, and test our method on simulated data. It is robust against model violations and outperforms state-of-the-art approaches. We further apply our method to a real data set, where we show that it not only outputs causal predictors but also a process-based clustering of data points, which could be of additional interest to practitioners.
    Comment: 46 pages, 14 figures; real-world application added in Section 5.2; additional numerical experiments added in the Appendix
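    For readers unfamiliar with switching regressions, the sketch below (a hypothetical illustration, not the authors' causal discovery procedure; all names are made up) fits a two-regime linear switching regression with Gaussian noise by EM, treating the discrete regime as an unobserved latent variable:

        # Hypothetical EM sketch for a two-regime linear switching regression with
        # Gaussian noise (illustration only; not the paper's method).
        import numpy as np

        def em_switching_regression(X, y, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            n, d = X.shape
            Xb = np.hstack([X, np.ones((n, 1))])        # add an intercept column
            beta = rng.normal(size=(2, d + 1))          # per-regime coefficients
            sigma2 = np.full(2, y.var())                # per-regime noise variances
            mix = np.array([0.5, 0.5])                  # regime mixing weights
            for _ in range(n_iter):
                # E-step: posterior probability of each hidden regime per point
                resid = y[:, None] - Xb @ beta.T
                log_lik = -0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2)
                log_post = np.log(mix) + log_lik
                log_post -= log_post.max(axis=1, keepdims=True)
                gamma = np.exp(log_post)
                gamma /= gamma.sum(axis=1, keepdims=True)
                # M-step: weighted least squares and variance update per regime
                for k in range(2):
                    w = gamma[:, k]
                    Xw = Xb * w[:, None]
                    beta[k] = np.linalg.solve(Xb.T @ Xw + 1e-8 * np.eye(d + 1), Xw.T @ y)
                    sigma2[k] = (w * (y - Xb @ beta[k]) ** 2).sum() / w.sum()
                mix = gamma.mean(axis=0)
            return beta, sigma2, mix, gamma

    The responsibilities gamma give a soft assignment of each data point to a regime, analogous in spirit to the process-based clustering mentioned in the abstract.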

    Recent advances in directional statistics

    Mainstream statistical methodology is generally applicable to data observed in Euclidean space. There are, however, numerous contexts of considerable scientific interest in which the natural supports for the data under consideration are Riemannian manifolds like the unit circle, torus, sphere and their extensions. Typically, such data can be represented using one or more directions, and directional statistics is the branch of statistics that deals with their analysis. In this paper we provide a review of the many recent developments in the field since the publication of Mardia and Jupp (1999), still the most comprehensive text on directional statistics. Many of those developments have been stimulated by interesting applications in fields as diverse as astronomy, medicine, genetics, neurology, aeronautics, acoustics, image analysis, text mining, environmetrics, and machine learning. We begin by considering developments for the exploratory analysis of directional data before progressing to distributional models, general approaches to inference, hypothesis testing, regression, nonparametric curve estimation, methods for dimension reduction, classification and clustering, and the modelling of time series, spatial and spatio-temporal data. An overview of currently available software for analysing directional data is also provided, and potential future developments discussed.
    Comment: 61 pages
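    To give a concrete flavour of the exploratory analyses reviewed, the following minimal sketch (generic, not tied to any specific package surveyed in the paper) computes the circular mean direction, the mean resultant length, and a standard moment-type estimate of the von Mises concentration for a sample of angles:

        # Basic exploratory directional statistics for circular data (angles in radians).
        import numpy as np

        def circular_summary(theta):
            """Circular mean direction, mean resultant length, and rough kappa estimate."""
            C, S = np.cos(theta).sum(), np.sin(theta).sum()
            n = len(theta)
            R_bar = np.hypot(C, S) / n          # mean resultant length in [0, 1]
            mean_dir = np.arctan2(S, C)         # circular mean direction
            # Piecewise moment-type approximation to the von Mises concentration kappa
            if R_bar < 0.53:
                kappa = 2 * R_bar + R_bar**3 + 5 * R_bar**5 / 6
            elif R_bar < 0.85:
                kappa = -0.4 + 1.39 * R_bar + 0.43 / (1 - R_bar)
            else:
                kappa = 1 / (R_bar**3 - 4 * R_bar**2 + 3 * R_bar)
            return mean_dir, R_bar, kappa

        # Example: a tight cluster of directions around 0.3 radians
        angles = np.random.default_rng(1).vonmises(mu=0.3, kappa=8.0, size=500)
        print(circular_summary(angles))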

    Small-variance asymptotics for Bayesian neural networks

    Bayesian neural networks (BNNs) are a rich and flexible class of models that have several advantages over standard feedforward networks, but are typically expensive to train on large-scale data. In this thesis, we explore the use of small-variance asymptotics, an approach for deriving fast algorithms from probabilistic models, on various Bayesian neural network models. We first demonstrate how small-variance asymptotics shows precise connections between standard neural networks and BNNs; for example, particular sampling algorithms for BNNs reduce to standard backpropagation in the small-variance limit. We then explore a more complex BNN where the number of hidden units is additionally treated as a random variable in the model. While standard sampling schemes would be too slow to be practical, our asymptotic approach yields a simple method for extending standard backpropagation to the case where the number of hidden units is not fixed. We show on several data sets that the resulting algorithm has benefits over backpropagation on networks with a fixed architecture.
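    A toy sketch of the small-variance intuition (hypothetical, not the thesis's derivation): a Langevin-type update that adds Gaussian noise to each gradient step collapses to an ordinary deterministic, backpropagation-style update as the injected noise variance shrinks to zero:

        # Toy illustration of the small-variance limit on a quadratic loss.
        import numpy as np

        def langevin_step(w, grad, lr, noise_var, rng):
            # Gradient step plus Gaussian exploration noise; with noise_var -> 0
            # this is exactly the deterministic gradient-descent update.
            return w - lr * grad(w) + np.sqrt(noise_var) * rng.normal(size=w.shape)

        grad = lambda w: w                      # gradient of L(w) = 0.5 * ||w||^2
        rng = np.random.default_rng(0)
        for noise_var in (1.0, 1e-2, 1e-6, 0.0):
            w = np.ones(3)
            for _ in range(100):
                w = langevin_step(w, grad, lr=0.1, noise_var=noise_var, rng=rng)
            print(noise_var, np.round(w, 4))    # iterates approach the MAP point as noise_var -> 0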

    Using Markov Models and Statistics to Learn, Extract, Fuse, and Detect Patterns in Raw Data

    Many systems are partially stochastic in nature. We have derived data-driven approaches for extracting stochastic state machines (Markov models) directly from observed data. This chapter provides an overview of our approach with numerous practical applications. We have used this approach for inferring shipping patterns, exploiting computer system side-channel information, and detecting botnet activities. For contrast, we include a related data-driven statistical inferencing approach that detects and localizes radiation sources.
    Comment: Accepted by the 2017 International Symposium on Sensor Networks, Systems and Security
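    A generic sketch of the basic building block (not the chapter's full pipeline): estimating a Markov model from an observed sequence of discrete states by counting state-to-state transitions and normalizing each row into probabilities:

        # Estimate a Markov transition matrix from an observed state sequence.
        import numpy as np

        def estimate_markov_model(states):
            labels = sorted(set(states))
            idx = {s: i for i, s in enumerate(labels)}
            counts = np.zeros((len(labels), len(labels)))
            for a, b in zip(states[:-1], states[1:]):
                counts[idx[a], idx[b]] += 1
            rows = counts.sum(axis=1, keepdims=True)
            P = np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
            return labels, P                     # P[i, j] = Pr(next = labels[j] | current = labels[i])

        labels, P = estimate_markov_model(list("AABABBBAAB"))
        print(labels)
        print(P)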

    Asymptotic equivalence for inhomogeneous jump diffusion processes and white noise

    We prove the global asymptotic equivalence between the experiments generated by the discrete (high-frequency) or continuous observation of a path of a time-inhomogeneous jump-diffusion process and a Gaussian white noise experiment. Here, the considered parameter is the drift function, and we suppose that the observation time $T$ tends to $\infty$. The approximation is given in the sense of the Le Cam $\Delta$-distance, under smoothness conditions on the unknown drift function. These asymptotic equivalences are established by constructing explicit Markov kernels that can be used to reproduce one experiment from the other.
    Comment: 20 pages; to appear in ESAIM: P&S. In this version there are some improvements in the exposition following the reports' suggestions
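    For intuition about the observation scheme (a hypothetical simulation sketch; the paper itself is purely theoretical), one can generate a high-frequency discretely observed path of a time-inhomogeneous jump-diffusion using an Euler scheme with compound Poisson jumps:

        # Simulate dX_t = b(t, X_t) dt + sigma dW_t + jumps on a fine grid.
        import numpy as np

        def simulate_jump_diffusion(b, sigma, jump_rate, jump_scale, T, n, x0=0.0, seed=0):
            rng = np.random.default_rng(seed)
            dt = T / n
            t = np.linspace(0.0, T, n + 1)
            x = np.empty(n + 1)
            x[0] = x0
            for i in range(n):
                n_jumps = rng.poisson(jump_rate * dt)                # jumps in (t_i, t_{i+1}]
                jump_size = rng.normal(0.0, jump_scale, n_jumps).sum()
                x[i + 1] = (x[i] + b(t[i], x[i]) * dt
                            + sigma * np.sqrt(dt) * rng.normal()
                            + jump_size)
            return t, x

        # Example drift depending on time and state (the unknown object in the paper)
        t, x = simulate_jump_diffusion(b=lambda t, x: np.sin(t) - 0.5 * x,
                                       sigma=0.3, jump_rate=1.0, jump_scale=0.5,
                                       T=10.0, n=10_000)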