
    Data Driven Nonparametric Detection

    The major goal of signal detection is to distinguish between hypotheses about the state of events based on observations. Typically, signal detection can be categorized into centralized detection, where all observed data are available for making a decision, and decentralized detection, where only quantized data from distributed sensors are forwarded to a fusion center for decision making. While these problems have been intensively studied under parametric and semi-parametric models, in which the underlying distributions are fully or partially known, nonparametric scenarios are not yet well understood. This thesis mainly explores nonparametric models with unknown underlying distributions, as well as semi-parametric models as an intermediate step toward solving nonparametric problems. One major topic of this thesis is nonparametric decentralized detection, in which the joint distribution of the state of an event and the sensor observations is unknown, and only some training data are available. A kernel-based nonparametric approach was proposed by Nguyen, Wainwright and Jordan, in which all sensors are treated as being of equal quality. We study heterogeneous sensor networks and propose a weighted kernel, so that weight parameters are used to selectively incorporate each sensor's information into the fusion center's decision rule based on the quality of that sensor's observations. Furthermore, the weight parameters also serve as sensor selection parameters, with nonzero parameters corresponding to the sensors being selected. Sensor selection is performed jointly with the design of the decision rules of the sensors and the fusion center, and the resulting optimal decision rule has only a sparse number of nonzero weight parameters. A gradient projection algorithm and a Gauss-Seidel algorithm are developed to solve the risk minimization problem, which is non-convex, and both algorithms are shown to converge to critical points.
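The weighted-kernel idea can be illustrated with a small sketch. This is not the thesis's exact formulation: the Gaussian per-sensor kernel, the weights, and all function names below are illustrative assumptions.

```python
import numpy as np

def sensor_kernel(u, v, bandwidth=1.0):
    # Illustrative Gaussian kernel on a single sensor's (quantized) observation.
    return float(np.exp(-(u - v) ** 2 / (2 * bandwidth ** 2)))

def weighted_kernel(x, y, weights):
    # Weighted combination of per-sensor kernels: a sensor with weight 0
    # contributes nothing, so zero weights act as sensor deselection.
    return sum(w * sensor_kernel(xm, ym) for w, xm, ym in zip(weights, x, y))

def fusion_decision(x, train_X, alphas, weights):
    # Kernel decision statistic at the fusion center,
    # f(x) = sum_i alpha_i * K_w(x, x_i); its sign gives the decision.
    score = sum(a * weighted_kernel(x, xi, weights)
                for a, xi in zip(alphas, train_X))
    return 1 if score >= 0 else -1
```

Because a zero weight removes a sensor's kernel entirely, a sparse weight vector directly encodes which sensors are selected.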
    The other major topic of this thesis is composite outlier detection in centralized scenarios. The goal is to detect the existence of data streams drawn from outlying distributions among data streams drawn from a typical distribution. We study both the semi-parametric model, with a known typical distribution and unknown outlying distributions, and the nonparametric model, with unknown typical and outlying distributions. For both models, we construct generalized likelihood ratio tests (GLRTs), and show that with knowledge of the KL divergence between the outlying and typical distributions, the GLRT is exponentially consistent (i.e., the error risk function decays exponentially fast). We also show that with knowledge of the Chernoff distance between the outlying and typical distributions, the GLRT for the semi-parametric model achieves the same risk decay exponent as in the parametric model, and the GLRT for the nonparametric model achieves the same performance asymptotically as the number of data streams becomes large. We further show that for both models, without any knowledge of the distance between the distributions, no exponentially consistent test exists. However, a GLRT with a diminishing threshold can still be consistent.
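For the semi-parametric model over a finite alphabet, one natural per-stream statistic is n times the KL divergence between the stream's empirical distribution and the known typical distribution; large values flag likely outliers. The sketch below illustrates that idea only: it is a simplified construction in that spirit, not the thesis's exact test, and the function names are ours.

```python
import numpy as np
from collections import Counter

def empirical_pmf(stream, alphabet):
    # Empirical distribution (type) of a finite-alphabet data stream.
    counts = Counter(stream)
    n = len(stream)
    return np.array([counts[a] / n for a in alphabet])

def kl_divergence(p, q):
    # D(p || q) over a finite alphabet, with the 0 * log 0 = 0 convention;
    # assumes q has full support so no division by zero occurs.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def glr_outlier_scores(streams, typical_pmf, alphabet):
    # Per-stream statistic n * D(gamma_i || pi): large values flag streams
    # whose empirical distribution gamma_i is unlikely under the typical pmf pi.
    n = len(streams[0])
    pi = np.asarray(typical_pmf, dtype=float)
    return [n * kl_divergence(empirical_pmf(s, alphabet), pi) for s in streams]
```

A stream matching the typical distribution scores near zero, while a stream from a distant outlying distribution scores on the order of n times the divergence, which is what drives the exponential consistency results.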

    On optimal two sample homogeneity tests for finite alphabets

    Suppose we are given two independent strings of data from a known finite alphabet. We are interested in testing the null hypothesis that both strings were drawn from the same distribution, assuming that the samples within each string are mutually independent. Among statisticians, the most popular solution for such a homogeneity test is the two-sample chi-square test, primarily due to its ease of implementation and the fact that the limiting null distribution of the associated test statistic is known and easy to compute. Although tests that are asymptotically optimal in error probability have been proposed in the information theory literature, such optimality results are not well known and such tests are rarely used in practice. In this paper we seek to bridge this gap between theory and practice. We study two different optimal tests, proposed by Shayevitz [1] and Gutman [2]. We first obtain a simplified structure of Shayevitz's test and then obtain the limiting distributions of the test statistics used in both tests. These results provide guidelines for choosing thresholds that guarantee an approximate false-alarm constraint for finite-length observation sequences, thus making these tests easy to use in practice. The approximation accuracies are demonstrated using simulations. We argue that such homogeneity tests, with provable optimality properties, could be better choices than the chi-square test in practice.

    Universal outlier hypothesis testing with applications to anomaly detection

    Outlier hypothesis testing is studied in a universal setting. Multiple sequences of observations are collected, a small subset (possibly empty) of which are outliers. A sequence is considered an outlier if the observations in that sequence are distributed according to an “outlier” distribution, distinct from the “typical” distribution governing the observations in the majority of the sequences. The outlier and typical distributions are not fully known, and they can be arbitrarily close. The goal is to design a universal test that best discerns the outlier sequence(s). Both fixed sample size and sequential settings are considered in this dissertation. In the fixed sample size setting, for models with exactly one outlier, the generalized likelihood test is shown to be universally exponentially consistent. A single-letter characterization of the error exponent achieved by such a test is derived, and the test is shown to achieve the optimal error exponent asymptotically as the number of sequences goes to infinity. When the null hypothesis with no outlier is included, a modification of the generalized likelihood test is shown to achieve the same error exponent under each non-null hypothesis, and also consistency under the null hypothesis. Models with multiple outliers are then considered. When the outliers can be distinctly distributed, it is shown that the number of outliers must be known at the outset in order to achieve exponential consistency. For the setting with a known number of distinctly distributed outliers, the generalized likelihood test is shown to be universally exponentially consistent. The limiting error exponent achieved by such a test is characterized, and the test is shown to be asymptotically exponentially consistent.
    For the setting with an unknown number of identically distributed outliers, a modification of the generalized likelihood test is shown to achieve a positive error exponent under each non-null hypothesis, and consistency under the null hypothesis. In the sequential setting, a test in the flavor of the repeated significance test is proposed. The test is shown to be universally consistent, and universally exponentially consistent under non-null hypotheses. In addition, when the typical distribution is known, the test is shown to be universally asymptotically optimal when the number of outliers is the largest possible. In all cases, the asymptotic performance of the proposed test when none of the underlying distributions is known is shown to converge, as the number of sequences goes to infinity, to the performance achievable when only the typical distribution is known. For models with continuous alphabets, a test with the same structure as the generalized likelihood test is proposed and shown to be universally consistent. A close connection between universal outlier hypothesis testing and cluster analysis is also demonstrated. The performance of the various proposed tests is evaluated on a synthetic data set and contrasted with that of two popular clustering methods. Applied to a real data set for spam detection, the sequential test is shown to outperform the fixed sample size test when the lengths of the sequences exceed a certain value. In addition, the performance of the proposed tests is shown to be superior to that of another kernel-based test for large sample sizes.
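The generalized likelihood test for exactly one outlier, with both distributions unknown, can be sketched over a finite alphabet as follows. This is a minimal plug-in illustration of the idea, not the dissertation's exact statistic: it assumes equal-length streams in which every symbol occurs (so no log(0) arises), and the function names are ours.

```python
import numpy as np
from collections import Counter

def empirical_pmf(stream, alphabet):
    # Empirical distribution (type) of a finite-alphabet stream.
    counts = Counter(stream)
    return np.array([counts[a] / len(stream) for a in alphabet])

def gl_outlier_index(streams, alphabet):
    # Generalized likelihood test for exactly one outlier: under the
    # hypothesis "stream i is the outlier", the unknown outlier pmf is
    # replaced by stream i's empirical pmf, and the unknown typical pmf
    # by the pooled empirical pmf of the remaining streams.  The index
    # with the largest plug-in log-likelihood is declared the outlier.
    pmfs = [empirical_pmf(s, alphabet) for s in streams]
    n = len(streams[0])
    scores = []
    for i, p in enumerate(pmfs):
        pooled = np.mean([q for j, q in enumerate(pmfs) if j != i], axis=0)
        own = np.sum(p * np.log(p))                        # outlier stream term
        rest = sum(np.sum(q * np.log(pooled))
                   for j, q in enumerate(pmfs) if j != i)  # typical stream terms
        scores.append(n * (own + rest))
    return int(np.argmax(scores))
```

As the abstract indicates, the appeal of this plug-in structure is that it uses only the empirical distributions of the observed sequences, so no knowledge of the outlier or typical distributions is required.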