    A Linear-Time Kernel Goodness-of-Fit Test

    We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein's method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness-of-fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.
    Comment: Accepted to NIPS 201
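
    A minimal sketch of the kind of linear-time Stein statistic described above, assuming a standard Gaussian model (score function grad log p(x) = -x) and a Gaussian kernel. The function names are illustrative, and the test locations V are held fixed here, whereas the paper learns them by optimizing test power; this is not the authors' implementation.

        import numpy as np

        def stein_features(X, V, sigma=1.0):
            # Stein witness xi(x, v) = score(x) * k(x, v) + grad_x k(x, v),
            # with score(x) = -x (standard Gaussian model, assumed) and
            # Gaussian kernel k(x, v) = exp(-||x - v||^2 / (2 sigma^2)).
            n, d = X.shape
            feats = np.empty((n, V.shape[0], d))
            for j, v in enumerate(V):
                diff = X - v                                       # (n, d)
                k = np.exp(-np.sum(diff**2, 1) / (2 * sigma**2))   # (n,)
                grad_k = -diff / sigma**2 * k[:, None]             # (n, d)
                feats[:, j, :] = (-X) * k[:, None] + grad_k
            return feats

        def linear_time_statistic(X, V, sigma=1.0):
            # Squared norm of the empirical mean of the Stein features,
            # averaged over test locations and dimensions; one pass over
            # the samples, so the cost is linear in n.
            mean_feats = stein_features(X, V, sigma).mean(axis=0)  # (J, d)
            return X.shape[0] * np.mean(mean_feats**2)

        # Samples drawn from the model itself should give a small value.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 2))
        V = rng.standard_normal((5, 2))    # fixed test locations (assumption)
        print(linear_time_statistic(X, V))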

    A Kernel Test of Goodness of Fit

    We propose a nonparametric statistical test for goodness-of-fit: given a set of samples, the test determines how likely it is that these were generated from a target density function. The measure of goodness-of-fit is a divergence constructed via Stein's method using functions from a Reproducing Kernel Hilbert Space. Our test statistic is based on an empirical estimate of this divergence, taking the form of a V-statistic in terms of the log gradients of the target density and the kernel. We derive a statistical test, both for i.i.d. and non-i.i.d. samples, where we estimate the null distribution quantiles using a wild bootstrap procedure. We apply our test to quantifying convergence of approximate Markov Chain Monte Carlo methods, statistical model criticism, and evaluating quality of fit vs model complexity in nonparametric density estimation
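
    A compact sketch of the V-statistic above for the special case of a standard Gaussian target (log gradient s(x) = -x, an assumption for the example) and a Gaussian RBF kernel, for which the Stein-modified kernel u_p has a closed form; all names are illustrative.

        import numpy as np

        def ksd_v_statistic(X, sigma=1.0):
            # V-statistic estimate of the kernel Stein discrepancy for a
            # standard Gaussian target, score s(x) = -x, with RBF kernel
            # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
            n, d = X.shape
            S = -X                                    # scores s(x_i)
            D = X[:, None, :] - X[None, :, :]         # pairwise x_i - x_j
            sq = np.sum(D**2, axis=2)
            K = np.exp(-sq / (2 * sigma**2))
            # u_p(x, y) = s(x)'s(y) k + s(x)'grad_y k + s(y)'grad_x k
            #             + trace(grad_x grad_y k)
            term1 = (S @ S.T) * K
            term2 = np.einsum('id,ijd->ij', S, D / sigma**2) * K
            term3 = np.einsum('jd,ijd->ij', S, -D / sigma**2) * K
            term4 = (d / sigma**2 - sq / sigma**4) * K
            return np.mean(term1 + term2 + term3 + term4)

    Under the null hypothesis this V-statistic is degenerate, which is why the abstract estimates the quantiles of its null distribution with a wild bootstrap rather than a normal approximation.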

    Simultaneous Testing of Mean and Variance Structures in Nonlinear Time Series Models

    This paper proposes a nonparametric simultaneous test for parametric specification of the conditional mean and variance functions in a time series regression model. The test is based on an empirical likelihood (EL) statistic that measures the goodness of fit between the parametric estimates and the nonparametric kernel estimates of the mean and variance functions. A unique feature of the test is its ability to distribute natural weights automatically between the mean and the variance components of the goodness of fit. To reduce the dependence of the test on a single pair of smoothing bandwidths, we construct an adaptive test by maximizing a standardized version of the empirical likelihood test statistic over a set of smoothing bandwidths. The test procedure is based on a bootstrap calibration to the distribution of the empirical likelihood test statistic. We demonstrate that the empirical likelihood test is able to distinguish local alternatives which are different from the null hypothesis at an optimal rate.
    Keywords: bootstrap, empirical likelihood, goodness-of-fit test, kernel estimation, least squares empirical likelihood, rate-optimal test
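
    The adaptive construction described above, maximizing a standardized statistic over bandwidths and calibrating by bootstrap, has a generic skeleton; the sketch below assumes user-supplied stat_fn and resample_fn placeholders and is not the paper's estimator.

        import numpy as np

        def adaptive_bootstrap_test(data, stat_fn, bandwidths, resample_fn,
                                    n_boot=500, alpha=0.05, seed=0):
            # Max over bandwidths of a standardized statistic, with the
            # null distribution of the max calibrated by bootstrap.
            rng = np.random.default_rng(seed)
            # Bootstrap replicates of the statistic at every bandwidth.
            boot = np.array([[stat_fn(resample_fn(data, rng), h)
                              for h in bandwidths] for _ in range(n_boot)])
            mu, sd = boot.mean(axis=0), boot.std(axis=0, ddof=1)
            observed = np.array([stat_fn(data, h) for h in bandwidths])
            t_obs = np.max((observed - mu) / sd)        # adaptive statistic
            t_boot = np.max((boot - mu) / sd, axis=1)   # its null replicates
            p_value = np.mean(t_boot >= t_obs)
            return t_obs, p_value, p_value < alpha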

    Topics in kernel hypothesis testing

    This thesis investigates some unaddressed problems in kernel nonparametric hypothesis testing. The contributions are grouped around three main themes.

    Wild Bootstrap for Degenerate Kernel Tests. A wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed. This bootstrap method is used to construct provably consistent tests that apply to random processes. It applies to a large group of kernel tests based on V-statistics, which are degenerate under the null hypothesis and non-degenerate elsewhere. In experiments, the wild bootstrap gives strong performance on synthetic examples, on audio data, and in performance benchmarking for the Gibbs sampler.

    A Kernel Test of Goodness of Fit. A nonparametric statistical test for goodness-of-fit is proposed: given a set of samples, the test determines how likely it is that these were generated from a target density function. The measure of goodness-of-fit is a divergence constructed via Stein's method using functions from a Reproducing Kernel Hilbert Space. Construction of the test is based on the wild bootstrap method. We apply our test to quantifying convergence of approximate Markov Chain Monte Carlo methods, statistical model criticism, and evaluating quality of fit vs model complexity in nonparametric density estimation.

    Fast Analytic Functions Based Two Sample Test. A class of nonparametric two-sample tests with a cost linear in the sample size is proposed. Two tests are given, both based on an ensemble of distances between analytic functions representing each of the distributions. Experiments on artificial benchmarks and on challenging real-world testing problems demonstrate a good power/time trade-off that is retained even in high-dimensional problems.

    The main contributions to science are the following. We prove that the kernel tests based on the wild bootstrap method tightly control the type one error at the desired level and are consistent, i.e. the type two error drops to zero with an increasing number of samples. We construct a kernel goodness-of-fit test that requires knowledge of the density only up to a normalizing constant. We use this test to construct the first consistent test for convergence of Markov chains, and use it to quantify properties of approximate MCMC algorithms. Finally, we construct a linear-time two-sample test that uses a new, finite-dimensional feature representation of probability measures.
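
    The wild bootstrap theme admits a short sketch: given the n x n matrix H of kernel evaluations h(x_i, x_j) underlying a degenerate V-statistic, the null distribution is approximated by reweighting H with a zero-mean, time-correlated weight process. The autoregressive Gaussian weights below are one common choice, not necessarily the thesis's exact construction.

        import numpy as np

        def wild_bootstrap_pvalue(H, n_boot=1000, rho=0.95, seed=0):
            # H[i, j] = h(x_i, x_j): kernel evaluations of a V-statistic
            # that is degenerate under the null hypothesis.
            rng = np.random.default_rng(seed)
            n = H.shape[0]
            observed = H.mean()          # V-statistic (1/n^2) sum_ij H_ij
            boot = np.empty(n_boot)
            for b in range(n_boot):
                # Zero-mean autoregressive weights, correlated in time so
                # that the procedure remains valid for dependent samples.
                w = np.empty(n)
                w[0] = rng.standard_normal()
                eps = rng.standard_normal(n)
                for t in range(1, n):
                    w[t] = rho * w[t - 1] + np.sqrt(1 - rho**2) * eps[t]
                boot[b] = w @ H @ w / n**2   # reweighted V-statistic
            return np.mean(boot >= observed)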

    A kernel Stein test of goodness of fit for sequential models

    We propose a goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences. The proposed measure is an instance of the kernel Stein discrepancy (KSD), which has been used to construct goodness-of-fit tests for unnormalized densities. The KSD is defined by its Stein operator: current operators used in testing apply to fixed-dimensional spaces. As our main contribution, we extend the KSD to the variable-dimension setting by identifying appropriate Stein operators, and propose a novel KSD goodness-of-fit test. As with the previous variants, the proposed KSD does not require the density to be normalized, allowing the evaluation of a large class of models. Our test is shown to perform well in practice on discrete sequential data benchmarks.
    Comment: 18 pages. Accepted to ICML 202
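
    The paper's Stein operators for variable-dimension spaces are its central technical object, so a faithful implementation is beyond a sketch; but given any such Stein-modified kernel, the test statistic reduces to the familiar V-statistic form. The function below treats stein_kernel as supplied, which is an assumption of the illustration.

        import numpy as np

        def ksd_vstat(samples, stein_kernel):
            # samples: list of variable-length sequences (e.g. token lists).
            # stein_kernel(x, y): a Stein-modified kernel valid on the
            # variable-dimension space -- the paper's contribution, here
            # assumed given.
            return float(np.mean([[stein_kernel(x, y) for y in samples]
                                  for x in samples]))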

    Nonparametric estimation of the distribution of the autoregressive coefficient from panel random-coefficient AR(1) data

    We discuss nonparametric estimation of the distribution function $G(x)$ of the autoregressive coefficient $a \in (-1,1)$ from a panel of $N$ random-coefficient AR(1) series, each of length $n$, by the empirical distribution function of lag 1 sample autocorrelations of individual AR(1) processes. Consistency and asymptotic normality of the empirical distribution function and of a class of kernel density estimators are established under some regularity conditions on $G(x)$ as $N$ and $n$ increase to infinity. The Kolmogorov-Smirnov goodness-of-fit test for simple and composite hypotheses of Beta distributed $a$ is discussed. A simulation study for goodness-of-fit testing compares the finite-sample performance of our nonparametric estimator to the performance of its parametric analogue discussed in Beran et al. (2010)
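
    A brief sketch of the estimator and the simple-hypothesis test described above. The Beta null on $(-1,1)$ is illustrated by rescaling scipy's Beta distribution, which is an assumption for the example rather than the paper's exact setup, and the function names are illustrative.

        import numpy as np
        from scipy import stats

        def lag1_autocorr(x):
            # Lag-1 sample autocorrelation of one series.
            x = x - x.mean()
            return np.dot(x[1:], x[:-1]) / np.dot(x, x)

        def ks_test_beta(panel, alpha_, beta_):
            # panel: (N, n) array, one AR(1) series per row.  The empirical
            # distribution of the N lag-1 autocorrelations estimates G(x);
            # test it against a Beta(alpha_, beta_) law rescaled to (-1, 1).
            a_hat = np.apply_along_axis(lag1_autocorr, 1, panel)
            null = stats.beta(alpha_, beta_, loc=-1, scale=2)
            return stats.kstest(a_hat, null.cdf)

        # Example: simulate under the null with a ~ Beta(2, 2) on (-1, 1).
        rng = np.random.default_rng(0)
        N, n = 200, 500
        a = stats.beta(2, 2, loc=-1, scale=2).rvs(N, random_state=rng)
        panel = np.empty((N, n))
        for i in range(N):
            x = 0.0
            for t in range(n):
                x = a[i] * x + rng.standard_normal()
                panel[i, t] = x
        print(ks_test_beta(panel, 2, 2))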

    A test for model specification of diffusion processes

    We propose a test for model specification of a parametric diffusion process based on a kernel estimation of the transitional density of the process. The empirical likelihood is used to formulate a statistic, for each kernel smoothing bandwidth, which is effectively a Studentized L2-distance between the kernel transitional density estimator and the parametric transitional density implied by the parametric process. To reduce the sensitivity of the test to the choice of smoothing bandwidth, the final test statistic is constructed by combining the empirical likelihood statistics over a set of smoothing bandwidths. To better capture the finite-sample distribution of the test statistic and the data dependence, the critical value of the test is obtained by a parametric bootstrap procedure. Properties of the test are evaluated asymptotically and numerically, by simulation and by a real-data example.
    Keywords: bootstrap; diffusion process; empirical likelihood; goodness-of-fit test; time series; transitional density
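
    The parametric bootstrap calibration above can be sketched generically: refit the parametric diffusion, simulate from it, and recompute the combined statistic on each simulated path. Here fit_fn and stat_fn are placeholders for the paper's estimator and combined EL statistic, and Euler-Maruyama is one standard simulation scheme, assumed for illustration.

        import numpy as np

        def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
            # Simulate one path of dX = drift(X) dt + diffusion(X) dW.
            x = np.empty(n_steps + 1)
            x[0] = x0
            for t in range(n_steps):
                dw = rng.standard_normal() * np.sqrt(dt)
                x[t + 1] = x[t] + drift(x[t]) * dt + diffusion(x[t]) * dw
            return x

        def bootstrap_critical_value(path, dt, fit_fn, stat_fn,
                                     n_boot=200, alpha=0.05, seed=0):
            # fit_fn: path -> (drift, diffusion) under the parametric model.
            # stat_fn: path -> combined test statistic over bandwidths.
            rng = np.random.default_rng(seed)
            drift, diffusion = fit_fn(path)
            boot = np.array([
                stat_fn(euler_maruyama(drift, diffusion, path[0], dt,
                                       len(path) - 1, rng))
                for _ in range(n_boot)
            ])
            # Reject the parametric specification if stat_fn(path) exceeds
            # this bootstrap quantile.
            return np.quantile(boot, 1 - alpha)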