25,711 research outputs found

    A Linear-Time Kernel Goodness-of-Fit Test

    We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein's method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model. Comment: Accepted to NIPS 201
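To make the construction concrete, here is a minimal, hypothetical sketch of a linear-time Stein-based statistic in one dimension: a Langevin Stein kernel is built from a Gaussian RBF kernel and the model's score function (so no normalising constant is needed), and averaged over disjoint sample pairs for O(n) cost. This illustrates the general idea only, not the paper's exact adaptive test with learned features; all names and parameter choices below are assumptions.

```python
import math, random

random.seed(0)

def score_std_normal(x):
    # score d/dx log p(x) for the N(0,1) model; the normalising
    # constant of p never appears
    return -x

def stein_kernel(x, y, s, sigma=1.0):
    # Langevin Stein kernel h(x, y) from a Gaussian RBF base kernel
    k = math.exp(-(x - y) ** 2 / (2 * sigma ** 2))
    dkx = -(x - y) / sigma ** 2 * k                           # dk/dx
    dky = (x - y) / sigma ** 2 * k                            # dk/dy
    dkxy = (1 / sigma ** 2 - (x - y) ** 2 / sigma ** 4) * k   # d2k/dxdy
    return s(x) * s(y) * k + s(x) * dky + s(y) * dkx + dkxy

def linear_time_stat(xs, s):
    # average h over disjoint consecutive pairs: O(n) cost overall
    pairs = [(xs[2 * i], xs[2 * i + 1]) for i in range(len(xs) // 2)]
    return sum(stein_kernel(a, b, s) for a, b in pairs) / len(pairs)

null_data = [random.gauss(0, 1) for _ in range(400)]  # model is correct
shifted = [random.gauss(3, 1) for _ in range(400)]    # mean-shift alternative
stat_null = linear_time_stat(null_data, score_std_normal)
stat_shift = linear_time_stat(shifted, score_std_normal)
```

Under the null the statistic concentrates near zero, while under the mean-shift alternative it grows, which is what the test thresholds against.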

    Topics in kernel hypothesis testing

    This thesis investigates some unaddressed problems in kernel nonparametric hypothesis testing. The contributions are grouped around three main themes.

    Wild Bootstrap for Degenerate Kernel Tests. A wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed. This bootstrap method is used to construct provably consistent tests that apply to random processes. It applies to a large group of kernel tests based on V-statistics, which are degenerate under the null hypothesis and non-degenerate elsewhere. In experiments, the wild bootstrap gives strong performance on synthetic examples, on audio data, and in performance benchmarking for the Gibbs sampler.

    A Kernel Test of Goodness of Fit. A nonparametric statistical test for goodness-of-fit is proposed: given a set of samples, the test determines how likely it is that these were generated from a target density function. The measure of goodness-of-fit is a divergence constructed via Stein's method using functions from a Reproducing Kernel Hilbert Space. Construction of the test is based on the wild bootstrap method. We apply our test to quantifying convergence of approximate Markov Chain Monte Carlo methods, statistical model criticism, and evaluating quality of fit versus model complexity in nonparametric density estimation.

    Fast Analytic-Functions-Based Two-Sample Test. A class of nonparametric two-sample tests with a cost linear in the sample size is proposed. Two tests are given, both based on an ensemble of distances between analytic functions representing each of the distributions. Experiments on artificial benchmarks and on challenging real-world testing problems demonstrate a good power/time trade-off retained even in high-dimensional problems.

    The main contributions to science are the following. We prove that kernel tests based on the wild bootstrap method tightly control the type-one error at the desired level and are consistent, i.e. the type-two error drops to zero with an increasing number of samples. We construct a kernel goodness-of-fit test that requires knowledge of the density only up to a normalizing constant. We use this test to construct the first consistent test for convergence of Markov chains, and use it to quantify properties of approximate MCMC algorithms. Finally, we construct a linear-time two-sample test that uses a new, finite-dimensional feature representation of probability measures
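As a toy illustration of the first theme, the following hypothetical sketch runs a wild bootstrap for a degenerate V-statistic: the Gaussian kernel is centred analytically under a N(0,1) null (so the statistic is degenerate when the null holds), and each bootstrap replicate rescales the kernel entries by an autocorrelated noise process to mimic dependence in the data. This is a simplification for illustration, not the thesis's exact procedure; sample size, bandwidth, and the process length scale `ell` are arbitrary choices.

```python
import math, random

random.seed(0)

def h_centred(x, y):
    # Gaussian kernel centred analytically under the N(0,1) null,
    # so the V-statistic below is degenerate when the null holds
    k = math.exp(-(x - y) ** 2 / 2)
    mx = math.exp(-x * x / 4) / math.sqrt(2)  # E_z k(x, z), z ~ N(0,1)
    my = math.exp(-y * y / 4) / math.sqrt(2)  # E_z k(y, z)
    return k - mx - my + 1 / math.sqrt(3)     # + E k(z, z')

def wild_bootstrap_threshold(h, n_boot=200, ell=10.0, alpha=0.05):
    # each replicate rescales kernel entries by an autocorrelated
    # process w_t, preserving temporal dependence under resampling
    n = len(h)
    a = math.exp(-1.0 / ell)
    b = math.sqrt(1.0 - a * a)
    reps = []
    for _ in range(n_boot):
        w = [random.gauss(0, 1)]
        for _ in range(n - 1):
            w.append(a * w[-1] + b * random.gauss(0, 1))
        reps.append(sum(w[i] * w[j] * h[i][j]
                        for i in range(n) for j in range(n)) / n)
    reps.sort()
    return reps[int((1 - alpha) * n_boot) - 1]  # empirical 95% quantile

xs = [random.gauss(0, 1) for _ in range(40)]        # data from the null
h = [[h_centred(xi, xj) for xj in xs] for xi in xs]
stat = sum(h[i][j] for i in range(40) for j in range(40)) / 40  # n * V_n
thresh = wild_bootstrap_threshold(h)
```

The test would reject when `stat` exceeds `thresh`; consistency requires the bootstrap quantile to track the null distribution of the degenerate statistic.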

    Efficient Aggregated Kernel Tests using Incomplete U-statistics

    We propose a series of computationally efficient nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete U-statistics, with a computational cost that interpolates between linear time in the number of samples, and quadratic time, as associated with classical U-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete U-statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: this result is novel for tests based on incomplete U-statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, the linear-time versions of our proposed tests perform at least as well as the current linear-time state-of-the-art tests. Comment: 33 pages, 4 figure
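A minimal sketch of the incomplete-U-statistic idea for the two-sample (MMD) case: instead of averaging the MMD core term over all O(n^2) index pairs, average it over a fixed, linear-size design of pairs. The kernel, bandwidth, and consecutive-pair design below are illustrative assumptions, not the paper's aggregated tests.

```python
import math, random

random.seed(1)

def rbf(a, b, sigma=1.0):
    return math.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def h_pair(xi, yi, xj, yj):
    # second-order MMD U-statistic core for the pair ((xi, yi), (xj, yj))
    return rbf(xi, xj) + rbf(yi, yj) - rbf(xi, yj) - rbf(xj, yi)

def incomplete_mmd2(x, y, design):
    # average h over a fixed subset ("design") of index pairs only:
    # |design| terms instead of all O(n^2) pairs
    return sum(h_pair(x[i], y[i], x[j], y[j]) for i, j in design) / len(design)

n = 300
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(2, 1) for _ in range(n)]        # shifted: distributions differ
y_null = [random.gauss(0, 1) for _ in range(n)]   # same distribution as x

# linear-time design: consecutive index pairs (i, i+1)
design = [(i, i + 1) for i in range(n - 1)]
mmd2_alt = incomplete_mmd2(x, y, design)
mmd2_null = incomplete_mmd2(x, y_null, design)
```

Enlarging the design toward all pairs recovers the complete quadratic-time U-statistic, which is the computation/rate trade-off the paper quantifies.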

    A goodness-of-fit test for parametric and semi-parametric models in multiresponse regression

    We propose an empirical likelihood test that is able to test the goodness of fit of a class of parametric and semi-parametric multiresponse regression models. The class includes as special cases fully parametric models; semi-parametric models, like the multi-index and the partially linear models; and models with shape constraints. Another feature of the test is that it allows both the response variable and the covariate to be multivariate, which means that multiple regression curves can be tested simultaneously. The test also allows the presence of infinite-dimensional nuisance functions in the model to be tested. It is shown that the empirical likelihood test statistic is asymptotically normally distributed under certain mild conditions and permits a wild bootstrap calibration. Despite the large size of the class of models considered, the empirical likelihood test enjoys good power properties against departures from a hypothesized model within the class. Comment: Published in the Bernoulli (http://isi.cbs.nl/bernoulli/) at http://dx.doi.org/10.3150/09-BEJ208 by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)

    Informative Features for Model Comparison

    Given two candidate models, and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models. We propose two new statistical tests which are nonparametric, computationally efficient (runtime complexity is linear in the sample size), and interpretable. As a unique advantage, our tests can produce a set of examples (informative features) indicating the regions in the data domain where one model fits significantly better than the other. In a real-world problem of comparing GAN models, the test power of our new test matches that of the state-of-the-art test of relative goodness of fit, while being one order of magnitude faster. Comment: Accepted to NIPS 201

    Application of Regression Models for Area, Production and Productivity Trends of Maize (Zea mays) Crop for Panchmahal Region of Gujarat State, India

    The present investigation was carried out to study area, production and productivity trends and growth rates of the maize (Zea mays) crop grown in the Panchmahal region of Gujarat state, India, for the period 1949-50 to 2007-08, based on parametric and nonparametric regression models. Among parametric models, different linear, non-linear and time-series models were employed. The statistically best-suited parametric models were selected on the basis of adjusted R², significant regression coefficients and the coefficient of determination (R²). Appropriate time-series models were fitted after testing the data for stationarity. The statistically appropriate model was selected on the basis of various goodness-of-fit criteria, viz. Akaike's Information Criterion, Bayesian Information Criterion, RMSE, MAE, and the assumptions of normality and independence of residuals. In nonparametric regression, the optimum bandwidth was computed by the cross-validation method, with the Epanechnikov kernel used as the weight function. Nonparametric estimates of the underlying growth function were computed at each time point, and residual analysis was carried out to test for randomness. Relative growth rates of area, production and productivity were estimated from the best-fitted trend function. The linear model was found suitable to fit the trends in area and production of the maize crop, whereas for productivity, nonparametric regression without a jump-point emerged as the best-fitted trend function. The compound growth rates obtained for the years 1949-50 to 2007-08 showed that production had increased at a rate of 0.49 per cent per annum due to the combined effect of increases in area and productivity at rates of 0.30 and 0.21 per cent per annum respectively
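The compound growth rate reported above is conventionally obtained from a log-linear trend fit, ln(y) = a + b*t, with the growth rate given by (e^b - 1) * 100. A minimal sketch follows, using an exact synthetic series growing 0.5 per cent per year rather than the study's actual data:

```python
import math

# hypothetical series: exact 0.5% compound growth per year,
# one value per year for 1949-50 .. 2007-08 (59 observations)
years = list(range(59))
prod = [100.0 * 1.005 ** t for t in years]

# OLS fit of ln(y) = a + b * t; compound growth rate = (e^b - 1) * 100
log_y = [math.log(v) for v in prod]
n = len(years)
t_bar = sum(years) / n
y_bar = sum(log_y) / n
b = sum((t - t_bar) * (ly - y_bar) for t, ly in zip(years, log_y)) / \
    sum((t - t_bar) ** 2 for t in years)
growth_pct = (math.exp(b) - 1) * 100
```

On this noise-free series the fitted slope is exactly ln(1.005), so the recovered growth rate is 0.5 per cent per annum; with real data the same formula is applied to the fitted trend.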