
    FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test

    The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its applicability to large-scale problems. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components, based on Bochner's theorem and the Fourier transform (Rahimi & Recht, 2007). By sampling in the Fourier domain, FastMMD decreases the time complexity of the MMD calculation from O(N^2 d) to O(L N d), where N and d are the size and dimension of the sample set, respectively, and L is the number of basis functions used to approximate the kernel, which determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to O(L N \log d) using the Fastfood technique (Le et al., 2013). The uniform convergence of our method is also proved theoretically for both the unbiased and biased estimates. We further provide a geometric interpretation of our method, namely an ensemble of circular discrepancies, which offers insight into the MMD and may inspire further metrics for two-sample testing. Experimental results substantiate that FastMMD attains accuracy similar to that of the exact MMD, with faster computation and lower variance than existing MMD approximation methods.
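The O(L N d) idea behind this abstract can be illustrated with a minimal random-Fourier-feature sketch for a Gaussian kernel; this is not the authors' FastMMD code, and the function name `rff_mmd2`, the bandwidth `sigma`, and the sample sizes are our own illustrative choices:

```python
import numpy as np

def rff_mmd2(X, Y, sigma=1.0, L=128, seed=0):
    """Approximate squared MMD for a Gaussian kernel via random Fourier
    features (Rahimi & Recht, 2007), in O(L N d) time instead of O(N^2 d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Bochner's theorem: sample frequencies from the kernel's spectral density
    W = rng.normal(scale=1.0 / sigma, size=(d, L))
    b = rng.uniform(0, 2 * np.pi, size=L)
    z = lambda A: np.sqrt(2.0 / L) * np.cos(A @ W + b)  # feature map
    mu_x, mu_y = z(X).mean(axis=0), z(Y).mean(axis=0)   # mean embeddings
    return float(np.sum((mu_x - mu_y) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y_same = rng.normal(size=(500, 3))            # same distribution as X
Y_shift = rng.normal(loc=1.0, size=(500, 3))  # mean-shifted distribution
same = rff_mmd2(X, Y_same)
diff = rff_mmd2(X, Y_shift)
```

The statistic is near zero when the samples share a distribution and grows under a mean shift; increasing `L` tightens the kernel approximation at proportional cost.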

    B-tests: Low Variance Kernel Two-Sample Tests

    A family of maximum mean discrepancy (MMD) kernel two-sample tests is introduced. Members of the test family are called Block-tests or B-tests, since the test statistic is an average over MMDs computed on subsets of the samples. The choice of block size allows control over the tradeoff between test power and computation time. In this respect, the B-test family combines favorable properties of previously proposed MMD two-sample tests: B-tests are more powerful than a linear-time test where blocks are just pairs of samples, yet more computationally efficient than a quadratic-time test where a single large block incorporating all the samples is used to compute a U-statistic. A further important advantage of the B-tests is their asymptotically Normal null distribution: this is in contrast with the U-statistic, which is degenerate under the null hypothesis, and for which estimates of the null distribution are computationally demanding. Recent results on kernel selection for hypothesis testing transfer seamlessly to the B-tests, yielding a means to optimize test power via kernel choice. Comment: Neural Information Processing Systems (2013).
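A minimal sketch of the block idea, assuming a Gaussian kernel and equal sample sizes (our own simplified implementation, not the authors'): compute the unbiased quadratic-time MMD^2 on each block and average, so cost is O(n B) for block size B rather than O(n^2).

```python
import numpy as np

def gauss_k(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased quadratic-time MMD^2 estimate on one block."""
    Kxx, Kyy, Kxy = gauss_k(X, X, sigma), gauss_k(Y, Y, sigma), gauss_k(X, Y, sigma)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

def b_test_stat(X, Y, block=50, sigma=1.0):
    """B-test statistic: average of per-block unbiased MMD^2 estimates."""
    stats = [mmd2_unbiased(X[i:i + block], Y[i:i + block], sigma)
             for i in range(0, len(X) - block + 1, block)]
    return float(np.mean(stats))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
Y_same = rng.normal(size=(200, 2))
Y_shift = rng.normal(loc=1.0, size=(200, 2))
stat_null = b_test_stat(X, Y_same)
stat_alt = b_test_stat(X, Y_shift)
```

Block size interpolates between the linear-time pair statistic (`block=2`) and the full U-statistic (`block=len(X)`), trading power against computation as the abstract describes.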

    Expectations Hypotheses Tests

    We investigate the Expectations Hypotheses of the term structure of interest rates and of the foreign exchange market using vector autoregressive methods for the U.S. dollar, Deutsche mark, and British pound interest rates and exchange rates. In addition to standard Wald tests, we formulate Lagrange Multiplier and Distance Metric tests which require estimation under the non-linear constraints of the null hypotheses. Estimation under the null is achieved by iterating on approximate solutions that require only matrix inversions. We use a bias-corrected, constrained vector autoregression as a data generating process and construct extensive Monte Carlo simulations of the various test statistics under the null hypotheses. Wald tests suffer from severe size distortions and use of the asymptotic critical values results in gross over-rejection of the null. The Lagrange Multiplier tests slightly under-reject the null, and the Distance Metric tests over-reject. Use of the small sample distributions of the different tests leads to a common interpretation of the validity of the Expectations Hypotheses. The evidence against the Expectations Hypotheses for these interest rates and exchange rates is much less strong than under asymptotic inference.
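The size-distortion point can be illustrated with a minimal Monte Carlo sketch (ours, not the paper's VAR setting): a Wald test of a single linear restriction in a small-sample OLS regression, judged against the asymptotic chi-square critical value, tends to reject a true null more often than the nominal 5%.

```python
import numpy as np

def wald_stat(y, X, R, r):
    """Wald statistic for H0: R beta = r in the OLS model y = X beta + e."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    V = sigma2 * np.linalg.inv(X.T @ X)      # estimated Var(beta_hat)
    diff = R @ beta - r
    return float(diff @ np.linalg.solve(R @ V @ R.T, diff))

# Monte Carlo size under a true null, using the asymptotic chi2(1) critical value
rng = np.random.default_rng(0)
n, reps, crit = 25, 2000, 3.8415             # 3.8415 = chi2(1) 5% critical value
rejections = 0
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = rng.normal(size=n)                   # slope is truly zero, so H0 holds
    W = wald_stat(y, X, np.array([[0.0, 1.0]]), np.array([0.0]))
    rejections += W > crit
size = rejections / reps                     # empirical rejection rate
```

With only 25 observations the empirical size exceeds the nominal 5%, a mild analogue of the over-rejection the paper documents for its nonlinear restrictions.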

    Bump hunting with non-Gaussian kernels

    It is well known that the number of modes of a kernel density estimator is monotone nonincreasing in the bandwidth if the kernel is a Gaussian density. There is numerical evidence of nonmonotonicity in the case of some non-Gaussian kernels, but little additional information is available. The present paper provides theoretical and numerical descriptions of the extent to which the number of modes is a nonmonotone function of bandwidth in the case of general compactly supported densities. Our results address popular kernels used in practice, for example, the Epanechnikov, biweight and triweight kernels, and show that in such cases nonmonotonicity is present with strictly positive probability for all sample sizes n \geq 3. In the Epanechnikov and biweight cases the probability of nonmonotonicity equals 1 for all n \geq 2. Nevertheless, in spite of the prevalence of lack of monotonicity revealed by these results, it is shown that the notion of a critical bandwidth (the smallest bandwidth above which the number of modes is guaranteed to be monotone) is still well defined. Moreover, just as in the Gaussian case, the critical bandwidth is of the same size as the bandwidth that minimises mean squared error of the density estimator. These theoretical results, and new numerical evidence, show that the main effects of nonmonotonicity occur for relatively small bandwidths, and have negligible impact on many aspects of bump hunting. Comment: Published at http://dx.doi.org/10.1214/009053604000000715 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
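The bandwidth/mode-count relationship is easy to probe numerically. A minimal sketch (our own illustration, not the paper's analysis) builds an Epanechnikov kernel density estimate on a grid and counts its interior local maxima:

```python
import numpy as np

def epanechnikov_kde(data, grid, h):
    """Kernel density estimate with the Epanechnikov kernel 0.75*(1-u^2) on |u|<=1."""
    u = (grid[:, None] - data[None, :]) / h
    k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return k.sum(axis=1) / (len(data) * h)

def count_modes(f):
    """Count strict interior local maxima of a density evaluated on a grid."""
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

rng = np.random.default_rng(3)
# Two well-separated bumps: small bandwidths should reveal both modes
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
grid = np.linspace(-4, 4, 400)
modes_small = count_modes(epanechnikov_kde(data, grid, h=0.3))
modes_large = count_modes(epanechnikov_kde(data, grid, h=3.0))
```

Sweeping `h` over a fine range and recording `count_modes` is one way to observe the occasional nonmonotone behavior the paper characterizes for compactly supported kernels.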

    Fast Two-Sample Testing with Analytic Representations of Probability Measures

    We propose a class of nonparametric two-sample tests with a cost linear in the sample size. Two tests are given, both based on an ensemble of distances between analytic functions representing each of the distributions. The first test uses smoothed empirical characteristic functions to represent the distributions, the second uses distribution embeddings in a reproducing kernel Hilbert space. Analyticity implies that differences in the distributions may be detected almost surely at a finite number of randomly chosen locations/frequencies. The new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the (non-smoothed) empirical characteristic functions, while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests. Experiments on artificial benchmarks and on challenging real-world testing problems demonstrate that our tests give a better power/time tradeoff than competing approaches, and in some cases, better outright power than even the most expensive quadratic-time tests. This performance advantage is retained even in high dimensions, and in cases where the difference in distributions is not observable with low order statistics.
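A stripped-down sketch of the characteristic-function idea (ours; it omits the smoothing and covariance normalization the paper uses): compare the empirical characteristic functions of the two samples at a few random frequencies, in time linear in the sample size.

```python
import numpy as np

def cf_diff_stat(X, Y, J=5, seed=0):
    """Sum of squared differences between empirical characteristic functions
    of X and Y, evaluated at J random frequencies. Linear-time in n."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    T = rng.normal(size=(J, d))          # random test frequencies

    def ecf(A):
        proj = A @ T.T                   # (n, J) projections onto frequencies
        return np.cos(proj).mean(0), np.sin(proj).mean(0)  # real/imag parts

    cx, sx = ecf(X)
    cy, sy = ecf(Y)
    return float(np.sum((cx - cy) ** 2 + (sx - sy) ** 2))

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 2))
Y_same = rng.normal(size=(500, 2))
Y_shift = rng.normal(loc=1.0, size=(500, 2))
s_null = cf_diff_stat(X, Y_same)
s_alt = cf_diff_stat(X, Y_shift)
```

Because each frequency costs O(N d), evaluating J of them is O(J N d); the paper's analyticity argument is what justifies using only finitely many random frequencies.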

    Inference on Income Inequality and Tax Progressivity Indices: U-Statistics and Bootstrap Methods

    This paper discusses asymptotic and bootstrap inference methods for a set of inequality and progressivity indices. The application of non-degenerate U-statistics theory is described, particularly through the derivation of the Suits progressivity index distribution. We also provide formulae for the “plug-in” estimator of the index variances, which are less onerous than the U-statistic version (this is especially relevant for those indices whose asymptotic variances contain kernels of degree 3). As far as inference issues are concerned, there are arguments in favour of applying bootstrap methods. Using an accurate database on income and taxes of Spanish households (statistical matching EPF90-IRPF90), our results show that bootstrap methods perform better in terms of sample precision, particularly those yielding asymmetric confidence intervals. We also show that the bootstrap method is a useful technique for Lorenz dominance analysis. An illustration of such an application is given for the Spanish tax and welfare system. We find clear dominance of cash benefits in income redistribution. Public health and state school education also have significant redistributive effects.
    Keywords: Income Inequality; Tax Progressivity; Statistical Inference; U-statistics; Bootstrap method.
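As a hedged sketch of the bootstrap approach (ours, using the Gini coefficient rather than the paper's Suits index, and a simple percentile interval rather than its asymmetric variants):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1) @ x / (n * x.sum()))

def bootstrap_ci(x, stat=gini, B=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of x."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(B)]
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

rng = np.random.default_rng(4)
incomes = rng.lognormal(mean=10, sigma=0.6, size=400)  # synthetic income data
g = gini(incomes)
lo, hi = bootstrap_ci(incomes)
```

Resampling the statistic directly, as here, sidesteps the variance formulae entirely, which is the practical appeal of the bootstrap route the paper argues for.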