
    Nonparametric multivariate rank tests and their unbiasedness

    Although unbiasedness is a basic property of a good test, many tests on vector parameters, or on scalar parameters against two-sided alternatives, are not finite-sample unbiased. This was already noticed by Sugiura [Ann. Inst. Statist. Math. 17 (1965) 261--263], who found an alternative against which the Wilcoxon test is not unbiased. The problem is even more serious in multivariate models. When testing the hypothesis against an alternative that fits the experiment well, one should verify that the power of the test under this alternative cannot fall below the significance level. Surprisingly, this serious problem is rarely considered in the literature. The present paper considers the two-sample multivariate testing problem. We construct several rank tests which are finite-sample unbiased against a broad class of location/scale alternatives and are finite-sample distribution-free under the hypothesis and the alternatives. Each of them is locally most powerful against a specific alternative of Lehmann type. Their powers against some alternatives are numerically compared with each other and with other rank and classical tests. The question of affine invariance of two-sample multivariate tests is also discussed.
    Comment: Published at http://dx.doi.org/10.3150/10-BEJ326 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
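    The unbiasedness question above is easy to probe numerically. The following is a minimal Monte Carlo sketch, assuming a normal location-shift alternative chosen purely for illustration (it is unrelated to Sugiura's counterexample): it estimates the rejection rate of the two-sided Wilcoxon rank-sum test under the null and under the alternative, and checks that the power does not fall below the nominal level.

```python
# Monte Carlo check of the power-vs-level property: estimate the
# rejection rate of the two-sided Wilcoxon rank-sum test under the
# null (shift = 0) and under a location-shift alternative, and
# compare both with the nominal level alpha.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
alpha, n_rep, m, n = 0.05, 2000, 15, 15

def rejection_rate(shift):
    hits = 0
    for _ in range(n_rep):
        x = rng.normal(0.0, 1.0, m)
        y = rng.normal(shift, 1.0, n)
        if ranksums(x, y).pvalue < alpha:
            hits += 1
    return hits / n_rep

size = rejection_rate(0.0)    # should be close to alpha
power = rejection_rate(0.8)   # should exceed alpha for an unbiased test
print(f"empirical size: {size:.3f}, power at shift 0.8: {power:.3f}")
```

    Against a Sugiura-type alternative the same estimate can dip below alpha, which is precisely the failure of finite-sample unbiasedness that the paper addresses.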

    Symmetries in algebraic Property Testing

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 94-100).
    Modern computational tasks often involve large amounts of data, and efficiency is a very desirable feature of the corresponding algorithms. Local algorithms are especially attractive, since they can certify global properties by inspecting only a small window into the data. In Property Testing, a local algorithm should perform the task of distinguishing objects satisfying a given property from objects that require many modifications in order to satisfy the property. A special place in Property Testing is held by algebraic properties: they were among the first properties to be tested, and they have been used heavily in the PCP and LTC literature. We focus on conditions under which algebraic properties are testable, with the general goal of providing a more unified treatment of these properties. In particular, we explore the notion of symmetry in relation to testing, a direction initiated by Kaufman and Sudan. We investigate the interplay between local testing, symmetry and dual structure in linear codes, showing both positive and negative results. On the negative side, we exhibit a counterexample to a conjecture proposed by Alon, Kaufman, Krivelevich, Litsyn, and Ron aimed at providing general sufficient conditions for testing: a single codeword of small weight in the dual family, together with invariance under a 2-transitive group of permutations, does not necessarily imply testability. On the positive side, we exhibit a large class of codes whose duals possess a strong structural property ('the single orbit property'): they can be specified by a single codeword of small weight and the group of invariances of the code. Hence sparsity and invariance under the affine group of permutations are sufficient conditions for a notion of very structured testing. These findings also reveal a new characterization of the extensively studied BCH codes. As a by-product, we obtain a more explicit description of structured tests for the special family of BCH codes of design distance 5.
    by Elena Grigorescu, Ph.D.

    Tests for Multivariate Analysis of Variance in High Dimension Under Non-Normality

    In this article, we consider the problem of testing the equality of the mean vectors, of dimension p, of several groups that share a common unknown non-singular covariance matrix Σ, based on N independent observation vectors, where N may be less than the dimension p. This problem, known in the literature as Multivariate Analysis of Variance (MANOVA) in high dimension, has recently been considered by Srivastava and Fujikoshi [7], Srivastava [5] and Schott [3]. None of these tests is invariant under a change of the units of measurement. Along the lines of Srivastava and Du [8] and Srivastava [6], we propose a test that has this invariance property. The null and non-null distributions are derived under the assumption that (N, p) → ∞, where N may be less than p, and the observation vectors follow a general non-normal model.
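    The invariance property in question can be illustrated with a toy computation. This is only a schematic two-group sketch, not the test proposed in the article: a statistic standardized by the diagonal of the sample covariance is unchanged when one coordinate is re-expressed in different units, while an unstandardized one is not.

```python
# Illustration of unit-of-measurement invariance (schematic, not the
# article's statistic): rescaling one coordinate (e.g. metres ->
# centimetres) changes a raw squared-distance statistic but leaves a
# diagonally-standardized one unchanged.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5)) + 0.5   # group 1, n x p, shifted mean
Y = rng.normal(size=(20, 5))         # group 2

def stats(x, y):
    d = x.mean(0) - y.mean(0)
    s = np.diag(np.cov(np.vstack([x, y]).T))  # coordinate-wise variances
    raw_stat = d @ d                           # not unit invariant
    z = d / np.sqrt(s)
    std_stat = z @ z                           # unit invariant
    return raw_stat, std_stat

C = np.diag([100.0, 1, 1, 1, 1])  # change the units of coordinate 1
t0, u0 = stats(X, Y)
t1, u1 = stats(X @ C, Y @ C)
print(np.isclose(u0, u1), np.isclose(t0, t1))
```

    The standardized statistic agrees before and after the unit change, while the raw one does not; statistics of the trace or quadratic-form type without such standardization fail this check.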

    The Size and Power of Bootstrap Tests for Linear Restrictions in Misspecified Cointegrating Relationships

    This paper considers computer-intensive methods for inference on cointegrating vectors in maximum likelihood analysis. It investigates the robustness of the LR and Wald tests and of an F-type test for linear restrictions on the cointegrating space to misspecification of the number of cointegrating relations. In addition, since all the distributional results within the maximum likelihood cointegration model rely on asymptotic considerations, it is important to consider the sensitivity of inference procedures to the sample size. In this paper we use bootstrap hypothesis testing as a way to improve inference for linear restrictions on the cointegrating space. We find that the resampling procedure is a very useful device for tests that lack the invariance property, such as the Wald test, where the size distortion of the bootstrap test converges to zero even for a sample size T=50. Moreover, it turns out that when the number of cointegrating vectors is correctly specified, the bootstrap succeeds where the asymptotic approximation is not satisfactory, that is, for a sample size T
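    The bootstrap testing idea can be sketched in miniature. The following is a schematic null-resampling example for a single regression slope, not the paper's procedure (which applies the same principle to LR and Wald statistics within a maximum likelihood cointegration model): the observed statistic is compared against its distribution under resampling that imposes the null hypothesis.

```python
# Schematic bootstrap hypothesis test: a Wald-type statistic for
# H0: beta = 0 in y = beta*x + e is compared with its distribution
# under resampling that imposes the null (under H0, y itself plays
# the role of the error series).
import numpy as np

rng = np.random.default_rng(2)
n, B = 100, 499
x = rng.normal(size=n)
y = 0.15 * x + rng.normal(size=n)   # true beta = 0.15; H0: beta = 0

def wald(x, y):
    b = (x @ y) / (x @ x)           # OLS slope (no intercept)
    resid = y - b * x
    se2 = (resid @ resid) / (len(y) - 1) / (x @ x)
    return b * b / se2

t_obs = wald(x, y)
t_boot = np.array([wald(x, rng.choice(y, size=n, replace=True))
                   for _ in range(B)])
p_boot = (1 + (t_boot >= t_obs).sum()) / (B + 1)
print(f"bootstrap p-value: {p_boot:.3f}")
```

    Replacing the asymptotic chi-squared critical value with the bootstrap quantile is what drives the size improvement reported above for non-invariant tests such as the Wald test.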

    The Search for Invariance: Repeated Positive Testing Serves the Goals of Causal Learning

    Positive testing is characteristic of exploratory behavior, yet it seems to be at odds with the aim of information seeking: after all, repeated demonstrations of one's current hypothesis often produce the same evidence and fail to distinguish it from potential alternatives. Research on the development of scientific reasoning and on adult rule learning has documented and attempted to explain this behavior. The current chapter reviews this prior work and introduces a novel theoretical account, the Search for Invariance (SI) hypothesis, which suggests that producing multiple positive examples serves the goals of causal learning. This hypothesis draws on the interventionist framework of causal reasoning, which holds that causal learners are concerned with the invariance of candidate hypotheses. In a probabilistic and interdependent causal world, our primary goal is to determine whether, and in what contexts, our causal hypotheses provide accurate foundations for inference and intervention, not to disconfirm their alternatives. By recognizing the central role of invariance in causal learning, the phenomenon of positive testing may be reinterpreted as a rational information-seeking strategy.

    View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations such as depth rotations. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations. While simulations of these models recapitulate the ventral stream's progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to faces at intermediate levels of the architecture.

    Category learning induces position invariance of pattern recognition across the visual field

    Human object recognition is considered to be largely invariant to translation across the visual field. However, the origin of this invariance to positional changes has remained elusive, since numerous studies have found that the ability to discriminate between visual patterns develops in a largely location-specific manner, with only limited transfer to novel visual field positions. In order to reconcile these contradictory observations, we traced the acquisition of categories of unfamiliar grey-level patterns within an interleaved learning and testing paradigm that involved either the same or different retinal locations. Our results show that position invariance is an emergent property of category learning. Pattern categories acquired over several hours at a fixed location in either the peripheral or central visual field gradually become accessible at new locations without any position-specific feedback. Furthermore, categories of novel patterns presented in the left hemifield are learnt distinctly faster and generalize better to other locations than those learnt in the right hemifield. Our results suggest that, during learning, initially position-specific representations of categories based on spatial pattern structure become encoded in a relational, position-invariant format. Such representational shifts may provide a generic mechanism for achieving perceptual invariance in object recognition.

    On the Maximal Invariant Statistic for Adaptive Radar Detection in Partially-Homogeneous Disturbance with Persymmetric Covariance

    This letter deals with the problem of adaptive signal detection in partially-homogeneous and persymmetric Gaussian disturbance within the framework of invariance theory. First, a suitable group of transformations leaving the problem invariant is introduced and the Maximal Invariant Statistic (MIS) is derived. Then, it is shown that the (two-step) Generalized Likelihood Ratio, Rao and Wald tests can all be expressed in terms of the MIS, thus proving that they all ensure a Constant False-Alarm Rate (CFAR).
    Comment: submitted for journal publication.