    Lateralization of short- and long-term visual memories in an insect

    The formation of memories within the vertebrate brain is lateralized between hemispheres across multiple modalities. In invertebrates, however, evidence for lateralization is restricted to olfactory memories, primarily from social bees. Here, we use a classical conditioning paradigm with a visual conditioned stimulus to show that visual memories are lateralized in the wood ant, Formica rufa. We show that a brief contact between a sugar reward and either the right or left antenna (reinforcement) is sufficient to produce a lateralized memory, even though the visual cue is visible to both eyes throughout training and testing. Reinforcement given to the right antenna induced short-term memories, whereas reinforcement given to the left antenna induced long-term memories. Thus, short- and long-term visual memories are lateralized in wood ants. This extends the modalities across which memories are lateralized in insects and suggests that such memory lateralization may have evolved multiple times, possibly linked to the evolution of eusociality in the Hymenoptera.

    Encoding databases satisfying a given set of dependencies

    Consider a relation schema with a set of dependency constraints. A fundamental question is the minimum space in which the possible instances of the schema can be "stored". We study the following model. Encode the instances by giving a function which maps the set of possible instances into the set of words of a given length over the binary alphabet in a decodable way. The problem is to find the minimum length needed. This minimum is called the information content of the database. We investigate several cases where the set of dependency constraints consists of relatively simple sets of functional or multivalued dependencies. We also consider the following natural extension: is it possible to encode the instances in such a way that small changes in the instance cause only a small change in the code? © 2012 Springer-Verlag
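    To make the notion of information content concrete, the toy sketch below (our own illustration, not taken from the paper) enumerates all instances of a hypothetical two-attribute schema R(A, B) that satisfy the functional dependency A -> B, and reports the minimum number of bits any fixed-length decodable encoding needs, i.e. the ceiling of the base-2 logarithm of the number of valid instances.

```python
# Toy sketch: information content of a schema under a functional dependency.
# The schema R(A, B), its domains, and the FD A -> B are illustrative assumptions.
from itertools import combinations, product
from math import ceil, log2

DOM_A = ["a1", "a2"]              # hypothetical domain of attribute A
DOM_B = ["b1", "b2", "b3"]        # hypothetical domain of attribute B

def satisfies_fd(instance):
    """A -> B holds if no two tuples share an A-value but differ on B."""
    seen = {}
    return all(seen.setdefault(a, b) == b for a, b in instance)

all_tuples = list(product(DOM_A, DOM_B))
valid = [inst
         for r in range(len(all_tuples) + 1)
         for inst in combinations(all_tuples, r)
         if satisfies_fd(inst)]

# Each A-value is either absent or mapped to one of |DOM_B| values,
# so there are (|DOM_B| + 1) ** |DOM_A| valid instances.
assert len(valid) == (len(DOM_B) + 1) ** len(DOM_A)
print("valid instances:", len(valid))                          # 16
print("information content (bits):", ceil(log2(len(valid))))   # 4
```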

    Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected with measurement errors on discretized grids. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on a Bayesian hierarchical model with a Gaussian-Wishart process prior and basis-function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability when analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean and covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces results similar to those of standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids where standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes.
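    The sketch below (our own simplified illustration, not the authors' code) shows only the basis-function reduction step that makes such inference scale: noisy curves observed on random, curve-specific grids are projected onto a small Legendre polynomial basis by least squares, and the empirical mean and covariance of the resulting coefficients stand in for the quantities that, in the paper, would receive a Gaussian-Wishart prior and be sampled by MCMC. The grid sizes, basis choice, and simulated curves are assumptions for illustration.

```python
# Simplified sketch: reduce noisy functional data to basis-function coefficients.
# (Not the authors' method; the full model adds a Gaussian-Wishart process prior and MCMC.)
import numpy as np

rng = np.random.default_rng(0)
n_curves, n_grid, degree = 30, 200, 8                    # illustrative sizes

def basis(x, deg=degree):
    """Legendre polynomial basis evaluated at points x (one simple basis choice)."""
    return np.polynomial.legendre.legvander(x, deg)

# Each curve is observed on its own random grid in [-1, 1] with measurement noise.
grid = np.sort(rng.uniform(-1, 1, size=(n_curves, n_grid)), axis=1)
scores = rng.normal(size=n_curves)
Y = np.sin(np.pi * grid) + grid**2 * scores[:, None] + 0.1 * rng.normal(size=grid.shape)

# Least-squares projection of each curve onto the basis -> low-dimensional coefficients.
coefs = np.stack([np.linalg.lstsq(basis(grid[i]), Y[i], rcond=None)[0]
                  for i in range(n_curves)])

# Empirical mean and covariance of the coefficients (the objects a fully Bayesian
# treatment would model with a Gaussian-Wishart prior).
mu_hat = coefs.mean(axis=0)
Sigma_hat = np.cov(coefs, rowvar=False)

# Reconstruct the estimated mean function on a common fine grid.
fine = np.linspace(-1, 1, 500)
mean_curve = basis(fine) @ mu_hat
print(mu_hat.shape, Sigma_hat.shape, mean_curve.shape)   # (9,) (9, 9) (500,)
```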

    Implementing Monte Carlo tests with P-value buckets

    Software packages usually report the results of statistical tests using p-values. Users often interpret these by comparing them to standard thresholds, e.g. 0.1%, 1% and 5%, which is sometimes reinforced by a star rating (***, **, *). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, e.g. by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals which cover [0,1] and which can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
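    The sketch below is a simplified illustration of the idea, not the paper's algorithm: it draws Monte Carlo resamples sequentially and stops once a Clopper-Pearson confidence interval for the unknown p lies inside one of a set of (here arbitrarily chosen, overlapping) buckets. The paper's methods instead use sequential stopping rules that control the resampling risk over the whole sampling process; the naive per-step interval below ignores repeated looks and is for intuition only.

```python
# Simplified sketch: report a p-value bucket instead of a point estimate.
# Buckets, batch size, and stopping rule are illustrative assumptions.
import numpy as np
from scipy.stats import beta

# Overlapping buckets: standard thresholds plus "bridge" intervals spanning them.
BUCKETS = [(0.0, 0.001), (0.0005, 0.01), (0.005, 0.05), (0.03, 0.12), (0.1, 1.0)]

def mc_pvalue_bucket(exceeds, alpha=1e-3, max_draws=10**6, batch=100):
    """exceeds() simulates one resample and returns True if its statistic is at
    least as extreme as the observed one, so p = P(exceeds() is True)."""
    hits, n = 0, 0
    while n < max_draws:
        hits += sum(exceeds() for _ in range(batch))
        n += batch
        # Clopper-Pearson interval for p at level 1 - alpha (ignoring repeated looks).
        lo = beta.ppf(alpha / 2, hits, n - hits + 1) if hits > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, hits + 1, n - hits) if hits < n else 1.0
        for low, high in BUCKETS:
            if low <= lo and hi <= high:
                return (low, high), (hits + 1) / (n + 1), n
    return None, (hits + 1) / (n + 1), n

# Example: a resampling test whose true p-value is 0.02.
rng = np.random.default_rng(1)
bucket, p_hat, draws = mc_pvalue_bucket(lambda: rng.random() < 0.02)
print(bucket, round(p_hat, 4), draws)   # typically returns the (0.005, 0.05) bucket
```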

    IPAD: Stable Interpretable Forecasting with Knockoffs Inference

    Interpretability and stability are two important features that are desired in many contemporary big data applications arising in economics and finance. While the former is enjoyed to some extent by many existing forecasting approaches, the latter, in the sense of controlling the fraction of wrongly discovered features (which can greatly enhance interpretability), is still largely underdeveloped in econometric settings. To this end, in this paper we exploit the general framework of model-X knockoffs introduced recently in Candès, Fan, Janson and Lv (2018), which is unconventional for reproducible large-scale inference in that the framework is completely free of the use of p-values for significance testing, and suggest a new method of intertwined probabilistic factors decoupling (IPAD) for stable interpretable forecasting with knockoffs inference in high-dimensional models. The recipe of the method is to construct the knockoff variables by assuming a latent factor model, widely exploited in economics and finance, for the association structure of the covariates. Our method and work are distinct from the existing literature in that we estimate the covariate distribution from the data instead of assuming it is known when constructing the knockoff variables, our procedure does not require any sample splitting, we provide theoretical justification of the asymptotic false discovery rate control, and the theory for the power analysis is also established. Several simulation examples and the real data analysis further demonstrate that the newly suggested method has appealing finite-sample performance, with the desired interpretability and stability, compared to some popularly used forecasting methods.
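    The rough sketch below illustrates one plausible way to build knockoff covariates under a latent factor model; it is our own simplification and not necessarily the authors' exact construction or the subsequent knockoff filter. The common component of the covariates is estimated by PCA, and the knockoff copy is formed by adding resampled idiosyncratic residuals, so the factor-driven association structure is preserved while any residual link to the response is broken.

```python
# Rough sketch: knockoff covariates from an estimated latent factor model.
# Our own simplification; the exact estimator and knockoff filter in the paper may differ.
import numpy as np

def factor_knockoffs(X, n_factors, rng=None):
    """Return a knockoff copy of X built from its estimated factor structure."""
    rng = rng or np.random.default_rng()
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD: common component spanned by the leading n_factors directions.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    common = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]
    resid = Xc - common
    # Resample residual rows with replacement to form the knockoff idiosyncratic part.
    resampled = resid[rng.integers(0, resid.shape[0], size=resid.shape[0])]
    return mean + common + resampled

# Example with synthetic factor-structured covariates.
rng = np.random.default_rng(0)
n, p, k = 500, 50, 3
F, Lam = rng.normal(size=(n, k)), rng.normal(size=(k, p))
X = F @ Lam + 0.5 * rng.normal(size=(n, p))
X_knock = factor_knockoffs(X, n_factors=k, rng=rng)
print(X_knock.shape)   # same shape as X; [X, X_knock] would then enter a knockoff filter
```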

    The Lamé Class of Lorenz Curves.

    In this paper, the class of Lamé Lorenz curves is studied. This family has the advantage of modeling inequality with a single parameter. The family has a double motivation: it can be obtained from an economic model and from simple transformations of classical Lorenz curves. The underlying cumulative distribution functions have a simple closed form and correspond to the Singh-Maddala and Dagum distributions, which are well known in the economic literature. The Lorenz order is studied and several inequality and polarization measures are obtained, including the Gini, Donaldson-Weymark-Kakwani, Pietra and Wolfson indices. Some extensions of the Lamé family are obtained. Fitting and estimation methods under two different data configurations are proposed. Empirical applications with real data are given. Finally, some relationships with other curves are included. The authors thank the Ministerio de Economía y Competitividad, project ECO2010-15455, for partial support. The second author thanks the Ministerio de Educación (FPU AP-2010-4907) for partial support. We are grateful for the constructive suggestions provided by the reviewers, which improved the paper.
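    As a small numerical illustration (assuming the superellipse-type parameterization L(p) = 1 - (1 - p^a)^(1/a) with a >= 1, one common way of writing a Lamé Lorenz curve), the sketch below computes the Gini index from the general relation G = 1 - 2 times the integral of L(p) over [0, 1] by numerical integration; closed-form expressions for this and the other indices are derived in the paper.

```python
# Illustrative sketch: Gini index of a Lamé-type Lorenz curve by numerical integration.
# The parameterization L(p) = 1 - (1 - p**a)**(1/a), a >= 1, is an assumption here.
from scipy.integrate import quad

def lame_lorenz(p, a):
    return 1.0 - (1.0 - p**a) ** (1.0 / a)

def gini(a):
    # For any Lorenz curve L, the Gini index is 1 - 2 * integral_0^1 L(p) dp.
    area, _ = quad(lame_lorenz, 0.0, 1.0, args=(a,))
    return 1.0 - 2.0 * area

for a in (1.0, 1.5, 2.0, 4.0):
    print(f"a = {a:.1f}  Gini = {gini(a):.4f}")
# a = 1 gives L(p) = p (perfect equality, Gini = 0); inequality rises with a.
```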

    False positives and other statistical errors in standard analyses of eye movements in reading

    In research on eye movements in reading, it is common to analyze a number of canonical dependent measures to study how the effects of a manipulation unfold over time. Although this gives rise to the well-known multiple comparisons problem, i.e. an inflated probability that the null hypothesis is incorrectly rejected (Type I error), it is accepted standard practice not to apply any correction procedures. Instead, there appears to be a widespread belief that corrections are not necessary because the increase in false positives is too small to matter. To our knowledge, no formal argument has ever been presented to justify this assumption. Here, we report a computational investigation of this issue using Monte Carlo simulations. Our results show that, contrary to conventional wisdom, false positives are increased to unacceptable levels when no corrections are applied. Our simulations also show that counter-measures like the Bonferroni correction keep false positives in check while reducing statistical power only moderately. Hence, there is little reason why such corrections should not be made a standard requirement. Further, we discuss three statistical illusions that can arise when statistical power is low, and we show how power can be improved to prevent these illusions. In sum, our work renders a detailed picture of the various types of statistical errors that can occur in studies of reading behavior, and we provide concrete guidance about how these errors can be avoided.
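    The toy simulation below (our own sketch, not the authors' code) illustrates the core point: testing several correlated dependent measures per experiment without correction inflates the family-wise Type I error well above the nominal 5%, while a Bonferroni correction keeps it in check. The number of measures, subjects, and the correlation structure are illustrative assumptions.

```python
# Toy Monte Carlo: family-wise Type I error with several correlated measures,
# with and without Bonferroni correction. All settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_subj, n_measures, alpha = 5000, 40, 4, 0.05

def simulate_null_experiment():
    """Paired comparison of two conditions on several correlated measures, no true effect."""
    subj_a = rng.normal(size=(n_subj, 1))      # subject-by-condition noise shared
    subj_b = rng.normal(size=(n_subj, 1))      # across measures -> correlated tests
    data_a = subj_a + rng.normal(size=(n_subj, n_measures))
    data_b = subj_b + rng.normal(size=(n_subj, n_measures))
    return np.array([stats.ttest_rel(data_a[:, j], data_b[:, j]).pvalue
                     for j in range(n_measures)])

pvals = np.array([simulate_null_experiment() for _ in range(n_sims)])
fwer_uncorrected = np.mean((pvals < alpha).any(axis=1))
fwer_bonferroni = np.mean((pvals < alpha / n_measures).any(axis=1))
print(f"family-wise Type I error, uncorrected: {fwer_uncorrected:.3f}")  # well above 0.05
print(f"family-wise Type I error, Bonferroni:  {fwer_bonferroni:.3f}")   # at or below 0.05
```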