28 research outputs found

    Effect of Geometric Complexity on Intuitive Model Selection

    Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are to be preferred to more complex ones. This idea can be made precise in the context of statistical inference, where the same quantitative notion of the complexity of a statistical model emerges naturally from different approaches based on Bayesian model selection and information theory. The broad applicability of this mathematical formulation suggests a normative model of decision-making under uncertainty: complex explanations should be penalized according to this common measure of complexity. However, little is known about whether and how humans intuitively quantify the relative complexity of competing interpretations of noisy data. Here we measure the sensitivity of naive human subjects to statistical model complexity. Our data show that human subjects bias their decisions in favor of simple explanations based not only on the dimensionality of the alternatives (number of model parameters), but also on finer-grained aspects of their geometry. In particular, as predicted by the theory, models intuitively judged as more complex are not only those with more parameters, but also those with larger volume and prominent curvature or boundaries. Our results imply that principled notions of statistical model complexity have direct quantitative relevance to human decision-making.

    How Occam’s Razor Guides Human Inference

    Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are preferred over more complex ones. This idea is central to multiple formal theories of statistical model selection and is posited to play a role in human perception and decision-making, but a general, quantitative account of the specific nature and impact of complexity on human decision-making is still missing. Here we use preregistered experiments to show that, when faced with uncertain evidence, human subjects bias their decisions in favor of simpler explanations in a way that can be quantified precisely using the framework of Bayesian model selection. Specifically, these biases, which were also exhibited by artificial neural networks trained to optimize performance on comparable tasks, reflect an aversion to complex explanations (statistical models of data) that depends on specific geometrical features of those models, namely their dimensionality, boundaries, volume, and curvature. Moreover, the simplicity bias persists for human, but not artificial, subjects even for tasks in which the bias is maladaptive and can lower overall performance. Taken together, our results imply that principled notions of statistical model complexity have direct, quantitative relevance to human and machine decision-making, and establish a new understanding of the computational foundations, and behavioral benefits, of our predilection for inferring simplicity in the latent properties of our complex world.
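    The geometric notion of complexity invoked in these two abstracts can be stated compactly. In the standard Fisher-information (MDL-style) approximation to Bayesian model selection, the negative log evidence of a model with d parameters fitted to N observations decomposes schematically as follows (this is the textbook form of the razor, not an equation quoted from the papers):

```latex
-\ln P(D \mid M) \;\approx\;
  \underbrace{-\ln p(D \mid \hat{\theta})}_{\text{goodness of fit}}
  \;+\; \underbrace{\frac{d}{2}\ln\frac{N}{2\pi}}_{\text{dimensionality}}
  \;+\; \underbrace{\ln \int_{\Theta} d\theta \,\sqrt{\det g(\theta)}}_{\text{model volume}}
  \;+\; O(1/N)
```

    Here g(θ) is the Fisher information metric on the model's parameter space; curvature and boundary effects enter through the higher-order terms, which is where the finer-grained geometric penalties measured in these experiments come from.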

    Synthesizing realistic neural population activity patterns using Generative Adversarial Networks

    The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing. Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons. We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain. We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that accurately match the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics. We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks. Importantly, Spike-GAN does not require specifying a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches. Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience. (Published as a conference paper at ICLR 2018. V2: minor changes in supplementary material.)
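    The "first- and second-order statistics" matched by Spike-GAN are, in the simplest binned-binary description of population activity, per-neuron firing probabilities and pairwise covariances. The sketch below computes them from a made-up toy dataset; it is an illustration of the quantities involved, not code from the paper.

```python
# Illustrative sketch only: compute first- and second-order statistics of
# binned binary population activity (the statistics Spike-GAN is evaluated
# against). The toy dataset below is invented for the example.

def first_order(patterns):
    """Mean activation of each neuron across binary population patterns."""
    n = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(n)]

def second_order(patterns):
    """Pairwise covariances Cov(x_i, x_j) for i < j."""
    m = first_order(patterns)
    n = len(patterns[0])
    cov = {}
    for i in range(n):
        for j in range(i + 1, n):
            mean_ij = sum(p[i] * p[j] for p in patterns) / len(patterns)
            cov[(i, j)] = mean_ij - m[i] * m[j]
    return cov

# Toy dataset: 4 time bins x 3 neurons; 1 = spike, 0 = silence.
data = [
    [1, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
]
rates = first_order(data)   # [0.5, 0.75, 0.25]
covs = second_order(data)   # e.g. Cov(x_0, x_1) = 0.5 - 0.5*0.75 = 0.125
```

    A generative model like Spike-GAN is judged by how closely the same statistics, computed on its synthetic samples, match those of the recorded data.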

    Time as a supervisor: temporal regularity and auditory object learning

    Sensory systems appear to learn to transform incoming sensory information into perceptual representations, or “objects,” that can inform and guide behavior with minimal explicit supervision. Here, we propose that the auditory system can achieve this goal by using time as a supervisor, i.e., by learning features of a stimulus that are temporally regular. We show that this procedure generates a feature space sufficient to support fundamental computations of auditory perception. In detail, we consider the problem of discriminating between instances of a prototypical class of natural auditory objects, i.e., rhesus macaque vocalizations. We test discrimination in two ethologically relevant tasks: discrimination in a cluttered acoustic background and generalization to discriminate between novel exemplars. We show that an algorithm that learns these temporally regular features affords better or equivalent discrimination and generalization than conventional feature-selection algorithms, i.e., principal component analysis and independent component analysis. Our findings suggest that the slow temporal features of auditory stimuli may be sufficient for parsing auditory scenes and that the auditory brain could utilize these slowly changing temporal features.

    Pymuvr

    A Python package for the fast calculation of multi-unit Van Rossum neural spike train metrics, using the kernel-based algorithm described in Houghton and Kreuz, “On the efficient calculation of Van Rossum distances” (Network: Computation in Neural Systems, 2012, 23, 48-58; doi:10.3109/0954898X.2012.673048).
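    pymuvr's own API is not reproduced here. As a minimal illustration of the quantity it computes, the single-unit Van Rossum distance with an exponential kernel of time constant tau can be evaluated in closed form over spike pairs; this is the kernel-based idea exploited by Houghton and Kreuz, written here in naive O(n²) form:

```python
import math

def van_rossum_distance(train_a, train_b, tau):
    """Single-unit Van Rossum distance between two spike trains (lists of
    spike times). Naive O(n^2) closed-form version for illustration only;
    pymuvr handles the optimized, multi-unit case."""
    def k(u, v):
        # Inner product of the causally filtered trains, up to a factor
        # tau/2 that cancels against Van Rossum's 1/tau normalization.
        return sum(math.exp(-abs(s - t) / tau) for s in u for t in v)
    d2 = 0.5 * (k(train_a, train_a) + k(train_b, train_b)
                - 2 * k(train_a, train_b))
    return math.sqrt(max(d2, 0.0))

# A one-spike train vs. an empty train gives distance sqrt(1/2), matching
# the standard normalization of the metric.
d = van_rossum_distance([1.0], [], tau=0.5)   # ≈ 0.7071
```

    For small tau the metric approaches a spike-count-style comparison of coincidences; for large tau it becomes insensitive to precise spike timing.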

    epiasini/pymuvr: 1.3.2

    The pymuvr code in this release is identical to that in 1.3.0. The only difference is in the test suite, which has been updated to ensure compatibility with recent versions of spykeutils (some of the tests compare results for the same quantities computed with pymuvr and with spykeutils).

    cvglmnetR

    A simple replacement for the cvglmnet function from the “Glmnet in Matlab” package, backed by the R implementation of glmnet.

    metex - Maximum Entropy TEXtures

    Utilities for generating maximum-entropy textures, following Victor and Conte (2012). Metex can be used as standalone software from the command line, or as a Python package to generate and manipulate textures. Metex is maintained on GitLab and published on the Python Package Index.
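    Metex's actual command-line flags and Python API are not shown here. As an illustration of the underlying idea, a maximum-entropy binary texture with a single prescribed nearest-neighbour horizontal correlation can be generated row by row with a two-state Markov chain, since for one pairwise constraint the maximum-entropy process is Markovian. The function name and parameters below are invented for the example.

```python
import random

def maxent_rows(width, height, corr, seed=0):
    """Binary texture whose horizontal nearest-neighbour correlation
    (measured on +/-1-coded pixels) is approximately `corr` in (-1, 1).
    Each pixel copies its left neighbour with probability (1 + corr) / 2;
    for a single two-point constraint this Markov rule is the
    maximum-entropy choice. Rows are independent, so vertical structure
    is left unconstrained. Illustrative sketch, not metex's API."""
    rng = random.Random(seed)
    p_same = (1 + corr) / 2
    texture = []
    for _ in range(height):
        row = [rng.randint(0, 1)]
        for _ in range(width - 1):
            prev = row[-1]
            row.append(prev if rng.random() < p_same else 1 - prev)
        texture.append(row)
    return texture

tex = maxent_rows(width=8, height=4, corr=0.8, seed=42)
```

    With +/-1 coding, the expected product of adjacent pixels is p_same minus (1 - p_same), i.e. exactly `corr`, so long rows reproduce the target correlation to within sampling noise.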

    Network structure and function in the input stage of the cerebellar cortex

    It has long been recognised that neuronal networks are complex systems, whose dynamics depend on the properties of the individual synapses and neurons and the way in which they are interconnected. However, establishing clear links between network structure and function has proven difficult. To address this question I applied tools and techniques from computational neuroscience, neuroinformatics, information theory, machine learning, spatial point process theory and network theory, deploying them on a suitable HPC infrastructure where appropriate. Moreover, access to electrophysiological and anatomical data enabled me to develop biologically accurate models and to compare my theoretical predictions with analyses of raw data. In this work, I focused on the granule cell layer (GCL), the input stage of the cerebellar cortex. The GCL is particularly well suited to this type of analysis, as its structural characteristics are comparatively regular, well known and conserved across animal species, and several of its basic functions are relatively well understood. I showed that the synaptic connectivity in simple feed-forward networks like the GCL governs the trade-off between information transmission and sparsification of incoming signals. This suggests a link between the functional requirements for the network and the strong evolutionary conservation of the anatomy of the cerebellar GCL. Furthermore, I investigated how the geometry of the GCL interacts with the spatial constraints of synaptic connectivity and gives rise to the statistical features of the chemically and electrically coupled networks formed by mossy fibres, granule cells and Golgi cells. Finally, I studied the influence of the spatial structure of the Golgi cell network on the robustness of the synchronous activity state it can support.