14 research outputs found

    Are v1 simple cells optimized for visual occlusions? : A comparative study

    Abstract: Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models of simple cell coding because they link receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, the occlusion of image components, is not considered by these models. Here we ask whether occlusions affect the predicted shapes of simple cell receptive fields. We take a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encodings and receptive fields predicted by the two models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and a high percentage of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed in experimental studies since reverse correlation came into use. While high percentages of ‘globular’ fields can be obtained with specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and matches the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
Author Summary: The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized for such stimuli through evolution and throughout our lifespan. Yet the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on the predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and the predictions of standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large numbers of such cells have been observed since new techniques (reverse correlation) came into use. Without occlusions, they are obtained only for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments, we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
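The contrast between the two superposition assumptions can be made concrete with a minimal sketch (not the paper's exact model; the component shapes and names here are our own): in the linear model, overlapping components add up pixel-wise, while in an occlusion-like model only the strongest component is visible at each pixel, as when one surface occludes another.

```python
import numpy as np

# Two non-negative "components" (basis functions) over D pixels.
rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(2, 16)))   # shape (H, D): H components, D pixels
s = np.array([1, 1])                   # both latent units active

# Linear model (standard sparse coding / ICA): components add up.
x_linear = s @ W                       # x_d = sum_h s_h * W_hd

# Occlusive model (MCA-style): the strongest component wins per pixel.
x_occlusive = np.max(s[:, None] * W, axis=0)

# Where components overlap, the linear image is brighter than either
# component alone; the occlusive image never exceeds the strongest one.
assert np.all(x_occlusive <= x_linear)
```

This pixel-wise maximum is the kind of non-linear superposition that distinguishes the occlusive model from the linear one in the comparison above.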

    ProSper -- A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions

    ProSper is a Python library containing probabilistic algorithms to learn dictionaries. Given a set of data points, the implemented algorithms seek to learn the elementary components that have generated the data. The library widens the scope of dictionary learning approaches beyond implementations of standard approaches such as ICA, NMF or standard L1 sparse coding. The implemented algorithms are especially well-suited in cases when data consist of components that combine non-linearly and/or for data requiring flexible prior distributions. Furthermore, the implemented algorithms go beyond standard approaches by inferring prior and noise parameters of the data, and they provide rich a-posteriori approximations for inference. The library is designed to be extensible and currently includes: Binary Sparse Coding (BSC), Ternary Sparse Coding (TSC), Discrete Sparse Coding (DSC), Maximal Causes Analysis (MCA), Maximum Magnitude Causes Analysis (MMCA), and Gaussian Sparse Coding (GSC, a recent spike-and-slab sparse coding approach). The algorithms are scalable due to a combination of variational approximations and parallelization. Implementations of all algorithms allow for parallel execution on multiple CPUs and multiple machines for medium to large-scale applications. Typical large-scale runs of the algorithms can use hundreds of CPUs to learn hundreds of dictionary elements from data with tens of millions of floating-point numbers, such that models with several hundred thousand parameters can be optimized. The library is designed to have minimal dependencies and to be easy to use. It targets users of dictionary learning algorithms and Machine Learning researchers.
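As a rough illustration of the kind of generative model behind one of the listed algorithms, here is a sketch of Binary Sparse Coding (BSC) data generation: binary latents drawn from a Bernoulli prior, combined linearly through a dictionary, plus Gaussian noise. All names (`H`, `D`, `pi`, `sigma`) are our own; this does not reproduce the ProSper API.

```python
import numpy as np

rng = np.random.default_rng(1)
H, D, N = 8, 25, 100          # latent units, observed dims, data points
pi, sigma = 0.2, 0.1          # Bernoulli activation prob., noise std.

W = rng.normal(size=(H, D))                  # dictionary (generating fields)
S = rng.random(size=(N, H)) < pi             # sparse binary latent codes
X = S.astype(float) @ W + sigma * rng.normal(size=(N, D))  # noisy superposition

# On average about pi * H units are active per data point, i.e. the
# codes are sparse; a BSC learner would try to recover W, pi and sigma
# from X alone.
```

Fitting such a model would additionally estimate `pi` and `sigma` from the data, which is the "inferring prior and noise parameters" capability the abstract describes.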

    Whole proteome analyses on Ruminiclostridium cellulolyticum show a modulation of the cellulolysis machinery in response to cellulosic materials with subtle differences in chemical and structural properties

    Lignocellulosic materials from municipal solid waste are emerging as attractive resources for anaerobic digestion biorefinery. To increase the knowledge required for establishing efficient bioprocesses, the dynamics of batch fermentation by the cellulolytic bacterium Ruminiclostridium cellulolyticum were compared using three cellulosic materials: paper handkerchief, cotton discs and Whatman filter paper. Fermentation of paper handkerchief occurred the fastest and resulted in a specific metabolic profile, with the lowest acetate-to-lactate and acetate-to-ethanol ratios. By shotgun proteomic analyses of paper handkerchief and Whatman paper incubations, 151 proteins with significantly different levels were detected, including 20 of the 65 cellulosomal components, 8 non-cellulosomal CAZymes and 44 distinct extracytoplasmic proteins. Consistent with the specific metabolic profile observed, many enzymes from the central carbon catabolic pathways had higher levels in paper handkerchief incubations. Among the quantified CAZymes and cellulosomal components, 10 endoglucanases, mainly from the GH9 family, and 7 other cellulosomal subunits had lower levels in paper handkerchief incubations. An in-depth characterization of the materials used showed that the lower levels of endoglucanases in paper handkerchief incubations could hypothetically result from its lower crystallinity index (50%) and degree of polymerization (970). By contrast, the higher hemicellulose content of paper handkerchief (13.87%) did not result in the enhanced expression of enzymes with xylanase as their primary activity, including enzymes from the xyl-doc cluster. This suggests the absence, in this material, of molecular structures that specifically lead to xylanase induction.
The integrated approach developed in this work shows that subtle differences among cellulosic materials in chemical and structural characteristics have significant effects on the bacterial functions expressed, in particular the cellulolysis machinery, resulting in different metabolic patterns and degradation dynamics.

This work was supported by a grant [R2DS 2010-08] from the Conseil Regional d'Ile-de-France through the DIM R2DS programs (http://www.r2ds-ile-de-france.com/). Irstea (www.irstea.fr/) contributed to the funding of a PhD grant for the first author. The funders provided support in the form of salaries for author [NB] and funding for consumables and laboratory equipment, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Omics Services provided support in the form of salaries for authors [VS, MD], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors [NB, VS, MD] are articulated in the 'author contributions' section.

    Percentages of globular receptive fields predicted by the computational models in comparison to <i>in vivo</i> measurements.

    <p><b>A</b> Receptive fields predicted if occlusion-like superposition is assumed ( out of receptive fields are shown). <b>B</b> Receptive fields predicted if standard linear superposition is assumed ( out of receptive fields are shown). <b>C</b> Percentages of globular fields predicted by the occlusive model (MCA) and by the linear model (BSC) versus the number of hidden units. The experiments for MCA (blue line) and BSC (green line) on DoG preprocessed image patches were repeated five times, and the error bars extend two empirical standard deviations. Standard sparse coding (yellow line) on DoG processed data shows the lowest fraction of globular fields. To control for the influence of preprocessing, additional experiments were performed on ZCA whitened data (dashed blue and dashed green lines). The bold red line (and its error bar) shows the fraction of globular fields computed based on <i>in vivo</i> measurements of macaque monkeys <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003062#pcbi.1003062-Ringach1" target="_blank">[14]</a>. Dashed red lines show the fractions reported for ferrets <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003062#pcbi.1003062-Usrey1" target="_blank">[15]</a> and mice <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003062#pcbi.1003062-Niell1" target="_blank">[16]</a>.</p>

    Comparison of Gabor shape statistics with <i>in vivo</i> recordings and predicted sparsity.

    <p><b>A</b> Analysis of learned Gabor-like receptive fields for experiments with hidden units (and patch size ): distribution of Gabor shaped receptive fields learned by occlusion-like (MCA) and linear sparse coding (BSC). The red triangles in both plots depict the distribution computed based on <i>in vivo</i> measurements of macaque monkeys <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003062#pcbi.1003062-Ringach1" target="_blank">[14]</a>. <b>B</b> Average number of active units across image patches as a function of the number of hidden units (note that error bars are very small; experiments on pixel sized DoG preprocessed patches).</p>

    Decomposition of image patches into basic components for four example patches.

    <p>For each example the figure shows: the original patch (left), its DoG preprocessed version (second to left), and the decomposition of the preprocessed patch by the three models. For better comparison with the original patches, basis functions are shown in grey-scale. The displayed functions correspond to the active units of the most likely hidden state given the patch. In the case of standard sparse coding, the basis functions are displayed in the order of their contributions. Standard sparse coding (SC) uses many basis functions for reconstruction, but many of them contribute very little. BSC uses a much smaller subset of the basis functions for reconstruction. MCA typically uses the smallest subset. The basis functions of MCA usually correspond directly to edges or to two-dimensional structures of the image, while the basis functions of BSC and (to a greater degree) of SC are more loosely associated with the true components of the respective patch. The bottommost example illustrates that the globular fields are usually associated with structures such as end-stopping or corners. For the displayed examples, the normalized root-mean-square reconstruction errors (nrmse) allow one to quantify the reconstruction quality. For standard sparse coding the errors are (from top to bottom) 0.09, 0.08, 0.10 and 0.12, respectively. For the two models with Bernoulli prior they are larger: 0.51, 0.63, 0.53, and 0.42 for MCA, and 0.37, 0.47, 0.44 and 0.39 for BSC. We give reconstruction errors for completeness but note that, for all models, they are based on the most likely hidden states (MAP estimates). For MCA and BSC the MAP was chosen for illustrative purposes, while for most tasks these models can make use of their more elaborate posterior approximations.</p>
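For reference, a normalized root-mean-square error of the kind quoted in the caption can be sketched as follows; the normalization by the data's standard deviation is one common convention and an assumption here, as the caption does not state which normalization was used.

```python
import numpy as np

def nrmse(x, x_hat):
    """Root-mean-square error normalized by the standard deviation of x."""
    return np.sqrt(np.mean((x - x_hat) ** 2)) / np.std(x)

# Toy patch and a close reconstruction (illustrative values only).
patch = np.array([0.0, 1.0, 2.0, 3.0])
recon = np.array([0.1, 0.9, 2.1, 2.9])
err = nrmse(patch, recon)   # small error for a close reconstruction
```

A perfect reconstruction gives an nrmse of exactly zero, and larger values indicate coarser reconstructions, matching how the caption compares SC, BSC and MCA.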