6,661 research outputs found

    Coordinated optimization of visual cortical maps (II) Numerical studies

    It is an attractive hypothesis that the spatial structure of visual cortical architecture can be explained by the coordinated optimization of multiple visual cortical maps representing orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we defined a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the orientation preference map is induced by inter-map interactions. We found that attractor solutions near the symmetry-breaking threshold predict a highly ordered map layout and require a substantial OD bias for OP pinwheel stabilization. Here we examine in numerical simulations whether such models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold and when transients towards attractor states are considered. We also examine whether model behavior changes qualitatively when the spatial periodicities of the two maps are detuned and when more than two feature dimensions are considered. Our numerical results support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the spatially irregular architecture of the visual cortex. We discuss several alternative scenarios and additional factors that may improve the agreement between model solutions and biological observations. Comment: 55 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:1102.335
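
    For illustration, a minimal numerical sketch of the kind of coupled map dynamics such coordinated optimization models describe: a complex orientation-preference field z(x) relaxing under Swift-Hohenberg-type dynamics with a simple lowest-order coupling to a fixed ocular-dominance field o(x). The specific coupling term, parameter values, and the fixed OD pattern are illustrative assumptions, not the paper's model definition.

    # Minimal sketch (not the authors' code): semi-implicit spectral relaxation of a complex
    # OP field z, with an assumed coupling term eps * o^2 * |z|^2 to a fixed OD field o.
    import numpy as np

    N, L = 128, 16 * np.pi          # grid size and domain length (arbitrary units)
    kc, r, eps, dt, steps = 1.0, 0.1, 0.5, 0.1, 2000

    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lin = r - (kc**2 - (kx**2 + ky**2))**2    # linear Swift-Hohenberg operator in Fourier space

    rng = np.random.default_rng(0)
    z = 0.01 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
    x = np.linspace(0, L, N, endpoint=False)
    o = 0.3 * np.cos(kc * x)[:, None] * np.ones((1, N))   # toy OD stripes, wavelength matched to z

    for _ in range(steps):
        # cubic saturation plus the assumed OD coupling, evaluated in real space
        nonlin = -np.abs(z)**2 * z - eps * o**2 * z
        z_hat = np.fft.fft2(z) + dt * np.fft.fft2(nonlin)
        z = np.fft.ifft2(z_hat / (1.0 - dt * lin))        # semi-implicit (IMEX) Euler step

    theta = 0.5 * np.angle(z)       # preferred orientation, defined modulo pi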

    Coordinated optimization of visual cortical maps: 2. Numerical studies

    In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near the symmetry-breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how they compare to the timescales of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned, and when more than two feature dimensions are considered. We show that, although maps typically undergo substantial rearrangement, no solutions other than pinwheel crystals and stripes dominate the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
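
    Since the reported attractors are pinwheel crystals and stripes, a useful diagnostic when running such simulations is the number and density of pinwheels, i.e. the phase singularities of the complex OP field. A minimal bookkeeping sketch (an assumption about the analysis, not the authors' code), counting pinwheels on a periodic grid by the winding of the phase around each plaquette and normalizing to a pinwheels-per-wavelength-squared density:

    import numpy as np

    def count_pinwheels(z):
        """Count the +/- phase singularities of a complex field z on a periodic grid."""
        phi = np.angle(z)
        wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi   # wrap phase steps to (-pi, pi]
        # plaquette corners: A=(i,j), B=(i,j+1), C=(i+1,j+1), D=(i+1,j); walk A->B->C->D->A
        pB = np.roll(phi, -1, axis=1)
        pD = np.roll(phi, -1, axis=0)
        pC = np.roll(pB, -1, axis=0)
        total = wrap(pB - phi) + wrap(pC - pB) + wrap(pD - pC) + wrap(phi - pD)
        winding = np.rint(total / (2 * np.pi)).astype(int)   # +/-1 at a pinwheel, else 0
        return int(np.sum(np.abs(winding)))

    def pinwheel_density(z, domain_length, wavelength):
        """Pinwheels per wavelength^2 on a square periodic domain of side domain_length."""
        return count_pinwheels(z) * wavelength**2 / domain_length**2

    Applied to a simulated field z at successive time points, this tracks how the pinwheel count of the emerging layout evolves during the rearrangement.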

    Dynamic Decomposition of Spatiotemporal Neural Signals

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.
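
    For illustration, a minimal sketch of the general approach described here, as a linear-Gaussian state-space decomposition (the paper's actual model, kernels, and estimation procedure are not reproduced): the signal is modeled as a sum of stochastically driven, damped oscillator components plus observation noise, and a Kalman filter separates the components. The frequencies, damping constants, and noise variances below are hand-picked assumptions.

    import numpy as np

    def oscillator_block(freq_hz, damping, dt):
        """2x2 transition matrix of a damped, rotating latent oscillator."""
        w = 2 * np.pi * freq_hz * dt
        rot = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
        return np.exp(-damping * dt) * rot

    def kalman_decompose(y, blocks, q_vars, r_var):
        """Kalman-filter a scalar signal y; return the filtered first coordinate of each block."""
        nb = len(blocks)
        A, Q = np.zeros((2 * nb, 2 * nb)), np.zeros((2 * nb, 2 * nb))
        H = np.zeros((1, 2 * nb))
        for i, (blk, qv) in enumerate(zip(blocks, q_vars)):
            A[2*i:2*i+2, 2*i:2*i+2] = blk
            Q[2*i:2*i+2, 2*i:2*i+2] = qv * np.eye(2)
            H[0, 2*i] = 1.0                       # the observation is the sum of the components
        m, P = np.zeros(2 * nb), np.eye(2 * nb)
        out = np.zeros((len(y), nb))
        for t, yt in enumerate(y):
            m, P = A @ m, A @ P @ A.T + Q         # predict
            S = H @ P @ H.T + r_var
            K = P @ H.T / S                       # Kalman gain (scalar observation)
            m = m + (K * (yt - H @ m)).ravel()
            P = P - K @ H @ P
            out[t] = m[::2]
        return out

    # toy usage: a 10 Hz rhythm riding on a slow 1 Hz component, sampled at 250 Hz
    dt, n = 1.0 / 250, 1000
    t = np.arange(n) * dt
    rng = np.random.default_rng(0)
    y = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 1 * t) + 0.2 * rng.standard_normal(n)
    comps = kalman_decompose(y, [oscillator_block(1.0, 2.0, dt), oscillator_block(10.0, 2.0, dt)],
                             q_vars=[0.05, 0.05], r_var=0.04)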

    A Neural Model for Self Organizing Feature Detectors and Classifiers in a Network Hierarchy

    Many models of early cortical processing have shown how local learning rules can produce efficient, sparse-distributed codes in which nodes have responses that are statistically independent and low probability. However, it is not known how to develop a useful hierarchical representation, containing sparse-distributed codes at each level of the hierarchy, that incorporates predictive feedback from the environment. We take a step in that direction by proposing a biologically plausible neural network model that develops receptive fields, and learns to make class predictions, with or without the help of environmental feedback. The model is a new type of predictive adaptive resonance theory network called Receptive Field ARTMAP, or RAM. RAM self-organizes internal category nodes that are tuned to activity distributions in topographic input maps. Each receptive field is composed of multiple weight fields that are adapted via local, on-line learning to form smooth receptive fields that reflect the statistics of the activity distributions in the input maps. When RAM generates incorrect predictions, its vigilance is raised, amplifying subtractive inhibition and sharpening receptive fields until the error is corrected. Evaluation on several classification benchmarks shows that RAM outperforms a related (but neurally implausible) model called Gaussian ARTMAP, as well as several standard neural network and statistical classifiers. A topographic version of RAM is proposed, which is capable of self-organizing hierarchical representations. Topographic RAM is a model for receptive field development at any level of the cortical hierarchy, and provides explanations for a variety of perceptual learning data. Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409)
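
    As a rough illustration of the error-driven vigilance mechanism described here (not the RAM model itself, whose receptive-field and inhibition machinery is more elaborate), a toy ART-style classifier with assumed Gaussian category nodes, a frequency-weighted choice function, a vigilance test, and match tracking on prediction errors:

    import numpy as np

    class ARTLikeClassifier:
        """Toy ART-style classifier: Gaussian nodes, vigilance test, match tracking (illustrative)."""
        def __init__(self, dim, sigma0=0.5, base_vigilance=0.0, lr=0.1):
            self.dim, self.sigma0, self.rho0, self.lr = dim, sigma0, base_vigilance, lr
            self.mu, self.count, self.label = [], [], []

        def _match(self, x):
            # normalized Gaussian match in (0, 1], used for the vigilance test
            return np.array([np.exp(-0.5 * np.mean(((x - m) / self.sigma0) ** 2)) for m in self.mu])

        def predict(self, x):
            if not self.mu:
                return None
            choice = np.array(self.count) * self._match(np.asarray(x, dtype=float))
            return self.label[int(np.argmax(choice))]

        def train_step(self, x, y):
            x = np.asarray(x, dtype=float)
            rho = self.rho0
            if self.mu:
                m = self._match(x)
                for j in np.argsort(-(np.array(self.count) * m)):  # search nodes, best choice first
                    if m[j] <= rho:
                        continue                   # fails the vigilance test, try the next node
                    if self.label[j] == y:         # resonance: adapt the winning node
                        self.mu[j] += self.lr * (x - self.mu[j])
                        self.count[j] += 1
                        return
                    rho = m[j] + 1e-9              # wrong prediction: match tracking raises vigilance
            # no node both passed vigilance and predicted y: recruit a new category node
            self.mu.append(x.copy()); self.count.append(1); self.label.append(y)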

    Fully Complex Magnetoencephalography

    Complex numbers appear naturally in biology whenever a system can be analyzed in the frequency domain, as with physiological data from magnetoencephalography (MEG). For example, the MEG steady-state response to a modulated auditory stimulus generates a complex magnetic field for each MEG channel, equal to the Fourier transform at the stimulus modulation frequency. The complex nature of these data sets, often not taken advantage of, is fully exploited here with new methods. Whole-head, complex magnetic data can be used to estimate complex neural current sources, and standard methods of source estimation naturally generalize for complex sources. We show that a general complex neural vector source is described by its location, magnitude, and direction, but also by a phase and by an additional perpendicular component. We give natural interpretations of all the parameters for the complex equivalent-current dipole by linking them to the underlying neurophysiology. We demonstrate complex magnetic fields, and their equivalent fully complex current sources, with both simulations and experimental data. Comment: 23 pages, 1 table, 5 figures; to appear in Journal of Neuroscience Methods
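
    A small worked example of the source parameterization described here (an illustrative decomposition consistent with the abstract; the authors' exact conventions may differ): a complex 3-D dipole moment is split into an overall phase, an in-phase component carrying magnitude and direction, and an orthogonal quadrature ("perpendicular") component.

    import numpy as np

    def decompose_complex_dipole(q):
        """Split a complex 3-vector q into a phase, a major vector a, and a perpendicular vector b."""
        q = np.asarray(q, dtype=complex)
        phase = 0.5 * np.angle(np.sum(q * q))      # note q*q (not |q|^2): a complex scalar
        u = q * np.exp(-1j * phase)
        return phase, u.real, u.imag               # u.real and u.imag are orthogonal, |a| >= |b|

    # toy usage: real and imaginary parts of the moment are not parallel
    phase, a, b = decompose_complex_dipole([1.0 + 0.2j, 0.5 - 0.4j, 0.1 + 0.3j])
    print(np.dot(a, b))                            # ~0 up to floating point
    print(np.linalg.norm(a), np.linalg.norm(b))    # magnitudes of the two components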

    Towards a learning-theoretic analysis of spike-timing dependent plasticity

    This paper suggests a learning-theoretic perspective on how synaptic plasticity benefits global brain functioning. We introduce a model, the selectron, that (i) arises as the fast time constant limit of leaky integrate-and-fire neurons equipped with spike-timing dependent plasticity (STDP) and (ii) is amenable to theoretical analysis. We show that the selectron encodes reward estimates into spikes and that an error bound on spikes is controlled by a spiking margin and the sum of synaptic weights. Moreover, the efficacy of spikes (their usefulness to other reward-maximizing selectrons) also depends on total synaptic strength. Finally, based on our analysis, we propose a regularized version of STDP, and show that the regularization improves the robustness of neuronal learning when faced with multiple stimuli. Comment: To appear in Adv. Neural Inf. Proc. Systems
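
    For a rough idea of what a regularized STDP rule can look like, a generic pair-based STDP update with an L1-style penalty on the summed weights (the selectron's actual learning rule and constants are defined in the paper and are not reproduced here):

    import numpy as np

    def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                    tau=20.0, reg=1e-4, w_max=1.0):
        """One pass of pair-based STDP over spike-time lists (ms), with weight regularization.

        w           : array of synaptic weights, one per presynaptic neuron
        pre_spikes  : list of arrays, pre_spikes[i] = spike times of presynaptic neuron i
        post_spikes : array of spike times of the single postsynaptic neuron
        """
        w = w.copy()
        for i, pre in enumerate(pre_spikes):
            for t_post in post_spikes:
                dt = t_post - np.asarray(pre)              # dt > 0: pre before post (potentiate)
                w[i] += a_plus * np.sum(np.exp(-dt[dt > 0] / tau))
                w[i] -= a_minus * np.sum(np.exp(dt[dt <= 0] / tau))
        w -= reg * np.sign(w)                              # regularization: shrink the summed weight
        return np.clip(w, 0.0, w_max)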

    The cross-frequency mediation mechanism of intracortical information transactions

    In a seminal paper by von Stein and Sarnthein (2000), it was hypothesized that "bottom-up" information processing of "content" elicits local, high frequency (beta-gamma) oscillations, whereas "top-down" processing is "contextual", characterized by large scale integration spanning distant cortical regions, and implemented by slower frequency (theta-alpha) oscillations. This corresponds to a mechanism of cortical information transactions, in which synchronization of beta-gamma oscillations between distant cortical regions is mediated by widespread theta-alpha oscillations. The aim of this paper is to express this hypothesis quantitatively, in terms of a model that allows testing this type of information transaction mechanism. The basic methodology used here corresponds to statistical mediation analysis, originally developed by Baron and Kenny (1986). We generalize the classical mediator model to the case of multivariate complex-valued data, consisting of the discrete Fourier transform coefficients of signals of electric neuronal activity, at different frequencies and at different cortical locations. The "mediation effect" is quantified here in a novel way, as the product of "dual frequency RV-coupling coefficients", introduced in Pascual-Marqui et al. (2016, http://arxiv.org/abs/1603.05343). Relevant statistical procedures are presented for testing the cross-frequency mediation mechanism in general, and in particular for testing the von Stein & Sarnthein hypothesis. Comment: https://doi.org/10.1101/119362, licensed under the CC-BY-NC-ND 4.0 International license: http://creativecommons.org/licenses/by-nc-nd/4.0
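
    For reference, a minimal sketch of the classical, real-valued, univariate Baron-Kenny mediator model that the paper generalizes: the mediation effect of a mediator M between X and Y is quantified as the product a*b of two regression slopes. The paper's actual quantity, a product of dual-frequency RV-coupling coefficients computed on multivariate complex Fourier data, is not reproduced here.

    import numpy as np

    def mediation_product(x, m, y):
        """Return (a, b, a*b): X -> M slope, M -> Y slope controlling for X, and their product."""
        x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
        a = np.polyfit(x, m, 1)[0]                       # M = a*X + const
        # Y = c'*X + b*M + const, via least squares with an intercept column
        design = np.column_stack([x, m, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        b = coef[1]
        return a, b, a * b

    # toy usage: X drives M, and M (not X directly) drives Y
    rng = np.random.default_rng(1)
    x = rng.standard_normal(500)
    m = 0.8 * x + 0.3 * rng.standard_normal(500)
    y = 0.6 * m + 0.3 * rng.standard_normal(500)
    print(mediation_product(x, m, y))                    # a ~ 0.8, b ~ 0.6, a*b ~ 0.48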

    Quantum processes, space-time representation and brain dynamics

    The recent controversy over the applicability of quantum formalism to brain dynamics is critically analysed. The prerequisite for any type of quantum formalism or quantum field theory is to investigate whether the anatomical structure of the brain permits any kind of smooth geometric notion, such as a Hilbert structure or a four-dimensional Minkowskian structure for quantum field theory. The present understanding of brain function clearly denies any kind of space-time representation in the Minkowskian sense. However, three-dimensional space and one time dimension can be assigned to the neuromanifold, and probabilistic geometry is shown to be an appropriate framework for understanding brain dynamics. The possibility of quantum structure is also discussed in this framework. Comment: LaTeX, 28 pages