
    Matter-antimatter asymmetry restrains the dimensionality of neural representations: quantum decryption of large-scale neural coding

    Projections from the study of the human universe onto the study of the self-organizing brain are herein leveraged to address certain concerns raised in recent neuroscience research, namely (i) the extent to which neural codes are multidimensional; (ii) the functional role of neural dark matter; (iii) the challenge to traditional model frameworks posed by the need to interpret large-scale neural recordings linking brain and behavior. On the grounds of (hyper-)self-duality under (hyper-)mirror supersymmetry, inter-relativistic quantum principles are introduced whose consolidation, as spin-geometrical pillars of a network- and game-theoretical construction, is conducive to (i) the high-precision reproduction and reinterpretation of core experimental observations on neural coding in the self-organizing brain, with the instantaneous geometric dimensionality of neural representations of a spontaneous behavioral state proven to be at most 16, unidirectionally; (ii) a possible role for spinor (co-)representations as the latent building blocks of self-organizing cortical circuits subserving (co-)behavioral states; (iii) an early crystallization of pertinent multidimensional synaptic (co-)architectures, whereby Lorentz (co-)partitions are in principle verifiable; and, ultimately, (iv) potentially inverse insights into matter-antimatter asymmetry. New avenues for the decryption of large-scale neural coding in health and disease are discussed. Comment: 33 pages; 3 figures; 1 table; minor edit

    Sensor encoding using lateral inhibited, self-organized cellular neural networks

    The paper focuses on dividing the sensor field into subsets of sensor events and proposes the linear transformation with the smallest achievable reproduction error: the transform-coding approach using principal component analysis (PCA). For the implementation of the PCA, the paper introduces a new symmetrical, laterally inhibited neural network model, proposes an objective function for it, and deduces the corresponding learning rules. The necessary conditions on the learning rate and the inhibition parameter for balancing the cross-correlations against the autocorrelations are computed. The simulations reveal that increasing the inhibition can slightly speed up the initial convergence. The remainder of the paper discusses the application of the network to picture encoding, showing how non-completely connected networks enable the self-organized formation of templates in cellular neural networks. It turns out that the self-organizing Kohonen map is just the non-linear, first-order approximation of a more general self-organizing scheme. In this way, classical transform picture coding is recast as a parallel, local model of linear transformation built from locally changing sets of self-organized eigenvector projections with overlapping input receptive fields. This approach favors an effective, cheap implementation of sensor encoding directly on the sensor chip. Keywords: Transform coding, Principal component analysis, Lateral inhibited network, Cellular neural network, Kohonen map, Self-organized eigenvector jets
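The transform-coding idea in the abstract above can be sketched in a few lines of NumPy: project correlated sensor events onto their principal components and keep only the leading ones. This is a minimal illustration of PCA-based transform coding, not the paper's lateral-inhibition network; the data dimensions and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensor field": 500 samples of correlated 8-dimensional sensor
# events (sizes are illustrative only).
A = rng.normal(size=(8, 8))
X = rng.normal(size=(500, 8)) @ A   # mixing induces cross-correlations
X -= X.mean(axis=0)

# PCA: eigenvectors of the covariance matrix, sorted by variance.
C = X.T @ X / len(X)
w, V = np.linalg.eigh(C)            # eigenvalues in ascending order
V = V[:, ::-1]                      # reorder to descending variance

def reconstruction_error(k):
    """Encode with the top-k principal components, then decode."""
    P = V[:, :k]
    X_hat = X @ P @ P.T
    return float(np.mean((X - X_hat) ** 2))

errors = [reconstruction_error(k) for k in (1, 4, 8)]
print(errors)  # non-increasing; k=8 reconstructs (almost) exactly
```

Keeping more components can only shrink the mean squared error, which is the sense in which PCA is the linear transform with the smallest achievable reproduction error.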

    PocketGraph: graph representation of binding site volumes

    The representation of small molecules as molecular graphs is a common technique in various fields of cheminformatics. This approach employs abstract descriptions of topology and properties for rapid analysis and comparison. Receptor-based methods, in contrast, mostly depend on more complex representations, impeding simplified analysis and limiting the possibilities of property assignment. In this study we demonstrate that ligand-based methods can be applied to receptor-derived binding site analysis. We introduce the new method PocketGraph, which translates representations of binding site volumes into linear graphs and enables the application of graph-based methods to the world of protein pockets. The method uses the PocketPicker algorithm to characterize binding site volumes and employs a Growing Neural Gas procedure to derive graph representations of pocket topologies. Self-organizing map (SOM) projections revealed a limited number of pocket topologies. We argue that only a small set of pocket shapes is realized in the known ligand-receptor complexes.
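The core step of turning a point cloud into a graph can be sketched with the competitive Hebbian rule that Growing Neural Gas builds on: connect the two nodes nearest to each sample. This is a deliberately simplified stand-in (fixed node count, no node insertion or edge aging), and the synthetic two-lobe cloud is not real PocketPicker output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a binding-site point cloud with two lobes
# (coordinates are invented for the example).
cloud = np.vstack([
    rng.normal(loc=(0.0, 0.0, 0.0), scale=0.3, size=(200, 3)),
    rng.normal(loc=(2.0, 0.0, 0.0), scale=0.3, size=(200, 3)),
])

# A handful of graph nodes, seeded from random samples instead of the
# full Growing Neural Gas insertion/deletion schedule.
nodes = cloud[rng.choice(len(cloud), size=6, replace=False)].copy()
edges = set()

for x in cloud:
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                 # two nearest nodes
    nodes[s1] += 0.05 * (x - nodes[s1])        # adapt the winner
    edges.add((min(s1, s2), max(s1, s2)))      # competitive Hebbian edge

print(len(nodes), len(edges))
```

The resulting edge set is the pocket "topology": nodes summarize occupied volume, edges record which regions are adjacent.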

    Empirical Analysis of the Necessary and Sufficient Conditions of the Echo State Property

    The Echo State Network (ESN) is a specific recurrent network that has gained popularity in recent years. The model contains a recurrent network, called the reservoir, that is fixed during the learning process and is used to transform the input into a larger space. A fundamental property with an impact on model accuracy is the Echo State Property (ESP). There are two main theoretical results related to the ESP: first, a sufficient condition for the ESP that involves the singular values of the reservoir matrix; second, a necessary condition involving the spectral radius of the reservoir matrix, whose violation rules out the ESP. There is a theoretical gap between these necessary and sufficient conditions. This article presents an empirical analysis of the accuracy and the projections of reservoirs that fall within this theoretical gap, and offers some insight into the generation of the reservoir matrix. From previous work it is already known that the optimal accuracy is obtained near the border of stability of the dynamics. According to our empirical results, this border appears to lie closer to the sufficient condition than to the necessary condition of the ESP. Comment: 23 pages, 14 figures, accepted paper for the IEEE IJCNN, 201
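The "theoretical gap" mentioned above is easy to exhibit numerically: the usual sufficient condition asks for the largest singular value of the reservoir matrix to be below 1, while the necessary condition asks for the spectral radius to be below 1, and a generic random matrix rescaled to spectral radius 0.9 satisfies the latter but not the former. A minimal sketch (reservoir size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def esn_conditions(W):
    """Return (sigma_max, rho): sigma_max < 1 is the classical sufficient
    condition for the ESP; rho < 1 is the necessary condition."""
    sigma_max = np.linalg.svd(W, compute_uv=False)[0]   # largest singular value
    rho = np.max(np.abs(np.linalg.eigvals(W)))          # spectral radius
    return float(sigma_max), float(rho)

W = rng.normal(size=(50, 50))
# Rescale into the gap: spectral radius 0.9 (< 1), but the largest
# singular value of a generic random matrix then exceeds 1.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
sigma_max, rho = esn_conditions(W)
print(sigma_max, rho)
```

Reservoirs in this regime (rho < 1 < sigma_max) are exactly the ones the article studies empirically.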

    Beta Oscillations and Hippocampal Place Cell Learning during Exploration of Novel Environments

    Berke et al. (2008) reported that beta oscillations occur during the learning of hippocampal place cell receptive fields in novel environments. Place cell selectivity can develop within seconds to minutes and can remain stable for months. Paradoxically, beta power was very low during the first lap of exploration, grew to full strength as a mouse traversed a lap for the second and third times, and became and remained low again after the first two minutes of exploration. Beta oscillation power also correlated with the rate at which place cells became spatially selective, and not with theta oscillations. We explain such beta oscillations as a consequence of how place cell receptive fields may be learned as spatially selective categories through feedback interactions between entorhinal cortex and hippocampus. Top-down attentive feedback helps to ensure rapid learning and stable memory of place cells. Beta oscillations are generated when top-down feedback mismatches bottom-up data as place cell receptive fields are refined. Beta oscillations do not occur on the first trial because the adaptive weights in feedback pathways are then all large enough to match any input pattern. On subsequent trials, the adaptive weights become pruned as they learn to match the sharpening receptive fields of the place cell categories, thereby causing mismatches until the place cell receptive fields stabilize. National Science Foundation (SBE-0354378)
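The match/mismatch logic in this explanation can be illustrated with an ART-style toy computation (an assumed simplification, not the paper's model; patterns and sizes are invented): initially large top-down weights match any input, so there is no mismatch on the first lap, while pruned weights later fail to match a partly different input and would drive a beta burst.

```python
import numpy as np

n = 20
x_a = np.zeros(n); x_a[0:10] = 1.0    # first bottom-up input pattern
x_b = np.zeros(n); x_b[5:15] = 1.0    # later, partly different pattern

def match(x, w):
    """Fraction of the bottom-up pattern confirmed by top-down weights."""
    return float(np.minimum(x, w).sum() / x.sum())

w = np.ones(n)            # initial weights large: match any input
m1 = match(x_a, w)        # first lap: full match, no beta
w = np.minimum(w, x_a)    # weights pruned toward the learned category
m2 = match(x_b, w)        # later lap: partial match -> mismatch/beta
print(m1, m2)  # 1.0 0.5
```

The drop from m1 = 1.0 to m2 = 0.5 plays the role of the mismatch that, in the proposed account, generates beta power after, but not during, the first lap.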

    Grid Cell Hexagonal Patterns Formed by Fast Self-Organized Learning within Entorhinal Cortex

    Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. How these hexagonal patterns arise has excited intense interest. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? A neural model is proposed that converts path integration signals into hexagonal grid cell patterns of multiple scales. This GRID model creates only grid cell patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support a unified computational framework for explaining how entorhinal-hippocampal interactions support spatial navigation. CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
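For intuition about the firing fields the GRID model learns, the hexagonal pattern itself is commonly described as a sum of three cosine gratings whose wave vectors are 60 degrees apart. The sketch below plots-free version of that standard description (it is not the model's learning rule; the scale parameter and normalization are illustrative).

```python
import numpy as np

def grid_rate(pos, scale=1.0):
    """Idealized grid-cell firing rate at 2D positions pos (N, 2):
    sum of three plane waves 60 degrees apart, normalized to [0, 1]."""
    angles = np.deg2rad([0.0, 60.0, 120.0])
    ks = (4 * np.pi / (np.sqrt(3) * scale)) * np.stack(
        [np.cos(angles), np.sin(angles)], axis=1)   # three wave vectors
    g = np.cos(pos @ ks.T).sum(axis=1)              # sum of gratings, in [-1.5, 3]
    return (g + 1.5) / 4.5

peak = grid_rate(np.array([[0.0, 0.0]]))[0]
print(peak)  # 1.0 at a grid vertex
```

Changing the scale parameter stretches the hexagonal lattice, which corresponds to the multiple spatial scales observed along the dMEC.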