    Graph Models of Neurodynamics to Support Oscillatory Associative Memories

    Recent advances in brain imaging techniques require the development of advanced models of brain networks and graphs. Previous work on percolation on lattices and random graphs demonstrated emergent dynamical regimes, including zero- and non-zero fixed points, and limit cycle oscillations. Here we introduce graph processes using lattices with excitatory and inhibitory nodes, and study conditions leading to spatio-temporal oscillations. Rigorous mathematical analysis provides insights into the possible dynamics and, of particular concern to this work, conditions producing cycles with very long periods. A systematic parameter study demonstrates the presence of phase transitions between various regimes, including oscillations with emergent metastable patterns. We study the impact of external stimuli on the dynamic patterns, which can be used for encoding and recall in robust associative memories. © 2018 IEEE
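
    The abstract does not give the update rule, so the following is only a minimal sketch, under assumed parameters, of the kind of process it describes: a two-dimensional lattice of excitatory and inhibitory binary nodes updated probabilistically from nearest-neighbour input, whose mean activity can settle to a fixed point or oscillate depending on the parameters.

```python
# Minimal sketch (not the authors' exact model): a 2-D lattice of excitatory
# and inhibitory binary nodes with a probabilistic, neighbour-driven update.
import numpy as np

rng = np.random.default_rng(0)
N = 64                    # lattice side length (illustrative choice)
p_inhib = 0.2             # fraction of inhibitory nodes (assumed parameter)
noise = 0.01              # spontaneous flip probability (assumed parameter)

sign = np.where(rng.random((N, N)) < p_inhib, -1.0, 1.0)   # E/I labels
state = (rng.random((N, N)) < 0.5).astype(float)           # initial activity

def step(state):
    # Signed input from the four nearest neighbours (periodic boundaries).
    signed = sign * state
    drive = (np.roll(signed, 1, 0) + np.roll(signed, -1, 0) +
             np.roll(signed, 1, 1) + np.roll(signed, -1, 1))
    p_on = 1.0 / (1.0 + np.exp(-drive))        # logistic activation probability
    p_on = (1 - noise) * p_on + noise * 0.5    # small spontaneous noise
    return (rng.random(state.shape) < p_on).astype(float)

activity = []
for _ in range(200):
    state = step(state)
    activity.append(state.mean())
# Plotting `activity` over time shows whether the chosen parameters sit in a
# fixed-point regime or an oscillatory one.
```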

    A Theory of Cortical Neural Processing.

    This dissertation puts forth an original theory of cortical neural processing that is unique in its view of the interplay of chaotic and stable oscillatory neurodynamics and is meant to stimulate new ideas in artificial neural network modeling. Our theory is the first to suggest two new purposes for chaotic neurodynamics: (i) as a natural means of representing the uncertainty in the outcome of performed tasks, such as memory retrieval or classification, and (ii) as an automatic way of producing an economical representation of distributed information. We developed new models to better understand how the cerebral cortex processes information, and these models led to our theory. Common to these models is a neuron interaction function that alternates between excitatory and inhibitory neighborhoods. Our theory allows characteristics of the input environment to influence the structural development of the cortex. We view low-intensity chaotic activity as the a priori uncertain base condition of the cortex, resulting from the interaction of a multitude of stronger potential responses. Data, distinguishing one response from many others, drives bifurcations back toward less complex (stable) behavior. Stability appears as temporary bubble-like clusters within the boundaries of cortical columns and begins to propagate through frequency-sensitive and non-specific neurons, but this propagation is limited by destabilizing long-path connections. An original model of the post-natal development of ocular dominance columns in the striate cortex is presented and compared to autoradiographic images from the literature, with good agreement. Finally, experiments show that a computed update order outperforms traditional approaches in the pattern completion process.
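
    As an illustration of the kind of interaction function the dissertation describes (alternating excitatory and inhibitory neighborhoods), here is a small hypothetical sketch: a ring of rate units with a damped-cosine lateral kernel. The kernel shape, decay, and dynamics are assumptions, not the dissertation's model.

```python
# Illustrative sketch only: a ring of rate units whose lateral interaction
# kernel alternates between excitatory and inhibitory neighbourhoods.
import numpy as np

N = 128
d = np.minimum(np.arange(N), N - np.arange(N))          # ring distance from unit 0
kernel = np.cos(2 * np.pi * d / 16) * np.exp(-d / 24)   # alternating E/I, decaying
W = np.stack([np.roll(kernel, i) for i in range(N)])    # circulant weight matrix

x = np.random.default_rng(1).random(N)                  # random initial activity
for _ in range(100):
    x = np.tanh(W @ x)                                  # simple rate dynamics
# Depending on the kernel gain, the ring settles into stable activity bumps or
# keeps oscillating, reflecting the stable/chaotic interplay discussed above.
```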

    Corticonic models of brain mechanisms underlying cognition and intelligence

    The concern of this review is brain theory or, more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide higher-level brain functions that underlie cognition and intelligence, (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment, (c) copes with the enormous number of such patterns encountered in a lifetime by means of dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from, (d) selects an attractor through a process of “conjugation” of the input pattern with the dynamics of the thalamo–cortical loop, (e) distinguishes between redundant (structured) and non-redundant (random) inputs that are void of information, (f) can do categorical perception when there is access to vast associative memory laid out in the association cortex with the help of the hippocampus, and (g) makes use of “computation” at the edge of chaos and information-driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest that a Parametrically Coupled Logistic Map Network (PCLMN) is a minimal model of the thalamo–cortical complex and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit abstract, model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of numerical simulations and the conclusions drawn in the first part are linked to the most directly relevant works and views of other workers. What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long-sought-after brain code underlying intelligence and other higher-level brain functions. Physics of Life Reviews 4 (2007) 223–252. © 2007 Elsevier B.V. All rights reserved.
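
    A parametrically coupled logistic map network can be sketched, under assumptions, as a set of logistic maps whose control parameters are modulated by the activity of coupled units. The coupling scheme and parameter ranges below are illustrative, not the review's exact equations.

```python
# Minimal PCLMN-style sketch: logistic maps whose parameters are modulated by
# neighbouring activity. Coupling and parameter ranges are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 100
x = rng.random(N)              # map states in (0, 1)
eps = 0.3                      # coupling strength (assumed)
a_lo, a_hi = 3.5, 4.0          # range of the logistic parameter (assumed)

# Sparse random coupling matrix, rows normalised to sum to at most one.
C = (rng.random((N, N)) < 0.1).astype(float)
np.fill_diagonal(C, 0.0)
C /= np.maximum(C.sum(axis=1, keepdims=True), 1.0)

for t in range(500):
    # Each unit's logistic parameter is modulated by its neighbours' activity.
    a = a_lo + (a_hi - a_lo) * ((1 - eps) * 0.5 + eps * (C @ x))
    x = a * x * (1 - x)        # logistic map update with unit-specific parameter
# Clamping a subset of units to an input pattern and watching where the rest of
# the network settles is one way to probe stimulus-specific attractors.
```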

    A Computational Predictor of Human Episodic Memory Based on a Theta Phase Precession Network

    In the rodent hippocampus, a phase precession phenomenon of place cell firing relative to the local field potential (LFP) theta rhythm is called “theta phase precession” and is considered to contribute to memory formation through spike-timing-dependent plasticity (STDP). In the primate hippocampus, on the other hand, the existence of theta phase precession is unclear. Our computational studies have demonstrated that theta phase precession dynamics could contribute to primate hippocampus-dependent memory formation, such as object–place association memory. In this paper, we evaluate human theta phase precession by using a combined theory–experiment analysis. Human memory recall of object–place associations was analyzed by an individual hippocampal network simulated with theta phase precession dynamics applied to human eye movement and EEG data recorded during memory encoding. It was found that the computational recall of the resultant network is significantly correlated with human memory recall performance, while other computational predictors without theta phase precession are not significantly correlated with subsequent memory recall. Moreover, this correlation is larger than the correlation between human recall and traditional experimental predictors. These results indicate that theta phase precession dynamics are necessary for a better prediction of human recall performance from eye movement and EEG data. In this analysis, theta phase precession dynamics appear useful for extracting memory-dependent components from the spatio-temporal pattern of eye movement and EEG data as an associative network. Theta phase precession may be a neural dynamic common to rodents and humans for the formation of environmental memories.
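
    The paper's predictor itself is not reproduced here, but the underlying phase precession relationship can be sketched as follows; the field extent and phase range are chosen only for illustration.

```python
# Conceptual sketch of theta phase precession (not the paper's predictor):
# within a place field, the preferred firing phase relative to the theta
# rhythm advances roughly linearly with the fraction of the field traversed.
import numpy as np

field_start, field_end = 0.0, 0.5      # place field spans 0.5 m (assumed)

def firing_phase(position):
    """Preferred firing phase (radians) as a function of position in the field."""
    frac = np.clip((position - field_start) / (field_end - field_start), 0.0, 1.0)
    return 2 * np.pi * (1.0 - frac)    # precesses from ~360 deg toward 0 deg

for p in np.linspace(field_start, field_end, 6):
    print(f"position {p:.2f} m -> phase {np.degrees(firing_phase(p)):6.1f} deg")
# Because fields entered later fire at later phases within each theta cycle,
# the spatial order of visited places becomes a temporal spike order that
# STDP can turn into directed synaptic associations.
```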

    Fast and robust learning by reinforcement signals: explorations in the insect brain

    We propose a model for pattern recognition in the insect brain. Starting from a well-known body of knowledge about the insect brain, we investigate which of its potentially present features may be useful for learning input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable both for fast learning of new stimuli and for reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
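
    The mushroom-body architecture the model builds on is often abstracted as a sparse random expansion followed by an output layer trained with a reinforcement (error) signal. The sketch below follows that abstraction with assumed layer sizes, sparsity, and learning rate; it is not the authors' implementation.

```python
# Mushroom-body-style classifier sketch: fixed sparse random expansion
# ("Kenyon cell" layer) plus an output layer updated by an error signal.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_kc, n_out = 784, 2000, 10        # MNIST input, expansion layer, classes
W_in = (rng.random((n_kc, n_in)) < 0.01).astype(float)   # sparse random fan-in
W_out = np.zeros((n_out, n_kc))
lr = 0.05                                 # learning rate (assumed)

def kenyon(x):
    """Sparse expansion: keep only the top 5% most activated units."""
    h = W_in @ x
    return (h >= np.quantile(h, 0.95)).astype(float)

def train_step(x, label):
    k = kenyon(x)
    pred = int(np.argmax(W_out @ k))
    # Reinforcement signal: strengthen the correct class, weaken the wrong winner.
    if pred != label:
        W_out[label] += lr * k
        W_out[pred] -= lr * k
    return pred == label

# Usage: call train_step(image.ravel() / 255.0, label) over the training set.
```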

    Off-line simulation inspires insight: a neurodynamics approach to efficient robot task learning

    There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning by demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. The work was funded by FCT - Fundacao para a Ciencia e Tecnologia, through the PhD Grants SFRH/BD/48529/2008 and SFRH/BD/41179/2007, the Project NETT: Neural Engineering Transformative Technologies, EU-FP7 ITN (nr. 289146), and the FCT Research Center CMAT (PEst-OE/MAT/UI0013/2014).
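
    The dynamic neural field component can be illustrated with a generic Amari-style field whose lateral excitation sustains an activation bump after a transient input, the activation-based memory mechanism such models rely on. All parameters below are assumptions, and the robot-specific architecture is omitted.

```python
# Generic Amari-style dynamic neural field sketch (not the paper's full model):
# a 1-D field u(x, t) whose lateral interactions can hold a self-sustained
# activation bump as a memory trace after the input is removed.
import numpy as np

N, dx, dt, tau = 200, 0.1, 0.05, 1.0      # grid, steps, time constant (assumed)
h = -2.0                                  # resting level (assumed)
xs = np.arange(N) * dx

def kernel(d, a_exc=3.0, s_exc=0.5, g_inh=0.5):
    """Local excitation with constant global inhibition (assumed parameters)."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - g_inh

W = kernel(np.abs(xs[:, None] - xs[None, :])) * dx   # interaction matrix

u = np.full(N, h)
stim = 5.0 * np.exp(-(xs - 10.0)**2 / 0.5)           # transient localised input
for t in range(400):
    s = stim if t < 100 else 0.0                     # input removed after 100 steps
    f = (u > 0).astype(float)                        # Heaviside output nonlinearity
    u += dt / tau * (-u + h + W @ f + s)
# With sufficiently strong excitation, the bump near x = 10 persists after the
# stimulus is removed: an activation-based memory of the transient input.
```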

    Fractals in the Nervous System: conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.