
    Analysis of Oscillator Neural Networks for Sparsely Coded Phase Patterns

    We study a simple extended model of oscillator neural networks capable of storing sparsely coded phase patterns, in which information is encoded both in the mean firing rate and in the timing of spikes. Applying the methods of statistical neurodynamics to our model, we theoretically investigate the model's associative memory capability by evaluating its maximum storage capacities and deriving its basins of attraction. It is shown that, as in the Hopfield model, the storage capacity diverges as the activity level decreases. We consider various practically and theoretically important cases. For example, it is revealed that a dynamically adjusted threshold mechanism enhances the retrieval ability of the associative memory. It is also found that, under suitable conditions, the network can recall patterns even when patterns with different activity levels are stored at the same time. In addition, we examine the robustness with respect to damage of the synaptic connections. The validity of these theoretical results is confirmed by reasonable agreement with numerical simulations. Comment: 23 pages, 11 figures
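    The storage scheme described above can be illustrated with a toy simulation. The sketch below is not the authors' model; it is a generic discrete-time amplitude-phase associative memory in which sparse complex-valued patterns are stored in a Hebbian coupling matrix and retrieval uses a dynamically adjusted threshold that keeps the activity near the coding level. The parameter values (N, P, f), the update rule, and the noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 500, 10, 0.1        # oscillators, stored patterns, coding level (assumed values)

# Sparse phase patterns: amplitude 0/1 with mean activity f, random phase where active.
amp = (rng.random((P, N)) < f).astype(float)
xi = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, (P, N)))   # complex-valued patterns

# Generalized Hebbian coupling for phase patterns: C_ij = (1/N) sum_mu xi_i^mu * conj(xi_j^mu).
C = (xi.T @ xi.conj()) / N
np.fill_diagonal(C, 0)

# Initial state: pattern 0 with phase jitter and a few flipped units.
z = xi[0] * np.exp(1j * rng.normal(0, 0.5, N))
flip = rng.random(N) < 0.1
z[flip] = (1 - amp[0, flip]) * np.exp(1j * rng.uniform(0, 2 * np.pi, flip.sum()))

for _ in range(30):                              # discrete-time retrieval dynamics
    h = C @ z                                    # complex local field
    # Dynamic threshold: keep roughly N*f units active (cf. the adjusted-threshold mechanism).
    thr = np.sort(np.abs(h))[-int(N * f)]
    z = (np.abs(h) >= thr) * np.exp(1j * np.angle(h))

overlap = np.abs(np.vdot(xi[0], z)) / (N * f)    # retrieval quality for pattern 0
print(f"overlap with stored pattern: {overlap:.3f}")
```

    An overlap near 1 indicates successful recall (up to a global phase rotation); lowering f while increasing P probes the sparse-coding regime discussed in the abstract.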

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based on spike times alone, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for classical cases such as linear integrate-and-fire neuron models with excitatory and inhibitory interactions and different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
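    A minimal event-driven simulation can make the Markov-chain viewpoint concrete. The sketch below is an assumption-laden illustration rather than the paper's construction: it uses leaky integrate-and-fire neurons with escape noise (an exponential hazard of the membrane potential), samples each neuron's next spike time by thinning, and redraws all candidate spike times after every spike, so the simulation advances from spike to spike without integrating the membrane potential on a time grid. All parameter values and the hazard function are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 5, 2.0                                # neurons, simulated time in seconds (illustrative)
tau, v_th, dv, rate0 = 0.02, 0.5, 0.2, 20.0  # leak time constant, soft threshold, noise scale, base rate
v_reset = 0.0
W = rng.normal(0.0, 0.3, (N, N))             # synaptic weights, excitatory and inhibitory
np.fill_diagonal(W, 0.0)

def hazard(v):
    """Escape-noise firing intensity: increases smoothly with the membrane potential."""
    return rate0 * np.exp((v - v_th) / dv)

def sample_next_spike(t_now, v_now):
    """Sample the next spike time of a neuron whose potential relaxes as
    v(t) = v_now * exp(-(t - t_now)/tau), by thinning against a constant bound
    on the hazard (valid because |v| only decays between events)."""
    lam_max = hazard(max(v_now, 0.0))
    t = t_now
    while True:
        t += rng.exponential(1.0 / lam_max)
        if rng.random() < hazard(v_now * np.exp(-(t - t_now) / tau)) / lam_max:
            return t

v = rng.normal(0.4, 0.1, N)                  # membrane potentials at time 0
t_last = np.zeros(N)                         # time each potential was last updated
next_spike = np.array([sample_next_spike(0.0, v[i]) for i in range(N)])

spikes = []
while True:
    i = int(np.argmin(next_spike))
    t = next_spike[i]
    if t > T:
        break
    spikes.append((t, i))
    # Advance the network to the event: decay potentials, apply synaptic jumps, reset neuron i.
    v *= np.exp(-(t - t_last) / tau)
    t_last[:] = t
    v += W[:, i]
    v[i] = v_reset
    # The state has changed, so every neuron's next spike time is redrawn: this is the
    # Markov transition on spike times that the event-based description relies on.
    next_spike = np.array([sample_next_spike(t, v[j]) for j in range(N)])

print(f"{len(spikes)} spikes in {T} s; first few: {[(round(ts, 3), n) for ts, n in spikes[:5]]}")
```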

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Interferences in the Transformation of Reference Frames during a Posture Imitation Task

    We present a biologically inspired neural model addressing the problem of transformations across frames of reference in a posture imitation task. Our modeling is based on the hypothesis that imitation is mediated by two concurrent transformations selectively sensitive to spatial and anatomical cues. In contrast to classical approaches, we also assume that separate instances of this pair of transformations are responsible for the control of each side of the body. We devised an experimental paradigm which allowed us to model the interference patterns caused by the interaction between the anatomical imitative strategy on the one hand and the spatial imitative strategy on the other. The results from our simulation studies thus provide predictions of real behavioral responses.
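    The two imitative strategies can be stated in a few lines. The sketch below is only a schematic illustration of the frame-of-reference problem the model addresses, not the neural model itself: with imitator and demonstrator facing each other, the anatomical transformation reproduces the demonstrator's body-centred hand position (same limb), the spatial transformation mirrors the lateral coordinate, and interference is caricatured as a weighted blend of the two. Coordinates, weights, and the blending rule are assumptions.

```python
import numpy as np

def anatomical_target(demo_pos):
    """Anatomical strategy: reproduce the demonstrator's hand position in the
    demonstrator's own body-centred frame, i.e. use the corresponding limb."""
    return np.asarray(demo_pos, dtype=float)

def spatial_target(demo_pos):
    """Spatial (mirror) strategy with the two agents facing each other: reach the
    position seen on the same side of external space, which flips the lateral (x)
    coordinate when expressed in the imitator's body-centred frame."""
    x, y, z = demo_pos
    return np.array([-x, y, z])

def imitation_command(demo_pos, w_spatial=0.5):
    """Interference caricatured as a weighted combination of the two concurrent
    transformations: w_spatial = 1 is pure spatial, 0 is pure anatomical imitation."""
    return (w_spatial * spatial_target(demo_pos)
            + (1.0 - w_spatial) * anatomical_target(demo_pos))

# Demonstrator's right hand: 30 cm to the demonstrator's right, 40 cm forward, shoulder height.
demo_hand = [0.30, 0.40, 1.40]
print("anatomical:", anatomical_target(demo_hand))
print("spatial   :", spatial_target(demo_hand))
print("blended   :", imitation_command(demo_hand, w_spatial=0.7))
```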

    Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions

    Interaction Computing (IC) is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are (1) to identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this, and (2) to use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in Systems Biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle, and p53-mdm2 genetic regulation constructed from Systems Biology models have canonically associated algebraic structures: transformation semigroups. These contain permutation groups (local substructures exhibiting symmetry) that correspond to "pools of reversibility". These natural subsystems are related to one another in a hierarchical manner by the notion of "weak control". We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-abelian groups (SNAGs) are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.
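    The algebraic objects mentioned here are easy to compute for toy automata. The sketch below does not reproduce the paper's lac operon, Krebs cycle, or p53-mdm2 models; it builds the transformation semigroup generated by two maps on a three-state set (a reversible cyclic step and an irreversible reset) and picks out the permutation elements, i.e. the kind of local "pool of reversibility" the abstract refers to. The choice of states and generators is an invented example.

```python
def compose(f, g):
    """Compose transformations on states {0, ..., n-1}: apply f, then g (tuples as maps)."""
    return tuple(g[f[s]] for s in range(len(f)))

def semigroup_closure(generators):
    """Smallest set of transformations containing the generators and closed under composition."""
    elems, frontier = set(generators), set(generators)
    while frontier:
        new = {compose(a, b) for a in elems for b in frontier}
        new |= {compose(b, a) for b in frontier for a in elems}
        frontier = new - elems
        elems |= frontier
    return elems

# Toy "automaton" on three states: a cyclic step (reversible) and a reset (irreversible).
step = (1, 2, 0)    # s -> s + 1 (mod 3)
reset = (0, 0, 0)   # every state collapses to state 0

S = semigroup_closure([step, reset])
perms = {f for f in S if len(set(f)) == len(f)}   # bijections: a local "pool of reversibility"

print(f"semigroup has {len(S)} elements, of which {len(perms)} are permutations")
print("permutation elements:", sorted(perms))
```

    For this toy example the closure contains the cyclic group on three states alongside the collapsing constant maps; the paper's weak control hierarchy organizes such permutation subgroups within much larger semigroups derived from biological models.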

    Observation of Static Pictures of Dynamic Actions Enhances the Activity of Movement-Related Brain Areas

    Physiological studies of perfectly still observers have shown interesting correlations between the increasing effortfulness of observed actions and increases in heart and respiration rates. Not much is known about the cortical response induced by observing effortful actions. The aim of this study was to investigate the time course and neural correlates of the perception of implied motion, by presenting 260 pictures of human actions differing in degree of dynamism and muscular exertion. ERPs were recorded from 128 sites in young male and female adults engaged in a secondary perceptual task. Our results indicate that even when the stimulus shows no explicit motion, observation of static photographs of human actions with implied motion produces a clear increase in cortical activation, manifest in a long-lasting positivity (LP) between 350–600 ms that is much larger for dynamic than for less dynamic actions, especially in men. A swLORETA linear inverse solution computed on the dynamic-minus-static difference wave in the time window 380–430 ms showed that a series of regions was activated, including the right V5/MT, left EBA, left STS (BA38), left premotor (BA6) and motor (BA4) areas, and cingulate and IF cortex. Overall, the data suggest that the corresponding mirror neurons respond more strongly to implied dynamic than to less dynamic actions. The sex difference might be partially cultural and reflect a preference of young adult males for highly dynamic actions depicting intense muscular activity, or a sporty context.
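    For readers unfamiliar with the ERP measures involved, the sketch below shows the kind of computation behind a dynamic-minus-static difference wave and a mean-amplitude measure in a latency window. It operates on synthetic arrays, not the study's recordings, and does not reproduce the swLORETA source localization; the channel count and window limits follow the abstract, while the sampling rate, epoch length, and data values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for averaged ERPs: (channel, time sample) per condition, 128 channels,
# epoch from -100 to 800 ms at 500 Hz. These arrays are NOT the study's data.
sfreq, t0, n_chan = 500, -0.1, 128
n_times = int(0.9 * sfreq)
times = t0 + np.arange(n_times) / sfreq
erp_dynamic = rng.normal(0.0, 1.0, (n_chan, n_times))   # microvolts
erp_static = rng.normal(0.0, 1.0, (n_chan, n_times))

# Dynamic-minus-static difference wave, the quantity fed to the source analysis.
diff_wave = erp_dynamic - erp_static

def mean_amplitude(data, times, t_start, t_end):
    """Mean amplitude per channel within a latency window (in seconds)."""
    mask = (times >= t_start) & (times <= t_end)
    return data[:, mask].mean(axis=1)

lp_350_600 = mean_amplitude(diff_wave, times, 0.350, 0.600)   # LP window named in the abstract
win_380_430 = mean_amplitude(diff_wave, times, 0.380, 0.430)  # window used for swLORETA
print("largest LP (350-600 ms) difference at channel:", int(np.argmax(lp_350_600)))
print("largest 380-430 ms difference at channel:", int(np.argmax(win_380_430)))
```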

    Complex type 4 structure changing dynamics of digital agents: Nash equilibria of a game with arms race in innovations

    The new digital economy has renewed interest in how digital agents can innovate. This follows the legacy of John von Neumann's dynamical systems theory of complex biological systems as computation. The Gödel-Turing-Post (GTP) logic is shown to be necessary to generate the innovation-based, structure-changing Type 4 dynamics of the Wolfram-Chomsky schema. Two syntactic procedures of GTP logic permit digital agents to exit from listable sets of digital technologies to produce novelty and surprises. The first is meta-analysis or offline simulation. The second is a fixed point with a two-place encoding of negation or opposition, referred to as the Gödel sentence. It is postulated that in phenomena ranging from the genome to human proteanism, the Gödel sentence is a ubiquitous syntactic construction without which escape from hostile agents qua the Liar is impossible and digital agents become entrained within fixed repertoires. The only recursive best-response function of a 2-person adversarial game that can implement strategic innovation in the lock-step formation of an arms race is the productive function of Emil Post's [58] set-theoretic proof of the Gödel incompleteness result. This overturns the view of game theorists that surprise and innovation cannot be a Nash equilibrium of a game.

    Stroke Rehabilitation Reaches a Threshold

    Motor training with the upper limb affected by stroke partially reverses the loss of cortical representation after lesion and has been proposed to increase spontaneous arm use. Moreover, repeated attempts to use the affected hand in daily activities create a form of practice that can potentially lead to further improvement in motor performance. We thus hypothesized that if motor retraining after stroke increases spontaneous arm use sufficiently, then the patient will enter a virtuous circle in which spontaneous arm use and motor performance reinforce each other. In contrast, if the dose of therapy is not sufficient to bring spontaneous use above threshold, then performance will not increase and the patient will further develop compensatory strategies with the less affected hand. To refine this hypothesis, we developed a computational model of bilateral hand use in arm reaching to study the interactions between adaptive decision making and motor relearning after motor cortex lesion. The model contains a left and a right motor cortex, each controlling the opposite arm, and a single action choice module. The action choice module learns, via reinforcement learning, the value of using each arm for reaching in specific directions. Each motor cortex uses a neural population code to specify the initial direction along which the contralateral hand moves towards a target. The motor cortex learns to minimize directional errors and to maximize neuronal activity for each movement. The derived learning rule accounts for the reversal of the loss of cortical representation after rehabilitation and the increase of this loss after stroke with insufficient rehabilitation. Further, our model exhibits nonlinear and bistable behavior: if natural recovery, motor training, or both bring performance above a certain threshold, then training can be stopped, as the repeated spontaneous arm use provides a form of motor learning that further bootstraps performance and spontaneous use. Below this threshold, motor training is “in vain”: there is little spontaneous arm use after training, the model exhibits learned nonuse, and compensatory movements with the less affected hand are reinforced. By exploring the nonlinear dynamics of stroke recovery using a biologically plausible neural model that accounts for the reversal of the loss of motor cortex representation following rehabilitation, or the lack thereof, we can explain previously hard-to-reconcile data on spontaneous arm use in stroke recovery. Further, our threshold prediction could be tested with an adaptive train–wait–train paradigm: if spontaneous arm use has increased in the “wait” period, then the threshold has been reached and rehabilitation can be stopped. If spontaneous arm use is still low or has decreased, then another bout of rehabilitation is to be provided.
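    The threshold behavior described above can be reproduced qualitatively by a much smaller caricature than the full model. The sketch below is not the authors' model (no population code, no cortical map): it couples a scalar "performance" of the affected arm to a two-option value-based choice between arms, with use-dependent improvement and slow loss without use. Depending on the performance level at the end of therapy, the simulation settles either into a virtuous circle of increasing use or into learned nonuse. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(p0, n_trials=4000, alpha=0.1, beta=8.0, lr=0.02, decay=0.003):
    """Toy use/performance loop: p is the affected arm's performance in [0, 1];
    q[0] and q[1] are learned action values for the affected and less affected arm."""
    p = p0
    q = np.array([p0, 0.7])                 # the less affected arm yields a steady reward near 0.7
    for _ in range(n_trials):
        # Softmax (logistic) choice between the two arms based on their learned values.
        prob_affected = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        if rng.random() < prob_affected:
            reward = p + rng.normal(0, 0.05)
            p = min(1.0, p + lr * (1.0 - p))          # use-dependent improvement
            q[0] += alpha * (reward - q[0])
        else:
            reward = 0.7 + rng.normal(0, 0.05)
            p = max(0.0, p - decay * p)               # slow loss of performance without use
            q[1] += alpha * (reward - q[1])
    return p, prob_affected

for p0 in (0.2, 0.5, 0.8):                            # performance at the end of therapy
    p_final, use = simulate(p0)
    print(f"post-therapy performance {p0:.1f} -> final {p_final:.2f}, "
          f"P(choose affected arm) = {use:.2f}")
```

    Sweeping p0 finely locates the threshold of this toy system; in the paper's terms, starting below it corresponds to training "in vain", starting above it to the virtuous circle in which use and performance bootstrap each other.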

    Do Postures of Distal Effectors Affect the Control of Actions of Other Distal Effectors? Evidence for a System of Interactions between Hand and Mouth

    The present study aimed at determining whether, in healthy humans, postures assumed by distal effectors affect the control of a subsequent grasp executed with other distal effectors. In experiments 1 and 2, participants reached different objects with their head and grasped them with their mouth, after assuming different hand postures. The postures could be implicitly associated with interactions with large or small objects. The kinematics of lip shaping during grasp varied congruently with the hand posture, i.e. lip shaping was larger or smaller when the hand posture could be associated with grasping large or small objects, respectively. In experiments 3 and 4, participants reached and grasped different objects with their hand, after assuming postures of mouth aperture or closure (experiment 3) or postures of toe extension or flexion (experiment 4). The mouth postures affected the kinematics of finger shaping during grasp: larger finger shaping corresponded to an open mouth and smaller finger shaping to a closed mouth. In contrast, the foot postures did not influence the hand grasp kinematics. Finally, in experiment 5 participants reached and grasped different objects with their hand while pronouncing open or closed vowels, as verified by analysis of their vocal spectra. Open and closed vowels induced larger and smaller finger shaping, respectively. In all experiments, postures of the distal effectors induced no effect, or only unspecific effects, on the kinematics of the proximal/axial reach component. The data from the present study support the hypothesis that there exists a system involved in establishing interactions between movements and postures of hand and mouth. This system might have been used to transfer a repertoire of hand gestures to mouth articulation postures during language evolution and, in modern humans, it may have evolved into a system controlling the interactions between speech and gestures.