
    Analysis of various steady states and transient phenomena in digital maps: foundation for theory construction and engineering applications

    Summary of research results: We have studied the analysis and implementation of digital maps (Dmaps). The major results are as follows. First, we developed an analysis method based on two feature quantities: the first characterizes the plentifulness of periodic orbits and the second characterizes the stability of those orbits. Applying the method, typical Dmap examples were analyzed and the basic phenomena were classified. Second, we developed a simple evolutionary algorithm that synthesizes a desired Dmap. Each individual of the algorithm corresponds to one Dmap, and the number of individuals can vary flexibly. The algorithm's validity was confirmed on typical example problems. Third, we developed a method for realizing Dmaps by means of digital spiking neurons (DSNs). A DSN consists of two shift registers connected by a wiring circuit and can generate various periodic spike trains. A simple FPGA-based test circuit was constructed and the DSN dynamics were confirmed.
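
    To make the two feature quantities concrete, here is a minimal Python sketch under our own simplified definitions (the paper's exact quantities may differ): "richness" as the fraction of points lying on periodic orbits, and a crude "stability" proxy as the rate at which transient points are absorbed into those orbits.

```python
# Illustrative sketch (not the authors' exact definitions): two feature
# quantities for a digital map f: {0,...,N-1} -> {0,...,N-1}.

def periodic_points(f):
    """Return the set of periodic points of a finite map given as a list."""
    n = len(f)
    # Iterate the image n times; every surviving point lies on a cycle.
    points = set(range(n))
    for _ in range(n):
        points = {f[x] for x in points}
    return points

def features(f):
    n = len(f)
    per = periodic_points(f)
    richness = len(per) / n   # fraction of points on periodic orbits
    # Crude stability proxy: fraction of transient points mapping into
    # a periodic orbit in a single step (how fast orbits absorb points).
    transient = [x for x in range(n) if x not in per]
    stability = (sum(f[x] in per for x in transient) / len(transient)
                 if transient else 1.0)
    return richness, stability

# Example: a digitized logistic-like map on N = 64 points.
N = 64
f = [round(3.9 * (x / (N - 1)) * (1 - x / (N - 1)) * (N - 1)) % N
     for x in range(N)]
print(features(f))
```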

    Complex Dynamics in Dedicated / Multifunctional Neural Networks and Chaotic Nonlinear Systems

    We study complex behaviors arising in neuroscience and other nonlinear systems by combining dynamical systems analysis with modern computational approaches, including GPU parallelization and unsupervised machine learning. To gain insight into the behaviors of brain networks and complex central pattern generators (CPGs), it is important to understand the dynamical principles regulating individual neurons as well as the basic structural and functional building blocks of neural networks. In the first section, we discuss how symbolic methods can help us analyze neural dynamics such as bursting, tonic spiking, and chaotic mixed-mode oscillations in various models of individual neurons, the bifurcations that underlie transitions between activity types, and emergent network phenomena arising from synergistic interactions in realistic neural circuits, such as network bursting from non-intrinsic bursters. The second section focuses on the origin and coexistence of multistable rhythms in oscillatory neural networks of inhibitory-coupled cells. We discuss how network connectivity and the intrinsic properties of the cells affect the dynamics, and how even simple circuits can exhibit a variety of mono- and multistable rhythms, including pacemakers, half-center oscillators, multiple traveling waves, fully synchronous states, and various chimeras. Our analyses can help generate verifiable hypotheses for neurophysiological experiments on central pattern generators. In the last section, we demonstrate the interdisciplinary nature of this research by applying these techniques to identify the universal principles governing both simple and complex dynamics, and chaotic structure, in diverse nonlinear systems. Using a classical example from nonlinear laser optics, we elaborate on the multiplicity and self-similarity of key organizing structures in 2D parameter space, such as homoclinic and heteroclinic bifurcation curves, Bykov T-point spirals, and inclination flips. This is followed by detailed computational reconstructions of the spatial organization and 3D embedding of bifurcation surfaces, parametric saddles, and isolated closed curves (isolas). The generality of our modeling approaches could lead to novel methodologies and nonlinear science applications in biological, medical, and engineering systems.
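
    As one concrete illustration of the symbolic methods mentioned for separating bursting from tonic spiking, the following sketch (our construction, not the thesis code) thresholds a voltage trace into spike times and encodes interspike intervals as short/long symbols; a repeating short-short-...-long motif then signals bursting.

```python
# Minimal symbolic encoding of spiking activity; thresholds and the
# gap factor are illustrative assumptions.

import numpy as np

def spike_times(v, t, thresh=0.0):
    """Times where v crosses thresh from below."""
    up = (v[:-1] < thresh) & (v[1:] >= thresh)
    return t[1:][up]

def symbolize(isis, gap_factor=3.0):
    """'s' = short ISI (within a burst), 'l' = long ISI (between bursts)."""
    cut = gap_factor * np.median(isis)
    return ''.join('l' if d > cut else 's' for d in isis)

# Toy trace: three bursts of five spikes each.
t = np.linspace(0, 3, 30000)
spikes = np.concatenate([k + 0.02 * np.arange(5) for k in (0.2, 1.2, 2.2)])
v = -1 + 2 * np.sum([np.exp(-((t - s) / 1e-3) ** 2) for s in spikes], axis=0)

word = symbolize(np.diff(spike_times(v, t)))
print(word)   # 'sssslssssl...' : the repeated 'ssssl' motif indicates bursting
```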

    Nonlinear dynamics of pattern recognition and optimization

    We associate learning in living systems with the shaping of the velocity vector field of a dynamical system in response to external, generally random, stimuli. We consider various approaches to implementing a system that is able to adapt the whole vector field, rather than just parts of it, which is a drawback of the most common current learning systems: artificial neural networks. This leads us to propose the mathematical concept of self-shaping dynamical systems. To begin, there is an empty phase space with no attractors, and thus a zero velocity vector field. Upon receiving random stimuli, the vector field deforms and eventually becomes smooth and deterministic, despite the random nature of the applied force, while the phase space develops various geometrical objects. We consider the simplest of these: gradient self-shaping systems, whose vector field is the gradient of some energy function, which under certain conditions develops into the multi-dimensional probability density distribution of the input. We explain how self-shaping systems are relevant to artificial neural networks. Firstly, we show that they can potentially perform pattern recognition tasks typically implemented by Hopfield neural networks, but without any supervision, on-line, and without developing spurious minima in the phase space. Secondly, they can reconstruct the probability density distribution of input signals, like probabilistic neural networks, but without the need for new training patterns to enter the network as new hardware units. We therefore regard self-shaping systems as a generalisation of the neural network concept, achieved by abandoning the "rigid units, flexible couplings" paradigm and making the vector field fully flexible and amenable to external force. It is not clear how such systems could be implemented in hardware, so this new concept presents an engineering challenge. It could also become an alternative paradigm for the modelling of both living and learning systems. Mathematically, it is interesting to find how a self-shaping system could develop non-trivial objects in the phase space, such as periodic orbits or chaotic attractors. We investigate how a delayed vector field could form such objects. We show that this method produces chaos in a class of systems that have very simple dynamics in the non-delayed case. We also demonstrate the coexistence of bounded and unbounded solutions depending on the initial conditions and the value of the delay. Finally, we speculate about how such a method could be used in global optimization.
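
    The gradient self-shaping idea can be sketched directly: each stimulus deepens an energy landscape locally, so the negative gradient field comes to reflect the input probability density. The kernel shape, learning rate, and two-cluster stimulus below are our illustrative assumptions, not the thesis's equations.

```python
# Hedged sketch of a gradient self-shaping system in one dimension.

import numpy as np

rng = np.random.default_rng(0)
sigma, eta = 0.3, 0.05
centers, weights = [], []

def energy_grad(x):
    """Gradient of E(x) = -sum_i w_i * exp(-|x - c_i|^2 / (2 sigma^2))."""
    g = np.zeros_like(x)
    for c, w in zip(centers, weights):
        g += w * (x - c) / sigma**2 * np.exp(-np.sum((x - c)**2) / (2 * sigma**2))
    return g

# Shaping phase: stimuli from a two-cluster density deform the field.
for _ in range(200):
    s = rng.normal([1.5, 0.0][rng.integers(2)], 0.2, size=1)
    centers.append(s)
    weights.append(eta)

# Relaxation phase: a state point descends the learned landscape.
x = np.array([0.7])
for _ in range(500):
    x -= 0.1 * energy_grad(x)
print(x)   # settles near one of the density modes (~0.0 or ~1.5)
```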

    A dynamical model of the distributed interaction of intracellular signals

    A major goal of modern cell biology is to understand the regulation of cell behavior in the reductive terms of all the underlying molecular interactions. This aim is made explicit by the assertion that understanding a cell's response to stimuli requires a full inventory of details. Currently, no satisfactory explanation exists for why cells exhibit only a relatively small number of different behavioral modes. In this thesis, a discrete dynamical model is developed to study interactions between certain types of signaling proteins. The model is generic and connectionist in nature and incorporates important concepts from the biology. The emphasis is on examining dynamic properties that occur on short time scales and are independent of gene expression. A number of modeling assumptions are made; however, the framework is flexible enough to be extended in future studies. The dynamical states of the system are explored both computationally and analytically. Monte Carlo methods are used to study the state space of simulated networks over selected parameter regimes. Networks show a tendency to settle into fixed points or oscillations over a wide range of initial conditions. A genetic algorithm (GA) is also designed to explore properties of networks. It evolves a population of modeled cells, selecting and ranking them according to a fitness function designed to mimic features of real biological evolution. An analogue of protein domain shuffling is used as the crossover operator, and cells are reproduced asexually. The effects of changing the parameters of the GA are explored. A clustering algorithm is developed to test the effectiveness of the GA search at generating cells that display a limited number of different behavioral modes. Stability properties of equilibrium states in small networks are analyzed, and the ability to generalize these techniques to larger networks is discussed. Topological properties of networks generated by the GA are examined, and structural properties of networks are used to provide insight into their dynamic properties. The dynamic attractors exhibited by such signaling networks may provide a framework for understanding why cells persist in only a small number of stable behavioral modes.
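
    The Monte Carlo portion of such a study can be sketched with a toy update rule (a random threshold network of our own choosing, not the thesis's model): sample many initial conditions, iterate the deterministic dynamics, and histogram the attractor periods, where period 1 is a fixed point.

```python
# Illustrative Monte Carlo exploration of a discrete network's state space.

import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.choice([-1, 0, 1], size=(n, n))    # activating/inhibiting links

def step(state):
    return (W @ state > 0).astype(int)     # threshold update rule

def attractor_period(state, max_steps=300):
    """Iterate until a state repeats; return the cycle length."""
    seen = {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return t - seen[key]           # 1 = fixed point, >1 = oscillation
        seen[key] = t
        state = step(state)
    return None  # unreachable here: 2^8 states guarantee a repeat

periods = [attractor_period(rng.integers(0, 2, n)) for _ in range(1000)]
print({p: periods.count(p) for p in set(periods)})   # period histogram
```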

    Multi-Scale Mathematical Modelling of Brain Networks in Alzheimer's Disease

    Perturbations to brain network dynamics on a range of spatial and temporal scales are believed to underpin neurological disorders such as Alzheimer's disease (AD). This thesis combines quantitative data analysis with tools such as dynamical systems and graph theory to understand how the network dynamics of the brain are altered in AD and in experimental models of related pathologies. Firstly, we use a biophysical neuron model to elucidate the ionic mechanisms underpinning alterations to the dynamics of principal neurons in the brain's spatial navigation systems in an animal model of tauopathy. To uncover how synaptic deficits result in alterations to brain dynamics, we subsequently study an animal model featuring local and long-range synaptic degeneration. Synchronous activity (functional connectivity; FC) between neurons within a region of the cortex is analysed using two-photon calcium imaging data, while long-range FC between regions of the brain is analysed using EEG data. Furthermore, a computational model is used to study relationships between networks on these different spatial scales. The latter half of this thesis studies EEG to characterize alterations to macro-scale brain dynamics in clinical AD. Spectral and FC measures are correlated with cognitive test scores to study the hypothesis that impaired integration of the brain's processing systems underpins cognitive impairment in AD. Whole-brain computational modelling is used to gain insight into the role of spectral slowing in FC, and to elucidate potential synaptic mechanisms of FC differences in AD. On a finer temporal scale, microstate analyses are used to identify changes to the rapid transitioning behaviour of the brain's resting state in AD. Finally, the electrophysiological signatures of AD identified throughout the thesis are combined into a predictive model which can accurately separate people with AD from healthy controls based on their EEG, results which are validated on an independent patient cohort. Furthermore, we demonstrate in a small preliminary cohort that this model is a promising tool for predicting future conversion to AD in patients with mild cognitive impairment.
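
    As a sketch of one common EEG functional-connectivity pipeline of the kind described (the exact band, filter order, and connectivity metric here are our assumptions, not necessarily the thesis's), the code below band-limits each channel to the alpha range and correlates amplitude envelopes.

```python
# Minimal alpha-band amplitude-envelope FC sketch.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                        # sampling rate (Hz), assumed
b, a = butter(4, [8, 13], btype='band', fs=fs)    # alpha band

def fc_matrix(eeg):
    """eeg: (channels, samples) array -> (channels, channels) FC matrix."""
    filtered = filtfilt(b, a, eeg, axis=1)
    envelope = np.abs(hilbert(filtered, axis=1))
    return np.corrcoef(envelope)

# Toy data: 4 channels, 10 s of noise with a shared alpha rhythm in ch 0-1.
t = np.arange(0, 10, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
eeg = np.random.default_rng(2).normal(size=(4, t.size))
eeg[0] += alpha
eeg[1] += alpha
print(fc_matrix(eeg).round(2))   # channels 0 and 1 show elevated FC
```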

    Corticonic models of brain mechanisms underlying cognition and intelligence

    The concern of this review is brain theory or, more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide higher-level brain functions that underlie cognition and intelligence; (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment; (c) copes with the enormous number of such patterns encountered in a lifetime by means of a dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from; (d) selects an attractor through a process of "conjugation" of the input pattern with the dynamics of the thalamo-cortical loop; (e) distinguishes between redundant (structured) inputs and non-redundant (random) inputs that are void of information; (f) can perform categorical perception when there is access to the vast associative memory laid out in the association cortex with the help of the hippocampus; and (g) makes use of "computation" at the edge of chaos and information-driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest that a Parametrically Coupled Logistic Map network (PCLMN) is a minimal model of the thalamo-cortical complex, and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit abstract, model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of the numerical simulations and the conclusions drawn in the first part are linked to the most directly relevant works and views of other workers. What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long-sought-after brain code underlying intelligence and other higher-level brain functions. Physics of Life Reviews 4 (2007) 223–252.
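
    A minimal PCLMN-style network can be sketched as follows; the specific coupling form and parameter clipping are our own illustrative choices and not necessarily the review's exact equations. The defining feature is that network activity modulates each unit's logistic parameter rather than its state directly.

```python
# Hedged sketch of a parametrically coupled logistic map network.

import numpy as np

rng = np.random.default_rng(3)
n, steps = 16, 200
C = rng.normal(0, 0.1, size=(n, n))       # coupling weights
np.fill_diagonal(C, 0)

x = rng.uniform(0.1, 0.9, n)              # unit states in (0, 1)
history = np.empty((steps, n))
for t in range(steps):
    mu = np.clip(3.6 + C @ x, 2.5, 4.0)   # parameters driven by the network
    x = mu * x * (1 - x)                  # logistic update of each unit
    history[t] = x

# After a stimulus-dependent transient, the units settle into a collective
# attractor; different inputs select different attractors.
print(history[-5:].round(3))
```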

    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models in which memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. Comment: 41 pages, 10 figures.
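
    For the synchronous (Ising-like) special case that this line of work starts from, a Metropolis sampler is easy to sketch; the parameters below are arbitrary stand-ins, and a real fit would adjust them until the model's firing rates and pairwise correlations match the data. The paper's spatio-temporal models generalize this by sampling over blocks of time bins as well.

```python
# Metropolis sampling of a pairwise MaxEnt model over binary spike
# patterns, P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).

import numpy as np

rng = np.random.default_rng(4)
n = 10
h = rng.normal(-1.0, 0.3, n)              # arbitrary field parameters
J = rng.normal(0, 0.2, (n, n))
J = np.triu(J, 1)
J = J + J.T                               # symmetric couplings, zero diagonal

def energy(s):
    return -(h @ s + 0.5 * s @ J @ s)

s = rng.integers(0, 2, n).astype(float)
samples = []
for t in range(50000):
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] = 1 - s_new[i]               # flip one neuron
    if rng.random() < np.exp(energy(s) - energy(s_new)):
        s = s_new                         # Metropolis acceptance
    if t % 10 == 0:
        samples.append(s.copy())

samples = np.array(samples)
print("model firing rates:", samples.mean(axis=0).round(3))
```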

    Local Causal States and Discrete Coherent Structures

    Get PDF
    Coherent structures form spontaneously in nonlinear spatiotemporal systems and are found at all spatial scales in natural phenomena, from laboratory hydrodynamic flows and chemical reactions to ocean, atmosphere, and planetary climate dynamics. Phenomenologically, they appear as key components that organize the macroscopic behaviors in such systems. Despite a century of effort, they have eluded rigorous analysis and empirical prediction, with progress being made only recently. As a step toward this, we present a formal theory of coherent structures in fully-discrete dynamical field theories. It builds on the notion of structure introduced by computational mechanics, generalizing it to a local spatiotemporal setting. The analysis's main tool is the local causal states, which are used to uncover a system's hidden spatiotemporal symmetries and which identify coherent structures as spatially-localized deviations from those symmetries. The approach is behavior-driven in the sense that it does not rely on directly analyzing spatiotemporal equations of motion; rather, it considers only the spatiotemporal fields a system generates. As such, it offers an unsupervised approach to discovering and describing coherent structures. We illustrate the approach by analyzing coherent structures generated by elementary cellular automata, comparing the results with an earlier, dynamic-invariant-set approach that decomposes fields into domains, particles, and particle interactions. Comment: 27 pages, 10 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/dcs.ht
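
    To ground the behavior-driven setting, the sketch below generates the spacetime field of an elementary cellular automaton (rule 54, a classic source of particles) and extracts a past light cone: the raw object that local causal state reconstruction would cluster into equivalence classes. The reconstruction step itself is omitted here.

```python
# Generate an ECA spacetime field and extract one past light cone.

import numpy as np

def eca_field(rule, width=100, steps=100, seed=5):
    rng = np.random.default_rng(seed)
    table = [(rule >> k) & 1 for k in range(8)]    # lookup by neighborhood code
    field = np.empty((steps, width), dtype=np.uint8)
    field[0] = rng.integers(0, 2, width)
    for t in range(1, steps):
        l = np.roll(field[t - 1], 1)
        c = field[t - 1]
        r = np.roll(field[t - 1], -1)
        field[t] = np.take(table, 4 * l + 2 * c + r)
    return field

def past_lightcone(field, t, x, depth=3):
    """Cells at times t-1..t-depth that can influence site (t, x)."""
    width = field.shape[1]
    return [field[t - d, np.arange(x - d, x + d + 1) % width]
            for d in range(1, depth + 1)]

field = eca_field(54)
print(past_lightcone(field, t=50, x=50))
```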