
    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades; dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters; these make neural systems adaptive and flexible, and are critical to their biological function; (ii) in contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown; (iii) many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity; (iv) the network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic; (v) since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
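
    As a concrete illustration of the kind of dynamical model of neural rhythmic behavior surveyed above, the sketch below integrates the FitzHugh-Nagumo equations, a classic two-variable model of neural excitability. The model choice, parameter values, and integration scheme are illustrative assumptions, not taken from the review itself.

    # Minimal sketch: FitzHugh-Nagumo model of neural rhythmic activity.
    # Model choice and parameters are illustrative, not from the reviewed work.
    import numpy as np

    def fitzhugh_nagumo(v, w, i_ext, a=0.7, b=0.8, tau=12.5):
        """One evaluation of the FitzHugh-Nagumo vector field."""
        dv = v - v**3 / 3 - w + i_ext    # fast voltage-like variable
        dw = (v + a - b * w) / tau       # slow recovery variable
        return dv, dw

    v, w = -1.0, 1.0
    dt, i_ext = 0.01, 0.5                # constant drive puts the model on a limit cycle
    trace = []
    for _ in range(50000):
        dv, dw = fitzhugh_nagumo(v, w, i_ext)
        v, w = v + dt * dv, w + dt * dw  # forward Euler step
        trace.append(v)

    print(f"v range over the run: [{min(trace):.2f}, {max(trace):.2f}]")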

    Evolutionary robotics and neuroscience

    No description supplied

    A Theory of Cortical Neural Processing.

    This dissertation puts forth an original theory of cortical neural processing that is unique in its view of the interplay of chaotic and stable oscillatory neurodynamics and is meant to stimulate new ideas in artificial neural network modeling. Our theory is the first to suggest two new purposes for chaotic neurodynamics: (i) as a natural means of representing the uncertainty in the outcome of performed tasks, such as memory retrieval or classification, and (ii) as an automatic way of producing an economical representation of distributed information. To better understand how the cerebral cortex processes information, we developed new models, and these led to our theory. Common to these models is a neuron interaction function that alternates between excitatory and inhibitory neighborhoods. Our theory allows characteristics of the input environment to influence the structural development of the cortex. We view low-intensity chaotic activity as the a priori uncertain base condition of the cortex, resulting from the interaction of a multitude of stronger potential responses. Data distinguishing one response from many others drives bifurcations back toward less complex (stable) behavior. Stability appears as temporary bubble-like clusters within the boundaries of cortical columns and begins to propagate through frequency-sensitive and non-specific neurons, but this propagation is limited by destabilizing long-path connections. An original model of the post-natal development of ocular dominance columns in the striate cortex is presented and compared to autoradiographic images from the literature, with good matching results. Finally, experiments are shown to favor computed update order over traditional approaches for better performance of the pattern completion process.
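
    The alternating excitatory/inhibitory interaction function is the load-bearing ingredient of these models. The sketch below shows one common way to realize such a function, a distance-dependent damped cosine; this concrete form and its parameters are assumptions for illustration, not the dissertation's actual kernel.

    # Illustrative sketch of a neuron interaction function that alternates
    # between excitatory and inhibitory neighborhoods with distance.
    # The damped-cosine form is an assumed example, not the dissertation's kernel.
    import numpy as np

    def interaction(distance, strength=1.0, wavelength=4.0, decay=6.0):
        """Excitatory near zero, then alternating in sign as distance grows."""
        return strength * np.exp(-distance / decay) * np.cos(2 * np.pi * distance / wavelength)

    distances = np.arange(0, 13)
    weights = interaction(distances.astype(float))
    for d, w in zip(distances, weights):
        kind = "excitatory" if w > 0 else "inhibitory"
        print(f"distance {d:2d}: weight {w:+.3f} ({kind})")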

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
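
    Power-law scaling is the empirical signature tying these themes together. The sketch below shows the basic log-log regression used to estimate a scaling exponent; the synthetic data and the exponent are illustrative assumptions.

    # Minimal sketch of the log-log regression used to test for power-law
    # scaling, the statistical signature of fractality discussed above.
    # The synthetic data and exponent are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    scales = np.logspace(0, 3, 30)            # e.g., time scales or box sizes
    true_exponent = -1.5
    measure = scales**true_exponent * np.exp(rng.normal(0, 0.1, scales.size))

    # A straight line in log-log coordinates indicates power-law scaling;
    # its slope estimates the scaling exponent.
    slope, intercept = np.polyfit(np.log(scales), np.log(measure), 1)
    print(f"estimated scaling exponent: {slope:.2f} (true: {true_exponent})")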

    Synchrony and bifurcations in coupled dynamical systems and effects of time delay

    Over the past few decades, dynamics on networks has grown into a rapidly expanding branch of mathematics with applications in various disciplines such as physics, biology, and sociology. The functioning of many networks relies heavily on the ability to synchronize the network's nodes. More precisely, the existence and the transverse stability of the synchronous manifold are essential properties. Only in the last few years have researchers begun to untangle the relation between the coupling structure of a network, given by a (di-)graph, and the stability properties of synchronous states. This is the central theme of this dissertation. I first present results towards a classification of the links in a directed, diffusive network according to their impact on the stability of synchronization. Then I investigate a complex bifurcation scenario observed in a directed ring of Stuart-Landau oscillators and show that the scenario is persistent under the addition of a single weakly weighted link. Subsequently, I investigate synchronous patterns in a directed ring of phase oscillators coupled with time delay. I discuss the coexistence of multiple synchronous solutions and analyze their stability and bifurcations. I apply these results by showing that a certain time-shift transformation can be used to employ the ring as a pattern storage and recognition device. Next, I investigate the same time-shift transformation for arbitrary coupling structures in a very general setting. I show that invariant manifolds of the flow, together with their stability properties, are conserved under the time-shift transformation. Furthermore, I determine the minimal number of delays needed to describe the system's dynamics equivalently. Finally, I investigate a peculiar phenomenon of non-continuous transition to synchrony observed in certain classes of large random networks, generalizing a recently introduced approach for the description of large random networks to the case of delayed couplings.
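
    A minimal simulation in the spirit of the delayed rings studied here: phase oscillators on a directed ring, each driven by the delayed phase of its predecessor. The Kuramoto-type coupling function, the parameters, and the Euler scheme with a history buffer are illustrative assumptions, not the dissertation's exact setup.

    # Sketch: directed ring of phase oscillators with time-delayed coupling.
    # Coupling form, parameters, and integration scheme are assumptions.
    import numpy as np

    N = 10                  # oscillators in the directed ring
    omega = 1.0             # common natural frequency
    kappa = 0.5             # coupling strength
    tau = 2.0               # transmission delay along each link
    dt = 0.01
    delay_steps = int(tau / dt)

    rng = np.random.default_rng(1)
    # History buffer over the last tau seconds: a delay differential equation
    # needs an initial function, not just an initial point.
    history = np.tile(rng.uniform(0, 2 * np.pi, N), (delay_steps + 1, 1))

    for step in range(20000):
        theta_now = history[-1]
        theta_delayed = history[0]               # state tau seconds ago
        neighbor = np.roll(theta_delayed, 1)     # each node hears its ring predecessor
        dtheta = omega + kappa * np.sin(neighbor - theta_now)
        history = np.vstack([history[1:], theta_now + dt * dtheta])

    r = np.abs(np.exp(1j * history[-1]).mean())  # Kuramoto order parameter
    print(f"order parameter after transient: {r:.3f} (1.0 = full phase sync)")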

    Towards a continuous dynamic model of the Hopfield theory on neuronal interaction and memory storage

    The purpose of this work is to study the Hopfield model of neuronal interaction and memory storage, in particular the convergence to the stored patterns. Since the hypothesis of symmetric synapses does not hold in the brain, we study how the model can be extended to the case of asymmetric synapses using a probabilistic approach. We then focus on the description of another feature of the memory process and the brain: oscillations. Using the Kuramoto model we are able to describe them completely, obtaining synchronization between neurons. Our aim is therefore to understand how and why neurons can be seen as oscillators and to establish a strong link between this model and the Hopfield approach.
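
    For reference, the discrete Hopfield dynamics whose convergence to stored patterns is at issue can be sketched in a few lines: Hebbian weights store the patterns, and asynchronous sign updates drive a noisy probe toward the nearest stored pattern. The sizes, noise level, and update schedule below are illustrative assumptions.

    # Minimal sketch of Hopfield storage and retrieval with Hebbian weights.
    # Sizes and noise level are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian (symmetric) synapses; the abstract's asymmetric extension would
    # perturb this matrix away from W == W.T.
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0)

    # Probe: the first stored pattern with 10% of its bits flipped.
    state = patterns[0].copy()
    flip = rng.choice(N, size=10, replace=False)
    state[flip] *= -1

    for _ in range(10):                          # sweeps of asynchronous updates
        for i in rng.permutation(N):             # random update order
            state[i] = 1 if W[i] @ state >= 0 else -1

    overlap = (state @ patterns[0]) / N          # 1.0 means perfect retrieval
    print(f"overlap with stored pattern: {overlap:.2f}")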

    Neural Networks With Asynchronous Control.

    Neural network studies have previously focused on monolithic structures. The brain has a bicameral nature, however, and so it is natural to expect that bicameral structures will perform better. This dissertation offers an approach to the development of such bicameral structures. The companion neural structure takes advantage of the global and subset characteristics of the stored memories. Specifically, we propose the use of an asynchronous controller C that implies the following update of a probe vector x by the connection matrix T: x' = sgn(C(x, Tx)). For a VLSI-implemented neural network the controller block can be easily placed in the feedback loop. In a network running asynchronously, the updating of the probe generally offers a choice among several components; if the right components are not updated, the network may converge to an incorrect stable point. The proposed asynchronous controller together with the basic neural net forms a bicameral network that can be programmed in various ways to exploit global and local characteristics of stored memories. Several methods to do this are proposed. In one of the methods the update choices are based on bit frequencies; in another, handles are appended to the memories to improve retrieval. The new methods have been analyzed and their performance studied; a marked improvement in performance is shown and illustrated by means of simulations. The use of an asynchronous controller allows the implementation of conditional rules that occur frequently in AI applications. It is shown that a neural network that uses conditional rules can solve problems in natural language understanding. The introduction of the asynchronous controller may be viewed as a first step in the development of truly bicameral structures that may be seen as the next generation of neural computers.
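
    To make the controlled update x' = sgn(C(x, Tx)) concrete, the sketch below lets a controller choose which single component to update at each asynchronous step. The selection rule used here (flip the mismatched bit with the strongest local field) is a hypothetical stand-in for the dissertation's bit-frequency and handle-based criteria.

    # Hedged sketch of the bicameral update x' = sgn(C(x, Tx)): a controller C
    # chooses WHICH component to update asynchronously instead of updating all.
    # The selection rule is a hypothetical stand-in for the proposed criteria.
    import numpy as np

    def controlled_step(x, T):
        """One asynchronous step: the controller picks one component to flip."""
        field = T @ x
        proposed = np.where(field >= 0, 1, -1)
        mismatched = np.flatnonzero(proposed != x)
        if mismatched.size == 0:
            return x, True                      # stable point reached
        # Controller: among bits disagreeing with their local field, update
        # the one where the evidence (|field|) is strongest.
        i = mismatched[np.argmax(np.abs(field[mismatched]))]
        x = x.copy()
        x[i] = proposed[i]
        return x, False

    rng = np.random.default_rng(3)
    N = 50
    pattern = rng.choice([-1, 1], size=N)
    T = np.outer(pattern, pattern) / N          # store one pattern (Hebbian)
    np.fill_diagonal(T, 0)

    x = pattern.copy()
    x[rng.choice(N, 5, replace=False)] *= -1    # noisy probe
    for _ in range(200):
        x, done = controlled_step(x, T)
        if done:
            break
    print("retrieved stored pattern:", bool((x == pattern).all()))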