
    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades, and dynamical models of consciousness now exist as well. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
    This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
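    As a concrete illustration of the kind of rhythmic dynamical model the review alludes to, the following is a minimal Python sketch (not drawn from the review itself) of a FitzHugh-Nagumo neuron integrated with forward Euler; all parameter values are illustrative assumptions.

        # Minimal sketch: a FitzHugh-Nagumo neuron, a classic dynamical
        # model of neural rhythmic behavior. Parameters are assumed values.
        import numpy as np

        a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # recovery parameters and external drive
        dt, steps = 0.1, 5000

        v, w = -1.0, -0.5                    # fast voltage and slow recovery variables
        trace = np.empty(steps)
        for t in range(steps):
            dv = v - v**3 / 3 - w + I        # fast voltage dynamics
            dw = eps * (v + a - b * w)       # slow recovery dynamics
            v, w = v + dt * dv, w + dt * dw  # forward-Euler step
            trace[t] = v

        # With this drive the resting state is unstable and the voltage
        # settles onto a stable limit cycle, i.e. sustained rhythmic firing.
        print(trace[-20:])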

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
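    The pre-learning regime described here can be sketched with a generic random recurrent rate network (Python; this is not the authors' exact RRN or learning rule, and the network size and gain are assumed values): with synaptic gain g > 1 the autonomous dynamics is typically chaotic.

        # Random recurrent rate network; g > 1 typically yields chaos.
        import numpy as np

        rng = np.random.default_rng(0)
        N, g = 500, 1.5                                      # size and gain (assumed)
        J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random recurrent weights

        dt, tau, steps = 0.1, 1.0, 2000
        x = rng.normal(0.0, 0.5, N)                          # membrane state
        for _ in range(steps):
            r = np.tanh(x)                                   # firing rates
            x += dt / tau * (-x + J @ r)                     # leaky rate dynamics

        # For g < 1 activity decays to x = 0; for g > 1 it wanders chaotically.
        # Supervised rules such as FORCE learning can then carve locally
        # stable trajectories out of this chaotic regime.
        print(np.std(np.tanh(x)))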

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control research program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Interacting Turing-Hopf Instabilities Drive Symmetry-Breaking Transitions in a Mean-Field Model of the Cortex: A Mechanism for the Slow Oscillation

    Electrical recordings of brain activity during the transition from wake to anesthetic coma show temporal and spectral alterations that are correlated with gross changes in the underlying brain state. Entry into anesthetic unconsciousness is signposted by the emergence of large, slow oscillations of electrical activity (≲1 Hz) similar to the slow waves observed in natural sleep. Here we present a two-dimensional mean-field model of the cortex in which slow spatiotemporal oscillations arise spontaneously through a Turing (spatial) symmetry-breaking bifurcation that is modulated by a Hopf (temporal) instability. In our model, populations of neurons are densely interlinked by chemical synapses, and by interneuronal gap junctions represented as an inhibitory diffusive coupling. To demonstrate cortical behavior over a wide range of distinct brain states, we explore model dynamics in the vicinity of a general-anesthetic-induced transition from “wake” to “coma.” In this region, the system is poised at a codimension-2 point where competing Turing and Hopf instabilities coexist. We model anesthesia as a moderate reduction in inhibitory diffusion, paired with an increase in inhibitory postsynaptic response, producing a coma state that is characterized by emergent low-frequency oscillations whose dynamics is chaotic in time and space. The effect of long-range axonal white-matter connectivity is probed with the inclusion of a single idealized point-to-point connection. We find that the additional excitation from the long-range connection can provoke seizurelike bursts of cortical activity when inhibitory diffusion is weak, but has little impact on an active cortex. Our proposed dynamic mechanism for the origin of anesthetic slow waves complements—and contrasts with—conventional explanations that require cyclic modulation of ion-channel conductances. We postulate that a similar bifurcation mechanism might underpin the slow waves of natural sleep and comment on the possible consequences of chaotic dynamics for memory processing and learning.
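    The distinction between the two instabilities can be made concrete with a linear stability calculation on a generic two-population reaction-diffusion model (a Python sketch under assumed toy parameters, not the authors' cortical equations): a Turing instability is a real eigenvalue crossing zero at nonzero wavenumber, while a Hopf instability is a complex pair crossing at zero wavenumber.

        # Dispersion relation for a generic excitatory-inhibitory
        # reaction-diffusion model; J and D are assumed toy values.
        import numpy as np

        J = np.array([[0.9, -1.0],    # local (k = 0) Jacobian
                      [1.2, -0.8]])
        D = np.array([0.1, 1.0])      # inhibition diffuses faster (gap-junction-like)

        ks = np.linspace(0.0, 3.0, 301)
        growth = np.array([
            sorted(np.linalg.eigvals(J - np.diag(D * k**2)),
                   key=lambda lam: lam.real)[-1]   # fastest eigenvalue at wavenumber k
            for k in ks
        ])

        k_max = ks[np.argmax(growth.real)]
        print(f"fastest-growing mode: k = {k_max:.2f}")
        # Re(lambda) > 0 at k > 0 with Im(lambda) = 0:  Turing (spatial) instability.
        # Re(lambda) > 0 at k = 0 with Im(lambda) != 0: Hopf (temporal) instability.
        # Near a codimension-2 point both conditions hold at once, as in the model.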

    Trajectory prediction of moving objects by means of neural networks

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 1997. Includes bibliographical references (leaves 103-105). Text in English; abstract in Turkish and English. viii, 105 leaves.
    Estimating the three-dimensional motion of an object from a sequence of object positions and orientations is of significant importance in a variety of applications in control and robotics. For instance, autonomous navigation, manipulation, servo control, tracking, planning, and surveillance need prediction of motion parameters. Although "motion estimation" is an old problem (the formulations date back to the beginning of the century), only recently have scientists been provided with the tools from nonlinear system estimation theory to solve this problem. Neural networks are among the tools that have recently been used in many nonlinear dynamic system parameter estimation contexts. The approximating ability of the neural network is used to identify the relation between system variables and parameters of a dynamic system. The position, velocity, and acceleration of the object are estimated by several neural networks using the 11 most recent measurements of the object coordinates as input to the system. Several neural network topologies with different configurations are introduced and utilized in the solution of the problem. Training schemes for each configuration are given in detail. Simulation results for prediction of motion having different characteristics via different architectures with alternative configurations are presented comparatively.
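    A minimal Python sketch of this scheme (the trajectory, network size, and training details are assumptions; the input window of 11 measurements follows the abstract):

        # Predict the next object position from a sliding window of the
        # most recent coordinate measurements, using a small MLP.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 20.0, 400)
        traj = np.column_stack([np.cos(t), np.sin(0.5 * t)])  # toy 2-D trajectory
        traj += rng.normal(0.0, 0.01, traj.shape)             # measurement noise

        W = 11                                                # 11 most recent measurements
        X = np.array([traj[i:i + W].ravel() for i in range(len(traj) - W)])
        y = traj[W:]                                          # next position to predict

        split = 300
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        net.fit(X[:split], y[:split])                         # train on the early part
        err = np.mean(np.linalg.norm(net.predict(X[split:]) - y[split:], axis=1))
        print(f"mean prediction error: {err:.4f}")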

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then undertake a verification of these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems, as well as the issue of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
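    To make the symbolic side concrete, here is a toy Python sketch (not the paper's communicating-automata model; the network, threshold, and property are assumptions) in which a two-neuron Boolean network is treated as a finite transition system and a property is checked by exhaustive state-space exploration, the basic operation a model checker like Uppaal automates.

        # A two-neuron Boolean network as a finite transition system,
        # with a reachability check standing in for model checking.
        THRESHOLD = 1                        # a neuron fires if weighted input >= this
        W = {("a", "b"): 1, ("b", "a"): 1}   # mutual excitation between neurons a and b

        def step(state):
            """One synchronous update of the Boolean state {'a': 0/1, 'b': 0/1}."""
            return {n: int(sum(W.get((m, n), 0) * state[m] for m in state) >= THRESHOLD)
                    for n in state}

        def reachable(init):
            """Enumerate every state reachable from `init` by exhaustive search."""
            seen, frontier = set(), [tuple(sorted(init.items()))]
            while frontier:
                s = frontier.pop()
                if s not in seen:
                    seen.add(s)
                    frontier.append(tuple(sorted(step(dict(s)).items())))
            return seen

        # Safety-style property: starting from (a=1, b=0), is the all-quiet
        # state (a=0, b=0) ever reached? The two neurons oscillate, so: False.
        print((("a", 0), ("b", 0)) in reachable({"a": 1, "b": 0}))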