1,081 research outputs found

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparing thought to chemistry: both have systematicity, productivity and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
    This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
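
The rhythmic neural behaviors the review refers to are the classic territory of low-dimensional nonlinear models. As an illustrative sketch (not an example taken from the review), here is the FitzHugh-Nagumo neuron, a two-variable excitable system, integrated with forward Euler; all parameter values are conventional textbook choices:

```python
# Minimal sketch: FitzHugh-Nagumo neuron, a two-variable reduction of
# Hodgkin-Huxley dynamics, integrated with forward Euler.
# Parameters (a, b, tau, i_ext) are standard textbook values, not from the review.

def simulate_fhn(i_ext=0.5, dt=0.01, steps=20000, a=0.7, b=0.8, tau=12.5):
    v, w = -1.0, -0.5                    # membrane potential and recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + i_ext    # fast (excitable) dynamics
        dw = (v + a - b * w) / tau       # slow recovery dynamics
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = simulate_fhn()
# Count upward crossings of v = 1.0 as "spikes" of the rhythmic orbit.
spikes = sum(1 for p, q in zip(trace, trace[1:]) if p < 1.0 <= q)
```

For sufficiently strong constant input the resting fixed point loses stability and the trajectory settles onto a limit cycle; this is the dynamical-systems picture of tonic rhythmic spiking.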

    A synaptic learning rule for exploiting nonlinear dendritic computation

    Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
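
The idea that dendritic integration acts like a multilayer network can be illustrated with a toy model. This is not the paper's learning rule, and the weights below are hand-picked rather than learned: two sigmoidal dendritic subunits let a single neuron solve an XOR-style feature-binding problem that no single linear unit can.

```python
import math

# Toy "two-layer" neuron: each dendritic branch applies a sigmoidal
# nonlinearity to its summed synaptic input, and the soma thresholds the
# sum of the branch outputs. Weights are illustrative, not learned.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_subunit_neuron(x1, x2):
    branch_a = sigmoid(10 * (x1 - x2) - 5)   # active for feature 1 without feature 2
    branch_b = sigmoid(10 * (x2 - x1) - 5)   # active for feature 2 without feature 1
    return branch_a + branch_b > 0.5         # somatic threshold

# XOR-style binding: the neuron fires only when exactly one feature is present.
outputs = {(x1, x2): two_subunit_neuron(x1, x2)
           for x1 in (0, 1) for x2 in (0, 1)}
```

A single linear threshold unit cannot realize this input-output map, which is why subunit nonlinearities are said to raise the computational power of one neuron toward that of a small network.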

    PRINCIPLES OF INFORMATION PROCESSING IN NEURONAL AVALANCHES

    How the brain processes information is poorly understood. It has been suggested that the imbalance of excitation and inhibition (E/I) can significantly affect information processing in the brain. Neuronal avalanches, a recently discovered type of spontaneous activity, have been observed ubiquitously in vitro and in vivo when the cortical network is in the E/I-balanced state. In this dissertation, I experimentally demonstrate that several properties related to information processing in the cortex, namely the entropy of spontaneous activity, the information transmission between stimulus and response, the diversity of synchronized states and the discrimination of external stimuli, are optimized when the cortical network is in the E/I-balanced state, exhibiting neuronal avalanche dynamics. These experimental studies not only support the hypothesis that the cortex operates in the critical state, but also suggest that criticality is a potential principle of information processing in the cortex. Further, we studied the interaction structure of population neuronal dynamics and discovered a special structure of higher-order interactions inherent in those dynamics.
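
The link between E/I balance and avalanche statistics is often pictured as a branching process: the branching ratio plays the role of the effective excitation level, and at the critical ratio of 1 avalanche sizes become heavy-tailed, approximately P(s) ∝ s^(-3/2), the exponent reported for neuronal avalanches. The toy simulation below is a sketch of that standard picture, not the dissertation's analysis:

```python
import random

# Avalanche as a branching process: each active unit produces 2 descendants
# with probability sigma/2, else 0, so the mean branching ratio is sigma.
# sigma < 1 is subcritical (E/I tilted toward inhibition); sigma = 1 is critical.

def avalanche_size(sigma, rng, cap=10000):
    active, size = 1, 0
    while active and size < cap:          # cap keeps critical runs finite
        size += active
        active = sum(2 for _ in range(active) if rng.random() < sigma / 2)
    return size

rng = random.Random(0)
mean_sub = sum(avalanche_size(0.5, rng) for _ in range(2000)) / 2000   # subcritical
mean_crit = sum(avalanche_size(1.0, rng) for _ in range(2000)) / 2000  # critical
```

Subcritical avalanches die out quickly (mean size 1/(1 - sigma)), while at criticality the size distribution develops the power-law tail that drives the mean far above the subcritical value.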

    A laminar organization for selective cortico-cortical communication

    The neocortex is central to mammalian cognitive ability, playing critical roles in sensory perception, motor skills and executive function. This thin, layered structure comprises distinct, functionally specialized areas that communicate with each other through the axons of pyramidal neurons. For the hundreds of such cortico-cortical pathways to underlie diverse functions, their cellular and synaptic architectures must differ so that they result in distinct computations at the target projection neurons. In what ways do these pathways differ? By originating and terminating in different laminae, and by selectively targeting specific populations of excitatory and inhibitory neurons, these “interareal” pathways can differentially control the timing and strength of synaptic inputs onto individual neurons, resulting in layer-specific computations. Thanks to rapid developments in transgenic techniques, the mouse has emerged as a powerful mammalian model for understanding the rules by which cortical circuits organize and function. Here we review our understanding of how cortical lamination constrains long-range communication in the mammalian brain, with an emphasis on the mouse visual cortical network. We discuss the laminar architecture underlying interareal communication and the role of neocortical layers in organizing the balance of excitatory and inhibitory actions, and highlight the structure and function of layer 1 in mouse visual cortex.

    GPU-based implementation of real-time system for spiking neural networks

    Real-time simulations of biological neural networks (BNNs) provide a natural platform for applications in a variety of fields: data classification and pattern recognition, prediction and estimation, signal processing, control and robotics, prosthetics, and neurological and neuroscientific modeling. BNNs possess an inherently parallel architecture and operate in the continuous signal domain. Spiking neural networks (SNNs) are a type of BNN with a reduced signal dynamic range: communication between neurons occurs by means of time-stamped events (spikes). SNNs reduce algorithmic complexity and communication data size at the price of a small loss in accuracy. Simulating SNNs on traditional sequential computer architectures incurs a significant time penalty, which prohibits their use in real-time systems. Graphics processing units (GPUs) are cost-effective devices specifically designed to exploit parallel, shared-memory-based floating-point operations, applied not only to computer graphics but also to scientific computation. This makes them an attractive solution for SNN simulation compared with FPGA, ASIC and cluster message-passing computing systems. Successful GPU-based SNN simulations have already been reported. The contribution of this thesis is the development of a scalable GPU-based real-time system that provides an initial framework for the design and application of SNNs in various domains. The system delivers an interface that establishes communication with neurons in the network and visualizes the output the network produces. Accuracy of the simulation is emphasized because of its importance in systems that exploit spike-timing-dependent plasticity, classical conditioning and learning. As a result, a small network of 3,840 Izhikevich neurons, implemented as a hybrid system with the Parker-Sochacki numerical integration method, achieves real-time operation on a GTX 260 device. An application case study, in which the system models the receptor layer of the retina, is reviewed.
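
The neuron model named in the thesis is the standard Izhikevich model (Izhikevich, 2003). The thesis integrates it with the Parker-Sochacki method on the GPU; the sketch below instead uses plain forward Euler on the CPU, with textbook regular-spiking parameters, only to show the update and reset equations:

```python
# Sketch of the Izhikevich neuron: a quadratic membrane equation plus a
# linear recovery variable, with a hard reset on spiking. Parameters
# (a, b, c, d) are the textbook regular-spiking values; the integration
# scheme here is simple forward Euler, not the thesis's Parker-Sochacki method.

def izhikevich(i_ext=10.0, dt=0.25, steps=4000, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = -65.0, -13.0          # membrane potential (mV) and recovery variable
    spike_times = []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike detected: record and reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()            # constant input drives tonic firing over 1000 ms
```

The model's appeal for large GPU simulations is exactly this structure: two state variables and a branch-free arithmetic core per neuron per time step, which maps well onto thousands of parallel threads.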