
    Data-driven modeling of the olfactory neural codes and their dynamics in the insect antennal lobe

    Recordings from neurons in the insect's primary olfactory processing center, the antennal lobe (AL), reveal that the AL transforms input from chemical receptors into distinct neural activity patterns, called olfactory neural codes. These results underscore the importance of neural codes and their relation to perception. The next challenge is to model the dynamics of these codes. In our study, we perform multichannel recordings from projection neurons in the AL driven by different odorants. We then derive a neural network from the electrophysiological data. The network consists of lateral-inhibitory and excitatory neurons, and is capable of producing unique olfactory neural codes for the tested odorants. Specifically, we (i) design a projection, an odor space, for the neural recordings from the AL that discriminates between the trajectories of distinct odorants; (ii) characterize scent recognition, i.e., decision-making based on olfactory signals; and (iii) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study addresses a key biological question: how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns.
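
    A minimal sketch, in Python, of the contrast-enhancement mechanism the model reproduces: excitatory projection neurons receive receptor input plus a shared pool of lateral inhibition, which suppresses weakly driven channels while sparing strongly driven ones. All parameters and the rectified-linear rate function are illustrative assumptions, not values fitted to the recordings described above.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        n = 8                                  # projection neurons
        receptor_input = np.array([0.2, 0.3, 1.0, 0.9, 0.3, 0.2, 0.1, 0.2])
        w_inh = 0.6                            # lateral-inhibition strength (assumed)
        dt, tau, steps = 0.01, 0.05, 500

        r = np.zeros(n)                        # firing rates
        for _ in range(steps):
            inhibition = w_inh * r.sum() / n   # global inhibitory pool
            r += dt / tau * (-r + relu(receptor_input - inhibition))

        print("input :", np.round(receptor_input, 2))
        print("output:", np.round(r, 2))       # weak channels suppressed, peaks preserved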

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY 0414174, NIH Grant Nos. 1 R01 NS50945 and NS40110, MEC BFI2003-07276, and Fundación BBVA.
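
    As a concrete instance of the nonlinear dynamical models the review surveys, the following Python sketch integrates the Hindmarsh-Rose neuron, a standard three-variable model that produces rhythmic bursting and, for some parameter values, chaotic firing. The parameters are common textbook values, not taken from the review.

        import numpy as np

        # Hindmarsh-Rose model: fast variables (x, y) and slow adaptation z.
        a, b, c, d = 1.0, 3.0, 1.0, 5.0
        r, s, x_rest, I = 0.006, 4.0, -1.6, 3.25

        x, y, z = -1.6, -10.0, 2.0
        dt, steps = 0.01, 200_000
        trace = np.empty(steps)

        for i in range(steps):                 # forward-Euler integration
            dx = y - a * x**3 + b * x**2 - z + I
            dy = c - d * x**2 - y
            dz = r * (s * (x - x_rest) - z)
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            trace[i] = x

        spikes = np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0))
        print("threshold crossings (spikes):", int(spikes))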

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain remains a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neuroscience have revealed that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has recently accelerated with the initiation of large-scale brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable source of inspiration for the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. For each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article
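
    Of the six principles, sparse coding is perhaps the most direct to illustrate in code. The Python sketch below recovers a sparse representation of a signal over a dictionary by iterative soft thresholding (ISTA); the dictionary, signal, and penalty are random stand-ins assumed for the example, not material from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
        true_code = np.zeros(256)
        true_code[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
        x = D @ true_code                     # signal with a 5-sparse code

        lam = 0.05                            # sparsity penalty (assumed)
        L = np.linalg.norm(D, 2) ** 2         # step size from the Lipschitz constant
        a = np.zeros(256)
        for _ in range(500):                  # ISTA: gradient step + soft threshold
            a -= D.T @ (D @ a - x) / L
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)

        print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-3))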

    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embed the computational principles operating in the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest are generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, making attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity that supports stimulus-selective attractors. Associative memory develops on chip as a result of the coupled stimulus-driven neural activity and the ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. Comment: submitted to Scientific Reports
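
    The attractor mechanism described above can be captured in a few lines of conventional software. The Python sketch below uses the classical Hopfield abstraction, Hebbian one-shot learning followed by relaxation from a corrupted cue, as a stand-in for the spiking VLSI implementation; network size, memory load, and noise level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n, n_patterns = 100, 3
        patterns = rng.choice([-1, 1], size=(n_patterns, n))

        W = (patterns.T @ patterns) / n       # Hebbian synaptic matrix
        np.fill_diagonal(W, 0.0)

        cue = patterns[0].copy()              # corrupt 20% of one stored pattern
        cue[rng.choice(n, 20, replace=False)] *= -1

        state = cue.astype(float)
        for _ in range(10):                   # synchronous relaxation to the attractor
            state = np.sign(W @ state)
            state[state == 0] = 1.0

        print("overlap with stored pattern:", state @ patterns[0] / n)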

    Information processing in dissociated neuronal cultures of rat hippocampal neurons

    One of the major aims of Systems Neuroscience is to understand how the nervous system transforms sensory inputs into appropriate motor reactions. In very simple cases, sensory neurons are directly coupled to motoneurons and the entire transformation becomes a simple reflex, in which a noxious signal is immediately converted into an escape reaction. In the most complex behaviours, however, the nervous system analyses the sensory inputs in detail and performs some kind of information processing (IP). IP takes place at many different levels of the nervous system: from the peripheral nervous system, where sensory stimuli are detected and converted into electrical pulses, to the central nervous system, where features of sensory stimuli are extracted, perception takes place, and actions and movements are coordinated. Moreover, understanding the basic computational properties of the nervous system, besides being at the core of Neuroscience, also arouses great interest in Neuroengineering and Computer Science. Being able to decode neural activity can lead to the development of a new generation of neuroprosthetic devices aimed, for example, at restoring motor functions in severely paralysed patients (Chapin, 2004). Conversely, the development of Artificial Neural Networks (ANNs) (Marr, 1982; Rumelhart & McClelland, 1988; Herz et al., 1981; Hopfield, 1982; Minsky & Papert, 1988) has already shown that the study of biological neural networks can lead to the design of new computing algorithms and devices. All nervous systems are built from the same elements, neurons, which are computing devices that, compared to silicon components, are much slower and far less reliable. How do the nervous systems of all living species manage to survive while being built from such slow and unreliable components? This obvious and naïve question is equivalent to asking for a more quantitative characterization of IP. In order to study IP and capture the basic computational properties of the nervous system, two major questions arise. First, what is the fundamental unit of information processing: single neurons or neuronal ensembles? Second, how is information encoded in neuronal firing? These questions, in my view, summarize the problem of the neural code. The subject of my PhD research was information processing in dissociated cultures of rat hippocampal neurons. These cultures, with their random connectivity, provide a more general view of neuronal networks and assemblies, independent of the circuitry of any particular network in vivo, and allow a more detailed and careful experimental investigation. In order to record the activity of a large ensemble of neurons, the neurons were cultured on multielectrode arrays (MEAs), and multi-site stimulation was used to activate different neurons and pathways of the network. In this way it was possible to vary the properties of the applied stimulus in a controlled extracellular environment. Given this experimental system, my investigation followed two major approaches. On the one hand, I focused on the problem of the neural code, studying information processing at the single-neuron and ensemble levels and investigating putative neural coding mechanisms.
    On the other hand, I explored the possibility of using biological neurons as computing elements in tasks commonly solved by conventional silicon devices: image processing and pattern recognition. The results reported in the first two chapters of my thesis have been published in two separate articles; the third chapter corresponds to an article in preparation.
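
    As a toy illustration of the ensemble-level decoding pursued in the thesis, the Python sketch below classifies which stimulation site evoked a response from the vector of per-electrode spike counts, using a nearest-centroid rule. The data are synthetic Poisson counts standing in for MEA recordings; the numbers of sites, electrodes, and trials are assumptions for the example.

        import numpy as np

        rng = np.random.default_rng(2)
        n_sites, n_electrodes, n_trials = 4, 60, 40
        site_means = rng.poisson(5.0, size=(n_sites, n_electrodes)).astype(float)

        # simulate trials: Poisson spike counts around each site's mean response
        X = np.concatenate([rng.poisson(m, size=(n_trials, n_electrodes))
                            for m in site_means])
        y = np.repeat(np.arange(n_sites), n_trials)

        train = rng.random(len(y)) < 0.8      # random train/test split
        centroids = np.stack([X[train & (y == s)].mean(axis=0)
                              for s in range(n_sites)])
        dist = ((X[~train, None, :] - centroids[None]) ** 2).sum(axis=2)
        pred = dist.argmin(axis=1)

        print("decoding accuracy:", (pred == y[~train]).mean())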

    Embryonic stem cell-derived neurons form functional networks in vitro
