
    The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition

    All sensory stimuli are temporal in structure. How a pattern of action potentials encodes the information received from sensory stimuli is an important research question in neuroscience. Although it is clear that information is carried by the number or the timing of spikes, information processing in the nervous system remains poorly understood. The desire to understand information processing in the animal brain led to the development of spiking neural networks (SNNs), and understanding information processing in SNNs may give us insight into information processing in the animal brain. One way to understand the mechanisms which enable SNNs to perform a computational task is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for recognising temporal patterns. The SNNs are composed of simple yet biologically meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The computational task can be described as identifying a subsequence of three signals (say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC"). The topology and connection weights of the networks are optimised using a genetic algorithm such that the network output spikes only for the correct input pattern and remains silent for all others: the fitness function rewards the network output for spiking after receiving the correct pattern and penalises spikes elsewhere. To analyse the effect of noise, two types of noise are introduced during evolution: (i) random fluctuations of the membrane potential of neurons in the network at every network step, and (ii) random variations of the duration of the silent interval between input signals. Evolution in the presence of noise produced networks that were robust to perturbation of neuronal parameters. Moreover, the networks also developed a form of memory, enabling them to maintain network states in the absence of input activity. It is demonstrated that the network states of an evolved network have a one-to-one correspondence with the states of a finite-state transducer (FST), a model of computation for time-structured data. The analysis of networks indicated that the task of recognition is accomplished by transitions between network states. Evolution may overproduce synaptic connections; pruning these superfluous connections revealed pronounced structural similarities among individuals obtained from different independent runs. Moreover, the analysis of the pruned networks highlighted that memory is a property of self-excitation in the network: neurons with self-excitatory loops (also called autapses) can sustain spiking activity indefinitely in the absence of input activity. To recognise a pattern of length n, a network requires n+1 network states, where n states are maintained actively with autapses and the penultimate state is maintained passively by the absence of activity in the network. The roles of the other connections in the network are also identified.
Of particular interest, three interneurons in the network are found to have specialised roles: (i) the lock neuron is always active, preventing the output from spiking unless it is released by the penultimate signal in the correct pattern, which allows the output neuron to spike on the correct final signal; (ii) the switch neuron is responsible for switching the network between the inter-signal states and the start state; and (iii) the accept neuron produces spikes in the output neuron when the network receives the last correct input, and also signals the switch neuron to return the network to the start state. Understanding how information is processed in the evolved networks led to handcrafting network topologies for recognising more extended patterns. The proposed rules can extend network topologies to recognise temporal patterns up to length six. To validate the handcrafted topology, a genetic algorithm is used to optimise its connection weights. It has been observed that the maximum number of active neurons representing a state in the network increases with the pattern length; the suggested rules can therefore handcraft network topologies only for patterns up to length six. Handcrafting network topologies that represent a network state with a fixed number of active neurons requires further investigation.
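    The state-machine view above can be made concrete with a short sketch. Below is a minimal finite-state transducer recognising "ABC" in the example stream; the transition table is an illustrative reconstruction, not code from the thesis, and an output of 1 stands in for a spike of the output neuron.

```python
# States: 0 = start, 1 = seen "A", 2 = seen "AB", 3 = accepted "ABC".
# For a pattern of length n = 3 this uses the n + 1 = 4 states noted above.
def step(state, signal):
    if state == 1 and signal == "B":
        return 2                          # partial match "AB"
    if state == 2 and signal == "C":
        return 3                          # accept: "ABC" completed
    return 1 if signal == "A" else 0      # an "A" (re)starts a partial match

state, outputs = 0, []
for s in "ABBBCCBABABCBBCAC":
    state = step(state, s)
    outputs.append(1 if state == 3 else 0)
print("".join(map(str, outputs)))         # a single 1 marks the end of "ABC"
```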

    Dynamic Effective Connectivity of Inter-Areal Brain Circuits

    Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in the spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
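    As a toy illustration of how a causal time-series analysis can expose a dominant directionality, the sketch below simulates two areas with a built-in delayed entrainment of one rhythm by the other, then recovers the direction from lagged correlations. The oscillator model, the 8 Hz rhythm, and all parameters are stand-in assumptions, far simpler than the spiking-network motifs studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, delay = 20000, 1e-3, 10          # steps, step size (s), coupling delay
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = np.sin(2 * np.pi * 8.0 * t * dt) + 0.3 * rng.standard_normal()
    y[t] = 0.8 * x[t - delay] + 0.3 * rng.standard_normal()  # Y entrained by X

def lagged_corr(a, b, lag):
    # correlation between a(t) and b(t + lag)
    if lag > 0:
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(a[-lag:], b[:lag])[0, 1]
    return np.corrcoef(a, b)[0, 1]

best = max(range(-50, 51), key=lambda L: lagged_corr(x, y, L))
print(f"peak correlation at lag {best} steps")  # positive lag: X leads Y
```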

    Relating Neural Dynamics to Olfactory Coding and Behavior

    Sensory stimuli often evoke temporal patterns of spiking activity across a population of neurons in the early processing stages. What features of these spatiotemporal responses encode behaviorally relevant information, and how dynamic processing of sensory signals facilitates information processing, are fundamental problems in sensory neuroscience that remain to be understood. In this thesis, I have investigated these issues using a relatively simple invertebrate model (locusts; Schistocerca americana). In locusts, odorants are transduced into electrical signals by olfactory sensory neurons in the antenna and are subsequently relayed to the downstream neural circuit in the antennal lobe (analogous to the olfactory bulb in vertebrates). We found that the sensory input evoked by an odorant could vary depending on whether the stimulus was presented solitarily or in an overlapping sequence following another cue. These inconsistent sensory inputs triggered dynamic reorganization of ensemble activity in the downstream antennal lobe. As a result, the neural activities evoked by an odorant pattern-matched across conditions, thereby providing a basis for invariant stimulus recognition. Notably, we found that only the combination of neurons activated by an odorant was conserved across conditions. The temporal structure of the ensemble neural responses, on the other hand, varied depending on stimulus history: synchronous ensemble firing when stimulated by a novel odorant compared to asynchronous activity induced by a redundant stimulus. Furthermore, these neural responses were refined on a slower timescale (on the order of minutes, i.e., over trials) such that the same information about odorant identity and intensity was represented with fewer spikes. We validated these interpretations of our physiological data using results from multiple quantitative behavioral assays. In sum, this thesis work provides fundamental insights regarding behaviorally important features of olfactory signal processing in a relatively simple biological olfactory system.
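    One finding above, conserved odor identity coding despite history-dependent temporal structure, can be illustrated with a small sketch: compare the overlap of the responsive-neuron sets with the correlation of the full spatiotemporal patterns. The data below are synthetic placeholders, not the locust antennal-lobe recordings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_bins = 40, 30
active = rng.random(n_neurons) < 0.25                 # same responsive subset

def response(jitter, seed):
    r = np.zeros((n_neurons, n_bins))
    rng2 = np.random.default_rng(seed)
    for i in np.flatnonzero(active):
        t0 = 5 + jitter * int(rng2.integers(0, 10))   # history-dependent timing
        r[i, t0:t0 + 5] = 1.0
    return r

solo = response(jitter=0, seed=7)       # synchronous response (novel odorant)
seq = response(jitter=1, seed=8)        # desynchronised response (redundant cue)

set_a, set_b = solo.any(axis=1), seq.any(axis=1)
jaccard = (set_a & set_b).sum() / (set_a | set_b).sum()
temporal = np.corrcoef(solo.ravel(), seq.ravel())[0, 1]
print(f"identity overlap (Jaccard): {jaccard:.2f}")   # conserved (1.00)
print(f"temporal correlation:       {temporal:.2f}")  # history-dependent (lower)
```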

    An optimally evolved connective ratio of neural networks that maximizes the occurrence of synchronized bursting behavior

    Background: Synchronized bursting activity (SBA) is a remarkable dynamical behavior in both ex vivo and in vivo neural networks. Investigations of the underlying structural characteristics associated with SBA are crucial to understanding the system-level regulatory mechanism of neural network behaviors.
    Results: In this study, artificial pulsed neural networks were established using spike response models to capture the fundamental dynamics of large-scale ex vivo cortical networks. Network simulations with synaptic parameter perturbations showed the following two findings. (i) In a network with an excitatory ratio (ER) of 80-90%, its connective ratio (CR) was within a range of 10-30% when the occurrence of SBA reached the highest expectation. This result is consistent with experimental observations in ex vivo neuronal networks, which were reported to possess a matured inhibitory synaptic ratio of 10-20% and a CR of 10-30%. (ii) No SBA occurred when a network did not contain any all-positive-interaction feedback loop (APFL) motif. In a neural network containing APFLs, the number of APFLs presented an optimal range corresponding to the maximal occurrence of SBA, which was very similar to the optimal CR.
    Conclusions: In a neural network, the evolutionarily selected CR (10-30%) optimizes the occurrence of SBA, and the APFL serves as a pivotal network motif required to maximize the occurrence of SBA.
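    A rough sketch of the structural measurement involved is given below: generate a random signed network at a given CR and ER and count short all-positive feedback loops in its excitatory-only subgraph. The network size, the restriction to loops of length 2 and 3, and the exclusion of self-loops are illustrative assumptions, not the study's actual simulation protocol.

```python
import numpy as np

def count_apfls(n_neurons=50, cr=0.2, er=0.85, seed=0):
    rng = np.random.default_rng(seed)
    # Each possible directed edge i -> j exists with probability CR
    conn = rng.random((n_neurons, n_neurons)) < cr
    np.fill_diagonal(conn, False)                # no self-loops in this sketch
    # An existing edge is excitatory (+) with probability ER
    E = (conn & (rng.random((n_neurons, n_neurons)) < er)).astype(int)
    # With a zero diagonal, closed walks of length 2 and 3 in the
    # excitatory-only graph are exactly the simple all-positive loops.
    two_loops = int(np.trace(E @ E)) // 2
    three_loops = int(np.trace(E @ E @ E)) // 3
    return two_loops, three_loops

for cr in (0.1, 0.2, 0.3, 0.5):
    print(f"CR={cr:.1f}: APFLs of length 2 and 3 = {count_apfls(cr=cr)}")
```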

    Evaluation of Machine Learning Techniques for Inflow Prediction in Lake Como, Italy

    Accurate streamflow prediction is a fundamental task for integrated water resources management and flood risk mitigation. The purpose of this study is to forecast the water inflow to Lake Como (Italy) using different machine learning algorithms, with forecast lead times ranging from one to three days. The models are evaluated by three statistical measures: the Mean Absolute Error (MAE), the Root Mean Squared Error (RMSE), and the Nash-Sutcliffe Efficiency coefficient (NSE). The experimental results show that the Neural Network performs best for streamflow estimation in terms of MAE and RMSE, followed by Support Vector Regression and Random Forest.
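    The evaluation protocol translates directly into code. The sketch below implements the three reported metrics and applies them to the three model families compared in the study; the data are synthetic placeholders rather than the Lake Como inflow record, and the hyperparameters are illustrative defaults.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def mae(obs, sim):  return np.mean(np.abs(obs - sim))
def rmse(obs, sim): return np.sqrt(np.mean((obs - sim) ** 2))
def nse(obs, sim):  # Nash-Sutcliffe efficiency: 1.0 is a perfect fit
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
X = rng.random((500, 4))                           # stand-in predictors
y = X @ np.array([3.0, -1.0, 2.0, 0.5]) + 0.1 * rng.standard_normal(500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

models = {
    "Neural Network": MLPRegressor((32,), max_iter=2000, random_state=0),
    "Support Vector Regression": SVR(),
    "Random Forest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    sim = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mae(y_te, sim):.3f}, "
          f"RMSE={rmse(y_te, sim):.3f}, NSE={nse(y_te, sim):.3f}")
```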

    Dynamic Control of Network Level Information Processing through Cholinergic Modulation

    Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state within the brain. Changes in ACh are associated with changes in neural activity and information processing, though its exact role and the mechanisms through which it acts are unknown. Here I show that the dynamic changes in ACh levels associated with arousal state control the information processing functions of networks through effects on the degree of Spike-Frequency Adaptation (SFA), an activity-dependent decrease in excitability, and on the synchronizability and neuronal resonance displayed by single cells. Using numerical modeling, I develop mechanistic explanations for how control of these properties shifts network activity from a stable high-frequency spiking pattern to a traveling wave of activity. This transition mimics the change in brain dynamics seen between high-ACh states, such as waking and Rapid Eye Movement (REM) sleep, and low-ACh states such as non-REM (NREM) sleep. A corresponding, and related, transition in network-level memory recall also occurs as ACh modulates neuronal SFA. When ACh is at its highest levels (waking), all memories are stably recalled; as ACh is decreased (REM), weakly encoded memories in the model destabilize while strong memories remain stable. At levels of ACh that match slow-wave sleep (SWS), no encoded memories are stably recalled. This results from a competition between SFA and excitatory input strength, and it provides a mechanism for neural networks to control the representation of underlying synaptic information. I also show that the oscillatory conditions of low-ACh states allow external inputs to be properly stored in, and recalled from, synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is critical for the regulation of information processing tasks in neural networks. These results suggest that ACh is capable of switching networks between two distinct information processing modes: rate coding of information is facilitated during high-ACh conditions, and phase coding during low-ACh conditions. Finally, I propose that ACh levels control whether a network is in one of three functional states: (high ACh; active waking) optimized for the encoding of new information or the stable representation of relevant memories; (mid ACh; resting state or REM) optimized for encoding connections between currently stored memories or searching the catalog of stored memories; and (low ACh; NREM) optimized for renormalization of synaptic strength and memory consolidation. This work provides mechanistic insight into the role of dynamic changes in ACh levels in the encoding, consolidation, and maintenance of memories within the brain. (PhD dissertation, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147503/1/roachjp_1.pd)
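    As a hedged sketch of the core mechanism, the toy model below implements a leaky integrate-and-fire neuron whose per-spike adaptation increment is scaled down by a notional "ACh level", so spiking stays regular when ACh is high and adapts strongly when ACh is low. The parameter values and the linear ACh-to-SFA mapping are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

def simulate(ach_level, I=1.5, T=1.0, dt=1e-4):
    tau_m, tau_w = 0.02, 0.2       # membrane / adaptation time constants (s)
    b = 0.5 * (1.0 - ach_level)    # ACh in [0, 1] suppresses the SFA increment
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v - w + I) / tau_m     # leaky integration minus adaptation
        w += dt * (-w) / tau_w             # adaptation current decays slowly
        if v >= 1.0:                       # threshold crossing: spike and reset
            v = 0.0
            w += b                         # adaptation builds up with each spike
            spikes.append(step * dt)
    return np.array(spikes)

for ach in (1.0, 0.5, 0.0):                # waking -> REM-like -> NREM-like
    isi = np.diff(simulate(ach)) * 1e3
    print(f"ACh={ach:.1f}: first ISI={isi[0]:.1f} ms, last ISI={isi[-1]:.1f} ms")
```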

    A view of Neural Networks as dynamical systems

    We consider neural networks from the point of view of dynamical systems theory. In this spirit we review recent results dealing with the following questions, addressed in the context of specific models: 1. characterizing the collective dynamics; 2. statistical analysis of spike trains; 3. the interplay between dynamics and network structure; 4. effects of synaptic plasticity. (Review paper, 51 pages, 10 figures; submitted.)

    Short-term memory and olfactory signal processing

    Modern neural recording methodologies, including multi-electrode and optical recordings, allow us to monitor large populations of neurons with high temporal resolution. Such recordings provide rich datasets that can help us better understand how information about the external world is internally represented and how these representations are altered over time. Achieving this goal requires the development of novel pattern recognition methods and/or the application of existing statistical methods in novel ways to gain insights into basic neural computational principles. In this dissertation, I will take this data-driven approach to dissect the role of short-term memory in olfactory signal processing in two relatively simple models of the olfactory system: the fruit fly (Drosophila melanogaster) and the locust (Schistocerca americana). First, I will focus on understanding how odor representations within a single stimulus exposure are refined across different populations of neurons (faster dynamics, on the order of seconds) in the early olfactory circuits. Using light-sheet imaging datasets from transgenic flies expressing calcium indicators in select populations of neurons, I will reveal how odor representations are decorrelated over time in different neural populations. Further, I will examine how this computation is altered by short-term memory in this neural circuitry. Next, I will examine how neural representations of odorants at an ensemble level are altered across different exposures (slower dynamics, on the order of tens of seconds to minutes), and the role of this short-term adaptation in altering neural representations of odor identity and intensity. Lastly, I will present approaches to help achieve robustness against both extrinsic and intrinsic perturbations of odor-evoked neural responses. I will conclude with a Boolean neural network inspired by the insect olfactory system and compare its performance against other state-of-the-art methods on standard machine learning benchmark datasets. In sum, this work provides deeper insights into how short-term plasticity alters sensory neural representations and their computational significance.
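    The decorrelation analysis mentioned above can be illustrated compactly: correlate the population response vector in the first time bin with the vector in each later bin. The drifting response matrix below is a synthetic placeholder for the light-sheet imaging data described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins = 100, 20
# Synthetic ensemble responses: the initial pattern drifts with added noise
responses = np.empty((n_bins, n_neurons))
responses[0] = rng.random(n_neurons)
for t in range(1, n_bins):
    responses[t] = responses[t - 1] + 0.15 * rng.standard_normal(n_neurons)

decorrelation = [np.corrcoef(responses[0], responses[t])[0, 1]
                 for t in range(n_bins)]
print(np.round(decorrelation, 2))   # drifts downward from 1.0 over time bins
```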

    Emergence of Spatio-Temporal Pattern Formation and Information Processing in the Brain.

    The spatio-temporal patterns of neuronal activity are thought to underlie cognitive functions, such as our thoughts, perceptions, and emotions. Neurons and glial cells, specifically astrocytes, are interconnected in complex networks, where large-scale dynamical patterns emerge from local chemical and electrical signaling between individual network components. How these emergent patterns form and encode information is the focus of this dissertation. I investigate how various mechanisms that can coordinate collections of neurons in their patterns of activity can potentially cause the interactions across spatial and temporal scales which are necessary for emergent macroscopic phenomena to arise. My work explores the coordination of network dynamics through pattern formation and synchrony in both experiments and simulations, concentrating on two potential mechanisms: astrocyte signaling and neuronal resonance properties. Due to their ability to modulate neurons, we investigate the role of astrocytic networks as a potential source for coordinating neuronal assemblies. In cultured networks, I image patterns of calcium signaling between astrocytes and reproduce observed properties of the network calcium patterning, and its perturbations, with a simple model that incorporates the mechanisms of astrocyte communication. Understanding the modes of communication in astrocyte networks, and how they form spatio-temporal patterns of calcium dynamics, is important to understanding their interaction with neuronal networks. We investigate this interaction between networks, and how glial cells modulate neuronal dynamics, through microelectrode array measurements of neuronal network dynamics; we quantify the spontaneous electrical activity patterns of neurons and show the effect of glia on neuronal dynamics and synchrony. Through a computational approach, I investigate an entirely different theoretical mechanism for coordinating ensembles of neurons: I show in a computational model how biophysical resonance shifts in individual neurons can interact with the network topology to influence pattern formation and separation. Sub-threshold neuronal depolarization, potentially from astrocytic modulation among other sources, can shift neurons into and out of resonance with specific bands of existing extracellular oscillations, which can act as a dynamic readout mechanism during information storage and retrieval. Exploring these mechanisms that facilitate emergence is necessary for understanding information processing in the brain. (PhD dissertation, Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111493/1/lshtrah_1.pd)
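    The resonance-shift mechanism can be made concrete with a toy linearized neuron model: a leaky membrane in parallel with a slow feedback conductance has a peaked impedance profile, and scaling that conductance, standing in here for the effect of sub-threshold depolarization, moves the preferred input frequency. All values are illustrative and are not fitted to the dissertation's models.

```python
import numpy as np

def impedance(freqs_hz, g_w, C=0.01, g_L=0.1, tau_w=0.1):
    # Leak + capacitance in parallel with a slow resonant feedback conductance
    w = 2 * np.pi * freqs_hz
    return 1.0 / (g_L + 1j * w * C + g_w / (1.0 + 1j * w * tau_w))

freqs = np.linspace(0.5, 30, 300)
for label, g_w in [("rest", 0.3), ("depolarized", 1.0)]:
    z = np.abs(impedance(freqs, g_w))
    print(f"{label}: peak response at {freqs[np.argmax(z)]:.1f} Hz")
# The peak shifts to a higher frequency as the feedback conductance grows,
# moving the cell into or out of resonance with ongoing band-limited rhythms.
```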