
    Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-down Connections

    Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing-dependent plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.
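    As a rough illustration of the rule, the sketch below contrasts a classical STDP window with the depression-biased reversed window (rSTDP) described above: potentiation when post-synaptic spikes precede pre-synaptic spikes, with a larger total depression lobe. The amplitudes and time constant are illustrative placeholders, not the paper's fitted parameters.

        import numpy as np

        def stdp(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
            """Classical STDP window, dt = t_post - t_pre.
            Pre-before-post (dt > 0) potentiates; otherwise depresses."""
            return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

        def rstdp(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
            """Temporally reversed rSTDP window: post-before-pre (dt < 0)
            potentiates. Setting a_minus > a_plus (with equal tau) makes
            the total depression area larger, i.e. depression-biased,
            which the paper finds necessary for stable top-down weights."""
            return a_plus * np.exp(dt / tau) if dt < 0 else -a_minus * np.exp(-dt / tau)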

    Consciousness CLEARS the Mind

    A full understanding of consciousness requires that we identify the brain processes from which conscious experiences emerge. What are these processes, and what is their utility in supporting successful adaptive behaviors? Adaptive Resonance Theory (ART) predicted a functional link between processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony (CLEARS), including the prediction that "all conscious states are resonant states." This connection clarifies how brain dynamics enable a behaving individual to autonomously adapt in real time to a rapidly changing world. The present article reviews theoretical considerations that predicted these functional links, how they work, and some of the rapidly growing body of behavioral and brain data that have provided support for these predictions. The article also summarizes ART models that predict functional roles for identified cells in laminar thalamocortical circuits, including the six-layered neocortical circuits and their interactions with specific primary and higher-order specific thalamic nuclei and nonspecific nuclei. These predictions include explanations of how slow perceptual learning can occur more frequently in superficial cortical layers. ART traces these properties to the existence of intracortical feedback loops, and to reset mechanisms whereby thalamocortical mismatches use circuits such as the one from specific thalamic nuclei to nonspecific thalamic nuclei and then to layer 4 of neocortical areas via layers 1-to-5-to-6-to-4.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
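    The resonance-versus-reset cycle at the heart of ART can be made concrete with a minimal ART-1 style sketch: a top-down expectation (category prototype) is matched against the bottom-up input, a sufficiently good match (set by a vigilance parameter) yields resonance and learning, and a mismatch triggers reset and search. This generic mechanism is only a caricature of the laminar thalamocortical circuits reviewed in the article; the fixed search order and binary coding are simplifications.

        import numpy as np

        def art1(inputs, vigilance=0.75):
            """Minimal ART-1 style match/reset cycle over binary inputs."""
            prototypes = []                          # top-down expectations
            labels = []
            for x in inputs:
                x = np.asarray(x, dtype=bool)
                chosen = None
                for j, w in enumerate(prototypes):   # search (fixed order here)
                    match = (x & w).sum() / max(x.sum(), 1)
                    if match >= vigilance:           # resonance: match is good enough
                        prototypes[j] = x & w        # learn: refine the prototype
                        chosen = j
                        break                        # mismatch -> reset, next category
                if chosen is None:                   # nothing resonates: recruit
                    prototypes.append(x.copy())
                    chosen = len(prototypes) - 1
                labels.append(chosen)
            return labels, prototypes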

    Towards a Unified Theory of Neocortex: Laminar Cortical Circuits for Vision and Cognition

    A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
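    The Item and Order (Competitive Queuing) working memory that LIST PARSE embodies has a simple computational core, sketched below under illustrative assumptions: a sequence is stored as a primacy gradient of item activations, and read-out repeatedly selects the most active item and then suppresses it. The geometric gradient and the clean winner-take-all step are simplifications of the laminar model.

        import numpy as np

        def cq_store(sequence, n_items, decay=0.7):
            """Store a sequence as a primacy gradient: earlier items get
            larger activation (geometric gradient chosen for illustration)."""
            act = np.zeros(n_items)
            for rank, item in enumerate(sequence):
                act[item] = decay ** rank
            return act

        def cq_readout(act):
            """Competitive-queuing readout: pick the most active item,
            then suppress it so the next-strongest item wins."""
            act = act.copy()
            order = []
            while np.any(act > 0):
                winner = int(np.argmax(act))
                order.append(winner)
                act[winner] = 0.0      # self-inhibition after performance
            return order

        gradient = cq_store([2, 0, 3], n_items=5)
        assert cq_readout(gradient) == [2, 0, 3]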

    Dynamic Control of Network Level Information Processing through Cholinergic Modulation

    Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state within the brain. Changes in ACh are associated with changes in neural activity and information processing, though its exact role and the mechanisms through which it acts are unknown. Here I show that the dynamic changes in ACh levels that are associated with arousal state control the information processing functions of networks through their effects on the degree of Spike-Frequency Adaptation (SFA), an activity-dependent decrease in excitability, synchronizability, and neuronal resonance displayed by single cells. Using numerical modeling, I develop mechanistic explanations for how control of these properties shifts network activity from a stable high-frequency spiking pattern to a traveling wave of activity. This transition mimics the change in brain dynamics seen between high-ACh states, such as waking and Rapid Eye Movement (REM) sleep, and low-ACh states such as Non-REM (NREM) sleep. A corresponding, and related, transition in network-level memory recall also occurs as ACh modulates neuronal SFA. When ACh is at its highest levels (waking), all memories are stably recalled; as ACh is decreased (REM), weakly encoded memories in the model destabilize while strong memories remain stable. At levels of ACh that match Slow Wave Sleep (SWS), no encoded memories are stably recalled. This results from a competition between SFA and excitatory input strength, and provides a mechanism for neural networks to control the representation of underlying synaptic information. I further show that during low-ACh conditions, oscillatory dynamics allow external inputs to be properly stored in, and recalled from, synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is critical for the regulation of information processing tasks in neural networks. These results suggest that ACh is capable of switching networks between two distinct information processing modes: rate coding of information is facilitated during high-ACh conditions, and phase coding of information is facilitated during low-ACh conditions. Finally, I propose that ACh levels control whether a network is in one of three functional states: (high ACh; active waking) optimized for encoding of new information or the stable representation of relevant memories; (mid ACh; resting state or REM) optimized for encoding connections between currently stored memories or searching the catalog of stored memories; and (low ACh; NREM) optimized for renormalization of synaptic strength and memory consolidation. This work provides mechanistic insight into the role of dynamic changes in ACh levels for the encoding, consolidation, and maintenance of memories within the brain.
    PhD dissertation, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147503/1/roachjp_1.pd
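    A minimal sketch of the core mechanism, under assumed parameters rather than the dissertation's fitted model: a leaky integrate-and-fire neuron carries an adaptation current whose per-spike increment is suppressed by ACh. High ACh yields weak SFA and sustained high-rate firing; low ACh yields strong SFA, which slows firing as the adaptation current accumulates.

        import numpy as np

        def adaptive_lif(i_ext, ach=1.0, t_max=500.0, dt=0.1,
                         tau_m=20.0, tau_w=200.0, b=0.1, v_th=1.0, v_reset=0.0):
            """LIF neuron with spike-frequency adaptation (SFA).
            ACh in [0, 1] scales down the adaptation increment: high ACh
            -> weak SFA (tonic firing), low ACh -> strong SFA (adapting).
            All constants are illustrative placeholders."""
            v, w = 0.0, 0.0
            spikes = []
            for step in range(int(t_max / dt)):
                v += dt / tau_m * (-v + i_ext - w)   # membrane with adaptation current w
                w += dt / tau_w * (-w)               # adaptation decays between spikes
                if v >= v_th:
                    spikes.append(step * dt)
                    v = v_reset
                    w += (1.0 - ach) * b             # ACh suppresses the SFA increment
            return spikes

        print(len(adaptive_lif(1.5, ach=1.0)), "spikes at high ACh")
        print(len(adaptive_lif(1.5, ach=0.0)), "spikes at low ACh")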

    Analogue VLSI study of temporally asymmetric Hebbian learning


    The role of excitation and inhibition in learning and memory formation

    The neurons in the mammalian brain can be classified into two broad categories: excitatory and inhibitory neurons. The former have historically been associated with information processing, whereas the latter have been linked to network homeostasis. More recently, inhibitory neurons have been related to several computational roles such as the gating of signal propagation, mediation of network competition, or learning. However, the ways in which excitation and inhibition can regulate learning have not been exhaustively explored. Here we explore several model systems to investigate the role of excitation and inhibition in learning and memory formation. Additionally, we investigate the effect that third factors such as neuromodulators and network state exert over this process. Firstly, we explore the effect of neuromodulators on excitatory neurons and excitatory plasticity. Next, we investigate the plasticity rules governing excitatory connections while the neural network oscillates in a sleep-like cycle, shifting between Up and Down states. We observe that this plasticity rule depends on the state of the network. To study the role of inhibitory neurons in learning, we then investigate the mechanisms underlying place field emergence and consolidation. Our simulations suggest that dendrite-targeting interneurons play an important role both in promoting the emergence of new place fields and in ensuring place field stabilization. Soma-targeting interneurons, on the other hand, are suggested to be related to quick, context-specific changes in the assignment of place and silent cells. We next investigate the mechanisms underlying the plasticity of synaptic connections from specific types of interneurons. Our experiments suggest that different types of interneurons undergo different synaptic plasticity rules. Using a computational model, we implement these plasticity rules in a simplified network. Our simulations indicate that the interaction between the different forms of plasticity accounts for the development of stable place fields across multiple environments. Moreover, these plasticity rules seem to be gated by the postsynaptic membrane voltage. Inspired by these findings, we propose a voltage-based inhibitory synaptic plasticity rule. As a consequence of this rule, network activity is kept under control by the imposition of a maximum pyramidal cell firing rate. Remarkably, this rule does not constrain the postsynaptic firing rate to a narrow range. Overall, through multiple stages of interaction between experiments and computational simulations, we investigate the effect of excitation and inhibition in learning. We propose mechanistic explanations for experimental data, and suggest possible functional implications of experimental findings. Finally, we propose a voltage-based inhibitory synaptic plasticity model as a mechanism for flexible network homeostasis.
    Open Access
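    One way to make the proposed rule concrete is the sketch below, an assumption-laden illustration rather than the thesis' actual formulation: at each inhibitory presynaptic spike, the weight is potentiated in proportion to how far the postsynaptic membrane voltage sits above a gating threshold, and is otherwise mildly depressed. Because potentiation engages only when the cell is too depolarized, the rule caps the maximum firing rate without pinning the rate to a narrow set point.

        import numpy as np

        def inhibitory_update(w, v_post, pre_spike, theta=-55.0,
                              eta=0.01, alpha=0.2, w_max=5.0):
            """Sketch of a voltage-gated inhibitory plasticity rule.
            On a presynaptic inhibitory spike: potentiate inhibition if the
            postsynaptic voltage exceeds the gate theta, else weakly depress.
            Functional form and constants are assumptions for illustration."""
            if not pre_spike:
                return w
            if v_post > theta:                  # postsynaptic cell too depolarized
                w += eta * (v_post - theta)     # strengthen inhibition to pull it down
            else:
                w -= eta * alpha                # mild depression keeps inhibition plastic
            return float(np.clip(w, 0.0, w_max))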

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling coming to an end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computing in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density and online learning capability, and hence paves the way towards a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics, and hence are either incompatible with RRAM synapses or require extensive peripheral circuitry to modulate a synapse, and are thus deficient in learning capability. As a result, they eliminate most of the density advantages gained by the adoption of nanoscale devices, and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental elements for brain-inspired computing. Versatile CMOS spiking neurons are presented that combine, in compact integrated circuit modules, integrate-and-fire dynamics, the capability to drive dense passive RRAM synapse arrays, dynamic biasing for adaptive power consumption, and in situ spike-timing-dependent plasticity (STDP) with competitive learning. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurement results confirmed the approach. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation energy-efficient brain-inspired cognitive computing systems.
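    At a behavioral level, in situ STDP on an RRAM synapse can be sketched as a conductance update driven by the pre/post spike-time difference, with soft bounds standing in for the device's limited conductance range. This is a software caricature of the on-chip mechanism; the constants are placeholders, not measurements from the fabricated chip.

        import numpy as np

        def rram_stdp_update(g, dt, g_min=1e-6, g_max=1e-4,
                             a_plus=0.05, a_minus=0.05, tau=20.0):
            """Behavioral sketch of in situ STDP on an RRAM synapse.
            dt = t_post - t_pre sets the direction of the conductance
            change; soft bounds model the finite device range."""
            if dt > 0:      # pre before post: potentiate toward g_max
                g += a_plus * np.exp(-dt / tau) * (g_max - g)
            else:           # post before pre (or coincident): depress toward g_min
                g -= a_minus * np.exp(dt / tau) * (g - g_min)
            return g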