
    Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions

    A neural model is presented that explains how outcome-specific learning modulates affect, decision-making and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex. National Science Foundation (SBE-0354378; IIS-97-20333); Office of Naval Research (N00014-01-1-0624); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952)

    Dopaminergic and Non-Dopaminergic Value Systems in Conditioning and Outcome-Specific Revaluation

    Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination.
Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952, R01-DC007683); National Science Foundation (IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-01-1-0624)
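The dopamine burst and dip regulation described in this abstract can be illustrated with a minimal sketch. This is not the MOTIVATOR circuit itself; it only captures the opponent logic the abstract describes, in which an excitatory reward-related input is opposed by a learned, timed inhibitory prediction. All function names and parameter values below are illustrative assumptions.

```python
def dopamine_signal(excitatory_input, timed_prediction, baseline=0.2):
    """Phasic dopamine as baseline plus (excitation - timed inhibition),
    floored at zero. Values are illustrative, not model parameters."""
    return max(0.0, baseline + excitatory_input - timed_prediction)

# Before learning: reward arrives unpredicted -> burst above baseline.
burst = dopamine_signal(excitatory_input=1.0, timed_prediction=0.0)

# After learning: the timed prediction cancels the predicted reward,
# so the response returns to baseline.
cancelled = dopamine_signal(excitatory_input=1.0, timed_prediction=1.0)

# After learning, reward omitted: timed inhibition arrives alone,
# producing a dip below baseline.
dip = dopamine_signal(excitatory_input=0.0, timed_prediction=1.0)
```

The ordering burst > cancelled > dip mirrors the burst/dip pattern the model attributes to pedunculopontine excitation opposed by striosomal inhibition.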

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
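The two-stage guidance described in this abstract, a first-glance scene-gist hypothesis refined by evidence accumulated over successive fixations, can be sketched as a simple belief update. This is an assumed probabilistic reading for illustration, not the ARTSCENE Search circuit; all numbers below are made up.

```python
import numpy as np

def refine_target_belief(gist_prior, fixation_likelihoods):
    """Multiply a prior over candidate locations by per-fixation evidence
    and renormalise; the peak indicates the next search priority."""
    belief = np.asarray(gist_prior, dtype=float)
    for like in fixation_likelihoods:
        belief = belief * np.asarray(like, dtype=float)
        belief = belief / belief.sum()
    return belief

prior = [0.5, 0.3, 0.2]         # first-glance hypothesis from scene gist
evidence = [[0.2, 0.5, 0.3],    # each fixation enhances target-like objects
            [0.1, 0.7, 0.2]]
belief = refine_target_belief(prior, evidence)
```

Here the gist prior initially favours location 0, but accumulated object evidence shifts the belief toward location 1, mirroring the incremental refinement the model proposes.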

    Coherence and recurrency: maintenance, control and integration in working memory

    Working memory (WM), including a ‘central executive’, is used to guide behavior by internal goals or intentions. We suggest that WM is best described as a set of three interdependent functions which are implemented in the prefrontal cortex (PFC). These functions are maintenance, control of attention and integration. A model for the maintenance function is presented, and we will argue that this model can be extended to incorporate the other functions as well. Maintenance is the capacity to briefly maintain information in the absence of corresponding input, and even in the face of distracting information. We will argue that maintenance is based on recurrent loops between PFC and posterior parts of the brain, and probably within PFC as well. In these loops information can be held temporarily in an active form. We show that a model based on these structural ideas is capable of maintaining a limited number of neural patterns. Not the size, but the coherence of patterns (i.e., a chunking principle based on synchronous firing of interconnected cell assemblies) determines the maintenance capacity. A mechanism that optimizes coherent pattern segregation also poses a limit to the number of assemblies (about four) that can concurrently reverberate. Top-down attentional control (in perception, action and memory retrieval) can be modelled by the modulation and re-entry of top-down information to posterior parts of the brain. Hierarchically organized modules in PFC create the possibility for information integration. We argue that large-scale multimodal integration of information creates an ‘episodic buffer’, and may even suffice for implementing a central executive.
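The capacity limit argued for in this abstract, roughly four concurrently reverberating assemblies, can be illustrated under a phase-coding reading of synchrony: each coherent assembly occupies its own firing window within a slower cycle, so the number of non-overlapping windows caps capacity. This reading and all numbers are assumptions for illustration, not the paper's implementation.

```python
def max_concurrent_assemblies(cycle_ms=125.0, assembly_window_ms=30.0):
    """Number of non-overlapping assembly firing windows per slow cycle.
    A ~8 Hz (125 ms) cycle and ~30 ms per assembly are illustrative values."""
    return int(cycle_ms // assembly_window_ms)
```

With these assumed values the sketch yields about four patterns, matching the limit the abstract attributes to coherent pattern segregation.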

    Toward a further understanding of object feature binding: a cognitive neuroscience perspective.

    The aim of this thesis is to lead to a further understanding of the neural mechanisms underlying object feature binding in the human brain. The focus is on information processing and integration in the visual system and visual short-term memory. From a review of the literature it is clear that there are three major competing binding theories; however, none of these individually solves the binding problem satisfactorily. Thus the aim of this research is to conduct behavioural experimentation into object feature binding, paying particular attention to visual short-term memory. The behavioural experiment was designed and conducted using a within-subjects delayed response task comprising a battery of sixty-four composite objects, each with three features and four dimensions, in each of three conditions (spatial, temporal and spatio-temporal). Findings from the experiment, which focus on spatial and temporal aspects of object feature binding and the effect of feature proximity on binding errors, support the spatial theories of object feature binding; in addition, we propose that temporal theories and convergence, through hierarchical feature analysis, are also involved. Because spatial properties have a dedicated processing neural stream, whereas temporal properties rely on limited-capacity memory systems, memories for sequential information are likely to be more difficult to recall accurately. Our study supports other studies which suggest that both spatial and temporal coherence, to differing degrees, may be involved in object feature binding. Traditionally, these theories have purported to provide individual solutions, but this thesis proposes a novel unified theory of object feature binding in which hierarchical feature analysis, spatial attention and temporal synchrony each plays a role. It is further proposed that binding takes place in visual short-term memory through concerted and integrated information processing in distributed cortical areas.
A cognitive model detailing this integrated proposal is given. Next, the cognitive model is used to inform the design and suggested implementation of a computational model which would be able to test the theory put forward in this thesis. In order to verify the model, future work is needed to implement the computational model. Thus it is argued that this doctoral thesis provides valuable experimental evidence concerning spatio-temporal aspects of the binding problem and as such is an additional building block in the quest for a solution to the object feature binding problem.

    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)

    Semantic Associations between Signs and Numerical Categories in the Prefrontal Cortex

    The utilization of symbols such as words and numbers as mental tools endows humans with unrivalled cognitive flexibility. In the number domain, a fundamental first step for the acquisition of numerical symbols is the semantic association of signs with cardinalities. We explored the primitives of such a semantic mapping process by recording single-cell activity in the monkey prefrontal and parietal cortices, brain structures critically involved in numerical cognition. Monkeys were trained to associate visual shapes with varying numbers of items in a matching task. After this long-term learning process, we found that the responses of many prefrontal neurons to the visual shapes reflected the associated numerical value in a behaviorally relevant way. In contrast, such association neurons were rarely found in the parietal lobe. These findings suggest a cardinal role of the prefrontal cortex in establishing semantic associations between signs and abstract categories, a cognitive precursor that may ultimately give rise to symbolic thinking in linguistic humans.

    Interactions between dorsal and ventral streams for controlling skilled grasp

    The two visual systems hypothesis suggests processing of visual information into two distinct routes in the brain: a dorsal stream for the control of actions and a ventral stream for the identification of objects. Recently, increasing evidence has shown that the dorsal and ventral streams are not strictly independent, but do interact with each other. In this paper, we argue that the interactions between dorsal and ventral streams are important for controlling complex object-oriented hand movements, especially skilled grasp. Anatomical studies have reported the existence of direct connections between dorsal and ventral stream areas. These physiological interconnections appear to be gradually more active as the precision demands of the grasp become higher. It is hypothesised that the dorsal stream needs to retrieve detailed information about object identity, stored in ventral stream areas, when the object properties require complex fine-tuning of the grasp. In turn, the ventral stream might receive up-to-date grasp-related information from dorsal stream areas to refine the object's internal representation. Future research will provide direct evidence for which specific areas of the two streams interact, the timing of their interactions and the behavioural contexts in which they occur.

    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model has a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology.
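The three-threshold update described in this abstract can be sketched directly: plasticity is restricted to synapses from active inputs, is switched off when the local field lies outside the outer thresholds, and otherwise potentiates or depresses depending on the field's position relative to the intermediate threshold. The threshold values, learning rate, and function name below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def three_threshold_update(w, x, h, theta_low=-1.0, theta_mid=0.0,
                           theta_high=1.0, lr=0.01):
    """One online update of the three-threshold rule (sketch).

    w : (N, N) recurrent weight matrix (rows = postsynaptic neurons)
    x : (N,) binary pattern imposed by strong afferent input
    h : (N,) local fields (recurrent synaptic input to each neuron)
    """
    # No plasticity above the highest or below the lowest threshold.
    in_band = (h > theta_low) & (h < theta_high)
    # In between: potentiation/depression when the field is
    # above/below the intermediate threshold.
    sign = np.where(h >= theta_mid, 1.0, -1.0)
    # Only synapses corresponding to active presynaptic inputs change.
    dw = lr * np.outer(in_band * sign, x)
    np.fill_diagonal(dw, 0.0)          # no self-connections
    return np.clip(w + dw, 0.0, None)  # excitatory weights stay non-negative

w0 = np.zeros((4, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])       # active inputs
h = np.array([0.5, -0.5, 2.0, -2.0])     # local fields
w1 = three_threshold_update(w0, x, h)
```

In this toy update, neuron 0 (field inside the band, above the intermediate threshold) potentiates its synapses from active inputs, while neurons 2 and 3 (fields outside the band) are left unchanged.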

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.
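The temporary-binding idea central to this abstract can be caricatured with a toy blackboard: word representations are transiently bound to structural roles rather than copied, so the same word can fill roles in several clauses (Jackendoff's "problem of 2"), and bindings can be released without destroying the word representations themselves. This data structure is an illustrative assumption, not the paper's neural circuit.

```python
class Blackboard:
    """Toy sketch: transient (role, clause) -> word bindings."""

    def __init__(self):
        self.bindings = {}

    def bind(self, role, clause, word):
        # Binding is a temporary association, not a copy of the word.
        self.bindings[(role, clause)] = word

    def retrieve(self, role, clause):
        return self.bindings.get((role, clause))

    def clear(self):
        # Bindings decay when working memory is released; word
        # representations themselves persist elsewhere.
        self.bindings.clear()

bb = Blackboard()
# "The dog chases the cat": each word is bound to its structural role.
bb.bind("subject", 0, "dog")
bb.bind("verb", 0, "chases")
bb.bind("object", 0, "cat")
# Problem of 2: the same word can fill a different role in another clause.
bb.bind("subject", 1, "cat")
```

Reusing "cat" as both object of clause 0 and subject of clause 1 requires no duplicate representation, which is the point of binding by transient connections rather than by copying.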