
    Semantic memory

    The Encyclopedia of Human Behavior, Second Edition is a comprehensive three-volume reference source on human action and reaction, and the thoughts, feelings, and physiological functions behind those actions.

    The role of the posterior fusiform gyrus in reading

    Studies of skilled reading [Price, C. J., & Mechelli, A. Reading and reading disturbance. Current Opinion in Neurobiology, 15, 231–238, 2005], its acquisition in children [Shaywitz, B. A., Shaywitz, S. E., Pugh, K. R., Mencl, W. E., Fulbright, R. K., Skudlarski, P., et al. Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry, 52, 101–110, 2002; Turkeltaub, P. E., Gareau, L., Flowers, D. L., Zeffiro, T. A., & Eden, G. F. Development of neural mechanisms for reading. Nature Neuroscience, 6, 767–773, 2003], and its impairment in patients with pure alexia [Leff, A. P., Crewes, H., Plant, G. T., Scott, S. K., Kennard, C., & Wise, R. J. The functional anatomy of single word reading in patients with hemianopic and pure alexia. Brain, 124, 510–521, 2001] all highlight the importance of the left posterior fusiform cortex in visual word recognition. We used visual masked priming and functional magnetic resonance imaging to elucidate the specific functional contribution of this region to reading and found that (1) unlike words, repetition of pseudowords (“solst-solst”) did not produce a neural priming effect in this region, (2) orthographically related words such as “corner-corn” did produce a neural priming effect, but (3) this orthographic priming effect was reduced when prime-target pairs were semantically related (“teacher-teach”). These findings conflict with the notion of stored visual word forms and instead suggest that this region acts as an interface between visual form information and higher order stimulus properties such as a word's associated sound and meaning. More importantly, this function is not specific to reading but is also engaged when processing any meaningful visual stimulus.

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.
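The key idea in the abstract — that word assemblies are temporarily *bound* to structural roles rather than copied into them, which addresses Jackendoff's "problem of 2" (the same word occurring twice in different roles) — can be sketched as a toy blackboard. All names here (`Blackboard`, the role labels, the example sentences) are illustrative assumptions, not the actual architecture of van der Velde and de Kamps:

```python
# Toy sketch of temporary binding on a blackboard: one assembly per word,
# with structure carried entirely by (structure, role, word) bindings.
class Blackboard:
    def __init__(self):
        self.bindings = []  # (structure_id, role, word) triples

    def bind(self, structure_id, role, word):
        # Bindings are temporary associations, not copies of the word assembly.
        self.bindings.append((structure_id, role, word))

    def fillers(self, structure_id):
        # Recover the role/filler structure of one sentence from the bindings.
        return {role: word for sid, role, word in self.bindings
                if sid == structure_id}

bb = Blackboard()
# "cat" fills the agent role in S1 and the patient role in S2, yet only one
# "cat" entry type exists -- the bindings, not the word, differ (problem of 2).
bb.bind("S1", "agent", "cat")
bb.bind("S1", "verb", "chases")
bb.bind("S1", "patient", "mouse")
bb.bind("S2", "agent", "dog")
bb.bind("S2", "verb", "chases")
bb.bind("S2", "patient", "cat")
print(bb.fillers("S1"))  # {'agent': 'cat', 'verb': 'chases', 'patient': 'mouse'}
```

In the actual neural architecture these bindings are realized by transient gating between neural assemblies; the dictionary here only mimics the resulting role/filler readout.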

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
    Supported by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
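The two-stage search the abstract describes — a coarse spatial hypothesis from scene gist that is then refined by object evidence accumulated over saccades — can be caricatured as Bayesian updating over candidate locations. The priors, likelihoods, and location names below are invented for illustration; this is not the ARTSCENE Search model itself, which is a neural dynamical system, only the evidence-accumulation logic it embodies:

```python
# Toy evidence accumulation: a gist-based prior over locations ("Where")
# is multiplied by per-fixation object evidence ("What") and renormalised.
def search(locations, gist_prior, object_evidence):
    """Return the posterior over target locations after all fixations."""
    posterior = dict(gist_prior)
    for fixation in object_evidence:       # one evidence map per saccade
        for loc in locations:
            posterior[loc] *= fixation.get(loc, 1.0)
        total = sum(posterior.values())
        posterior = {loc: p / total for loc, p in posterior.items()}
    return posterior

locations = ["counter", "floor", "ceiling"]
# A kitchen gist makes counter-height locations likely a priori ...
gist_prior = {"counter": 0.6, "floor": 0.3, "ceiling": 0.1}
# ... and two fixations find sink-like features near the counter.
object_evidence = [{"counter": 2.0}, {"counter": 1.5, "floor": 0.5}]
posterior = search(locations, gist_prior, object_evidence)
print(max(posterior, key=posterior.get))  # counter
```

The design choice mirrors the abstract: the prior does the fast "first glance" work, while the per-fixation updates play the role of the incremental refinement during scanning.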

    The Neural Mechanisms Underlying the Influence of Pavlovian Cues on Human Decision Making

    In outcome-specific transfer, Pavlovian cues that are predictive of specific outcomes bias action choice toward actions associated with those outcomes. This transfer occurs despite no explicit training of the instrumental actions in the presence of Pavlovian cues. The neural substrates of this effect in humans are unknown. To address this, we scanned 23 human subjects with functional magnetic resonance imaging while they made choices between different liquid food rewards in the presence of Pavlovian cues previously associated with one of these outcomes. We found behavioral evidence of outcome-specific transfer effects in our subjects, as well as differential blood oxygenation level-dependent activity in a region of ventrolateral putamen when subjects chose, respectively, actions consistent and inconsistent with the Pavlovian-predicted outcome. Our results suggest that choosing an action incompatible with a Pavlovian-predicted outcome might require the inhibition of feasible but nonselected action–outcome associations. The results of this study are relevant for understanding how marketing actions can affect consumer choice behavior as well as how environmental cues can influence drug-seeking behavior in addiction.

    Goldilocks Forgetting in Cross-Situational Learning

    Given that there is referential uncertainty (noise) when learning words, to what extent can forgetting filter some of that noise out and be an aid to learning? Using a cross-situational learning model, we find a U-shaped function of errors indicative of a "Goldilocks" zone of forgetting: an optimum store-loss ratio that is neither too aggressive nor too weak, but just the right amount to produce better learning outcomes. Forgetting acts as a high-pass filter that actively deletes (part of) the referential ambiguity noise, retains intended referents, and effectively amplifies the signal. The model achieves this performance without incorporating any specific cognitive biases of the type proposed in the constraints and principles account, and without any prescribed developmental changes in the underlying learning mechanism. Instead, we interpret the model performance as more of a by-product of exposure to input, where the associative strengths in the lexicon grow as a function of linguistic experience in combination with memory limitations. The result adds a mechanistic explanation for the experimental evidence on spaced learning and, more generally, advocates integrating domain-general aspects of cognition, such as memory, into the language acquisition process.
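The mechanism the abstract describes — associative strengths that grow with word-referent co-occurrence while a decay term removes spurious associations — can be sketched in a few lines. The function names, the `decay` parameter standing in for the store-loss ratio, and the example trials are all assumptions made for illustration, not the authors' implementation:

```python
# Toy cross-situational learner: each trial pairs a word with a set of
# candidate referents (the intended one plus distractors). Co-occurrence
# strengthens associations; per-trial decay plays the forgetting role.
def cross_situational_learning(trials, gain=1.0, decay=0.3):
    strengths = {}  # (word, referent) -> associative strength
    for word, referents in trials:
        # Forgetting: every stored association loses a fraction of strength,
        # acting as a high-pass filter on one-off spurious pairings.
        for pair in strengths:
            strengths[pair] *= (1.0 - decay)
        # Learning: every co-present referent gains credit for the word.
        for ref in referents:
            strengths[(word, ref)] = strengths.get((word, ref), 0.0) + gain
    return strengths

def best_referent(strengths, word):
    candidates = {r: s for (w, r), s in strengths.items() if w == word}
    return max(candidates, key=candidates.get)

# The intended referent co-occurs with "ball" on every trial; the
# distractor changes each time, so decay erodes its traces.
trials = [
    ("ball", {"BALL", "DOG"}),
    ("ball", {"BALL", "CUP"}),
    ("ball", {"BALL", "TREE"}),
]
strengths = cross_situational_learning(trials, decay=0.3)
print(best_referent(strengths, "ball"))  # BALL
```

Sweeping `decay` in such a model exposes the U-shape the abstract reports: near 0, distractor noise accumulates alongside the signal; near 1, the consistent referent's history is wiped between trials; intermediate values filter noise while preserving the repeatedly reinforced pairing.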