
    Grounding Word Learning in Space

    Humans and objects, and thus social interactions about objects, exist within space. Words direct listeners' attention to specific regions of space. Thus, a strong correspondence exists between where one looks, one's bodily orientation, and what one sees. This leads to further correspondence with what one remembers. Here, we present data suggesting that children use associations between space and objects and between space and words to link words and objects: space binds labels to their referents. We tested this claim in four experiments, showing that the spatial consistency of where objects are presented affects children's word learning. Next, we demonstrate that a process model that grounds word learning in the known neural dynamics of spatial attention, spatial memory, and associative learning can capture the suite of results reported here. This model also predicts that space is special, a prediction supported in a fifth experiment showing that children do not use color as a cue to bind words and objects. In a final experiment, we ask whether spatial consistency affects word learning in naturalistic contexts. Children of parents who spontaneously keep objects in a consistent spatial location during naming interactions learn words more effectively. Together, the model and data show that space is a powerful tool that can effectively ground word learning in social contexts.
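    As a rough illustration of the binding mechanism the abstract describes, the sketch below implements a toy Hebbian scheme in which words and objects each become associated with spatial locations, and word-object links are then recovered through the location they share. The array names, sizes, and update rule are illustrative assumptions, not the authors' process model (which is grounded in neural dynamics of spatial attention and memory).

        import numpy as np

        # Toy sketch: words and objects each bind to locations; a word-object
        # link emerges through their shared location. All names and the update
        # rule are assumptions for illustration, not the authors' model.
        n_locations, n_words, n_objects = 5, 3, 3
        word_loc = np.zeros((n_words, n_locations))   # word -> location traces
        obj_loc = np.zeros((n_objects, n_locations))  # object -> location traces
        rate = 0.5                                    # associative learning rate

        def naming_event(word, obj, loc):
            """One naming event strengthens word-location and object-location traces."""
            word_loc[word, loc] += rate * (1 - word_loc[word, loc])
            obj_loc[obj, loc] += rate * (1 - obj_loc[obj, loc])

        # Spatially consistent naming: each object is always named in its own spot.
        for _ in range(4):
            for i in range(3):
                naming_event(word=i, obj=i, loc=i)

        # Word-object binding mediated by space: project through the shared axis.
        binding = word_loc @ obj_loc.T
        print(binding.argmax(axis=1))  # each word maps to its own object: [0 1 2]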

    Brain Responses to Violet, Blue, and Green Monochromatic Light Exposures in Humans: Prominent Role of Blue Light and the Brainstem

    BACKGROUND: Relatively long duration retinal light exposure elicits nonvisual responses in humans, including modulation of alertness and cognition. These responses are thought to be mediated in part by melanopsin-expressing retinal ganglion cells, which are more sensitive to blue light than to violet or green light. The contribution of the melanopsin system and the brain mechanisms involved in establishing such responses to light remain to be established. METHODOLOGY/PRINCIPAL FINDINGS: We exposed 15 participants to short duration (50 s) monochromatic violet (430 nm), blue (473 nm), and green (527 nm) light exposures of equal photon flux (10^13 photons/cm^2/s) while they performed a working memory task during fMRI. At light onset, blue light, as compared to green light, increased activity in the left hippocampus, left thalamus, and right amygdala. During the task, blue light, as compared to violet light, increased activity in the left middle frontal gyrus, left thalamus, and a bilateral area of the brainstem consistent with activation of the locus coeruleus. CONCLUSION/SIGNIFICANCE: These results support a prominent contribution of melanopsin-expressing retinal ganglion cells to brain responses to light within the very first seconds of an exposure. They also demonstrate the involvement of the brainstem in mediating these responses in humans and argue for a broad role of light in the regulation of brain function.
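    Because the three exposures were matched for photon flux rather than energy, the corresponding irradiance differs across wavelengths (photon energy is hc/lambda). A quick check of that arithmetic, assuming the stated flux of 10^13 photons/cm^2/s applies to all three wavelengths:

        # Photon-energy arithmetic for the stated flux of 10^13 photons/cm^2/s;
        # standard physics, not values reported in the paper itself.
        h = 6.626e-34   # Planck constant, J*s
        c = 2.998e8     # speed of light, m/s
        flux = 1e13     # photons / cm^2 / s

        for name, nm in [("violet", 430), ("blue", 473), ("green", 527)]:
            photon_energy = h * c / (nm * 1e-9)       # joules per photon
            irradiance_uW = flux * photon_energy * 1e6
            print(f"{name} ({nm} nm): {irradiance_uW:.2f} uW/cm^2")

    Shorter wavelengths thus carry slightly more energy per photon, so the violet exposure delivers the highest irradiance (about 4.6 uW/cm^2 versus roughly 3.8 uW/cm^2 for green).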

    Spatial Modulation of Primate Inferotemporal Responses by Eye Position

    Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information. Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity. Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with the ventral stream and spatial processing with the dorsal stream.
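    The "modulates responses without changing selectivity" pattern is what a multiplicative gain field would produce: the neuron's shape tuning is scaled by a function of eye position, so the preferred shape is unchanged while firing rates shift. A minimal sketch, with made-up tuning values and a linear gain term rather than anything fitted to the recorded AIT data:

        import numpy as np

        # Multiplicative gain-field sketch: eye position scales shape tuning
        # without reordering it. Tuning values and the linear gain are
        # illustrative assumptions, not fitted to the recorded neurons.
        shape_tuning = np.array([1.0, 0.6, 0.2])   # responses to three 2D shapes

        def eye_gain(gaze_deg, slope=0.05):
            """Linear gain as a function of horizontal gaze angle (degrees)."""
            return 1.0 + slope * gaze_deg

        for gaze in (-8, 0, 8):                    # small shifts from fixation
            r = shape_tuning * eye_gain(gaze)
            print(gaze, np.round(r, 2), "preferred shape:", int(r.argmax()))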

    Representing Where along with What Information in a Model of a Cortical Patch

    Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons (see the toy sketch below). Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects.
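    A toy version of the two ingredients named in the abstract, with sizes, kernels, and threshold-linear dynamics chosen purely for illustration (the paper analyses a more general recurrent model): metric connectivity pulls activity into a local pattern, and a localised gain increase decides where that pattern settles.

        import numpy as np

        # Toy sketch: (a) metric recurrent connectivity whose strength falls
        # off with distance, and (b) a spatially localised gain increase that
        # anchors the retrieved activity pattern. All parameters are
        # assumptions for illustration only.
        n = 100
        x = np.arange(n)
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, n - d)                      # distances on a ring
        W = np.exp(-d**2 / (2 * 5.0**2))              # metric (local) excitation
        W -= W.mean()                                 # global inhibition

        gain = np.ones(n)
        gain[45:55] = 1.5                             # localised gain bump

        r = np.random.default_rng(1).random(n) * 0.1  # weak, incomplete cue
        for _ in range(50):                           # threshold-linear dynamics
            r = np.maximum(gain * (W @ r), 0.0)
            r /= np.linalg.norm(r) + 1e-9             # keep activity bounded

        print("activity settles near the gain bump:", int(r.argmax()))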

    An information theoretic approach to the contributions of the firing rates and the correlations between the firing of neurons.

    To analyze the extent to which populations of neurons encode information in the numbers of spikes each neuron emits or in the relative time of firing of the different neurons, which might reflect synchronization, we developed and analyzed the performance of an information theoretic approach. The formula quantifies the corrections to the instantaneous information rate that result from correlations in spike emission between pairs of neurons. We showed how these cross-cell terms can be separated from the correlations that occur between the spikes emitted by each neuron, the auto-cell terms in the information rate expansion. We also described a method to test whether the estimate of the amount of information contributed by stimulus-dependent synchronization is significant. With simulated data, we show that the approach can separate information arising from the number of spikes emitted by each neuron from the redundancy that can arise if neurons have common inputs, and from the synergy that can arise if cells have stimulus-dependent synchronization. The usefulness of the approach is also demonstrated by showing how it helps to interpret the encoding shown by neurons in the primate inferior temporal (IT) visual cortex. When applied to a sample dataset of simultaneously recorded IT neurons, the algorithm showed that most of the information is available in the number of spikes emitted by each cell; that there is typically just a small degree (approximately 12%) of redundancy between simultaneously recorded IT neurons; and that there is very little gain of information arising from stimulus-dependent synchronization effects in these neurons.
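    For the simplest version of the question the approach addresses, one can compare the information carried jointly by two cells with the sum of their individual informations: a negative difference indicates redundancy, a positive one synergy. The sketch below computes that plain comparison on made-up discrete spike counts; it is not the paper's series-expansion estimator, only the quantity it is designed to decompose.

        import numpy as np
        from collections import Counter

        # Plain I(R1,R2;S) - I(R1;S) - I(R2;S) comparison on made-up discrete
        # spike counts; negative = redundancy, positive = synergy.

        def mutual_info(pairs):
            """I(X;Y) in bits from a list of (x, y) samples."""
            n = len(pairs)
            pxy = Counter(pairs)
            px = Counter(x for x, _ in pairs)
            py = Counter(y for _, y in pairs)
            return sum((c / n) * np.log2(c * n / (px[x] * py[y]))
                       for (x, y), c in pxy.items())

        rng = np.random.default_rng(0)
        stim = rng.integers(0, 2, 2000)              # two stimuli
        r1 = stim + rng.integers(0, 2, 2000)         # cell 1 spike counts
        r2 = stim + rng.integers(0, 2, 2000)         # cell 2, same signal

        i1 = mutual_info(list(zip(r1, stim)))
        i2 = mutual_info(list(zip(r2, stim)))
        ij = mutual_info(list(zip(zip(r1, r2), stim)))
        print(f"I1={i1:.3f} I2={i2:.3f} Ijoint={ij:.3f} "
              f"difference={ij - i1 - i2:.3f}  (negative = redundant)")

    Because both simulated cells carry the same stimulus signal through common input, the joint information falls short of the sum of the single-cell terms, which is the redundancy pattern the abstract reports for IT neurons.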