
    Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

    Differential Encoding of Factors Influencing Predicted Reward Value in Monkey Rostral Anterior Cingulate Cortex

    Background: The value of a predicted reward can be estimated from the conjunction of the intrinsic reward value and the length of time needed to obtain it. The question we addressed is how these two aspects, reward size and proximity to reward, influence the responses of neurons in the rostral anterior cingulate cortex (rACC), a brain region thought to play an important role in reward processing. Methods and Findings: We recorded from single neurons while two monkeys performed a multi-trial reward schedule task. The monkeys performed 1–4 sequential color discrimination trials to obtain a reward of 1–3 liquid drops. There were two task conditions: a valid cue condition, in which the number of trials and the reward amount were associated with visual cues, and a random cue condition, in which the cue was picked at random from the cue set. In the valid cue condition, neuronal firing was strongly modulated by the predicted reward proximity during the trials, whereas information about the predicted reward amount was almost absent at those times. In substantial subpopulations, neuronal responses decreased or increased gradually as the schedule progressed toward the predicted outcome. These two gradually modulating signals could be used to calculate the effect of time on the perception of reward value. In the random cue condition, little information about reward proximity or reward amount was encoded during the course of a trial before reward delivery, but when the reward was actually delivered the responses reflected both the reward proximity and the reward amount.
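The idea that two gradually modulating signals could compute the effect of time on perceived reward value can be sketched with a simple discounting function. This is purely illustrative: the abstract does not commit to any functional form, and the hyperbolic rule, the function name `discounted_value`, and the rate `k` below are hypothetical stand-ins borrowed from the wider reward literature.

```python
def discounted_value(reward_drops, trials_remaining, k=0.5):
    """Hypothetical hyperbolic discounting: the predicted value of a
    reward falls as more trials remain before it is delivered.
    `k` is an illustrative discount rate, not a fitted parameter."""
    return reward_drops / (1.0 + k * trials_remaining)

# A 3-drop reward three trials away is worth less than the same
# reward delivered on the current trial.
near = discounted_value(3, 0)  # 3.0
far = discounted_value(3, 3)   # 1.2
```

Under this sketch, a neuron whose firing gradually increases with schedule progress would track the denominator shrinking as `trials_remaining` falls.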

    Short-Term Memory Trace in Rapidly Adapting Synapses of Inferior Temporal Cortex

    Visual short-term memory tasks depend upon both the inferior temporal cortex (ITC) and the prefrontal cortex (PFC). Activity in some neurons persists after the first (sample) stimulus is shown. This delay-period activity has been proposed as an important mechanism for working memory. In ITC neurons, intervening (nonmatching) stimuli wipe out the delay-period activity; hence, the role of ITC in memory must depend upon a different mechanism. Here, we look for a possible mechanism by contrasting memory effects in two architectonically different parts of ITC: area TE and the perirhinal cortex. We found that a large proportion (80%) of stimulus-selective neurons in area TE of the macaque ITC exhibit a memory effect during the stimulus interval. During a sequential delayed matching-to-sample (DMS) task, the noise in the neuronal response to the test image was correlated with the noise in the neuronal response to the sample image. Neurons in perirhinal cortex did not show this correlation. These results led us to hypothesize that area TE contributes to short-term memory by acting as a matched filter. When the sample image appears, each TE neuron captures a static copy of its inputs by rapidly adjusting its synaptic weights to match the strength of their individual inputs. Input signals from subsequent images are multiplied by those synaptic weights, thereby computing a measure of the correlation between the past and present inputs. The total activity in area TE is sufficient to quantify the similarity between the two images. This matched filter theory provides an explanation of what is remembered, where the trace is stored, and how comparison is done across time, all without requiring delay period activity. Simulations of a matched filter model match the experimental results, suggesting that area TE neurons store a synaptic memory trace during short-term visual memory.
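The matched-filter mechanism described in this abstract can be sketched in a few lines. This is an illustration, not the authors' simulation: the pool size, noise level, and random stimulus encodings below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 256  # illustrative pool size

def store_sample(inputs):
    """Sample presentation: each neuron rapidly sets its synaptic weight
    to match the strength of its current input (a static copy)."""
    return inputs.copy()

def population_output(weights, inputs):
    """Test presentation: current inputs are multiplied by the stored
    weights, so total population activity measures the correlation
    between the past (sample) and present (test) inputs."""
    return float(np.sum(weights * inputs))

sample = rng.random(n_neurons)
# A matching test image is a noisy repeat of the sample.
match = np.clip(sample + 0.05 * rng.standard_normal(n_neurons), 0, 1)
nonmatch = rng.random(n_neurons)

w = store_sample(sample)
match_out = population_output(w, match)
nonmatch_out = population_output(w, nonmatch)
```

Because the weights copy the sample, the summed output is larger for the repeated image than for an unrelated one, so a single threshold on total activity suffices for the match/nonmatch decision, with no delay-period activity required.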

    Performance of the matched filter across the full set of stimulus pairs by the stochastic model simulation.

    The left column shows the 8 stimuli presented to the model as the sample, and the top row shows the 8 stimuli presented as a match or nonmatch. The intersection of each row and column is a 16×16-pixel image made up of the responses of the 256 model TE neurons. The diagonal (with slope −1) shows the matched filter outputs for the eight sample-match pairs; the off-diagonals show the matched filter outputs for the 56 sample-nonmatch pairs. The total power (normalized to 1.0 for the peak of the 64-pair set, in this example the S7-S7 sample-match) is shown above each output. With the threshold set to 0.225, the model made the fewest mistakes (false alarms, red values). The green values show correct matches, and the blue responses show correct nonmatches. With the noise in the model adjusted to match that in the monkeys, the model got 62/64 = 97% of the trials correct; the average performance across the two monkeys was 98%.
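The decision rule this caption describes can be sketched as a normalized output matrix over all sample/test pairs, thresholded to separate matches from nonmatches. The Gaussian stimulus encodings here are stand-ins, not the authors' eight stimuli; only the threshold value (0.225) is taken from the caption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_neurons = 8, 256
stimuli = rng.standard_normal((n_stim, n_neurons))  # stand-in encodings

# Matched-filter output for every sample (row) x test (column) pair,
# normalized so the peak of the 64-pair set equals 1.0.
outputs = stimuli @ stimuli.T
outputs /= outputs.max()

threshold = 0.225          # decision threshold reported in the caption
decisions = outputs > threshold  # True means the model reports "match"

matches_hit = int(np.diag(decisions).sum())       # diagonal pairs detected
false_alarms = int(decisions.sum()) - matches_hit  # off-diagonal errors
```

With distinct encodings, the diagonal (sample-match) outputs sit near 1.0 while off-diagonal (sample-nonmatch) outputs cluster near 0, so a single threshold classifies nearly every pair correctly, mirroring the behavior the caption reports.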

    Noise-driven output of a matched filter.

    The top row shows a sample image made up of white noise uniformly distributed on [0, 1], the template, and the product of the two. The second row shows the effect of averaging across many such representations. The signal-to-noise ratio improves rapidly as the size of the pool being averaged increases, but even the output from a single sample (top row) looks somewhat like the filter: the response to the noise input has revealed some cells with selectivity for the template image. Similarly, neuronal activity seen between stimulus presentations may be the result of noisy inputs to matched-filter cells.
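The averaging effect in this caption can be reproduced in a minimal sketch, assuming uniform white-noise inputs and an arbitrary stand-in template:

```python
import numpy as np

rng = np.random.default_rng(2)
template = rng.random(256)  # stand-in for a cell pool's stored weights

def averaged_noise_output(pool_size):
    """Element-wise product of the template with `pool_size` independent
    white-noise inputs (uniform on [0, 1]), averaged across the pool.
    Returns the correlation of that average with the template itself."""
    noise = rng.random((pool_size, template.size))
    avg = (noise * template).mean(axis=0)
    return float(np.corrcoef(avg, template)[0, 1])

single = averaged_noise_output(1)     # noisy, but already template-like
pooled = averaged_noise_output(1000)  # close to a clean copy
```

Even the single-sample output correlates noticeably with the template, and the correlation approaches 1 as the pool grows, which is the SNR improvement the caption describes.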

    Predictions of response deviations by the deterministic model simulation.

    (A, B) Predictions of the deviations for TE (A) and perirhinal (B) neurons. The left column shows predictions of the deviations from the mean in the sample responses compared to the actual sample response deviations; the right column shows predictions for nonmatch response deviations. Variance explained is low in TE but zero in perirhinal cortex, a rough match to our data. (Format as in Figure 5: http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000073#pcbi-1000073-g005.)