
    Prior experience modulates top-down predictive processing in the ventral visual areas

    Repetition suppression (RS) refers to the reduction of neural activity for repeated presentations of a given stimulus compared with its first presentation. Summerfield et al. (2008) found that the magnitude of RS is affected by the repetition probability of stimuli, termed the P(rep) effect. According to predictive coding theory, prior experience with sensory inputs is necessary for optimal cognitive processing, but it remains unclear how prior experience modulates predictive processes. To address this issue, Study I estimated the P(rep) effects for Chinese characters and German words in native Chinese and German participants, testing whether prior experience affects the P(rep) effect for lexical stimuli. The results showed that the P(rep) effect is manifest only for words of a language with which participants have prior experience. Study II performed fMRI measurements before and after a 10-day perceptual learning (PL) training with cars to test the modulation of short-term experience on the P(rep) effect. The results replicated the P(rep) effect for faces and cars; more interestingly, the P(rep) effect was temporarily abolished by the short-term PL experience. The third study investigated how prior experience modulates the processing of sensory inputs. Study 3a adopted a classic stimulus repetition paradigm to measure RS for faces, together with either a concurrent short-term memory (STM) load or a control condition. The results showed that RS is significantly attenuated when visual STM is loaded. Study 3b additionally manipulated attention with a face inversion detection task; the RS effect appears in the STM condition when participants attend to the faces. The main conclusions are: (i) predictive processes, as measured by the P(rep) effect, require extensive prior experience with the stimuli; (ii) these processes can nevertheless be modulated by short-term learning experience; and (iii) STM and attention are two modulators of the influence of prior experience on predictive processes.

    Updating contextual sensory expectations for adaptive behaviour

    The brain has the extraordinary capacity to construct predictive models of the environment by internalizing statistical regularities in sensory inputs. The resulting sensory expectations shape how we perceive and react to the world; at the neural level, this manifests as decreased neural responses to expected compared with unexpected stimuli ('expectation suppression'). Crucially, expectations may need revision as the context changes; however, existing research has often neglected this issue. Further, it is unclear whether contextual revisions apply selectively to expectations relevant to the task at hand, thereby serving adaptive behaviour. The present fMRI study examined how contextual visual expectations spread throughout the cortical hierarchy as participants update their beliefs. We created a volatile environment with two state spaces presented over separate contexts and controlled by an independent contextualizing signal. Participants attended a training session before scanning to learn contextual temporal associations among pairs of object images. The fMRI experiment then tested for the emergence of contextual expectation suppression in two separate tasks, with task-relevant and task-irrelevant expectations respectively. Behavioural and neural effects of contextual expectation emerged progressively across the cortical hierarchy as participants attuned themselves to the context: expectation suppression appeared first in the insula, inferior frontal gyrus and posterior parietal cortex, followed by the ventral visual stream, up to early visual cortex. This applied selectively to task-relevant expectations. Taken together, the present results suggest that an insular and frontoparietal executive control network may guide the flexible deployment of contextual sensory expectations for adaptive behaviour in our complex and dynamic world.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization and Cortico-Striatal Activation

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations and, potentially, neural activation in the cortico-striatal system.

    Prediction related phenomena of visual perception

    Perception is grounded in our ability to optimize predictions about upcoming events. Such predictions depend both on the incoming sensory input and on our previously acquired conceptual knowledge. Correctly predicted, or expected, sensory stimuli induce reduced responses compared to incorrectly predicted, surprising inputs. Predictions enable efficient neuronal encoding, so that less energy is invested in interpreting redundant sensory stimuli. Several neuronal phenomena are consequences of predictions, such as repetition suppression (RS) and mismatch negativity (MMN). RS is the reduced neuronal response to a stimulus upon its repeated presentation. MMN is the electrophysiological response difference between rare and frequent stimuli in an oddball sequence. While both are currently studied extensively, the underlying mechanisms of RS and MMN, as well as their relation to predictions, remain poorly understood. In the current thesis, four experiments were devised to investigate prediction-related phenomena dependent on the repetition probability of stimuli. Two studies deal with the RS phenomenon, while the other two investigate the MMN response. In Experiment 1, the temporal dynamics underlying prediction and RS effects were tested. Participants were presented with expected and surprising stimulus pairs at two different inter-stimulus intervals (0.5 s for Immediate and 1.75 or 3.75 s for Delayed target presentation). These pairs could either repeat or alternate. Expectations were contingent on face gender and were manipulated via the repetition probability. We found that the prediction effects do not depend on the length of the ISI period, suggesting that Immediate and Delayed cue-target stimulus arrangements create similar expectation effects. In order to elucidate the neuronal mechanisms underlying these prediction effects (i.e. surprise enhancement or expectation suppression), in our second study we employed the experimental design of the first experiment with the addition of random events as a control. We found that surprising events elicit stronger Blood Oxygen Level Dependent (BOLD) responses than random events, implying that predictions influence neuronal responses via surprise enhancement. Similarly, the third experiment was designed to disentangle which neural mechanism underlies the visual MMN (vMMN). We compared the responses to stimuli (chairs, faces, real and false characters) presented in conventional oddball sequences with responses to the same stimuli in control sequences (Kaliukhovich and Vogels, 2014). We found that the neural mechanisms underlying the vMMN are category dependent: the vMMN for faces and chairs was due to RS, while the vMMN response for real and false characters was mainly driven by surprise-related changes. So far, no study had used category-specific regions of interest (ROIs) to examine the neuroimaging correlates of the vMMN. Therefore, in the fourth experiment, we recorded electrophysiological and neuroimaging data from the same participants with an oddball paradigm for real and false characters. We found a significant correlation between the vMMN (CP1 cluster at 400 ms) and functional magnetic resonance imaging adaptation (in the letter form area for real characters), suggesting a strong relationship between them. Taking the four studies together, it is clear that surprise plays an important role in prediction-related phenomena. The role of surprise is discussed in the light of these results and other recent developments reported in the literature. Overall, this thesis suggests the unification of RS and MMN within the framework of predictive coding.

    Interpreting EEG and MEG signal modulation in response to facial features: the influence of top-down task demands on visual processing strategies

    The visual processing of faces is a fast and efficient feat that our visual system usually accomplishes many times a day. The N170 (an Event-Related Potential) and the M170 (an Event-Related Magnetic Field) are thought to be prominent markers of the face perception process in the ventral stream of visual processing that occur ~ 170 ms after stimulus onset. The question of whether face processing at the time window of the N170 and M170 is automatically driven by bottom-up visual processing only, or whether it is also modulated by top-down control, is still debated in the literature. However, it is known from research on general visual processing, that top-down control can be exerted much earlier along the visual processing stream than the N170 and M170 take place. I conducted two studies, each consisting of two face categorization tasks. In order to examine the influence of top-down control on the processing of faces, I changed the task demands from one task to the next, while presenting the same set of face stimuli. In the first study, I recorded participants’ EEG signal in response to faces while they performed both a Gender task and an Expression task on a set of expressive face stimuli. Analyses using Bubbles (Gosselin & Schyns, 2001) and Classification Image techniques revealed significant task modulations of the N170 ERPs (peaks and amplitudes) and the peak latency of maximum information sensitivity to key facial features. However, task demands did not change the information processing during the N170 with respect to behaviourally diagnostic information. Rather, the N170 seemed to integrate gender and expression diagnostic information equally in both tasks. In the second study, participants completed the same behavioural tasks as in the first study (Gender and Expression), but this time their MEG signal was recorded in order to allow for precise source localisation. 
After determining the active sources during the M170 time window, a Mutual Information analysis in connection with Bubbles was used to examine voxel sensitivity to both the task-relevant and the task-irrelevant face category. When a face category was relevant for the task, sensitivity to it was usually higher and peaked in different voxels than sensitivity to the task-irrelevant face category. In addition, voxels predictive of categorization accuracy were shown to be sensitive only to task-relevant, behaviourally diagnostic facial features. I conclude that facial feature integration during both the N170 and the M170 is subject to top-down control. The results are discussed against the background of known face processing models and current research findings on visual processing.
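    The Mutual Information analysis mentioned above quantifies the statistical dependence between a presented feature and a neural response. A minimal sketch of discrete MI on toy data (the variable names and values below are purely illustrative, not the thesis's actual pipeline):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x)p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: a binary feature-visibility variable vs a binarized voxel response.
feature = [0, 0, 1, 1, 0, 1, 0, 1]
response = [0, 0, 1, 1, 0, 1, 1, 0]
print(round(mutual_information(feature, response), 3))
```

    In practice such estimates are computed per voxel and time point and corrected for sampling bias, but the core quantity is the one above.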

    Statistical learning is not error-driven

    Prediction errors have a prominent role in many forms of learning. For example, in reinforcement learning, agents learn by updating the association between states and outcomes as a function of the prediction error elicited by the event. An empirical hallmark of such error-driven learning is Kamin blocking, whereby the association between a stimulus and an outcome is only learnt when the outcome is not already fully predicted by another stimulus. It remains debated, however, to what extent error-driven computations underlie the learning of automatically formed associations, as in statistical learning. Here we asked whether the automatic and incidental learning of the statistical structure of the environment is error-driven, like reinforcement learning, or instead does not rely on prediction errors for learning associations. We addressed this issue in a series of Kamin blocking studies. In three consecutive experiments, we observed robust incidental statistical learning of temporal associations among pairs of images, but no evidence of blocking. Our results suggest that statistical learning is not error-driven but may rather follow the principles of basic Hebbian associative learning.
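    The error-driven account against which this abstract argues can be illustrated with a minimal Rescorla-Wagner simulation of Kamin blocking (hypothetical parameter values; a sketch of the standard textbook model, not the authors' analysis):

```python
# Rescorla-Wagner: each cue's associative strength is updated in
# proportion to the prediction error (outcome minus summed prediction).
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (set_of_cues, outcome in {0, 1})."""
    V = {}  # associative strength per cue
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = lam * outcome - prediction       # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Phase 1: cue A alone predicts the outcome (50 trials).
# Phase 2: the compound AB predicts the same outcome (50 trials).
V = rescorla_wagner([({"A"}, 1)] * 50 + [({"A", "B"}, 1)] * 50)

# A already fully predicts the outcome after phase 1, so the error in
# phase 2 is ~0 and B acquires almost no strength: Kamin blocking.
print(round(V["A"], 2), round(V["B"], 2))
```

    A pure Hebbian co-occurrence learner, by contrast, would strengthen the B-outcome association in phase 2 regardless of how well A already predicts the outcome, which is the pattern the blocking experiments above were designed to discriminate.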

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner's (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    Between prediction and reality: top-down propagation, communication and modulation

    Expectations predict upcoming visual information, facilitating its disambiguation from noisy input and speeding up behavior. However, the neural mechanisms that support such dynamic feature flow, and how they facilitate behavior, remain unclear. In this thesis, I trace the propagation, communication and modulation effects related to prediction. In the first study (Chapter 2), I introduced a cueing-categorization design and validated its feasibility. I first assessed participants' sensitivity in distinguishing auditory pitches (i.e., the cues) by estimating their d-primes. Next, in two phases, I had participants learn the coupling between auditory cues and stimuli, and showed that informative predictions, compared with non-informative ones, significantly reduce reaction times. Recognizing confounding factors in the first study, I then refined the design by manipulating two specific and separate predicted contents: an experiment that cued participants (N = 11) to the spatial location (left vs. right) and spatial frequency (SF; low, LSF, vs. high, HSF) of an upcoming Gabor patch. In the following two studies, I reconstructed two networks (a prediction network and a categorization network) from simultaneous MEG recordings of each participant's neural activity. In the second study (Chapter 3), focusing on the pre-stimulus prediction stage, I examined when, where and how predictions dynamically propagate through a network of brain regions. I traced the dynamic neural representation of the predictive cues and reconstructed the communication of predicted contents in a functional network, proceeding from the temporal lobe at 90-120 ms to the occipital cortex after 200 ms, with modulatory supervision from frontal regions at 120-200 ms. In the third study (Chapter 4), turning to the post-stimulus stage, I reconstructed the communication network propagating the stimulus feature from occipital-ventral regions (150-250 ms) to the parietal lobe (250-350 ms), finally arriving at the premotor cortex (>350 ms), which modulates behavioral categorization. I found that the prior prediction previewed and then sharpened the stimulus representation across the categorization network, leading to faster reaction times. I discuss the generalization of these findings to other stimulus features and sensory modalities. Going forward, I plan a series of structured studies on predicting higher-dimensional features, aiming to understand the neural mechanisms by which prediction tunes perception and to trace the concrete predicted contents across laminar layers with combined E/MEG and fMRI.
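    The cue-discrimination check described in this abstract relies on d-prime, the standard signal-detection sensitivity index. A minimal computation with illustrative counts (not the study's actual data) could look like:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so neither rate is exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(far)

# Illustrative counts from a two-pitch discrimination task:
# 45 hits, 5 misses, 10 false alarms, 40 correct rejections.
print(round(d_prime(45, 5, 10, 40), 2))
```

    A d' near 0 indicates chance-level discrimination of the cues, while values around 2 or above indicate that participants can reliably tell the pitches apart, a prerequisite for the cues to carry predictive information.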