
    Whatever after next? Adaptive predictions based on short- and long-term memory in visual search

    Generating predictions for task-relevant goals is a fundamental requirement of human information processing, as it ensures adaptive success in our complex natural environment. Clark (in press) proposed a model of hierarchical predictive processing, in which perception, attention, and learning are unified within a coherent framework. In this view, incoming sensory signals are constantly matched against top-down expectations, or predictions, with the aim of minimizing the prediction error so as to generate adaptive behavior. For example, in a natural environment such as a kitchen, search for a given target object (e.g., a pan) might be guided by a variety of predictive cues generated from previously acquired knowledge, such as the target’s typical appearance (e.g., its color, size, and shape, as specified by a top-down search template). In addition, predictions can be derived from contextual factors, such as the most probable location of the target (e.g., on the stove) and its typical co-occurrence with other objects (e.g., pan and kettle; see Oliva and Torralba, 2007; Wolfe et al., 2011, for reviews).
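
    The prediction-error minimization at the heart of this framework can be captured in a minimal Python sketch (the simple delta-rule form, the function name, and the learning-rate value below are illustrative assumptions, not taken from Clark or from the paper): a top-down prediction is repeatedly compared with the incoming signal, and the mismatch drives an update that pulls the expectation toward the environment.

        def predictive_coding_step(prediction, sensory_input, learning_rate=0.1):
            # Compare the top-down prediction with the bottom-up signal;
            # the residual is the prediction error the system tries to minimize.
            error = sensory_input - prediction
            # Nudge the prediction a small step in the direction of the error.
            return prediction + learning_rate * error, error

        # Illustrative run: a constant sensory signal of 1.0. The prediction
        # converges on the signal, and the prediction error shrinks toward zero.
        prediction = 0.0
        for t in range(20):
            prediction, error = predictive_coding_step(prediction, 1.0)
            print(f"t={t:2d}  prediction={prediction:.3f}  error={error:+.3f}")

    In hierarchical versions of the framework, an update of this kind runs at every level, with each level predicting the activity of the level below it.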

    Taking attention out of context: Frontopolar Transcranial Magnetic Stimulation abolishes the formation of new context memories in visual search

    This study investigates the causal contribution of the left frontopolar cortex (FPC) to the processing of violated expectations arising from learned target-distractor spatial contingencies during visual search. The experiment consisted of two phases: learning and test. Participants searched for targets presented either among repeated or nonrepeated target-distractor configurations. Prior research has shown that repeated encounters with identically arranged displays lead to memory for these arrays, which can then come to guide search (the contextual cueing effect). The crucial manipulation was a change of the target location, within an otherwise constant distractor layout, at the transition from learning to test. In addition to this change, we applied repetitive transcranial magnetic stimulation (rTMS) over the left lateral FPC, over a posterior control site, or no rTMS at all (baseline; between-group manipulation) to examine how FPC rTMS influences observers’ ability to adapt context-based memories acquired in the learning phase. The learning phase showed expedited search in repeated relative to nonrepeated displays, with this context-based facilitation being comparable across all experimental groups. For the test phase, the recovery of cueing was critically dependent on the stimulation site: Although there was evidence of context adaptation toward the end of the experiment in the occipital and no-rTMS conditions, observers with FPC rTMS showed no evidence of relearning at all after the target location changes. This finding shows that the FPC plays an important role in the regulation of prediction errors in statistical context learning, thus contributing to the updating of spatial target-distractor contingencies after target position changes in learned spatial arrays.

    Sleep-effects on implicit and explicit memory in repeated visual search

    In repeated visual search tasks, facilitation of reaction times (RTs) due to repetition of the spatial arrangement of items occurs independently of RT facilitation due to improvements in general task performance. Whereas the latter represents typical procedural learning, the former is a kind of implicit memory that depends on the medial temporal lobe (MTL) memory system and is impaired in patients with amnesia. A third type of memory that develops during visual search is observers’ explicit knowledge of repeated displays. Here, we used a visual search task to investigate whether procedural memory, implicit contextual cueing, and explicit knowledge of repeated configurations, which all arise independently from the same set of stimuli, are influenced by sleep. Observers participated in two experimental sessions, separated by either a nap or a controlled rest period. In each of the two sessions, they performed a visual search task in combination with an explicit recognition task. We found that (1) across sessions, MTL-independent procedural learning was more pronounced for the nap than for the rest group, confirming earlier findings, albeit from different motor and perceptual tasks, that procedural memory can benefit from sleep. (2) Likewise, the sleep group showed enhanced context-dependent configural learning relative to the rest group in the second session. This is a novel finding, indicating that the MTL-dependent, implicit memory underlying contextual cueing is also sleep-dependent. (3) By contrast, the sleep and wake groups displayed equivalent improvements in explicit recognition memory in the second session. Overall, the current study shows that sleep affects MTL-dependent as well as MTL-independent memory, but it affects different, albeit simultaneously acquired, forms of MTL-dependent memory differentially.

    Statistical learning in the past modulates contextual cueing in the future

    Observers’ ability to extract statistical regularities from the visual world can facilitate attentional orienting. For instance, visual search benefits from the repetition of target locations by means of probability learning. Furthermore, repeated (old) contexts of nontargets support faster visual search in comparison to random (new) arrangements of nontargets. Chun and Jiang (1998) called this effect “contextual cueing,” because old contexts provide spatial cues to repeated target locations. In the present study, we investigated how probability learning modulates the adaptation of contextual cueing to a change in target location. After an initial learning phase, targets were relocated within their respective contexts to new positions that were, however, familiar from previous presentations in other spatial contexts. Contextual cueing was observed for relocated targets that originated from old contexts, but it turned into a cost when relocated targets had previously been presented in new contexts. Thus, probability learning was not sufficient to produce adaptive contextual cueing for relocated targets. Instead, the contextual past of target locations, that is, whether or not they had previously been cued, modulated the integration of relocated targets into a learned context. These findings imply that observers extract multiple levels of the available statistical information and use them to form hypotheses about future occurrences of familiar stimuli.
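
    The paradigm’s basic logic can be sketched as a toy simulation in Python (a hypothetical generative model; the baseline RT, per-epoch benefit, trial counts, and noise level are made-up illustrative values, not estimates from this study): repeated (old) displays accrue a search-time benefit across learning epochs, random (new) displays do not, and the old-new difference indexes the contextual cueing effect.

        import random

        def search_rt(context_is_old, epoch, base_rt=900.0,
                      benefit_per_epoch=15.0, noise_sd=60.0):
            # Old (repeated) contexts earn a learning-dependent RT benefit;
            # new (random) contexts reflect baseline search time plus noise.
            benefit = benefit_per_epoch * epoch if context_is_old else 0.0
            return random.gauss(base_rt - benefit, noise_sd)

        random.seed(1)
        for epoch in range(1, 6):
            old = sum(search_rt(True, epoch) for _ in range(24)) / 24
            new = sum(search_rt(False, epoch) for _ in range(24)) / 24
            print(f"epoch {epoch}: old={old:4.0f} ms  new={new:4.0f} ms  "
                  f"cueing = {new - old:4.0f} ms")

    In these terms, the study’s target-relocation manipulation amounts to remapping which target position an old context predicts; whether the learned benefit transfers, vanishes, or turns into a cost is exactly what the reported data address.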

    In the eye of the listener: pupil dilation elucidates discourse processing.

    The current study investigated cognitive resource allocation in discourse processing by means of pupil dilation and behavioral measures. Short question-answer dialogs were presented to listeners. The context question either queried a new information focus in the subsequent answer, or else it was itself corrected in the answer sentence (correction information). The information foci contained in the answer sentences were either adequately highlighted by prosodic means or not. Participants had to judge the adequacy of the focus prosody with respect to the preceding context question. Prosodic judgment accuracy was higher in the conditions with adequate focus prosody than in the conditions with inadequate focus prosody. The latency to peak pupil dilation was longer for new information foci than for correction foci. Moreover, for the peak dilation, an interaction of focus type and prosody was found: post hoc statistical tests revealed that prosodically adequate correction foci were processed with a smaller peak dilation than all other dialog conditions. Thus, the pupil dilation data, together with the results of a principal component analysis, suggest an interaction of focus type and focus prosody in discourse processing.