
    A common dynamic prior for time in duration discrimination

    Estimation of time depends heavily on both global and local statistical context. Durations that are short relative to the global distribution are systematically overestimated; durations that are locally preceded by long durations are also overestimated. Context effects are prominent in duration discrimination tasks, where a standard duration and a comparison duration are presented on each trial. In this study, we compare and test two models that posit a dynamically updating internal reference that biases time estimation on global and local scales in duration discrimination tasks. The internal reference model suggests that the internal reference operates during postperceptual stages and only interacts with the first presented duration. In contrast, a Bayesian account of time estimation implies that any perceived duration updates the internal reference and therefore interacts with both the first and second presented duration. We implemented both models and tested their predictions in a duration discrimination task where the standard duration varied from trial to trial. Our results are in line with a Bayesian perspective on time estimation. First, the standard systematically biased estimation of the comparison, such that shorter standards increased the likelihood of reporting that the comparison was shorter. Second, both the previous standard and comparison systematically biased time estimation of subsequent trials in the same direction. Third, more precise observers showed smaller biases. In sum, our findings suggest a common dynamic prior for time that is updated by each perceived duration and where the relative weighting of old and new observations is determined by their relative precision.
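
    A minimal sketch of the precision-weighted updating this account describes, assuming Gaussian representations of the prior and of each incoming duration (the function name, the Gaussian form, and the example values are illustrative assumptions, not the authors' implementation):

        # Precision-weighted update of a dynamic prior for duration
        # (illustrative sketch assuming a Gaussian prior and Gaussian likelihood).
        def update_prior(prior_mean, prior_var, observed_duration, sensory_var):
            prior_precision = 1.0 / prior_var
            sensory_precision = 1.0 / sensory_var
            posterior_precision = prior_precision + sensory_precision
            # Old and new observations are weighted by their relative precision.
            posterior_mean = (prior_precision * prior_mean
                              + sensory_precision * observed_duration) / posterior_precision
            return posterior_mean, 1.0 / posterior_precision

        # Example: the standard and the comparison both update the same prior (values in ms).
        mean, var = 600.0, 200.0 ** 2
        for duration in (500.0, 800.0):  # standard, then comparison
            mean, var = update_prior(mean, var, duration, sensory_var=100.0 ** 2)

    Under such a scheme, a more precise observer (smaller sensory variance) weights the current duration more heavily relative to the prior, which would be consistent with the smaller biases reported above.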

    Target Cueing Provides Support for Target- and Resource-Based Models of the Attentional Blink

    The attentional blink (AB) describes a time-based deficit in processing the second of two masked targets. The AB is attenuated if successive targets appear between the first and final target, or if a cueing target is positioned before the final target. Using various speeds of stimulus presentation, the current study employed successive targets and cueing targets to confirm and extend an understanding of target-target cueing in the AB. In Experiment 1, three targets were presented sequentially at rates of 30 msec/item or 90 msec/item. Successive targets presented at 90 msec/item improved performance compared with non-successive targets. However, accuracy was equivalently high for successive and non-successive targets presented at 30 msec/item, suggesting that, regardless of whether they occurred consecutively, those items fell within the temporally defined attentional window initiated by the first target. Using four different presentation speeds, Experiment 2 confirmed the time-based definition of the AB and the success of target cueing at 30 msec/item. This experiment additionally revealed that cueing was most effective when resources were not devoted to the cue, thereby implicating capacity limitations in the AB. Across both experiments, a novel order-error measure suggested that errors tend to decrease with increasing duration between the targets, but also revealed that certain stimulus conditions result in stable order accuracy. Overall, the results are best encapsulated by target-based and resource-sharing theories of the AB, which collectively value the contributions of capacity limitations and optimizing transient attention in time.

    Analysis


    Adaptive event integration in the missing element task


    Temporal integration and attentional selection of color and contrast target pairs in rapid serial visual presentation

    Performance in a dual-target rapid serial visual presentation task was investigated as a function of whether the color or the contrast of the targets was the same or different. Both identification accuracy on the second target, as a measure of temporal attention, and the frequency of temporal integration were measured. When targets had a different color (red or blue), overall identification accuracy of the second target and identification accuracy of the second target at Lag 1 were both higher than when targets had the same color. At the same time, increased temporal integration of the targets at Lag 1 was observed in the different color condition, even though actual (non-integrated) single targets never consisted of multiple colors. When the color pairs were made more similar, so that they all fell within the range of a single nominal hue (blue), these effects were not observed. Different findings were obtained when contrast was manipulated. Identification accuracy of the second target was higher in the same contrast condition than in the different contrast condition. Higher identification accuracy of both targets was furthermore observed when they were presented with high contrast, while target contrast did not influence temporal integration at all. Temporal attention and integration were thus influenced differently by target contrast pairing than by (categorical) color pairing. Categorically different color pairs, or more generally, categorical feature pairs, may thus afford a reduction in temporal competition between successive targets that eventually enhances attention and integration.

    Impulse perturbation reveals cross-modal access to sensory working memory through learned associations

    MVPA of EEG data is used to investigate visual memories encoded after visual presentation and after retrieval from long-term memory via auditory cues.
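
    A minimal sketch of the kind of time-resolved MVPA decoding this description points to, assuming epoched EEG data in an array of shape (trials, channels, time points); the array layout, classifier, and cross-validation scheme are illustrative assumptions, not the project's actual pipeline:

        # Time-resolved decoding of memory content from epoched EEG (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def decode_over_time(epochs, labels, cv=5):
            """epochs: (n_trials, n_channels, n_times); labels: (n_trials,) memory-content codes."""
            clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
            scores = np.empty(epochs.shape[2])
            for t in range(epochs.shape[2]):
                # Classify the memorized item from the spatial pattern at each time point.
                scores[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
            return scores

    Applied to epochs following the visual impulse, above-chance decoding would indicate that the memory content is accessible in the EEG signal regardless of whether it was encoded visually or retrieved via an auditory cue.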

    Between one event and two: The locus of the effect of stimulus contrast on temporal integration

    This project contains the behavioral and EEG data, as well as the analysis scripts, belonging to the article "Between one event and two: The locus of the effect of stimulus contrast on temporal integration" by Akyurek & Wijnja, which is currently in press at Psychophysiology.

    Data


    Data
