
    Multiple Time Intervals of Visual Events Are Represented as Discrete Items in Working Memory

    Previous studies on time perception and temporal memory have focused primarily on single time intervals; it is still unclear how multiple time intervals are perceived and maintained in working memory. In the present study, using Sternberg’s item recognition task, we compared working memory for multiple items with different time intervals and visual textures, for sub- and supra-second ranges, and investigated the characteristics of working memory representation within the framework of signal detection theory. In Experiments 1–3, gratings with different spatial frequencies and time intervals were sequentially presented as study items, followed by another grating as a probe. Participants determined whether the probe matched one of the study gratings, in either the temporal dimension or the visual dimension. The results exhibited typical working memory characteristics, such as the effects of memory load, serial position, and probe–study similarity, for both time intervals and visual textures. However, there were some differences between the two conditions. Specifically, the recency effect for time intervals was smaller, or even absent, compared to that for visual textures. Further, compared with visual textures, sub-second intervals were more likely to be judged as remembered in working memory. In addition, we found interactions between visual texture memory and time interval memory, and this visual–interval binding differed between sub- and supra-second ranges. Our results indicate that multiple time intervals are stored as discrete items in working memory, similarly to visual textures, but the former might be more susceptible to decay than the latter. The differences in binding between sub- and supra-second ranges imply that working memory for the two ranges may differ at a relatively late decision stage.
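
    As a rough illustration of the signal detection framework referred to above (a minimal sketch; the function names and the hit/false-alarm rates are hypothetical, not taken from the study), sensitivity and response criterion in an item recognition task can be computed as follows:

```python
# Minimal sketch of a signal-detection analysis for an item recognition task.
# The hit/false-alarm rates below are made-up illustrations, not study data.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response criterion c = -(z(H) + z(FA)) / 2; negative c = liberal bias."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Example: a tendency to judge sub-second intervals as "remembered" could show
# up as a more liberal criterion (lower c) rather than as higher sensitivity.
print(d_prime(0.80, 0.20))    # ~1.68
print(criterion(0.80, 0.20))  # 0.0 (no bias in this symmetric example)
print(criterion(0.90, 0.40))  # negative -> liberal, "yes"-prone responding
```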

    Interference and feature specificity in visual perceptual learning

    Perceptual learning (PL) often shows specificity to a trained feature. We investigated whether feature specificity is related to disruption of PL using the texture discrimination task (TDT), which shows learning specificity to the background elements but not to the target elements. Learning was disrupted when the orientation of the background elements was changed between two successive training sessions (interference), but not when it changed in a random order from trial to trial (roving). The presentation of target elements seemed to have the reverse effect: learning occurred with two-part training but not with roving. These results suggest that interference in TDT is feature specific, while disruption by roving is not.

    Location-Specific Cortical Activation Changes during Sleep after Training for Perceptual Learning

    Visual perceptual learning is defined as performance enhancement on a sensory task and is distinguished from other types of learning and memory in that it is highly specific to the location of the trained stimulus. This location specificity has been shown to be paralleled by an enhanced functional magnetic resonance imaging (fMRI) signal in the trained region of V1 after visual training. Although the role of sleep in strengthening visual perceptual learning has recently attracted much attention, its underlying neural mechanism has yet to be clarified. Here, for the first time, fMRI measurement of human V1 activation was conducted concurrently with polysomnography during sleep, with and without preceding training for visual perceptual learning. A predetermined region-of-interest analysis revealed that activation enhancement during non-rapid-eye-movement sleep after training was specific to the trained region of V1. Furthermore, the improvement in task performance measured after the post-training sleep session was significantly correlated with the amount of trained-region-specific fMRI activation in V1 during sleep. These results suggest that, as far as V1 is concerned, only the trained region is involved in improving task performance after sleep.

    Defining a Link between Perceptual Learning and Attention

    Takeo Watanabe and Yuko Yotsumoto explore the implications of a new study showing that, for perceptual learning of visual features involving multiple stimuli to occur, the brain needs to temporally "tag" the features, a learning process that requires paying attention.

    The Blursday database as a resource to study subjective temporalities during COVID-19

    The COVID-19 pandemic and associated lockdowns triggered worldwide changes in the daily routines of human experience. The Blursday database provides repeated measures of subjective time and related processes from participants in nine countries, tested on 14 questionnaires and 15 behavioural tasks during the COVID-19 pandemic. A total of 2,840 participants completed at least one task, and 439 participants completed all tasks in the first session. The database and all data collection tools are accessible to researchers for studying the effects of social isolation on temporal information processing, time perspective, decision-making, sleep, metacognition, attention, memory, self-perception and mindfulness. Blursday includes quantitative measures such as sleep patterns, personality traits, psychological well-being and lockdown indices. The database provides quantitative insights into the effects of lockdown (stringency and mobility) and subjective confinement on time perception (duration, passage of time and temporal distances). Perceived isolation affects time perception, and we report an inter-individual central tendency effect in retrospective duration estimation.
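
    The central tendency effect mentioned above is conventionally quantified as the slope of a regression of estimated on actual durations, with slopes below 1 indicating that estimates regress toward the mean of the tested range. A minimal sketch under that convention (all durations and estimates are fabricated for illustration, not Blursday data):

```python
# Sketch: quantifying a central tendency effect as a regression slope.
# The durations and estimates below are fabricated for illustration only.
import numpy as np

actual = np.array([2.0, 4.0, 8.0, 16.0, 32.0])     # presented durations (s)
estimated = np.array([3.1, 4.8, 8.2, 13.5, 24.0])  # hypothetical reports (s)

# Ordinary least squares: estimated ~ slope * actual + intercept.
slope, intercept = np.polyfit(actual, estimated, 1)

# slope < 1: long durations underestimated, short ones overestimated,
# i.e. estimates are pulled toward the mean of the tested range.
central_tendency_index = 1.0 - slope
print(f"slope = {slope:.2f}, central tendency index = {central_tendency_index:.2f}")
```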

    Behavioral data

    Behavioral data of each subject.

    Data from: Opposite distortions in interval timing perception for visual and auditory stimuli with temporal modulations

    When an object presented visually moves or flickers, its perceived duration tends to be overestimated. Such overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented together with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.
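
    In comparison tasks of this kind, over- and underestimation are typically read off the point of subjective equality (PSE) of a fitted psychometric function. The following is a generic sketch of that analysis, not the study's actual pipeline; the standard duration, comparison durations, and response proportions are all made up:

```python
# Sketch: estimating a point of subjective equality (PSE) from a duration
# comparison task. All numbers are illustrative, not the study's data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Comparison durations (ms) and hypothetical proportion of
# "comparison seemed longer than the 600 ms standard" responses.
comparison = np.array([400, 480, 560, 640, 720, 800], dtype=float)
p_longer = np.array([0.05, 0.15, 0.45, 0.75, 0.90, 0.98])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P('longer') as a function of comparison duration."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, comparison, p_longer, p0=[600.0, 80.0])

# For a flickering stimulus, time dilation would show up as a PSE below the
# 600 ms standard: a shorter flicker already matches the standard's duration.
print(f"PSE = {pse:.0f} ms, slope (sigma) = {sigma:.0f} ms")
```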

    Near-optimal integration of the magnitude information of time and numerosity

    Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g. time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e. time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. Consistent with the predictions of the maximum-likelihood estimation model, the participants integrated time and numerosity in a near-optimal manner; the weight of each dimension was proportional to its relative reliability, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached a statistical optimum as the temporal discrepancy between the acquisition of the two pieces of information became smaller. These results suggest that magnitude interaction arises through a computational mechanism similar to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.
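
    The maximum-likelihood estimation model referenced here has a standard closed form: each cue is weighted in proportion to its reliability (inverse variance), and the integrated estimate is predicted to be more reliable than either cue alone. A minimal sketch with hypothetical single-cue estimates and variances:

```python
# Sketch of maximum-likelihood cue integration (reliability-weighted averaging).
# The single-cue estimates and variances below are hypothetical placeholders.
def integrate(est_time, var_time, est_num, var_num):
    """Combine two cues optimally: weights proportional to inverse variance."""
    w_time = (1 / var_time) / (1 / var_time + 1 / var_num)
    w_num = 1 - w_time
    combined_est = w_time * est_time + w_num * est_num
    # The optimal combined variance is smaller than either single-cue variance.
    combined_var = (var_time * var_num) / (var_time + var_num)
    return combined_est, combined_var

est, var = integrate(est_time=1.2, var_time=0.04, est_num=1.0, var_num=0.09)
print(f"combined estimate = {est:.3f}, variance = {var:.4f}")  # ~1.138, ~0.0277
```

    Comparing the empirically measured weights and combined variance against these predictions is the standard test for near-optimal integration.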

    Own Voice Experiment data

    Each worksheet corresponds to one experiment.