
    Processing irrelevant location information: practice and transfer effects in a Simon task.

    How humans produce cognitively driven fine motor movements is a question of fundamental importance to how we interact with the world around us. We are exposed to a constant stream of information and must select the most relevant parts of it to guide our actions. In the present study, we employed a well-known behavioral assay, the Simon task, to better understand how humans learn to filter out irrelevant information. We trained subjects for four days with a visual stimulus presented, alternately, in central and lateral locations. Subjects responded with one hand by moving a joystick to the left or right. They were instructed to ignore the irrelevant location information and respond based on color (e.g., red to the right and green to the left). On the fifth day, an additional testing session was conducted in which the task changed: subjects now responded based on shape (e.g., triangle to the right and rectangle to the left) and were instructed to ignore both color and location, responding solely on the task-relevant shape. We found that the magnitude of the Simon effect decreases with training; however, it returns in the first few trials after a break. Furthermore, task-defined associations between response direction and color did not significantly affect the Simon effect based on shape, and no significant associative learning of specific stimulus-response features was found for the centrally located stimuli. We discuss how these results are consistent with a model involving route suppression/gating of the irrelevant location information. Much of the learning appears to be driven by subjects learning to suppress irrelevant location information; however, this suppression seems to be an active inhibition process that requires a few trials of experience to engage.
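    The Simon effect itself is just a reaction-time difference between conflicting and non-conflicting trials. A minimal sketch of how it might be quantified, with hypothetical trial records (field names and values are illustrative, not taken from the study):

    import statistics

    # Hypothetical trial records: reaction time (ms), stimulus location,
    # and the correct response direction for that trial's color.
    trials = [
        {"rt": 412, "location": "left",  "response": "left"},   # congruent
        {"rt": 455, "location": "left",  "response": "right"},  # incongruent
        {"rt": 431, "location": "right", "response": "right"},  # congruent
        {"rt": 468, "location": "right", "response": "left"},   # incongruent
    ]

    def simon_effect(trials):
        """Mean RT on incongruent lateral trials minus mean RT on
        congruent lateral trials. Central trials carry no location
        conflict and are excluded."""
        lateral = [t for t in trials if t["location"] in ("left", "right")]
        congruent = [t["rt"] for t in lateral if t["location"] == t["response"]]
        incongruent = [t["rt"] for t in lateral if t["location"] != t["response"]]
        return statistics.mean(incongruent) - statistics.mean(congruent)

    print(f"Simon effect: {simon_effect(trials):.1f} ms")  # -> Simon effect: 40.0 ms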

    High resolution, high capacity, spatial specificity in perceptual learning.

    Research on perceptual learning has received significant interest due to findings that training on perceptual tasks can yield learning effects specific to the stimulus features of that task. However, recent studies have demonstrated that while training a single stimulus at a single location can yield a high degree of stimulus specificity, training multiple features, or training at multiple locations, can produce broad transfer of learning to untrained features or stimulus locations. We devised a high-resolution, high-capacity perceptual learning procedure to test whether spatial specificity can be found when observers are highly trained to discriminate stimuli at many different locations in the visual field. We found a surprising degree of location-specific learning: performance was significantly better when target stimuli were presented at 1 of the 24 trained locations than at 1 of the 12 untrained locations. This result is particularly impressive given that untrained locations were within a couple of degrees of visual angle of those that were trained. Given the large number of trained locations, the fact that trained and untrained locations were interspersed, and the high degree of spatial precision of the learning, we suggest that these results are difficult to account for by attention or decision strategies, and instead suggest that learning may have taken place for each location separately in retinotopically organized visual cortex.

    Task-Irrelevant Perceptual Learning Specific to the Contrast Polarity of Motion Stimuli

    Studies of perceptual learning have focused on aspects of learning related to early stages of sensory processing. However, the conclusion that perceptual learning results in low-level sensory plasticity remains highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus that targets motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning of a stimulus is achieved by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells, with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity, and that this learning does not transfer to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells. This work was supported by CELEST, an NSF Science of Learning Center (SBE-0354378); the Defense Advanced Research Projects Agency SyNAPSE program (HR0011-09-3-0001, HR001-09-C-0011); the National Science Foundation (BCS-0549036); and the National Institutes of Health (R21 EY017737).
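    One simplified way to build a dot field in which direction information is carried only by onsets is to spawn each frame's new dots displaced from the previous frame's onsets along the signal direction, while extinguishing every dot after a random lifetime so that offsets occur at motion-unrelated times. The sketch below follows those assumptions; it is not the paper's actual stimulus algorithm, and all parameters are illustrative:

    import random

    FIELD = 1.0                 # unit square, coordinates wrap around
    STEP = 0.02                 # onset-to-onset displacement per frame
    MIN_LIFE, MAX_LIFE = 4, 8   # random lifetimes decouple offsets from motion

    def run_stimulus(n_frames=20, onsets_per_frame=30, direction=(1.0, 0.0)):
        """Each frame spawns fresh dot onsets displaced by STEP from the
        previous frame's onsets, so successive onsets define the motion
        direction; every dot then persists for a random lifetime, so its
        offset carries no coherent direction signal."""
        frames = []       # one list of visible dot positions per frame
        active = []       # (x, y, frames_remaining)
        prev_onsets = [(random.random(), random.random())
                       for _ in range(onsets_per_frame)]
        for _ in range(n_frames):
            onsets = [((x + STEP * direction[0]) % FIELD,
                       (y + STEP * direction[1]) % FIELD)
                      for x, y in prev_onsets]
            active += [(x, y, random.randint(MIN_LIFE, MAX_LIFE))
                       for x, y in onsets]
            active = [(x, y, t - 1) for x, y, t in active if t > 0]
            frames.append([(x, y) for x, y, _ in active])
            prev_onsets = onsets
        return frames

    frames = run_stimulus()   # rightward-motion sequence, onset-defined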

    Word-decoding as a function of temporal processing in the visual system.

    This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper, using the method of limits with a 1-deg-diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that presented Landolt C targets randomly in four cardinal orientations, at 3 radial distances from a fixation point, at eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved in temporal modulation and spatial processing may affect the ease with which people read.
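    An ascending method-of-limits run reduces to a simple sweep: step the flicker frequency upward until the observer first reports steady (fused) light, then take the threshold between the last two steps. A minimal sketch with a simulated observer; the function name and parameters are illustrative, and real protocols alternate ascending and descending runs and average them:

    def method_of_limits(reports_flicker, start_hz=10.0, stop_hz=60.0, step_hz=1.0):
        """Ascending sweep for a critical flicker fusion threshold.
        `reports_flicker(freq)` returns True while flicker is still
        visible at `freq`; the threshold is taken as the midpoint of the
        last two frequencies once fusion is first reported."""
        freq = start_hz
        while freq <= stop_hz:
            if not reports_flicker(freq):
                return freq - step_hz / 2.0   # fusion between last two steps
            freq += step_hz
        return None                           # no fusion in the tested range

    # Simulated observer whose true fusion threshold is 42 Hz.
    print(method_of_limits(lambda f: f < 42.0))  # -> 41.5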

    Performance-monitoring integrated reweighting model of perceptual learning

    Perceptual learning (PL) has traditionally been regarded as highly specific to stimulus properties, task, and retinotopic position. This view is being progressively challenged, with accumulating evidence that learning can generalize (transfer) across various parameters under certain conditions. For example, retinotopic specificity can be diminished when the proportion of easy to hard trials is high, such as when multiple short staircases, instead of a single long one, are used during training. To date, there is a paucity of mechanistic explanations of which conditions affect transfer of learning. Here we present a model based on the popular Integrated Reweighting Theory model of PL, but departing from its one-layer architecture by including a novel key feature: dynamic weighting of retinotopic-location-specific versus location-independent representations based on internal performance estimates of those representations. This dynamic weighting is closely related to gating in a mixture-of-experts architecture. Our dynamic performance-monitoring model (DPMM) unifies a variety of psychophysical data on transfer of PL, such as the short-vs-long staircase effect, as well as several findings from the double-training literature. Furthermore, the DPMM makes testable predictions and ultimately helps explain the mechanisms of generalization of PL, with potential applications to vision rehabilitation and enhancement.
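    The gating idea can be illustrated in a few lines: two "experts" (a location-specific and a location-independent representation) each supply decision evidence, a running accuracy estimate is kept for each, and a softmax over those estimates determines how much each expert contributes. This is a toy sketch of that mechanism, not the published DPMM; the class, parameter names, and update rule are hypothetical:

    import math
    import random

    class DynamicWeighting:
        """Performance-monitored gate over two decision experts."""

        def __init__(self, decay=0.95, temperature=0.1):
            self.perf = {"specific": 0.5, "invariant": 0.5}  # accuracy estimates
            self.decay = decay
            self.temperature = temperature

        def weights(self):
            # Softmax over running accuracy estimates (mixture-of-experts gate).
            z = {k: math.exp(v / self.temperature) for k, v in self.perf.items()}
            total = sum(z.values())
            return {k: v / total for k, v in z.items()}

        def decide(self, evidence):
            """`evidence` maps expert name -> signed evidence for response +1."""
            w = self.weights()
            combined = sum(w[k] * evidence[k] for k in evidence)
            return 1 if combined > 0 else -1

        def update(self, evidence, correct_response):
            # Each expert's accuracy estimate tracks whether its own
            # evidence alone would have produced the correct response.
            for k, e in evidence.items():
                own = 1 if e > 0 else -1
                hit = 1.0 if own == correct_response else 0.0
                self.perf[k] = self.decay * self.perf[k] + (1 - self.decay) * hit

    # The gate shifts weight toward whichever representation is more reliable;
    # here the location-invariant expert carries the stronger signal.
    gate = DynamicWeighting()
    for _ in range(200):
        ev = {"specific": random.gauss(0.2, 1.0), "invariant": random.gauss(0.8, 1.0)}
        gate.update(ev, correct_response=1)
    print(gate.weights())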

    Visual rhythm perception improves through auditory but not visual training

    Memory research has shown that test performance is optimal when testing and practice occur in identical contexts [1]. However, recent research in object recognition and perceptual learning has shown that multisensory practice leads to improved test performance, even when the test is unisensory [2,3]. It is also known that different sensory modalities can have differing proficiencies in a given domain. For instance, research shows that, compared to the auditory modality, the visual modality is significantly less proficient at discriminating the rhythms of temporal sequences [4,5]. Although rhythm perception is typically thought of as residing in the auditory domain, instances of visual rhythm perception abound in daily life, for example, when one watches a dancer or a drummer, or when a doctor examines a patient's breathing or heart rate on a monitor (such as when diagnosing arrhythmia). However, no previous study has examined whether visual rhythm discrimination is a trainable perceptual skill. In light of this, we examined the extent to which visual rhythm perception can be improved through two sessions of visual, auditory, or audiovisual training. We found that visual rhythm discrimination improved significantly in the auditory and audiovisual training groups, but not in the visual training group. Our results show that, for certain tasks, within-modality training may not be the best approach; instead, training in a different sensory modality may be necessary to achieve learning.

    Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements.

    Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli-Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with an effectively zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high-contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of -0.3 to 0.34 logMAR). While each test demands a slightly different normative template, the results show that individual subjects' CSFs can be predicted with roughly the same precision as test-retest repeatability, confirming that individuals predominantly differ in peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that, in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity.
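    The prediction scheme amounts to placing a fixed-shape CSF template at an acuity-derived peak frequency and a letter-CS-derived peak sensitivity. A sketch assuming a log-parabola template; the mapping constants (a 30 cpd cutoff at 0.0 logMAR, a peak at 1/8 of the cutoff, a 3-octave bandwidth) are illustrative placeholders, not the paper's fitted normative values:

    import math

    def template_csf(freq, peak_sens, peak_freq, bandwidth_oct=3.0):
        """Fixed-shape log-parabola template: log sensitivity falls off
        quadratically in log spatial frequency, dropping by a factor of 2
        at bandwidth_oct / 2 octaves from the peak."""
        octaves = math.log2(freq / peak_freq)
        log_s = (math.log10(peak_sens)
                 - math.log10(2) * (2 * octaves / bandwidth_oct) ** 2)
        return 10 ** log_s

    def predict_csf(logmar_acuity, letter_log_cs, freqs):
        """Zero-free-parameter prediction: shift the template horizontally
        by an acuity-derived peak frequency and vertically by letter CS."""
        # Acuity sets the high-frequency cutoff: 30 cpd at 0.0 logMAR
        # (20/20), halving for every 0.3 logMAR of acuity loss.
        cutoff = 30.0 * 10 ** (-logmar_acuity)
        peak_freq = cutoff / 8.0            # placeholder peak-to-cutoff ratio
        peak_sens = 10 ** letter_log_cs     # Pelli-Robson-style log CS
        return [template_csf(f, peak_sens, peak_freq) for f in freqs]

    # Example: a 0.0 logMAR observer with 1.8 log letter CS, probed at
    # standard chart frequencies (cycles per degree).
    print(predict_csf(0.0, 1.8, [0.5, 1, 2, 4, 8, 16]))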