Differences in the time course of learning for hard compared to easy training.
Learning is known to facilitate performance in a range of perceptual tasks. Behavioral improvement after training is typically shown after practice with highly similar stimuli that are difficult to discriminate (i.e., hard training), or after exposure to dissimilar stimuli that are highly discriminable (i.e., easy training). However, little is known about the processes that mediate learning after training with difficult compared to easy stimuli. Here we investigate the time course of learning when observers were asked to discriminate similar global form patterns after hard vs. easy training. Hard training required observers to discriminate highly similar global forms, while easy training required them to judge clearly discriminable patterns. Our results demonstrate differences in learning and transfer performance for hard compared to easy training. Hard training resulted in stronger behavioral improvement than easy training. Further, for hard training, performance improved during single sessions, while for easy training performance improved across but not within sessions. These findings suggest that training with difficult stimuli may result in online learning of specific stimulus features that are similar between the training and test stimuli, while training with easy stimuli involves transfer of learning from highly to less discriminable stimuli that may require longer periods of consolidation.
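The within- vs. across-session distinction above can be made concrete with a small sketch. The function below splits a trial-wise accuracy series into sessions and computes the two kinds of gain the abstract contrasts; the data, function name, and session/trial counts are all illustrative assumptions, not the study's actual design or analysis.

```python
import numpy as np

def session_gains(acc, n_sessions, trials_per_session):
    """Split trial-wise accuracy into sessions and return:
    - within: late-half minus early-half accuracy per session (online gain)
    - across: early accuracy of each session minus late accuracy of the
      previous session (offline, between-session change)."""
    acc = np.asarray(acc, dtype=float).reshape(n_sessions, trials_per_session)
    half = trials_per_session // 2
    early = acc[:, :half].mean(axis=1)
    late = acc[:, half:].mean(axis=1)
    within = late - early
    across = early[1:] - late[:-1]
    return within, across

# Hypothetical "hard training" pattern: accuracy climbs within every session.
hard = np.tile(np.linspace(0.60, 0.80, 50), 3)
within_hard, _ = session_gains(hard, 3, 50)

# Hypothetical "easy training" pattern: flat within sessions,
# stepping up only between sessions (offline improvement).
easy = np.concatenate([np.full(50, 0.60 + 0.05 * s) for s in range(3)])
within_easy, across_easy = session_gains(easy, 3, 50)
```

On these toy series, `within_hard` is positive for every session while `within_easy` is zero and `across_easy` is positive, mirroring the dissociation the abstract reports.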
Socio-cognitive profiles for visual learning in young and older adults.
It is common wisdom that practice makes perfect. But why do some adults learn better than others? Here, we investigate individuals' cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual's age alone. Further, our results show that, independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan.
This work was supported by grants to ZK from the Leverhulme Trust [RF-2011-378], the European Community's Seventh Framework Programme [FP7/2007-2013] under agreement PITN-GA-2011-290011, and the Biotechnology and Biological Sciences Research Council [D52199X, E027436]. This is the final version; it was first published by Frontiers at http://journal.frontiersin.org/article/10.3389/fnagi.2015.00105/abstract
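The core of partial least squares regression with a single outcome, as used in this study, can be sketched in a few lines: the first-component weight vector is simply the normalized covariance of each predictor with the outcome, so the measures that covary most with learning carry the largest weights. The predictors, sample size, and effect sizes below are simulated assumptions chosen to mirror the abstract's conclusion, not the study's data.

```python
import numpy as np

def pls1_first_component(X, y):
    """First-component weights of PLS1 regression (univariate outcome).
    For a single response, the first weight vector is the normalized
    covariance between each (centered) predictor and the outcome."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    return w / np.linalg.norm(w)

rng = np.random.default_rng(42)
n = 80  # hypothetical sample of young and older adults
# Hypothetical z-scored predictors, in column order:
# [age, cognitive inhibition, attention, strategic learning, deep learning]
X = rng.standard_normal((n, 5))
# Simulated learning improvement driven by attention and inhibition
# rather than age (column 0), echoing the abstract's finding.
y = 0.6 * X[:, 2] + 0.4 * X[:, 1] + 0.2 * rng.standard_normal(n)

w = pls1_first_component(X, y)
```

Inspecting `w` shows large weights on the attention and inhibition columns and a near-zero weight on age, which is how PLS loadings are read off to identify the predictive profile.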
Integration of motion and form cues for the perception of self-motion in the human brain
When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas in charge of this task. This brain-imaging study addressed the question using visual stimuli consisting of randomly distributed dot pairs oriented toward one locus on the screen (the form-defined focus of expansion, FoE) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or form-defined FoE shift was the same in the two stimulus types, but the perceived heading direction shifted for the congruent but not the incongruent stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and region-of-interest-based multi-voxel pattern analysis (MVPA) revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. Beyond V3, only the dorsal areas V3A and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not simply respond to motion and form cues but integrates the two for the perception of heading. Human survival relies on accurate perception of self-motion. The visual system uses both motion (optic flow) and form cues to perceive the direction of self-motion (heading). Although the human brain areas that process optic flow and form structure are well identified, the areas responsible for integrating these two cues for the perception of self-motion remain unknown.
We conducted fMRI experiments and used multi-voxel pattern analysis (MVPA) to identify human brain areas that can decode the shift in heading specified by each cue alone and by the two cues combined. We found that motion and form information are first processed in the early visual areas and are then likely integrated in the higher dorsal area V3B/KO for the final estimate of heading.
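The decoding logic behind MVPA can be illustrated with a minimal stand-in: train a classifier on voxel response patterns from most trials and test whether it predicts the condition of held-out trials above chance. The sketch below uses a nearest-centroid classifier on simulated patterns rather than the searchlight/ROI pipeline and classifiers the study actually used; the voxel counts, trial counts, and "informative region" structure are illustrative assumptions.

```python
import numpy as np

def cv_decoding_accuracy(patterns, labels, n_folds=5):
    """Cross-validated nearest-centroid decoding: a minimal stand-in
    for the classifiers typically used in fMRI MVPA. Returns the
    fraction of held-out trials whose condition label is predicted
    correctly from their voxel pattern."""
    rng = np.random.default_rng(1)
    order = rng.permutation(len(labels))
    correct = 0
    for fold in np.array_split(order, n_folds):
        train = np.setdiff1d(order, fold)
        # One mean pattern (centroid) per condition, from training trials.
        cents = {c: patterns[train][labels[train] == c].mean(axis=0)
                 for c in np.unique(labels)}
        for i in fold:
            pred = min(cents, key=lambda c: np.linalg.norm(patterns[i] - cents[c]))
            correct += pred == labels[i]
    return correct / len(labels)

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 40
labels = np.repeat([0, 1], n_trials // 2)  # two FoE-shift conditions

# Simulated "informative" region: 10 voxels respond differently to the
# two conditions (as V3B/KO did for heading shifts).
signal = np.zeros((n_trials, n_voxels))
signal[labels == 1, :10] += 0.8
informative = signal + rng.standard_normal((n_trials, n_voxels))

# Simulated control region carrying no condition information.
noise_only = rng.standard_normal((n_trials, n_voxels))

acc_info = cv_decoding_accuracy(informative, labels)
acc_noise = cv_decoding_accuracy(noise_only, labels)
```

Decoding accuracy is well above chance for the informative region and near chance for the noise-only region, which is the contrast searchlight and ROI analyses formalize across the brain.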
Stimulus Coding Rules for Perceptual Learning
Perceptual learning of visual features occurs when multiple stimuli are presented in a fixed sequence (temporal patterning), but not when they are presented in random order (roving). This points to the need for proper stimulus coding in order for learning of multiple stimuli to occur. We examined the stimulus coding rules for learning with multiple stimuli. Our results demonstrate that: (1) stimulus rhythm is necessary for temporal patterning to take effect during practice; (2) learning consolidation is subject to disruption by roving up to 4 h after each practice session; (3) importantly, after completion of temporal-patterned learning, performance is not disrupted by extended roving training; (4) roving is ineffective if each stimulus is presented for five or more consecutive trials; and (5) roving is also ineffective if each stimulus has a distinct identity. We propose that for multi-stimulus learning to occur, the brain needs to conceptually “tag” each stimulus in order to switch attention to the appropriate perceptual template. Stimulus temporal patterning assists in tagging stimuli and switching attention through its rhythmic stimulus sequence.
Constant contour integration in peripheral vision for stimuli with good Gestalt properties
global position jittering up to 20% of the contour size and by dramatic shape jittering, which excluded non-contour-integration processes such as detection of various local cues and template matching as alternative mechanisms for uncompromised peripheral perception of good Gestalt stimuli. Peripheral contour integration also showed an interesting upper-lower visual-field symmetry after asymmetries of contrast sensitivity and shape discrimination were discounted. The constant peripheral performance might benefit from easy detection of good Gestalt stimuli, which popped out from background noise, from a boost of local contour linking by top-down influences, and/or from multi-element contour linking by long-range interactions.