
    Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while hearing their vocal pitch feedback unexpectedly perturbed. Compared to the pre-training session, the magnitude of vocal compensation decreased significantly for the control group at the post-training session but remained consistent for the trained group. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.
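
    The behavioural measure here, the magnitude of vocal compensation to pitch-shifted feedback, is typically quantified as the peak post-perturbation F0 deviation from a pre-perturbation baseline, expressed in cents. Below is a minimal, hypothetical sketch of that computation on a synthetic F0 trace; the sampling rate, perturbation onset, and toy response curve are illustrative assumptions, not the study's actual parameters.

```python
# Hypothetical sketch, not the authors' code: quantify vocal compensation
# as the peak post-perturbation F0 deviation from baseline, in cents.
import numpy as np

FS = 100            # F0 sampling rate in Hz (assumed)
ONSET = 1.0         # perturbation onset in seconds (assumed)

def compensation_magnitude(f0_trace, fs=FS, onset=ONSET):
    """Peak F0 deviation (in cents) from the pre-perturbation baseline."""
    onset_idx = int(onset * fs)
    baseline = f0_trace[:onset_idx].mean()
    cents = 1200.0 * np.log2(f0_trace[onset_idx:] / baseline)
    # Compensation opposes the imposed shift, so report the peak absolute deviation.
    return np.max(np.abs(cents))

# Toy example: a 200 Hz voice drifting ~3 Hz upward after the perturbation.
t = np.arange(0.0, 2.0, 1.0 / FS)
f0 = 200.0 + 3.0 * (t > ONSET) * (1.0 - np.exp(-5.0 * (t - ONSET)))
print(f"compensation = {compensation_magnitude(f0):.1f} cents")
```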

    Fast, invariant representation for human action in the visual system

    Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the brain regions involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discounts changes in 3D viewpoint relative to when it initially discriminates between actions. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. Our results show no difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.
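
    For readers unfamiliar with "decoding latency", the sketch below illustrates the general technique on synthetic data: a classifier is trained and cross-validated independently at each time point, and latency is read off as the first time point where accuracy clears a threshold. Everything here (sensor counts, signal onset, the fixed 0.6 cutoff) is an illustrative assumption; published MEG work typically uses permutation statistics rather than a fixed cutoff.

```python
# Illustrative sketch of time-resolved decoding on synthetic "MEG" data;
# the study's actual pipeline and parameters are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 30, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)       # two action classes (toy labels)
X[y == 1, :, 25:] += 0.5                    # inject class information from t = 25 on

# Train and cross-validate a classifier independently at every time point.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

# One simple definition of decoding latency: the first time point at which
# accuracy exceeds a threshold (a crude stand-in for permutation statistics).
latency = int(np.argmax(accuracy > 0.6))
print(f"decoding latency = time point {latency}")
```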

    Stereotyping starlings are more 'pessimistic'.

    Negative affect in humans and animals is known to cause individuals to interpret ambiguous stimuli pessimistically, a phenomenon termed 'cognitive bias'. Here, we used captive European starlings (Sturnus vulgaris) to test the hypothesis that a reduction in environmental conditions, from enriched to non-enriched cages, would engender negative affect, and hence 'pessimistic' biases. We also explored whether individual differences in stereotypic behaviour (repetitive somersaulting) predicted 'pessimism'. Eight birds were trained on a novel conditional discrimination task with differential rewards, in which background shade (light or dark) determined which of two covered dishes contained a food reward. The reward was small when the background was light, but large when the background was dark. We then presented background shades intermediate between the two trained shades to assess the birds' bias to choose the dish associated with the smaller food reward (a 'pessimistic' judgement) when the discriminative stimulus was ambiguous. Contrary to predictions, changes in the level of cage enrichment had no effect on 'pessimism'. However, changes in the latency to choose and the probability of expressing a choice suggested that birds rapidly learnt that trials with ambiguous stimuli were unreinforced. Individual differences in the performance of stereotypies did predict 'pessimism'. Specifically, birds that somersaulted were more likely to choose the dish associated with the smaller food reward in the presence of the most ambiguous discriminative stimulus. We propose that somersaulting is part of a wider suite of behavioural traits indicative of a stress response to captive conditions that is symptomatic of a negative affective state.
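
    As a hypothetical illustration of how a 'pessimism' score can be derived in judgement-bias tasks of this kind, the sketch below scores each bird by the proportion of small-reward choices on the ambiguous (intermediate-shade) probe trials. The shade coding and choice data are invented for demonstration and are not the study's data.

```python
# Invented data for illustration: a 'pessimism' score as the proportion of
# small-reward choices on the ambiguous (intermediate-shade) probe trials.
import numpy as np

# Background shade coded 0.0 (light, small reward) to 1.0 (dark, large reward);
# the intermediate values are the ambiguous probes.
shades = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# choices[i, j] = 1 if bird i chose the small-reward dish at shade j.
choices = np.array([
    [1, 1, 1, 0, 0],    # a more 'pessimistic' bird
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],    # a more 'optimistic' bird
])

ambiguous = (shades > 0.0) & (shades < 1.0)
pessimism = choices[:, ambiguous].mean(axis=1)
print(dict(zip(["bird_1", "bird_2", "bird_3"], pessimism.round(2))))
```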

    A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials

    Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption: we can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
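
    As a rough illustration of the ERP measure reported above, the sketch below epochs synthetic single-trial data, averages per condition, and reads out mean P1 amplitude in a fixed post-stimulus window. The sampling rate, the 80-130 ms window, and the effect sizes are assumptions for demonstration only, not the study's parameters.

```python
# Demonstration only: average synthetic single-trial epochs per condition
# and measure mean P1 amplitude in an assumed 80-130 ms window.
import numpy as np

FS = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1.0 / FS)    # epoch from -100 to 400 ms

rng = np.random.default_rng(1)

def make_epochs(p1_gain, n_trials=60):
    """Synthetic single-trial EEG with a P1-like Gaussian peak near 100 ms."""
    p1 = p1_gain * np.exp(-((times - 0.1) ** 2) / (2 * 0.015 ** 2))
    return p1 + rng.normal(scale=2.0, size=(n_trials, times.size))

erp_retinotopic = make_epochs(p1_gain=4.0).mean(axis=0)   # retinotopically matched probe
erp_spatiotopic = make_epochs(p1_gain=1.0).mean(axis=0)   # original attended location

p1_window = (times >= 0.08) & (times <= 0.13)             # P1 window (assumed)
print("P1 (retinotopic):", round(erp_retinotopic[p1_window].mean(), 2), "uV")
print("P1 (spatiotopic):", round(erp_spatiotopic[p1_window].mean(), 2), "uV")
```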