
    Acoustic and Semantic Processing of Speech and Non-speech Sounds in Children with Autism Spectrum Disorders

    The processing of semantically meaningful non-speech and speech sounds requires the use of acoustic and higher-order information, such as categorical knowledge and semantic context. Individuals with an autism spectrum disorder (ASD) have been theorized to show enhanced processing of acoustic features and impaired processing of contextual information. The current study investigated how children with and without ASD use acoustic and semantic information during an auditory change detection task and semantic context during a speech-in-noise task. Furthermore, relationships among IQ, the presence of ASD symptoms, and the use of acoustic and semantic information across the two tasks were examined among typically developing (TD) children. Results indicated that age-matched, but not IQ-matched, TD controls performed worse overall on the change detection task than the ASD group; however, all groups used acoustic and semantic information similarly. Results also revealed that all groups relied on semantic information to a greater degree than acoustic information and that all groups displayed an attentional bias toward detecting changes involving the human voice. On the speech-in-noise task, age-matched, but not IQ-matched, TD controls performed better than the ASD group; however, all groups used semantic context to the same degree. Regression analyses revealed that neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information among TD children. In conclusion, children with and without ASD use acoustic and semantic information when processing semantically meaningful speech and non-speech sounds during auditory change detection and speech-in-noise processing. Furthermore, a diagnosis of ASD alone does not determine lower performance on complex auditory tasks; rather, lower intellect appears to explain group differences in overall performance.

    Resetting of Auditory and Visual Segregation Occurs After Transient Stimuli of the Same Modality

    In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed, both designed to evoke alternating perception of an integrated or a segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the modality opposite the bistable stimulus (visual distractors during auditory streaming, or auditory distractors during visual streaming), the probability of percept switching did not differ significantly from when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Because the distractor-induced resetting was modality-specific, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors had no peripheral overlap with the bistable stimuli indicates that the perceptual reset arises from interference at a locus where stimuli of different frequencies and spatial locations are integrated.