Recommended from our members
Change detection in children with autism: an auditory event-related fMRI study
Autism involves impairments in communication and social interaction, as well as high levels of repetitive, stereotypic and ritualistic behaviours, and extreme resistance to change. This latter dimension, whilst required for a diagnosis, has received less research attention. We hypothesise that this extreme resistance to change in autism is rooted in atypical processing of unexpected stimuli. We tested this using auditory event-related fMRI to determine regional brain activity associated with passive detection of infrequently occurring frequency-deviant and complex novel sounds in a no-task condition. Participants were twelve 10- to 15-year-old children with autism and twelve age- and sex-matched healthy controls.
Age differences in fMRI adaptation for sound identity and location
We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated, allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetition in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetition of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reductions in the degree of auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
Beat synchronization across the lifespan: intersection of development and musical experience
Rhythmic entrainment, or beat synchronization, provides an opportunity to understand how multiple systems operate together to integrate sensory-motor information. Synchronization is also an essential component of musical performance that may be enhanced through musical training. Investigations of rhythmic entrainment have revealed a developmental trajectory across the lifespan, showing that synchronization improves with age and musical experience. Here, we explore the development and maintenance of synchronization from childhood through older adulthood in a large cohort of participants (N = 145), and also ask how it may be altered by musical experience. We employed a uniform assessment of beat synchronization for all participants and compared performance developmentally and between individuals with and without musical experience. We show that the ability to consistently tap along to a beat improves with age into adulthood, yet in older adulthood tapping performance becomes more variable. In addition, from childhood into young adulthood, individuals are able to tap increasingly close to the beat (i.e., asynchronies decline with age); however, this trend reverses from younger into older adulthood. There is a positive association between the proportion of life spent playing music and tapping performance, which suggests a link between musical experience and auditory-motor integration. These results are broadly consistent with previous investigations into the development of beat synchronization across the lifespan, and thus complement existing studies and present new insights offered by a different, large cross-sectional sample.
Cortical and subcortical speech-evoked responses in young and older adults: Effects of background noise, arousal states, and neural excitability
This thesis investigated how the brain processes speech signals in the sensory auditory systems of human adults across a wide age range, using electroencephalography (EEG). It focused on two types of speech-evoked phase-locked responses: (i) cortical responses (theta-band phase-locked responses), which reflect processing of the low-frequency, slowly varying envelopes of speech; and (ii) subcortical/peripheral responses (frequency-following responses; FFRs), which reflect encoding of speech periodicity and temporal fine structure information. The aim was to elucidate, through three studies, how these neural activities are affected by different internal factors (ageing, hearing loss, level of arousal, and neural excitability) and external factors (background noise) during daily life. Study 1 investigated theta-band phase-locking and FFRs in young and older adults, asking how ageing and hearing loss affect these activities in quiet and noisy environments, and how these activities are associated with speech-in-noise perception. The results showed that ageing and hearing loss affect speech-evoked phase-locked responses through different mechanisms, and that the effects of ageing on cortical and subcortical activities play different roles in speech-in-noise perception. Study 2 investigated how level of arousal, or consciousness, affects phase-locked responses in young and older adults. The results showed that both theta-band phase-locking and FFRs decrease as the level of arousal decreases. It was further found that the neuro-regulatory role of sleep spindles on theta-band phase-locking differs between young and older adults, indicating that the mechanisms of neuro-regulation for phase-locked responses in different arousal states are age-dependent. Study 3 established a causal relationship between auditory cortical excitability and FFRs using combined transcranial direct current stimulation (tDCS) and EEG. FFRs were measured before and after tDCS was applied over the auditory cortices. The results showed that changes in neural excitability of the right auditory cortex can alter FFR magnitudes along the contralateral pathway, a finding with important theoretical and clinical implications that causally links functions of the auditory cortex with neural encoding of speech periodicity. Taken together, the findings of this thesis advance our understanding of how speech signals are processed via neural phase-locking in everyday life across the lifespan.
Précis of neuroconstructivism: how the brain constructs cognition
Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to a proactively specified physical and social environment.
Do you see what I mean? The role of visual speech information in lexical representations
Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. The majority of previous studies have focused on how language is learned from auditory input, but the way in which audiovisual speech information is perceived and comprehended remains less well understood. Here, I examine how audiovisual and visual-only speech information is represented for known words, and whether intersensory processing efficiency predicts the strength of the lexical representation. To explore the relationship between intersensory processing ability (indexed by matching temporally synchronous auditory and visual stimulation) and the strength of lexical representations, adult subjects participated in an audiovisual word recognition task and the Intersensory Processing Efficiency Protocol (IPEP). Participants were able to reliably identify a correct referent object across manipulations of modality (audiovisual vs visual-only) and pronunciation (correctly pronounced vs mispronounced). Correlational analyses did not reveal any relationship between processing efficiency and visual speech information in lexical representations. However, the results presented here suggest that adults’ lexical representations robustly include visual speech information and that visual speech information is sublexically processed during speech perception.
Sensory dominance and multisensory integration as screening tools in aging
Multisensory information typically confers neural and behavioural advantages over unisensory information. We used a simple audio-visual detection task to compare healthy young (HY), healthy older (HO) and mild-cognitive impairment (MCI) individuals. Neuropsychological tests assessed individuals' learning and memory impairments. First, we provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals. The pattern of sensory dominance shifted with healthy and abnormal aging to favour a propensity for auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger only in healthy older than younger individuals who were also visually dominant. Second, we demonstrate that the multisensory detection task offers benefits as a time- and resource-economic MCI screening tool. Receiver operating characteristic (ROC) analysis demonstrated that MCI diagnosis could be reliably achieved based on the combination of indices of multisensory integration together with indices of sensory dominance. Our findings showcase the importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, our findings open an exciting possibility for multisensory detection tasks to be used as a cost-effective screening tool. These findings clarify relationships between multisensory and memory functions in aging, while offering new avenues for improved dementia diagnostics.
Concordant cues in faces and voices: testing the backup signal hypothesis
Information from faces and voices combines to provide multimodal signals about a person. Faces and voices may offer redundant, overlapping information (backup signals) or complementary information (multiple messages). This article reports two experiments which investigated the extent to which faces and voices deliver concordant information about dimensions of fitness and quality. In Experiment 1, participants rated faces and voices on scales for masculinity/femininity, age, health, height, and weight. The results showed that people make similar judgments from faces and voices, with particularly strong correlations for masculinity/femininity, health, and height. If, as these results suggest, faces and voices constitute backup signals for various dimensions, it is hypothetically possible that people would be able to accurately match novel faces and voices for identity. However, previous investigations into novel face–voice matching offer contradictory results. In Experiment 2, participants saw a face and heard a voice and were required to decide whether the face and voice belonged to the same person. Matching accuracy was significantly above chance level, suggesting that judgments made independently from faces and voices are sufficiently similar that people can match the two. Both sets of results were analyzed using multilevel modeling and are interpreted as being consistent with the backup signal hypothesis.
Musicians have better memory than nonmusicians: A meta-analysis
Background
Several studies have found that musicians perform better than nonmusicians in memory tasks, but this is not always the case, and the strength of this apparent advantage is unknown. Here, we conducted a meta-analysis with the aim of clarifying whether musicians perform better than nonmusicians in memory tasks.
Methods
Education Source; PEP (WEB) – Psychoanalytic Electronic Publishing; Psychology and Behavioral Science (EBSCO); PsycINFO (Ovid); PubMed; ScienceDirect – AllBooks Content (Elsevier API); SCOPUS (Elsevier API); SocINDEX with Full Text (EBSCO); and Google Scholar were searched for eligible studies. The selected studies involved two groups of participants: young adult musicians and nonmusicians. All the studies included memory tasks (loading long-term, short-term or working memory) that contained tonal, verbal or visuospatial stimuli. Three meta-analyses were run separately for long-term memory, short-term memory and working memory.
Results
We collected 29 studies, including 53 memory tasks. The results showed that musicians performed better than nonmusicians in terms of long-term memory, g = .29, 95% CI (.08–.51), short-term memory, g = .57, 95% CI (.41–.73), and working memory, g = .56, 95% CI (.33–.80). To further explore the data, we included a moderator (the type of stimulus presented, i.e., tonal, verbal or visuospatial), which was found to influence the effect size for short-term and working memory, but not for long-term memory. In terms of short-term and working memory, the musicians’ advantage was large with tonal stimuli, moderate with verbal stimuli, and small or null with visuospatial stimuli.
Conclusions
The three meta-analyses revealed a small effect size for long-term memory and a medium effect size for short-term and working memory, suggesting that musicians perform better than nonmusicians in memory tasks. Moreover, the effect of the moderator suggested that the type of stimulus influences this advantage.
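The pooled effect sizes reported above are Hedges' g values, a standardized mean difference with a small-sample correction. As a minimal sketch of how a single study's g would be computed before pooling (the group statistics below are made-up illustrative numbers, not data from the meta-analysis):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups (group 1 minus
    group 2) with Hedges' small-sample correction applied."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled       # Cohen's d
    j = 1 - 3 / (4 * df - 1)       # small-sample correction factor
    return d * j

# Hypothetical example: musicians vs nonmusicians on a tonal
# short-term memory task (illustrative numbers only)
g = hedges_g(m1=7.2, s1=1.5, n1=30, m2=6.3, s2=1.6, n2=30)
```

A meta-analysis then combines such per-study g values (weighted by their variances) into the pooled estimates and confidence intervals quoted in the Results.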