257 research outputs found
Impaired extraction of speech rhythm from temporal modulation patterns in speech in developmental dyslexia
Dyslexia is associated with impaired neural representation of the sound structure of words (phonology). The “phonological deficit” in dyslexia may arise in part from impaired speech rhythm perception, thought to depend on neural oscillatory phase-locking to slow amplitude modulation (AM) patterns in the speech envelope. Speech contains AM patterns at multiple temporal rates, and these different AM rates are associated with phonological units of different grain sizes, e.g., related to stress, syllables or phonemes. Here, we assess the ability of adults with dyslexia to use speech AMs to identify rhythm patterns (RPs). We study 3 important temporal rates: “Stress” (~2 Hz), “Syllable” (~4 Hz) and “Sub-beat” (reduced syllables, ~14 Hz). 21 dyslexics and 21 controls listened to nursery rhyme sentences that had been tone-vocoded using either single AM rates from the speech envelope (Stress only, Syllable only, Sub-beat only) or pairs of AM rates (Stress + Syllable, Syllable + Sub-beat). They were asked to use the acoustic rhythm of the stimulus to identify the original nursery rhyme sentence. The data showed that dyslexics were significantly poorer than controls at detecting rhythm when they had to utilize multi-rate temporal information from pairs of AMs (Stress + Syllable or Syllable + Sub-beat). These data suggest that dyslexia is associated with a reduced ability to utilize AMs <20 Hz for rhythm recognition. This perceptual deficit in utilizing AM patterns in speech could be underpinned by less efficient neuronal phase alignment and cross-frequency neuronal oscillatory synchronization in dyslexia. Dyslexics' perceptual difficulties in capturing the full spectro-temporal complexity of speech over multiple timescales could contribute to the development of impaired phonological representations for words, the cognitive hallmark of dyslexia across languages.
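As a rough illustration of the kind of AM decomposition described above, the sketch below band-pass filters the Hilbert envelope of a signal around the three nominal rates. It is not the study's tone-vocoding pipeline; the band edges, filter order, envelope rate and sampling rate are all illustrative assumptions.

```python
# Illustrative sketch only (not the study's vocoder): extract slow amplitude-
# modulation (AM) envelopes near the "Stress" (~2 Hz), "Syllable" (~4 Hz) and
# "Sub-beat" (~14 Hz) rates. All parameters below are assumptions.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def am_envelopes(speech, fs, env_fs=100):
    env = np.abs(hilbert(speech))            # broadband amplitude envelope
    env = resample_poly(env, env_fs, fs)     # downsample envelope to env_fs Hz

    # Nominal modulation bands (Hz) for the three temporal rates
    bands = {"stress": (0.9, 2.5), "syllable": (2.5, 7.0), "sub_beat": (7.0, 17.0)}
    out = {}
    for name, (lo, hi) in bands.items():
        sos = butter(2, [lo / (env_fs / 2), hi / (env_fs / 2)], btype="band", output="sos")
        out[name] = sosfiltfilt(sos, env)    # zero-phase band-pass of the envelope
    return out

# Example: 3 s of noise carrying a 2 Hz amplitude modulation
fs = 16000
t = np.arange(0, 3.0, 1.0 / fs)
speech = np.random.randn(t.size) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))
print({name: env.shape for name, env in am_envelopes(speech, fs).items()})
```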
Developmental Psychology: How Social Context Influences Infants’ Attention
A recent study shows that changes in the focus of a social partner’s attention are associated, on a second-by-second scale, with changes in how much attention infants pay to objects.
14 challenges and their solutions for conducting social neuroscience and longitudinal EEG research with infants.
The use of electroencephalography (EEG) to study infant brain development is a growing trend. In addition to classical longitudinal designs that study the development of neural, cognitive and behavioural functions, new areas of EEG application are emerging, such as novel social neuroscience paradigms using dual infant-adult EEG recordings. However, most experimental designs, analysis methods and EEG hardware were originally developed for single-person adult research. When applied to study infant development, adult-based solutions often pose unique problems that may go unrecognised. Here, we identify 14 challenges that infant EEG researchers may encounter when designing new experiments, collecting data, and conducting data analysis. Challenges related to the experimental design are: (1) small sample size and data attrition, and (2) varying arousal in younger infants. Challenges related to data acquisition are: (3) determining the optimal location for reference and ground electrodes, (4) control of impedance when testing with high-density sponge electrode nets, (5) poor fit of standard EEG caps to the varying infant head shapes, and (6) ensuring a high degree of temporal synchronisation between amplifiers and recording devices during dual-EEG acquisition. Challenges related to the analysis of longitudinal and social neuroscience datasets are: (7) developmental changes in head anatomy, (8) the prevalence and diversity of infant myogenic artefacts, (9) a lack of the stereotypical topography of eye movements needed for ICA-based data cleaning, and (10) relatively high inter-individual variability of EEG responses in younger cohorts. Additional challenges for the analysis of dual-EEG data are: (11) developmental shifts in canonical EEG rhythms, and difficulties in differentiating true inter-personal synchrony from spurious synchrony due to (12) common intrinsic properties of the signal and (13) shared external perturbation. Finally, (14) there is a lack of test-retest reliability studies of infant EEG. We describe each of these challenges and suggest possible solutions. While we focus specifically on social neuroscience and longitudinal research, many of the issues we raise are relevant for all fields of infant EEG research. This research was funded by an ESRC Transforming Social Sciences collaboration grant (ES/N006461/1) to VL and SW, and by an ESRC FRL Fellowship (ES/N017560/1) to SW.
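Challenges (12) and (13) above describe how apparent inter-personal synchrony can be spurious. A minimal simulation of that confound is sketched below; the signal model, sampling rate and phase-locking measure are illustrative assumptions rather than any specific pipeline from the paper.

```python
# Minimal simulation (assumed parameters) of spurious inter-personal "synchrony":
# two independent EEG-like signals appear phase-locked once they share a strong
# common external input, even though neither is coupled to the other.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Mean resultant length of the instantaneous phase difference (0..1)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

rng = np.random.default_rng(1)
fs, dur = 250, 20                          # 250 Hz sampling, 20 s recording
t = np.arange(0, dur, 1 / fs)

def alpha_like(rng):
    # Independent ~10 Hz oscillation with slowly drifting phase
    return np.sin(2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.1, t.size)))

brain_a, brain_b = alpha_like(rng), alpha_like(rng)
shared = 2.0 * np.sin(2 * np.pi * 10 * t)  # strong common 10 Hz input (e.g. a flickering stimulus)

print("independent:", round(phase_locking_value(brain_a, brain_b), 2))
print("with shared input:", round(phase_locking_value(brain_a + shared, brain_b + shared), 2))
```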
Using Optogenetic Dyadic Animal Models to Elucidate the Neural Basis for Human Parent-Infant Social Knowledge Transmission.
Healthy early development depends on a warm reciprocal relationship between parent and offspring, where parent and infant interact in close temporal co-ordination as if engaged in a “dyadic dance” of glances, gestures, smiles and words (Stern, 1985; Gianino and Tronick, 1988). Most, if not all, early learning takes place during these well-choreographed social exchanges, which support cultural knowledge transmission from parent to offspring using verbal and non-verbal forms of communication and behavioural modelling. Such vicarious knowledge transmission through social interaction (rather than direct experience) is known as social learning (Bandura, 1971; Csibra and Gergely, 2009). Tomasello (2014) argues that human mastery of these “second-personal social relations” (Darwall, 2006)—in which social partners share and create joint knowledge, intentionality and goals—has accelerated the rise of the human species through “cultural intelligence” (Herrmann et al., 2007). This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 [RG99/20 to VL and GA; RG152/18 (NS) to VL].
Towards a Personalized Multi-Domain Digital Neurophenotyping Model for the Detection and Treatment of Mood Trajectories
The commercial availability of many real-life smart sensors, wearables, and mobile apps provides a valuable source of information about a wide range of human behavioral, physiological, and social markers that can be used to infer the user’s mental state and mood. However, there are currently no commercial digital products that integrate these psychosocial metrics with the real-time measurement of neural activity. In particular, electroencephalography (EEG) is a well-validated and highly sensitive neuroimaging method that yields robust markers of mood and affective processing, and has been widely used in mental health research for decades. The integration of wearable neuro-sensors into existing multimodal sensor arrays could hold great promise for deep digital neurophenotyping in the detection and personalized treatment of mood disorders. In this paper, we propose a multi-domain digital neurophenotyping model based on the socioecological model of health. The proposed model presents a holistic approach to digital mental health, leveraging recent neuroscientific advances, and could deliver highly personalized diagnoses and treatments. The technological and ethical challenges of this model are discussed.
Learning multi-modal generative models with permutation-invariant encoders and tighter variational bounds
Devising deep latent variable models for multi-modal data has been a long-standing theme in machine learning research. Multi-modal Variational Autoencoders (VAEs) have been a popular generative model class that learns latent representations which jointly explain multiple modalities. Various objective functions for such models have been suggested, often motivated as lower bounds on the multi-modal data log-likelihood or from information-theoretic considerations. In order to encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes have been routinely used and shown to yield different trade-offs, for instance, regarding their generative quality or consistency across multiple modalities. In this work, we consider a variational bound that can tightly lower bound the data log-likelihood. We develop more flexible aggregation schemes that generalise PoE or MoE approaches by combining encoded features from different modalities based on permutation-invariant neural networks. Our numerical experiments illustrate trade-offs for multi-modal variational bounds and various aggregation schemes. We show that tighter variational bounds and more flexible aggregation models can become beneficial when one wants to approximate the true joint distribution over observed modalities and latent variables in identifiable models.
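As a minimal sketch of the two aggregation ideas mentioned in this abstract (not the paper's architecture or code), the example below combines per-modality Gaussian posteriors by a Product-of-Experts and, alternatively, pools per-modality features with a simple permutation-invariant (sum-then-project) aggregator; all shapes and the projection matrix are assumptions.

```python
# Minimal sketch (assumed shapes, not the paper's models) of two ways to
# aggregate per-modality encodings into a joint posterior over latents z.
import numpy as np

def poe_gaussian(mus, logvars):
    """Product-of-Experts for diagonal Gaussians, including a N(0, I) prior expert.
    mus, logvars: arrays of shape (num_modalities, latent_dim)."""
    mus = np.vstack([np.zeros_like(mus[0]), mus])
    logvars = np.vstack([np.zeros_like(logvars[0]), logvars])
    precisions = np.exp(-logvars)                 # 1 / sigma^2 per expert
    joint_var = 1.0 / precisions.sum(axis=0)      # product of Gaussians: precisions add
    joint_mu = joint_var * (precisions * mus).sum(axis=0)
    return joint_mu, np.log(joint_var)

def permutation_invariant_agg(features, W_out):
    """DeepSets-style aggregation: sum-pool per-modality features, then project.
    features: (num_modalities, feat_dim); W_out: (feat_dim, latent_param_dim)."""
    pooled = features.sum(axis=0)                 # invariant to modality ordering
    return pooled @ W_out

# Example with 3 modalities and a 4-dimensional latent space
rng = np.random.default_rng(0)
mus, logvars = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(poe_gaussian(mus, logvars))
print(permutation_invariant_agg(rng.normal(size=(3, 8)), rng.normal(size=(8, 2 * 4))))
```

Summing before projecting is what makes the second aggregator invariant to the order of the modalities.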
Difficulties in auditory organization as a cause of reading backwardness? An auditory neuroscience perspective.
Over 30 years ago, it was suggested that difficulties in the 'auditory organization' of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was impaired in developmental dyslexia. This literature was based on an 'oddity' measure of children's sensitivity to rhyme (e.g. wood, book, good) and alliteration (e.g. sun, sock, rag). The 'oddity' task revealed that children with dyslexia were significantly poorer at identifying the 'odd word out' than younger children without reading difficulties. Here we apply a novel modelling approach drawn from auditory neuroscience to study the possible sensory basis of the auditory organization of rhyming and non-rhyming words by children. We utilize a novel Spectral-Amplitude Modulation Phase Hierarchy (S-AMPH) approach to analysing the spectro-temporal structure of rhyming and non-rhyming words, aiming to illuminate the potential acoustic cues used by children as a basis for phonological organization. The S-AMPH model assumes that speech encoding depends on neuronal oscillatory entrainment to the amplitude modulation (AM) hierarchy in speech. Our results suggest that phonological similarity between rhyming words in the oddity task depends crucially on slow (delta band) modulations in the speech envelope. Contrary to linguistic assumptions, therefore, auditory organization by children may not depend on phonemic information for this task. Linguistically, it is assumed that 'book' does not rhyme with 'wood' and 'good' because the final phoneme differs. However, our auditory analysis suggests that the acoustic cues to this phonological dissimilarity depend primarily on the slower amplitude modulations in the speech envelope, thought to carry prosodic information. Therefore, the oddity task may help in detecting reading difficulties because phonological similarity judgements about rhyme reflect sensitivity to slow amplitude modulation patterns. Slower amplitude modulations are known to be detected less efficiently by children with dyslexia. This research was funded by Medical Research Council grants G0400574 and G0902375 to Usha Goswami. This is the author accepted manuscript; it is currently under an indefinite embargo pending publication by Wiley.
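As a crude stand-in for the envelope-based similarity idea described above (not the S-AMPH model itself), the sketch below correlates the slow, delta-band modulations of two words' amplitude envelopes; the band edges, envelope length and toy stimuli are assumptions.

```python
# Crude stand-in (not the authors' S-AMPH model): compare two spoken words by
# the slow, delta-band (~0.9-2.5 Hz, assumed edges) modulations of their
# amplitude envelopes rather than by their final phonemes.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample

def delta_envelope(word, fs, n_points=100):
    env = np.abs(hilbert(word))              # amplitude envelope
    env = resample(env, n_points)            # common low-rate time base
    env_fs = n_points / (len(word) / fs)     # effective envelope sampling rate
    sos = butter(2, [0.9 / (env_fs / 2), 2.5 / (env_fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, env)             # keep only delta-band AM

def rhyme_like_similarity(word_a, word_b, fs):
    return np.corrcoef(delta_envelope(word_a, fs), delta_envelope(word_b, fs))[0, 1]

# Example: noise bursts sharing the same 2 Hz AM pattern score as more similar
# than one whose slow modulation is phase-shifted.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
same_am = 1 + 0.8 * np.sin(2 * np.pi * 2 * t)
word1 = np.random.randn(t.size) * same_am
word2 = np.random.randn(t.size) * same_am
word3 = np.random.randn(t.size) * (1 + 0.8 * np.sin(2 * np.pi * 2 * t + np.pi))
print(rhyme_like_similarity(word1, word2, fs), rhyme_like_similarity(word1, word3, fs))
```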
The Role of Affectionate Caregiver Touch in Early Neurodevelopment and Parent–Infant Interactional Synchrony
Though rarely included in studies of parent–infant interactions, affectionate touch plays a unique and vital role in infant development. Previous studies in human and rodent models have established that early and consistent affectionate touch from a caregiver confers wide-ranging and holistic benefits for infant psychosocial and neurophysiological development. We begin with an introduction to the neurophysiological pathways for the positive effects of touch. Then, we provide a brief review of how affectionate touch tunes the development of infant somatosensory, autonomic (stress regulation), and immune systems. Affective touch also plays a foundational role in the establishment of social affiliative bonds and early psychosocial behavior. These touch-related bonding effects are known to be mediated primarily by the oxytocin system, but touch also activates mesocorticolimbic dopamine and endogenous opioid systems which aid the development of social cognitive processes such as social learning and reward processing. We conclude by proposing a unique role for affectionate touch as an essential pathway to establishing and maintaining parent-infant interactional synchrony at behavioral and neural levels. The limitations of the current understanding of affectionate touch in infant development point to fruitful avenues for future research.
Acoustic-Emergent Phonology (AEP) in the Amplitude Envelope
When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72-82% (freely-read CDS) and 90-98% (rhythmically-regular CDS) of stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages. This research was funded by a Harold Hyam Wingate Foundation Research Scholarship and a Lucy Cavendish College Junior Research Fellowship to VL, and by a grant from the Medical Research Council to UG (G0902375). We thank Richard Turner for the use of his PCA scripts, and Michael Stone for the use of his code. This is the final version of the article. It was first available from PLOS via http://dx.doi.org/10.1371/journal.pone.014441
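As a toy illustration of the PCA step described above (not the published S-AMPH algorithms), the sketch below splits a low-rate amplitude envelope into narrow modulation-rate channels and uses principal components of the channel matrix to see which rates co-vary; the channel spacing and all other parameters are assumptions.

```python
# Toy sketch (not the published S-AMPH algorithms): band-filter the amplitude
# envelope into narrow modulation-rate channels and use PCA to see which
# modulation rates pattern together. All parameters are assumptions.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def modulation_channels(speech, fs, env_fs=100, edges=np.arange(0.5, 25.5, 1.0)):
    env = resample_poly(np.abs(hilbert(speech)), env_fs, fs)   # low-rate envelope
    chans = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo / (env_fs / 2), hi / (env_fs / 2)], btype="band", output="sos")
        chans.append(sosfiltfilt(sos, env))
    return np.array(chans)                                     # (n_channels, n_samples)

# PCA via SVD: component loadings show which modulation-rate channels co-vary
fs = 16000
t = np.arange(0, 10.0, 1.0 / fs)
cds = np.random.randn(t.size) * (1 + 0.4 * np.sin(2 * np.pi * 2 * t)
                                 + 0.4 * np.sin(2 * np.pi * 5 * t))   # 2 Hz + 5 Hz AM
chans = modulation_channels(cds, fs)
X = chans.T - chans.T.mean(axis=0)           # time points as observations, channels as features
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print(np.round(S**2 / (S**2).sum(), 3)[:5])  # variance explained by leading components
print(np.round(Vt[:2], 2))                   # loadings over the modulation-rate channels
```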