
    Impaired extraction of speech rhythm from temporal modulation patterns in speech in developmental dyslexia

    Dyslexia is associated with impaired neural representation of the sound structure of words (phonology). The “phonological deficit” in dyslexia may arise in part from impaired speech rhythm perception, thought to depend on neural oscillatory phase-locking to slow amplitude modulation (AM) patterns in the speech envelope. Speech contains AM patterns at multiple temporal rates, and these different AM rates are associated with phonological units of different grain sizes, e.g., related to stress, syllables or phonemes. Here, we assess the ability of adults with dyslexia to use speech AMs to identify rhythm patterns (RPs). We study 3 important temporal rates: “Stress” (~2 Hz), “Syllable” (~4 Hz) and “Sub-beat” (reduced syllables, ~14 Hz). 21 dyslexics and 21 controls listened to nursery rhyme sentences that had been tone-vocoded using either single AM rates from the speech envelope (Stress only, Syllable only, Sub-beat only) or pairs of AM rates (Stress + Syllable, Syllable + Sub-beat). They were asked to use the acoustic rhythm of the stimulus to identify the original nursery rhyme sentence. The data showed that dyslexics were significantly poorer at detecting rhythm than controls when they had to utilize multi-rate temporal information from pairs of AMs (Stress + Syllable or Syllable + Sub-beat). These data suggest that dyslexia is associated with a reduced ability to utilize AMs <20 Hz for rhythm recognition. This perceptual deficit in utilizing AM patterns in speech could be underpinned by less efficient neuronal phase alignment and cross-frequency neuronal oscillatory synchronization in dyslexia. Dyslexics' perceptual difficulties in capturing the full spectro-temporal complexity of speech over multiple timescales could contribute to the development of impaired phonological representations for words, the cognitive hallmark of dyslexia across languages.
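The abstract describes isolating AM patterns at specific rates (Stress ~2 Hz, Syllable ~4 Hz) from the speech envelope. As an illustrative sketch only, not the authors' vocoding pipeline, one way to extract a rate-specific AM pattern is to take the Hilbert envelope of a signal and band-pass filter it; the band edges and filter order here are assumptions, and `am_band` is a hypothetical helper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def am_band(signal, fs, lo, hi):
    """Extract the amplitude-modulation pattern of `signal` between lo and hi Hz.

    Broadband envelope via the Hilbert transform, then band-pass filtered
    to isolate one AM rate band (hypothetical helper, not the paper's method).
    """
    envelope = np.abs(hilbert(signal))
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, envelope)

# Synthetic carrier modulated at a "Syllable"-like rate (~4 Hz)
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 200 * t)
signal = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier

syllable_am = am_band(signal, fs, 3, 5)  # "Syllable" band around 4 Hz
stress_am = am_band(signal, fs, 1, 3)    # "Stress" band around 2 Hz
# The 4 Hz modulation should carry more energy in the syllable band
```

Because the synthetic modulation sits at 4 Hz, the syllable-band AM has a much larger amplitude than the stress-band AM, mirroring how rate-specific vocoding separates rhythm cues.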

    Developmental Psychology: How Social Context Influences Infants’ Attention

    A recent study shows that changes in the focus of a social partner’s attention are associated, on a second-by-second timescale, with changes in how much attention infants pay to objects.

    Using Optogenetic Dyadic Animal Models to Elucidate the Neural Basis for Human Parent-Infant Social Knowledge Transmission.

    Healthy early development depends on a warm reciprocal relationship between parent and offspring, where parent and infant interact in close temporal co-ordination as if engaged in a “dyadic dance” of glances, gestures, smiles and words (Stern, 1985; Gianino and Tronick, 1988). Most, if not all, early learning takes place during these well-choreographed social exchanges, which support cultural knowledge transmission from parent to offspring using verbal and non-verbal forms of communication and behavioural modelling. Such vicarious knowledge transmission through social interaction (rather than direct experience) is known as social learning (Bandura, 1971; Csibra and Gergely, 2009). Tomasello (2014) argues that human mastery of these “second-personal social relations” (Darwall, 2006)—in which social partners share and create joint knowledge, intentionality and goals—has accelerated the rise of the human species through “cultural intelligence” (Herrmann et al., 2007). This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 [RG99/20 to VL and GA; RG152/18 (NS) to VL].

    Towards a Personalized Multi-Domain Digital Neurophenotyping Model for the Detection and Treatment of Mood Trajectories

    The commercial availability of many real-life smart sensors, wearables, and mobile apps provides a valuable source of information about a wide range of human behavioral, physiological, and social markers that can be used to infer the user’s mental state and mood. However, there are currently no commercial digital products that integrate these psychosocial metrics with the real-time measurement of neural activity. In particular, electroencephalography (EEG) is a well-validated and highly sensitive neuroimaging method that yields robust markers of mood and affective processing, and has been widely used in mental health research for decades. The integration of wearable neuro-sensors into existing multimodal sensor arrays could hold great promise for deep digital neurophenotyping in the detection and personalized treatment of mood disorders. In this paper, we propose a multi-domain digital neurophenotyping model based on the socioecological model of health. The proposed model presents a holistic approach to digital mental health, leveraging recent neuroscientific advances, and could deliver highly personalized diagnoses and treatments. The technological and ethical challenges of this model are discussed.

    Learning multi-modal generative models with permutation-invariant encoders and tighter variational bounds

    Devising deep latent variable models for multi-modal data has been a long-standing theme in machine learning research. Multi-modal Variational Autoencoders (VAEs) have been a popular generative model class that learns latent representations which jointly explain multiple modalities. Various objective functions for such models have been suggested, often motivated as lower bounds on the multi-modal data log-likelihood or from information-theoretic considerations. In order to encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes have been routinely used and shown to yield different trade-offs, for instance, regarding their generative quality or consistency across multiple modalities. In this work, we consider a variational bound that can tightly lower bound the data log-likelihood. We develop more flexible aggregation schemes that generalise PoE or MoE approaches by combining encoded features from different modalities based on permutation-invariant neural networks. Our numerical experiments illustrate trade-offs for multi-modal variational bounds and various aggregation schemes. We show that tighter variational bounds and more flexible aggregation models can become beneficial when one wants to approximate the true joint distribution over observed modalities and latent variables in identifiable models.
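The PoE and MoE aggregation schemes mentioned above have simple closed forms for diagonal Gaussian encoders. The sketch below illustrates the standard constructions only (it is not the paper's permutation-invariant generalisation); the helper names are hypothetical:

```python
import numpy as np

def poe_gaussian(mus, logvars):
    """Product-of-Experts aggregation for diagonal Gaussian experts.

    The product of Gaussian densities is Gaussian, with precision equal
    to the sum of the expert precisions and mean equal to the
    precision-weighted average of the expert means.
    """
    mus, logvars = np.asarray(mus), np.asarray(logvars)
    precisions = np.exp(-logvars)                 # 1 / sigma^2 per expert
    joint_var = 1.0 / precisions.sum(axis=0)      # combined variance
    joint_mu = joint_var * (mus * precisions).sum(axis=0)
    return joint_mu, joint_var

def moe_gaussian_sample(mus, logvars, rng):
    """Mixture-of-Experts: sample by picking one expert uniformly at random."""
    mus, logvars = np.asarray(mus), np.asarray(logvars)
    k = rng.integers(len(mus))
    std = np.exp(0.5 * logvars[k])
    return mus[k] + std * rng.standard_normal(std.shape)

# Two modality encoders emitting 3-dimensional Gaussian posteriors
mus = [np.array([0.0, 1.0, -1.0]), np.array([2.0, 1.0, 1.0])]
logvars = [np.zeros(3), np.zeros(3)]              # unit-variance experts

joint_mu, joint_var = poe_gaussian(mus, logvars)
# Equal-precision experts: PoE mean is the average, variance is halved
```

With equal unit variances, the PoE posterior mean is the elementwise average of the expert means and its variance is 0.5, which is why PoE posteriors sharpen as modalities are added, while MoE posteriors stay as broad as the individual experts.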

    Difficulties in auditory organization as a cause of reading backwardness? An auditory neuroscience perspective.

    Over 30 years ago, it was suggested that difficulties in the 'auditory organization' of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was impaired in developmental dyslexia. This literature was based on an 'oddity' measure of children's sensitivity to rhyme (e.g. wood, book, good) and alliteration (e.g. sun, sock, rag). The 'oddity' task revealed that children with dyslexia were significantly poorer at identifying the 'odd word out' than younger children without reading difficulties. Here we apply a novel modelling approach drawn from auditory neuroscience to study the possible sensory basis of the auditory organization of rhyming and non-rhyming words by children. We utilize a novel Spectral-Amplitude Modulation Phase Hierarchy (S-AMPH) approach to analysing the spectro-temporal structure of rhyming and non-rhyming words, aiming to illuminate the potential acoustic cues used by children as a basis for phonological organization. The S-AMPH model assumes that speech encoding depends on neuronal oscillatory entrainment to the amplitude modulation (AM) hierarchy in speech. Our results suggest that phonological similarity between rhyming words in the oddity task depends crucially on slow (delta band) modulations in the speech envelope. Contrary to linguistic assumptions, therefore, auditory organization by children may not depend on phonemic information for this task. Linguistically, it is assumed that 'book' does not rhyme with 'wood' and 'good' because the final phoneme differs. However, our auditory analysis suggests that the acoustic cues to this phonological dissimilarity depend primarily on the slower amplitude modulations in the speech envelope, thought to carry prosodic information. 
Therefore, the oddity task may help in detecting reading difficulties because phonological similarity judgements about rhyme reflect sensitivity to slow amplitude modulation patterns. Slower amplitude modulations are known to be detected less efficiently by children with dyslexia. This research was funded by Medical Research Council grants G0400574 and G0902375 to Usha Goswami. This is the author accepted manuscript. It is currently under an indefinite embargo pending publication by Wiley.
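The claim that rhyme similarity is carried by slow envelope modulations suggests a simple acoustic proxy: correlate the delta-band envelopes of two utterances. The sketch below uses synthetic signals and assumed band edges (0.5–4 Hz); `envelope_similarity` is a hypothetical illustration, not the S-AMPH model itself:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_envelope(x, fs):
    """Delta-band (0.5-4 Hz, illustrative edges) amplitude envelope."""
    env = np.abs(hilbert(x))
    b, a = butter(2, [0.5 / (fs / 2), 4 / (fs / 2)], btype="band")
    return filtfilt(b, a, env)

def envelope_similarity(x, y, fs):
    """Pearson correlation of delta-band envelopes as a rhyme-likeness proxy."""
    ex, ey = delta_envelope(x, fs), delta_envelope(y, fs)
    return np.corrcoef(ex, ey)[0, 1]

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 150 * t)
word_a = (1 + 0.9 * np.sin(2 * np.pi * 2 * t)) * carrier        # 2 Hz pattern
word_b = 0.7 * (1 + 0.9 * np.sin(2 * np.pi * 2 * t)) * carrier  # same pattern, quieter
word_c = (1 + 0.9 * np.cos(2 * np.pi * 2 * t)) * carrier        # phase-shifted pattern

sim_same = envelope_similarity(word_a, word_b, fs)
sim_diff = envelope_similarity(word_a, word_c, fs)
```

Signals sharing the same slow modulation pattern score near 1 regardless of overall level, while a phase-shifted pattern scores near 0, matching the idea that delta-band envelope structure, rather than final-phoneme identity, could drive children's rhyme judgements.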