
    Phylogenetic reorganization of the basal ganglia: A necessary, but not the only, bridge over a primate Rubicon of acoustic communication

    In this response to commentaries, we revisit the two main arguments of our target article. Based on data drawn from a variety of research areas – vocal behavior in nonhuman primates, speech physiology and pathology, neurobiology of basal ganglia functions, motor skill learning, paleoanthropological concepts – the target article, first, suggests a two-stage model of the evolution of the crucial motor prerequisites of spoken language within the hominin lineage: (1) monosynaptic refinement of the projections of motor cortex to brainstem nuclei steering laryngeal muscles, and (2) subsequent “vocal-laryngeal elaboration” of cortico-basal ganglia circuits, driven by human-specific FOXP2 mutations. Second, as concerns the ontogenetic development of verbal communication, age-dependent interactions between the basal ganglia and their cortical targets are assumed to contribute to the time course of the acquisition of articulate speech. Whereas such a phylogenetic reorganization of cortico-striatal circuits must be considered a necessary prerequisite for ontogenetic speech acquisition, the 30 commentaries – addressing the whole range of data sources referred to – point to several further aspects of acoustic communication that have to be added to or integrated with the presented model. Examples include the relationships between vocal tract movement sequencing – the focus of the target article – and rhythmical structures of movement organization; the connections between speech motor control and the central-auditory and central-visual systems; the impact of social factors upon the development of vocal behavior (in nonhuman primates and in our species); and the interactions of ontogenetic speech acquisition – based upon FOXP2-driven structural changes at the level of the basal ganglia – with preceding subvocal stages of acoustic communication as well as with higher-order (cognitive) dimensions of phonological development. Most importantly, several promising future research directions unfold from these contributions, accessible to clinical studies and functional imaging in our species as well as to experimental investigations in nonhuman primates.

    Seeing a talking face matters to infants, children and adults: behavioural and neurophysiological studies

    Everyday conversations typically occur face-to-face. Over and above auditory information, visual information from a speaker’s face (e.g., lips, eyebrows) contributes to speech perception and comprehension. The facilitation that visual speech cues bring, termed the visual speech benefit, is experienced by infants, children and adults. Even so, studies of speech perception have largely focused on auditory-only speech, leaving a relative paucity of research on the visual speech benefit. Central to this thesis are the behavioural and neurophysiological manifestations of the visual speech benefit. As the visual speech benefit assumes that a listener is attending to a speaker’s talking face, the investigations are conducted in relation to the possible modulating effects of gaze behaviour. Three investigations were conducted. Collectively, these studies demonstrate that visual speech information facilitates speech perception, which has implications for individuals who do not have clear access to the auditory speech signal. The results, for instance the enhancement of 5-month-olds’ cortical tracking by visual speech cues, and the effect of idiosyncratic differences in gaze behaviour on speech processing, expand knowledge of auditory-visual speech processing and provide firm bases for new directions in this burgeoning and important area of research.
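    Cortical tracking of this kind is often quantified as the correspondence between a neural recording and the speech amplitude envelope. The sketch below is a minimal illustration of that general idea on simulated data, not the thesis's actual pipeline; all signals, lags, and parameter values are hypothetical.

```python
# A minimal sketch, on simulated data, of one common way cortical
# tracking of speech is quantified: the peak correlation between a
# neural signal and the speech amplitude envelope across candidate
# neural lags. In real data the envelope would come from the audio,
# e.g. np.abs(scipy.signal.hilbert(audio)); everything here is
# hypothetical, not the thesis's actual analysis.
import numpy as np

FS = 100  # Hz; a typical common rate for downsampled EEG and envelope

def envelope_tracking(eeg, env, max_lag=30):
    """Peak Pearson correlation with the EEG lagging the stimulus
    by 0..max_lag samples (0..300 ms at FS = 100 Hz)."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (env - env.mean()) / env.std()
    n = len(eeg)
    return max(np.corrcoef(env[: n - lag], eeg[lag:])[0, 1]
               for lag in range(max_lag + 1))

# Simulated check: an "EEG" channel that weakly follows the envelope
# at a 100 ms lag should yield a modest positive tracking value.
rng = np.random.default_rng(0)
env = rng.standard_normal(1000)
eeg = 0.3 * np.roll(env, 10) + rng.standard_normal(1000)
print(f"tracking r = {envelope_tracking(eeg, env):.2f}")
```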

    MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION

    Natural scenes and ecological signals are inherently complex, and our understanding of their perception and processing is incomplete. For example, a speech signal not only contains information at various frequencies but is also non-static: the signal is concurrently modulated in time. In addition, an auditory signal may be paired with additional sensory information, as in the case of audiovisual speech. In order to make sense of the signal, a human observer must process the information provided by low-level sensory systems and integrate it across sensory modalities and with cognitive information (e.g., object identification information, phonetic information). The observer must then create functional relationships between the signals encountered to form a coherent percept. The neuronal and cognitive mechanisms underlying this integration can be quantified in several ways: by taking physiological measurements, assessing behavioral output for a given task, and modeling signal relationships. While ecological tokens are complex in a way that exceeds our current understanding, progress can be made by utilizing synthetic signals that encompass specific essential features of ecological signals. The experiments presented here cover five aspects of complex signal processing using approximations of ecological signals: (i) auditory integration of complex tones comprised of different frequencies and component power levels; (ii) audiovisual integration approximating that of human speech; (iii) behavioral measurement of signal discrimination; (iv) signal classification via simple computational analyses; and (v) neuronal processing of synthesized auditory signals approximating speech tokens. To investigate neuronal processing, magnetoencephalography (MEG) is employed to assess cortical processing non-invasively. Behavioral measures are employed to evaluate observer acuity in signal discrimination and to test the limits of perceptual resolution. Computational methods are used to examine the relationships, in perceptual space and physiological processing, between synthetic auditory signals, using features of the signals themselves as well as biologically motivated models of auditory representation. Together, the various methodologies and experimental paradigms advance the understanding of ecological signal analysis and of the complex interactions in ecological signal structure.
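    To make the notion of such synthetic approximations concrete, the sketch below generates a token combining two of the "essential features" named above: a multi-component spectrum with per-component power levels, plus slow temporal (amplitude) modulation. The frequencies, levels, and modulation rate are illustrative choices, not the actual experimental stimuli.

```python
# A minimal sketch of a synthetic token: a sum of sinusoids at given
# relative power levels, amplitude-modulated by a slow envelope.
# All parameter values are illustrative.
import numpy as np

def complex_tone(freqs_hz, levels_db, am_rate_hz, fs=44100, dur_s=0.5):
    """Sum of sinusoids at the given relative levels, amplitude-modulated."""
    t = np.arange(int(fs * dur_s)) / fs
    amps = 10.0 ** (np.asarray(levels_db) / 20.0)      # dB -> linear
    carrier = sum(a * np.sin(2 * np.pi * f * t)
                  for f, a in zip(freqs_hz, amps))
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * am_rate_hz * t))  # 0..1
    signal = carrier * modulator
    return signal / np.max(np.abs(signal))             # normalize to +/-1

# e.g. a three-component tone with a 4 Hz envelope, a modulation rate
# comparable to the syllable rate of speech
token = complex_tone([220, 440, 880], [0.0, -6.0, -12.0], am_rate_hz=4.0)
```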

    Prosodic Structure as a Parallel to Musical Structure

    Funding for Open Access provided by the UMD Libraries Open Access Publishing Fund.
    What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

    The Neurodevelopment Of Basic Sensory Processing And Integration In Autism Spectrum Disorder

    This thesis presents three studies that together explore the neurophysiological basis for the sensory processing and integration abnormalities that have been observed in autism spectrum disorder (ASD) since the disorder was first described over half a century ago. In designing these studies we seek to fill a gap that currently exists in the research community's knowledge of the neurodevelopment of basic multisensory integration, both in children with autism and in those with typical development. The first study applied event-related potentials (ERPs) and behavioral measures of multisensory integration to a large group of healthy participants ranging in age from 7 to 29 years, with the goal of detailing the developmental trajectory of basic audiovisual integration in the brain. Our behavioral results revealed a gradual fine-tuning of multisensory facilitation of reaction time, which reached mature levels by about 14 years of age. A similarly protracted period of maturation was seen in the brain processes thought to underlie multisensory integration. Using the results of this cross-sectional study as a guide, the second study employed a between-groups design to assess differences in the neural activity and behavioral facilitation associated with integrating basic audiovisual stimuli in groups of children and adolescents with ASD and typical development (aged 7-16 years). Deficits in basic audiovisual integration were seen at the earliest stages of cortical sensory processing in the ASD groups. In the concluding study we assessed whether neurophysiological measures of sensory processing and integration predict autistic symptom severity and parent-reported visual/auditory sensitivities. The data revealed that a combination of neural indices of auditory and visual processing and integration was predictive of the severity of autistic symptoms in a group of children and adolescents with ASD. A particularly robust relationship was observed between severity of autism and the integrity of basic auditory processing and audiovisual integration. In contrast, our physiological indices did not predict visual/auditory sensitivities as assessed by parent responses on a questionnaire.
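    A standard way to test whether such reaction-time facilitation reflects true integration, rather than statistical redundancy of two independent channels, is Miller's race-model inequality. The sketch below illustrates that general analysis on simulated data; it is a common approach in this literature, not necessarily the exact analysis used in these studies.

```python
# A minimal sketch: testing audiovisual (redundant-target) reaction
# times against Miller's race-model inequality,
#   P(RT_av < t) <= P(RT_a < t) + P(RT_v < t).
# All RT data below are simulated, not the studies' data.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_violation(rt_a, rt_v, rt_av, t):
    """Positive values mean AV responses are faster than any race of
    independent auditory and visual processes can explain."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Simulated RTs (ms) in which the AV condition beats the race bound
rng = np.random.default_rng(1)
rt_a = rng.normal(350, 40, 200)
rt_v = rng.normal(360, 40, 200)
rt_av = rng.normal(300, 35, 200)
t = np.linspace(225, 425, 9)
print(np.round(race_violation(rt_a, rt_v, rt_av, t), 2))
```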

    An Investigation of Speechreading in Profoundly Congenitally Deaf British Adults

    Speechreading is the major route through which deaf people access the spoken language of the society in which they live. This thesis investigated speechreading and its correlates in a group of profoundly congenitally deaf British adults, and in a control group of hearing adults. For this purpose, the Test of Adult Speechreading (TAS) was developed. The TAS was designed to be sensitive to the perceptual abilities that underlie speechreading at varying linguistic levels, and therefore to be appropriate for use with d/Deaf as well as hearing individuals. The vocabulary and syntax used were selected to be familiar to Deaf adults, and the response mode, using picture choices only, made no demands on written or expressive spoken English. This new test was administered silently to groups of congenitally deaf and hearing adults, together with a battery of visual, cognitive and language tasks. The deaf participants differed in their language and educational backgrounds, but all had hearing losses over 90 dB. They significantly outperformed the hearing group on the TAS, even when only closely matched pairs of participants were included in the analyses: adults who are deaf can speechread better than those who are hearing. Multiple factors impact on an individual's speechreading abilities, and no single factor in isolation results in good speechreading skills. In addition to hearing status, other factors were identified through group comparisons, correlation and regression analyses, cluster analyses and multiple case studies as being potentially necessary (although not sufficient) for skilled speechreading. These were lexical knowledge, the ability to visually identify sentence focus, and verbal working memory capacity. A range of further factors facilitated skilled speechreading, including hearing aid use, the use of speech at home during childhood, sensitivity to visual motion, personality (risk-taking and impulsiveness), and reading age. It seems there are many ways to become a skilled speechreader.

    Temporal integration of loudness as a function of level


    Right Neural Substrates of Language and Music Processing Left Out: Activation Likelihood Estimation (ALE) and Meta-Analytic Connectivity Modelling (MACM)

    Introduction: Language and music processing have been investigated in neuro-based research for over a century. However, consensus on the independent and shared neural substrates of the two domains remains elusive, owing to varying neuroimaging methodologies. Identifying functional connectivity in language and music processing via neuroimaging meta-analytic methods provides neuroscientific knowledge of higher cognitive domains, and the resulting normative models may guide treatment development in communication disorders based on principles of neural plasticity. Methods: Using BrainMap software and tools, the present coordinate-based meta-analysis analyzed 65 fMRI studies investigating language and music processing in healthy adult subjects. We conducted activation likelihood estimation (ALE) analyses of language processing, music processing, and combined language + music (Omnibus) processing. Omnibus ALE clusters were used to elucidate functional connectivity by means of meta-analytic connectivity modelling (MACM). Paradigm Class and Behavioral Domain analyses were completed for the ten identified nodes to aid functional interpretation of the MACM results. Results: The Omnibus ALE revealed ten peak activation clusters (bilateral inferior frontal gyri, left medial frontal gyrus, right superior temporal gyrus, left transverse temporal gyrus, bilateral claustrum, left superior parietal lobule, right precentral gyrus, and right anterior culmen). MACM demonstrated an interconnected network consisting of unidirectional and bidirectional connectivity. Subsequent analyses demonstrated nodal involvement across 44 BrainMap paradigms and 32 BrainMap domains. Discussion: These findings demonstrate functional connectivity among Omnibus areas of activation in language and music processing. We analyze the ALE and MACM outcomes by comparing them to previously observed roles in cognitive processing and functional network connectivity. Finally, we discuss the importance of translational neuroimaging and the need for normative models to guide intervention.
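    The core of the ALE method can be stated compactly: each reported activation focus is blurred into a 3D Gaussian "modeled activation" (MA) probability map per experiment, and the per-experiment maps are combined voxel-wise as a probabilistic union, ALE = 1 - prod(1 - MA_i). The sketch below is a simplified illustration of that computation only; it is not BrainMap's GingerALE implementation, which uses sample-size-dependent Gaussian widths and permutation-based thresholding, and the grid, sigma, and foci are invented.

```python
# A simplified sketch of the core ALE computation on an invented grid.
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (40, 48, 40)   # coarse voxel grid for illustration
SIGMA = 2.0           # Gaussian width in voxels (illustrative)

def ma_map(foci_vox):
    """Modeled-activation map: voxel-wise max over per-focus Gaussians."""
    maps = []
    for x, y, z in foci_vox:
        vol = np.zeros(GRID)
        vol[x, y, z] = 1.0
        maps.append(gaussian_filter(vol, SIGMA))
    return np.maximum.reduce(maps)

def ale(experiments):
    """Voxel-wise union of per-experiment activation probabilities."""
    keep = np.ones(GRID)
    for foci in experiments:
        keep *= 1.0 - ma_map(foci)
    return 1.0 - keep

# Two hypothetical experiments reporting nearby peaks
score = ale([[(10, 30, 20), (12, 28, 21)], [(11, 29, 20)]])
print(f"max ALE = {score.max():.4f}")
```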

    The building blocks of sound symbolism

    Languages each contain thousands of words and are made up of a seemingly endless collection of sound combinations. Yet a subsection of these words shows clear signs of similar word shapes for the same meanings, a phenomenon generally known as vocal iconicity or sound symbolism. This dissertation explores the boundaries of sound symbolism in the lexicon from typological, functional and evolutionary perspectives, in an attempt to provide a deeper understanding of the role sound symbolism plays in human language. To achieve this, the subject was triangulated through several methodologies, drawing on lexical data from a large number of language families, experiment participants, and robust statistical tests.

    Study I investigates basic vocabulary items across a large number of language families in order to establish the extent of sound-symbolic items in the core of the lexicon, as well as how the sound-meaning associations are mapped and interconnected. This study shows that by expanding the lexical dataset relative to previous studies and completely controlling for genetic bias, a larger number of sound-meaning associations can be established. In addition, by focusing on the phonetic and semantic features of sounds and meanings, two new types of sound symbolism could be established, along with 20 semantically and phonetically superordinate concepts that could be linked to the semantic development of the lexicon.

    Study II explores how sound-symbolic associations emerge in arbitrary words through sequential transmission across language users. This study demonstrates that transmission of signals is sufficient for iconic effects to emerge and does not require interactional communication. Furthermore, it shows that more semantically marked meanings produce stronger effects, and that iconicity in the size and shape domains seems to be dictated by similarities between the internal semantic relationships of each oppositional word pair and its respective associated sounds.

    Studies III and IV use color words to investigate differences and similarities between low-level cross-modal associations and sound symbolism in lexemes. Study III explores the driving factors of cross-modal associations between colors and sounds by experimentally testing implicit preferences between several different acoustic and visual parameters. The most crucial finding was that neither specific hues nor specific vowels produced any notable effects; it is therefore possible that previously reported associations between vowels and colors actually depend on underlying visual and acoustic parameters.

    Study IV investigates sound-symbolic associations in words for colors across a large number of language families by correlating acoustically described segments with luminance and saturation values obtained from cross-linguistic color-naming data. In accordance with Study III, this study showed that luminance produced the strongest results and was primarily associated with vowels, while saturation was primarily associated with consonants. This could then be linked to the cross-linguistic lexicalization order of color words.

    To summarize, this dissertation shows the importance of studying the underlying parameters of sound symbolism, both semantically and phonetically, in both language users and cross-linguistic language data. In addition, it shows the applicability of non-arbitrary sound-meaning associations for gaining a deeper understanding of how linguistic categories have developed evolutionarily and historically.
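    To make the Study IV-style design concrete, the sketch below correlates a per-language acoustic summary of color words with a perceptual color value, aggregating by language family first so that related languages do not count as independent evidence (one simple way to control for genetic bias). The field names, acoustic measure, and all data values are hypothetical, and the dissertation's actual statistics are more elaborate.

```python
# A minimal, hypothetical sketch of a family-controlled correlation
# between an acoustic property of color words and color luminance.
import pandas as pd
from scipy.stats import spearmanr

data = pd.DataFrame({
    "family":     ["IE", "IE", "Uralic", "Bantu", "Bantu", "Pama-Nyungan"],
    "language":   ["English", "Swedish", "Finnish",
                   "Zulu", "Swahili", "Warlpiri"],
    "mean_f2_hz": [1900, 1850, 2100, 1600, 1650, 1500],  # invented values
    "luminance":  [95, 96, 94, 93, 95, 92],              # invented values
})

# One data point per family: average within family, then correlate,
# so closely related languages are not treated as independent samples.
by_family = data.groupby("family")[["mean_f2_hz", "luminance"]].mean()
rho, p = spearmanr(by_family["mean_f2_hz"], by_family["luminance"])
print(f"family-level Spearman rho = {rho:.2f} (p = {p:.2f})")
```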