Singers show enhanced performance and neural representation of vocal imitation
Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of the right somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’
Poor neuro-motor tuning of the human larynx: A comparison of sung and whistled pitch imitation
Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population, and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual’s habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sang more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. These results suggest that the laryngeal muscles controlling voice production are under less precise control than the oral muscles involved in whistling. This imprecision may be due to the relatively recent evolution of volitional laryngeal-motor control in humans, which may be tuned just well enough for the coarse modulation of vocal pitch in speech.
The song system of the human brain.
Although sophisticated insights have been gained into the neurobiology of singing in songbirds, little comparable knowledge exists for humans, the most complex singers in nature. Human song complexity is evidenced by the capacity to generate both richly structured melodies and coordinated multi-part harmonizations. The present study aimed to elucidate this multi-faceted vocal system by using 15O-water positron emission tomography to scan ‘listen and respond’ performances of amateur musicians either singing repetitions of novel melodies, singing harmonizations with novel melodies, or vocalizing monotonically. Overall, major blood flow increases were seen in the primary and secondary auditory cortices, primary motor cortex, frontal operculum, supplementary motor area, insula, posterior cerebellum, and basal ganglia. Melody repetition and harmonization produced highly similar patterns of activation. However, whereas all three tasks activated secondary auditory cortex (posterior Brodmann Area 22), only melody repetition and harmonization activated the planum polare (BA 38). This result implies that BA 38 is responsible for an even higher level of musical processing than BA 22. Finally, all three of these ‘listen and respond’ tasks activated the frontal operculum (Broca's area), a region involved in cognitive/motor sequence production and imitation, thereby implicating it in musical imitation and vocal learning.
How to Do Things Without Words: Infants, utterance-activity and distributed cognition
Clark and Chalmers (1998) defend the hypothesis of an ‘Extended Mind’, maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body. Aspects of the problem of ‘language acquisition’ are considered in the light of the extended mind hypothesis. Rather than ‘language’ as typically understood, the object of study is something called ‘utterance-activity’, a term of art intended to refer to the full range of kinetic and prosodic features of the on-line behaviour of interacting humans. It is argued that utterance activity is plausibly regarded as jointly controlled by the embodied activity of interacting people, and that it contributes to the control of their behaviour. By means of specific examples it is suggested that this complex joint control facilitates easier learning of at least some features of language. This in turn suggests a striking form of the extended mind, in which infants’ cognitive powers are augmented by those of the people with whom they interact
Enkinaesthetic polyphony: the underpinning for first-order languaging
We contest two claims: (1) that language, understood as the processing of abstract symbolic forms, is an instrument of cognition and rational thought, and (2) that conventional notions of turn-taking, exchange structure, and move analysis, are satisfactory as a basis for theorizing communication between living, feeling agents. We offer an enkinaesthetic theory describing the reciprocal affective neuro-muscular dynamical flows and tensions of co- agential dialogical sense-making relations. This “enkinaesthetic dialogue” is characterised by a preconceptual experientially recursive temporal dynamics forming the deep extended melodies of relationships in time. An understanding of how those relationships work, when we understand and are ourselves understood, when communication falters and conflict arises, will depend on a grasp of our enkinaesthetic intersubjectivity
Differential Gene Expression in the Anterior Forebrain Pathway Nucleus Area X During Rapid Vocal Learning
Vocal learning is the complex process by which an organism is able to modify its vocal output, such as birdsong or human speech, as a result of experience. The pathways used in the production and modification of human speech and birdsong have been shown to be quite similar, so determining the transcriptome changes in songbirds provides a logical first step towards understanding human speech development. In the current study, trained Zebra Finches, a passerine songbird, were allowed to progress through only the initial stage of vocal development, as determined by a pitch increase compared with untrained isolates. The transcriptomes of the four song nuclei and three auditory forebrain regions of these two groups were compared using microarray hybridizations, and the results were confirmed using in situ hybridization. In Area X, part of the anterior forebrain pathway known to play a role in vocal learning, 149 genes were found to be differentially regulated, with approximately 85% of these genes decreasing in expression. Of the differentially expressed genes, some have already been shown in previous studies to play a role, either directly or indirectly, in learning, though the properties of most have yet to be determined. This study is only the first of many pieces in the larger puzzle of vocal learning; further work will be able to expand upon it to fill gaps in our knowledge of the vocal learning process.
Dubstep, Darwin, and the Prehistoric Invention of Music
Where did music come from, and why are we so drawn to it? Though various scholars have offered a diverse set of hypotheses, none of these existing theories can fully encapsulate the complexity of music. They generally treat music holistically, but music is not monolithic. Musical ability encompasses myriad component parts, such as pitch perception and beat synchronization. These various musical elements are processed in different parts of the brain. Thus, it is unlikely that music arose in one place, at one time, in response to one evolutionary pressure. While existing theories can explain pitch-related aspects of music, such as melody and harmony, they fail to encapsulate rhythm. I explore rhythm’s connection with motion, social function, and the brain in order to investigate how and why it may have evolved. To do so, I draw on diverse lines of evidence, such as my own ethnomusicological fieldwork, autism studies, and brain scans of monkeys. I hypothesize that the mirror neuron system, a mechanism in the brain that allows cognitive and physical synchronization, may be behind the connection between rhythm, movement, and social cognition. When rhythm was eventually joined with pitch-manipulation activities, music as we know it was born.