
    Auditory sustained field responses to periodic noise

    Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses are known to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. Because the temporal and spectral features of sounds enhance different components of sustained responses, previous studies using click trains and vowel stimuli have yielded diverging results. To investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography.
    Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200, and 500 Hz. The sustained field amplitudes were significantly larger for all periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern across the repetition rate range, the responses to the 5 Hz repetition rate were significantly larger than those to 500 Hz.
    Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. The persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
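    The periodic stimuli are straightforward to construct: one "frozen" noise segment of duration 1/rate is generated once and tiled to fill the stimulus, whereas white noise is drawn fresh throughout. Below is a minimal NumPy sketch of that construction; the function name, sampling rate, and durations are illustrative assumptions, not parameters taken from the paper.

        import numpy as np

        def frozen_noise(rate_hz, duration_s, fs=44100, seed=0):
            """Repeating frozen noise: tile one segment of length 1/rate_hz.
            rate_hz=None returns plain (aperiodic) white noise instead."""
            rng = np.random.default_rng(seed)
            n_total = int(duration_s * fs)
            if rate_hz is None:
                return rng.standard_normal(n_total)
            segment = rng.standard_normal(int(round(fs / rate_hz)))  # one period
            reps = int(np.ceil(n_total / segment.size))
            return np.tile(segment, reps)[:n_total]

        # The five repetition rates used in the study, plus an aperiodic control.
        stimuli = {rate: frozen_noise(rate, 1.0) for rate in (5, 10, 50, 200, 500)}
        stimuli["white"] = frozen_noise(None, 1.0)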

    Perception of acoustically complex phonological features in vowels is reflected in the induced brain-magnetic activity

    A central issue in speech recognition is which basic units of speech are extracted by the auditory system and used for lexical access. One suggestion is that complex acoustic-phonetic information is mapped onto abstract phonological representations of speech and that a finite set of phonological features is used to guide speech perception. Previous studies analyzing the N1m component of the auditory evoked field have shown that this holds for the acoustically simple feature place of articulation. Brain-magnetic correlates indexing the extraction of acoustically more complex features, such as lip rounding (ROUND) in vowels, have not yet been identified. The present study uses magnetoencephalography (MEG) to describe the spatiotemporal neural dynamics underlying the extraction of phonological features. We examined the induced electromagnetic brain response to German vowels and found that the event-related desynchronization in the upper beta band was prolonged for those vowels that exhibit the lip rounding feature (ROUND). It was the presence of that feature, rather than circumscribed single acoustic parameters such as formant frequencies, that explained the differences between the experimental conditions. We conclude that the prolonged event-related desynchronization in the upper beta band correlates with the computational effort required to extract acoustically complex phonological features from the speech signal. The results provide an additional biomagnetic parameter for studying mechanisms of speech perception.

    The phylogenetic origin and mechanism of sound symbolism - the role of action-perception circuits

    As opposed to the classic Saussurean view of an arbitrary relationship between linguistic form and meaning, non-arbitrariness is a pervasive feature of human language. Sound symbolism, namely the intrinsic relationship between meaningless speech sounds and visual shapes, is a typical case of non-arbitrariness. A demonstration of sound symbolism is the "maluma-takete" effect, in which immanent links are observed between meaningless 'round' or 'sharp' speech sounds (e.g., maluma vs. takete) and round or sharp abstract visual shapes, respectively. An extensive body of empirical work suggests that these mappings are shared by humans and play a distinct role in the emergence and acquisition of language. However, important questions about the origins and mechanism of sound symbolic processing remain open; those questions are addressed in the present work. The first part of this dissertation focuses on the validation of sound symbolic effects in a forced choice task and on the interaction of sound symbolism with two crossmodal mappings shared by humans. To address this question, human subjects were tested with a forced choice task on sound symbolic mappings crossed with two crossmodal audiovisual mappings (pitch-shape and pitch-spatial position). Subjects performed significantly above chance for the sound symbolic associations but not for the other two mappings: sound symbolic effects were replicated, while the two crossmodal mappings involving low-level audiovisual properties, such as pitch and spatial position, did not emerge. The second issue examined in the present dissertation is the phylogenetic origin of sound symbolic associations. Human subjects and a group of touchscreen-trained great apes were tested with a forced choice task on sound symbolic mappings. Only humans were able to process and/or infer the links between meaningless speech sounds and abstract shapes. These results reveal, for the first time, the specificity of humans' sound symbolic ability, which can be related to neurobiological findings on the distinct development and connectivity of the human language network. The last part of the dissertation investigates whether action knowledge, and knowledge of the perceptual outputs of our actions, can explain sound symbolic mappings. In a series of experiments, human subjects performed sound symbolic mappings as well as mappings of the sounds of 'round' or 'sharp' hand actions to the shapes produced by those hand actions. In addition, the auditory and visual stimuli of the two conditions were crossed. Subjects detected congruencies significantly above chance for all mappings and, most importantly, their performances were positively correlated across conditions. The physical acoustic and visual similarities between the audiovisual by-products of our hand actions and the sound symbolic pseudowords and shapes indicate that the link between meaningless speech sounds and abstract visual shapes is grounded in action knowledge. From a neurobiological perspective, the link between actions and their audiovisual by-products is also in accordance with distributed action-perception circuits in the human brain. Action-perception circuits, supported by the neuroanatomical connectivity between auditory, visual, and motor cortices in the human brain, emerge through associative learning and carry the perceptual and motor knowledge of our actions. These findings offer a novel explanation of how symbolic communication is linked to our sensorimotor experiences.
    To sum up, the present dissertation (i) validates the presence of sound symbolic effects in a forced choice task, (ii) shows that sound symbolic ability is specific to humans, and (iii) shows that action knowledge can provide the mechanistic glue for mapping meaningless speech sounds to abstract shapes. Overall, the present work contributes to a better understanding of the phylogenetic origins and mechanism of sound symbolic ability in humans.
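    Above-chance performance in such two-alternative forced choice tasks is conventionally assessed with a binomial test against the 50% guessing rate. A minimal sketch of that check follows; the trial counts are invented for illustration and are not the dissertation's actual data or analysis.

        from scipy.stats import binomtest

        # Hypothetical subject: 38 congruent choices out of 48 forced-choice trials.
        result = binomtest(k=38, n=48, p=0.5, alternative="greater")
        print(f"p = {result.pvalue:.4g}")  # small p -> performance above chance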

    The Role of Sensory Feedback in Developmental Stuttering: A Review

    Developmental stuttering is a neurodevelopmental disorder that severely affects speech fluency. Multiple lines of evidence point to a role of sensory feedback in the disorder, and this has led to a number of theories proposing different disruptions to the use of sensory feedback during speech motor control in people who stutter. The purpose of this review was to bring together evidence from studies using altered auditory feedback paradigms with people who stutter, in order to evaluate the predictions of these different theories. The review highlights converging evidence for particular patterns of differences in the responses of people who stutter to feedback perturbations. The implications for hypotheses on the nature of the disruption to sensorimotor control of speech in the disorder are discussed with reference to neurocomputational models of speech control (predominantly the DIVA model; Guenther et al., 2006; Tourville et al., 2008). While some consistent patterns are emerging from this evidence, more work is needed with developmental samples in particular, in order to tease apart differences related to symptom onset from those related to compensatory strategies that develop with experience of stuttering.

    MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION

    Natural scenes and ecological signals are inherently complex, and our understanding of their perception and processing is incomplete. For example, a speech signal not only contains information at various frequencies but is also non-static: the signal is concurrently modulated in time. In addition, an auditory signal may be paired with additional sensory information, as in the case of audiovisual speech. To make sense of the signal, a human observer must process the information provided by low-level sensory systems and integrate it across sensory modalities and with cognitive information (e.g., object identification information, phonetic information). The observer must then create functional relationships between the signals encountered to form a coherent percept. The neuronal and cognitive mechanisms underlying this integration can be quantified in several ways: by taking physiological measurements, by assessing behavioral output for a given task, and by modeling signal relationships. While ecological tokens are complex in ways that exceed our current understanding, progress can be made by using synthetic signals that capture specific essential features of ecological signals. The experiments presented here cover five aspects of complex signal processing using approximations of ecological signals: (i) auditory integration of complex tones composed of different frequencies and component power levels; (ii) audiovisual integration approximating that of human speech; (iii) behavioral measurement of signal discrimination; (iv) signal classification via simple computational analyses; and (v) neuronal processing of synthesized auditory signals approximating speech tokens. To investigate neuronal processing, magnetoencephalography (MEG) is employed to assess cortical processing non-invasively. Behavioral measures are employed to evaluate observer acuity in signal discrimination and to test the limits of perceptual resolution. Computational methods are used to examine the relationships, in perceptual space and in physiological processing, between synthetic auditory signals, using features of the signals themselves as well as biologically motivated models of auditory representation. Together, these methodologies and experimental paradigms advance our understanding of ecological signal analysis and of the complex interactions in ecological signal structure.
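    As a concrete illustration of the synthetic signals in (i), a complex tone can be built by summing sinusoidal components with specified per-component power levels. The sketch below is a generic construction under assumed parameters; the frequencies and levels are hypothetical, not the stimulus values used in this work.

        import numpy as np

        def complex_tone(freqs_hz, levels_db, duration_s=0.5, fs=44100):
            """Sum sinusoidal components, each scaled by a level in dB
            relative to the strongest component; normalize to +/-1."""
            t = np.arange(int(duration_s * fs)) / fs
            amps = 10.0 ** (np.asarray(levels_db, dtype=float) / 20.0)
            sig = sum(a * np.sin(2 * np.pi * f * t)
                      for f, a in zip(freqs_hz, amps))
            return sig / np.max(np.abs(sig))

        # Hypothetical three-component tone with the upper partials attenuated.
        tone = complex_tone([400, 800, 1200], [0, -6, -12])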

    Multiplexed and Robust Representations of Sound Features in Auditory Cortex


    How does the brain extract acoustic patterns? A behavioural and neural study

    In complex auditory scenes the brain exploits statistical regularities to group sound elements into streams. Previous studies, using tone sequences that transition from randomly drawn to regularly repeating, have highlighted a network of brain regions involved in this process of regularity detection, including auditory cortex (AC) and hippocampus (HPC; Barascud et al., 2016). In this thesis, I seek to understand how neurons within AC and HPC detect and maintain a representation of deterministic acoustic regularity. I trained ferrets (n = 6) on a GO/NO-GO task to detect the transition from a random sequence of tones to a repeating pattern of tones, with increasing pattern lengths (3, 5, and 7). All animals performed significantly above chance, with longer reaction times and declining performance as the pattern length increased. During performance of the behavioural task, or during passive listening, I recorded from primary and secondary fields of AC with multi-electrode arrays (behaving: n = 3), or from AC and HPC using Neuropixels probes (behaving: n = 1; passive: n = 1). In the local field potential, I identified no differences in the evoked response between presentations of random and regular sequences. Instead, I observed significant increases in oscillatory power at the rate of the repeating pattern, and decreases at the tone presentation rate, during regularity. Across the population, neurons in AC showed higher firing with more repetitions of the pattern and for shorter pattern lengths. Single units within AC showed higher precision in their firing when responding to their best frequency during regularity. Neurons in both AC and HPC entrained to the pattern rate during presentation of the regular sequence compared with the random sequence. Lastly, the development of an optogenetic approach to inactivate AC in the ferret paves the way for future work probing the causal involvement of these brain regions.
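    The random-to-regular stimuli described above can be sketched as a run of tones drawn at random from a frequency pool, followed by cycles of a fixed pattern of 3, 5, or 7 tones. In the sketch below, the pool, tone counts, and function name are assumptions for illustration and may differ from the thesis's exact parameters.

        import numpy as np

        def rand_to_reg(n_random, n_cycles, pattern_len, pool, seed=0):
            """Tone frequencies for one trial: n_random random draws,
            then n_cycles repetitions of a pattern of pattern_len tones."""
            rng = np.random.default_rng(seed)
            random_part = rng.choice(pool, size=n_random)
            pattern = rng.choice(pool, size=pattern_len, replace=False)
            return np.concatenate([random_part, np.tile(pattern, n_cycles)])

        # Hypothetical log-spaced frequency pool; pattern lengths as in the task.
        pool = np.geomspace(440.0, 7040.0, 20)
        trials = {n: rand_to_reg(30, 4, n, pool) for n in (3, 5, 7)}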

    MEG correlates of temporal regularity relevant to pitch perception in human auditory cortex

    We recorded neural responses in human participants to three types of pitch-evoking regular stimuli at rates below and above the lower limit of pitch using magnetoencephalography (MEG). These bandpass filtered (1–4 kHz) stimuli were harmonic complex tones (HC), click trains (CT), and regular interval noise (RIN). Trials consisted of noise-regular-noise (NRN) or regular-noise-regular (RNR) segments in which the repetition rate (or fundamental frequency, F0) was either above (250 Hz) or below (20 Hz) the lower limit of pitch. Neural activation was estimated and compared at the sensor and source levels. The pitch-relevant regular stimuli (F0 = 250 Hz) were all associated with marked evoked responses around 140 ms after noise-to-regular transitions at both sensor and source levels. In particular, greater evoked responses to pitch-relevant stimuli than to pitch-irrelevant stimuli (F0 = 20 Hz) were localized along Heschl's sulcus around 140 ms. The regularity-onset responses for RIN were much weaker than for the other types of regular stimuli (HC, CT); this effect was localized over planum temporale, planum polare, and lateral Heschl's gyrus. Importantly, the effect of pitch did not interact with stimulus type: we found no evidence of different responses to the different types of regular stimuli within the spatiotemporal cluster of the pitch effect (~140 ms). The current data demonstrate cortical sensitivity to temporal regularity relevant to pitch that is consistently present across different pitch-relevant stimuli in the Heschl's sulcus region between Heschl's gyrus and planum temporale, both of which have been identified as a "pitch center" in studies using different modalities.
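    Of the three stimulus types, regular interval noise is the least self-explanatory: it is typically generated by a delay-and-add procedure in which a copy of the noise delayed by one period (1/F0) is repeatedly added back, imposing temporal regularity at F0. A minimal sketch of that standard construction follows; the iteration count and filter order are illustrative, and the paper's exact generation parameters may differ.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def regular_interval_noise(f0_hz, duration_s=1.0, fs=44100,
                                   n_iter=16, band=(1000.0, 4000.0), seed=0):
            """Iterated delay-and-add RIN, bandpass filtered to 1-4 kHz."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(int(duration_s * fs))
            d = int(round(fs / f0_hz))  # one period, in samples
            for _ in range(n_iter):
                delayed = np.concatenate([np.zeros(d), x[:-d]])
                x = (x + delayed) / 2.0  # add the delayed copy back in
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            return sosfiltfilt(sos, x)

        rin_above = regular_interval_noise(250.0)  # above the lower limit of pitch
        rin_below = regular_interval_noise(20.0)   # below it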
    • 

    corecore