
    Repetition Enhancement for Frequency-Modulated but Not Unmodulated Sounds: A Human MEG Study

    BACKGROUND: Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system. METHODOLOGY/PRINCIPAL FINDINGS: Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus interval. Stimuli were tone pairs with either the same or a different frequency-modulation direction. To ensure that FM repetition effects could not be accounted for by onset and offset properties, we additionally assessed responses to pairs of unmodulated tones with either the same or a different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after onset of the first (S1) and second stimulus (S2), respectively. This was followed by a sustained component starting at about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to a different FM direction. This effect was not observed for the non-modulated control stimuli. CONCLUSIONS/SIGNIFICANCE: Low-level processing of FM sounds was characterized by repetition enhancement for stimulus pairs with the same versus a different FM direction. This effect was FM-specific; it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.
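    The two-tone adaptation paradigm described above can be sketched in a few lines of Python: two FM sweeps (S1 and S2) separated by a 200-ms silent interval, with S2 sweeping in either the same or the opposite direction. This is only an illustrative sketch; the sampling rate, sweep duration, and frequency range below are placeholder assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import chirp

FS = 44100         # sampling rate (Hz); placeholder value
SWEEP_DUR = 0.1    # sweep duration (s); placeholder value
ISI = 0.2          # 200-ms interstimulus interval, as in the paradigm

def fm_sweep(direction, f_lo=500.0, f_hi=1500.0):
    """Return a linear FM sweep: 'up' rises from f_lo to f_hi, 'down' falls."""
    t = np.linspace(0, SWEEP_DUR, int(FS * SWEEP_DUR), endpoint=False)
    f0, f1 = (f_lo, f_hi) if direction == "up" else (f_hi, f_lo)
    return chirp(t, f0=f0, t1=SWEEP_DUR, f1=f1)

def adaptation_pair(dir_s1, dir_s2):
    """Concatenate S1, a 200-ms silence, and S2 into one stimulus."""
    silence = np.zeros(int(FS * ISI))
    return np.concatenate([fm_sweep(dir_s1), silence, fm_sweep(dir_s2)])

same_pair = adaptation_pair("up", "up")      # same FM direction
diff_pair = adaptation_pair("up", "down")    # different FM direction
```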

    Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    The perception of speech is usually an effortless and reliable process, even in highly adverse listening conditions. In addition to external sound sources, the intelligibility of speech can be reduced by degradation of the structure of the speech signal itself, for example by digital compression of sound. This kind of distortion may be even more detrimental to speech intelligibility than external distortion, because the auditory system cannot utilize sound-source-specific acoustic features, such as spatial location, to separate the distortion from the speech signal. The perceptual consequences of acoustic distortion for speech intelligibility have been studied extensively. However, the cortical mechanisms of speech perception in adverse listening conditions are not well known at present, particularly in situations where the speech signal itself is distorted. The aim of this thesis was to investigate the cortical mechanisms underlying speech perception in conditions where speech is less intelligible due to external distortion or as a result of digital compression. In the studies of this thesis, the intelligibility of speech was varied either by digital compression or by the addition of stochastic noise. Cortical activity related to the speech stimuli was measured using magnetoencephalography (MEG). The results indicated that degradation of speech sounds by digital compression enhanced the evoked responses originating from the auditory cortex, whereas the addition of stochastic noise did not modulate the cortical responses. Furthermore, it was shown that if the distortion was presented continuously in the background, the transient activity of the auditory cortex was delayed. On the perceptual level, digital compression reduced the comprehensibility of speech more than additive stochastic noise. In addition, it was demonstrated that prior knowledge of speech content substantially enhanced the intelligibility of distorted speech, and this perceptual change was associated with an increase in cortical activity within several regions adjacent to the auditory cortex. In conclusion, the results of this thesis show that the auditory cortex is very sensitive to the acoustic features of the distortion, while at later processing stages several cortical areas reflect the intelligibility of speech. These findings suggest that the auditory system rapidly adapts to the variability of the auditory environment and can efficiently utilize previous knowledge of speech content in deciphering acoustically degraded speech signals.

    The perception of speech is usually effortless and reliable even in very poor listening conditions. However, in addition to environmental noise sources, the intelligibility of speech can also deteriorate when the structure of the speech signal is altered, for example by compressing digital audio. Such distortion can degrade intelligibility even more strongly than external interference, because the auditory system cannot exploit properties of the sound source, such as the direction of arrival, to separate the distortion from the speech. The effects of acoustic distortions on speech perception have been studied extensively, but the brain mechanisms involved are still known rather incompletely, especially in situations where the quality of the speech signal itself is degraded. The aim of this dissertation was to investigate the brain mechanisms of speech perception in situations where the speech signal is harder to understand either because of an external sound source or because of digital compression. In the four studies of the dissertation, the intelligibility of short speech sounds and of continuous speech was manipulated either through digital compression or by adding stochastic noise to the speech signal. Brain activity related to the speech stimuli was studied with magnetoencephalography measurements. The studies showed that evoked responses generated in the auditory cortex were enhanced when speech sounds were digitally compressed, whereas stochastic noise added to the speech sounds did not affect the evoked responses. Furthermore, if continuous interference was presented in the background of the speech sounds, activation of the auditory cortex was delayed as the intensity of the interference increased. Listening experiments showed that digital compression reduces the intelligibility of speech sounds more strongly than stochastic noise. In addition, it was shown that prior knowledge of the speech content substantially improved the intelligibility of distorted speech, which was reflected in brain activity in regions adjacent to the auditory cortex, such that intelligible speech elicited stronger activation than poorly intelligible speech. The results of the dissertation show that the auditory cortex is highly sensitive to acoustic distortions of speech sounds, and that at later processing stages several regions adjacent to the auditory cortex reflect the intelligibility of speech. Based on these results, it can be assumed that the auditory system adapts rapidly to variations in the acoustic environment, among other things by exploiting prior knowledge of speech content when interpreting a distorted speech signal.
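    Both manipulations used in the thesis reduce intelligibility, but only digital compression changes the spectrotemporal structure of the speech itself. The snippet below is a minimal sketch of the simpler manipulation, adding stochastic (white Gaussian) noise to a speech waveform at a target signal-to-noise ratio; the SNR value, noise type, and stand-in signal are illustrative assumptions, not the parameters of the thesis experiments.

```python
import numpy as np

def add_stochastic_noise(speech, snr_db, rng=None):
    """Add white Gaussian noise to a 1-D speech waveform at a target SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(len(speech))
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that speech_power / scaled_noise_power = 10^(snr_db / 10)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative use: a 200-Hz tone stands in for a real speech recording
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 200 * t)
noisy = add_stochastic_noise(clean, snr_db=5.0)
```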

    Hemispheric Specialization in Dogs for Processing Different Acoustic Stimuli

    Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli, by presenting them with playbacks of a thunderstorm and of their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, as is the specialisation of the right hemisphere for intense emotions.

    Variability in the articulation and perception of a word

    The words making up a speaker’s mental lexicon may be stored as abstract phonological representations, or else they may be stored as detailed acoustic-phonetic representations. The speaker’s articulatory gestures intended to represent a word show relatively high variability in spontaneous speech. The aim of this paper is to explore the acoustic-phonetic patterns of the Hungarian word akkor ‘then, at that time’. Spontaneous speech recorded from ten speakers, with a total duration of 255 minutes and containing 286 occurrences of akkor, was submitted to analysis. Durational and frequency patterns were measured by means of the Praat software. The results show higher variability, both within and across speakers, than had been expected. The durations of the words and of the individual speech sounds, as well as the vowel formants, turned out to differ significantly across speakers; in addition, the results showed considerable within-speaker variation. The correspondence between variability in the objective acoustic-phonetic data and the flexibility and adaptive nature of the mental representation of a word will be discussed.

    For the perception experiments, two speakers from the production study were selected, and their 48 word tokens were used as speech material. The listeners had to judge the quality of the words they heard on a five-point scale. The results confirmed that the listeners used diverse strategies and representations depending on the acoustic-phonetic parameters of the series of occurrences of akkor.
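    The acoustic measurements reported above were made in Praat; a minimal sketch of the same kind of measurement using the praat-parselmouth Python interface might look like the following. The file name, the vowel midpoint, and the analysis settings are hypothetical placeholders, not the recordings or settings actually used in the study.

```python
import parselmouth
from parselmouth.praat import call

# Hypothetical recording of one token of the word "akkor"
snd = parselmouth.Sound("akkor_token.wav")

# Word duration in seconds
word_dur = call(snd, "Get total duration")

# Formant analysis with Praat's Burg method (standard default settings)
formant = call(snd, "To Formant (burg)", 0.0, 5, 5500.0, 0.025, 50.0)

# Hypothetical midpoint of the first vowel of this token (seconds)
t_mid = 0.05
f1 = call(formant, "Get value at time", 1, t_mid, "Hertz", "Linear")
f2 = call(formant, "Get value at time", 2, t_mid, "Hertz", "Linear")

print(f"duration = {word_dur:.3f} s, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```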

    Complex Processes from Dynamical Architectures with Time-Scale Hierarchy

    The idea that complex motor, perceptual, and cognitive behaviors are composed of smaller units, which are somehow brought into a meaningful relation, permeates the biological and life sciences. However, no principled framework defining the constituent elementary processes has been developed to date. Consequently, functional configurations (or architectures) relating elementary processes and external influences are mostly piecemeal formulations suitable only for particular instances. Here, we develop a general dynamical framework for distinct functional architectures characterized by the time-scale separation of their constituents and evaluate their efficiency. To this end, we build on the (phase) flow of a system, which prescribes the temporal evolution of its state variables. The phase-flow topology allows for the unambiguous classification of qualitatively distinct processes, which we consider to represent the functional units, or modes, within the dynamical architecture. Using the example of a composite movement, we illustrate how different architectures can be characterized by their degree of time-scale separation between the internal elements of the architecture (i.e., the functional modes) and external interventions. We reveal a trade-off in the interactions between internal and external influences, which offers a theoretical justification for the efficient composition of complex processes out of non-trivial elementary processes or functional modes.
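    To make the central notion concrete, the sketch below simulates a generic fast-slow system (a van der Pol relaxation oscillator in Liénard form) in which a small parameter sets the separation between a fast variable, standing in for a functional mode, and a slow variable, standing in for a slower internal or external influence. It only illustrates what a time-scale hierarchy means dynamically; it is not one of the architectures analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.05  # EPS << 1 -> strong separation between fast and slow time scales

def fast_slow(t, state):
    x, y = state
    dx = (x - x**3 / 3.0 - y) / EPS   # fast variable: evolves on a time scale O(EPS)
    dy = x                            # slow variable: evolves on a time scale O(1)
    return [dx, dy]

sol = solve_ivp(fast_slow, (0.0, 20.0), [0.5, 0.0], max_step=0.01)
x, y = sol.y
# x shows rapid jumps between the branches of its nullcline while y drifts
# slowly between them -- the signature of a time-scale hierarchy.
```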