
    Representation of statistical sound properties in human auditory cortex

    The work carried out in this doctoral thesis investigated the representation of statistical sound properties in human auditory cortex. It addressed four key aspects of auditory neuroscience: the representation of different analysis time windows in auditory cortex; mechanisms for the analysis and segregation of auditory objects; information-theoretic constraints on pitch sequence processing; and the analysis of local and global pitch patterns. The majority of the studies employed a parametric design in which the statistical properties of a single acoustic parameter were altered along a continuum while other sound properties were kept fixed. The thesis is divided into four parts. Part I (Chapter 1) examines principles of anatomical and functional organisation that constrain the problems addressed. Part II (Chapter 2) introduces approaches to digital stimulus design, principles of functional magnetic resonance imaging (fMRI), and the analysis of fMRI data. Part III (Chapters 3-6) reports five experimental studies. Study 1 controlled the spectrotemporal correlation in complex acoustic spectra and showed that activity in auditory association cortex increases as a function of spectrotemporal correlation. Study 2 demonstrated a functional hierarchy of the representation of auditory object boundaries and object salience. Studies 3 and 4 investigated cortical mechanisms for encoding entropy in pitch sequences and showed that the planum temporale acts as a computational hub, requiring more computational resources for sequences with high entropy than for those with high redundancy. Study 5 provided evidence for a hierarchical organisation of local and global pitch pattern processing in neurologically normal participants. Finally, Part IV (Chapter 7) concludes with a general discussion of the results and future perspectives.
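    To make the entropy manipulation in Studies 3 and 4 concrete, a minimal sketch of how the redundancy of a pitch sequence can be quantified is given below, using first-order Shannon entropy over pitch symbols. The estimator and the example sequences are illustrative assumptions; the thesis does not specify its exact stimulus statistics here.

```python
from collections import Counter
import math

def pitch_entropy(sequence):
    """First-order Shannon entropy (in bits) of a symbolic pitch sequence,
    treating each pitch as an independent draw from its empirical distribution."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fully redundant sequence carries no information; a sequence drawn
# uniformly from four pitches carries two bits per symbol.
low = ["A4"] * 16                     # entropy 0 bits
high = ["A4", "C5", "E5", "G5"] * 4   # uniform over 4 pitches: 2 bits
```

    Under this simplified measure, high-entropy sequences demand a larger representational alphabet per symbol, consistent with the reported greater computational load in planum temporale.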

    Auditory edge detection: the dynamics of the construction of auditory perceptual representations

    This dissertation investigates aspects of auditory scene analysis such as the detection of a new object in the environment. Specifically, I try to learn about these processes by studying the temporal dynamics of magnetic signals recorded from outside the scalp of human listeners, and comparing these dynamics with psychophysical measures. In total, nine behavioral and magnetoencephalography (MEG) brain-imaging experiments are reported. These studies relate to the extraction of tonal targets from background noise and the detection of change within ongoing sounds. The MEG deflections we observe between 50 and 200 ms post-transition reflect the first stages of perceptual organization. I interpret the temporal dynamics of these responses in terms of activation of cortical systems that participate in the detection of acoustic events and the discrimination of targets from backgrounds. The data shed light on the statistical heuristics with which our brains sample, represent, and detect changes in the world, including changes that are not the immediate focus of attention. In particular, the asymmetry of responses to transitions between 'order' and 'disorder' within a stimulus can be interpreted in terms of different requirements for temporal integration. The similarity of these transition responses to the commonly observed onset M50 and M100 auditory-evoked fields allows us to suggest a hypothesis as to their underlying functional significance, which has so far remained unclear. The comparison of MEG and psychophysics demonstrates a striking dissociation between higher-level mechanisms related to conscious detection and the lower-level, pre-attentive cortical mechanisms that subserve the early organization of auditory information. The implications of these data for the processes that underlie the creation of perceptual representations are discussed.
A comparison of the behavior of normal and dyslexic subjects in a tone-in-noise detection task revealed a general difficulty in extracting tonal objects from background noise, manifested as globally slower detection, associated with dyslexia. This finding may enable us to tease apart the physiological and behavioral correlates of these early, pre-attentive processes. In conclusion, the sum of these results suggests that the combination of behavioral and MEG investigative tools can provide new insights into the processes by which perceptual representations emerge from sensory input.

    Towards understanding the role of central processing in release from masking

    People with normal hearing have the ability to listen to a desired target sound while filtering out unwanted sounds in the background. However, most patients with hearing impairment struggle in noisy environments, a perceptual deficit that current hearing aids and cochlear implants cannot resolve. Even though peripheral dysfunction of the ears undoubtedly contributes to this deficit, mounting evidence has implicated central processing in the inability to detect sounds in background noise. Therefore, it is essential to better understand the underlying neural mechanisms by which target sounds are dissociated from competing maskers. This research focuses on two phenomena that help suppress background sounds: 1) dip-listening, and 2) directional hearing. When background noise fluctuates slowly over time, both humans and animals can listen in the dips of the noise envelope to detect a target sound, a phenomenon referred to as dip-listening. Detection of the target sound is facilitated by a central neuronal mechanism called envelope locking suppression. At both positive and negative signal-to-noise ratios (SNRs), the presence of target energy can suppress the strength by which neurons in auditory cortex track background sound, at least in anesthetized animals. However, in humans and animals, most of the perceptual advantage gained by listening in the dips of fluctuating noise emerges when a target is softer than the background sound. This raises the possibility that SNR shapes the reliance on different processing strategies, a hypothesis tested here in awake behaving animals. Neural activity of Mongolian gerbils is measured by chronic implantation of silicon probes in the core auditory cortex. Using appetitive conditioning, gerbils detect target tones in the presence of temporally fluctuating amplitude-modulated background noise, called a masker. Comparing rate- vs. timing-based decoding strategies, analysis of single-unit activity shows that both mechanisms can be used for detecting tones at positive SNRs. However, only temporal decoding provides an SNR-invariant readout strategy that is viable at both positive and negative SNRs. In addition to dip-listening, spatial cues can facilitate the dissociation of target sounds from background noise. Specifically, an important cue for computing sound direction is the difference in arrival time of acoustic energy reaching each ear, called the interaural time difference (ITD). ITDs allow localization of low-frequency sounds from left to right inside the listener's head, also called sound lateralization. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here, two prevalent theories of sound localization are observed to make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. In this research, through behavioral experiments on sound lateralization, the computation of sound location with ITDs is tested. Four groups of normally hearing listeners lateralize sounds based on ITDs as a function of sound intensity, exposure hemisphere, and stimulus history. Stimuli consist of low-frequency band-limited white noise. Statistical analysis, which partials out overall differences between listeners, is inconsistent with the place-coding scheme of sound localization and supports the hypothesis that human sound localization is instead based on a population rate code.
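    The opposing predictions of the two readout schemes can be sketched with a toy model. The sigmoid ITD tuning, its 100-µs slope constant, and the multiplicative level gain are all illustrative assumptions, not the models actually fitted in this research:

```python
import math

def channel_rate(itd_us, preferred_sign, gain):
    """Firing rate of one broadly tuned hemispheric channel: a sigmoid over
    ITD (microseconds) whose amplitude scales multiplicatively with sound
    level (`gain`). Both choices are illustrative assumptions."""
    return gain / (1.0 + math.exp(-preferred_sign * itd_us / 100.0))

def laterality_rate_code(itd_us, gain):
    """Hemispheric-difference readout: signed right-minus-left rate difference.
    Because the raw difference scales with gain, the predicted percept drifts
    toward the midline at low sound levels."""
    return channel_rate(itd_us, +1, gain) - channel_rate(itd_us, -1, gain)

def laterality_labelled_line(itd_us):
    """Labelled-line readout: location is the label of the best-matching tuned
    channel, so the prediction is level invariant by construction."""
    return itd_us
```

    The behavioural result reported above (lateralization varying with sound intensity) matches the first readout rather than the second.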

    On the role of neuronal oscillations in auditory cortical processing

    Although it has been over 100 years since William James stated that "everyone knows what attention is", its underlying neural mechanisms are still being debated today. The goal of this research was to describe the physiological mechanisms of auditory attention using direct electrophysiological recordings in macaque primary auditory cortex (A1). A major focus of my research was on the role ongoing neuronal oscillations play in attentional modulation of auditory responses in A1. For all studies, laminar profiles of synaptic activity (indexed by current source density analysis) and concomitant firing patterns in local neurons (multiunit activity) were acquired simultaneously via linear-array multielectrodes positioned in A1. The initial study of this dissertation examined the contribution of ongoing oscillatory activity to excitatory and inhibitory responses in A1 in passive (no-task) conditions. Next, the function of ongoing oscillations in modulating the frequency tuning of A1 during an intermodal selective attention oddball task was investigated. The last study was aimed at establishing whether there is a hemispheric asymmetry in the way neuronal oscillations are utilized by attention, corresponding to that noted in humans. The results of the first study indicate that in passive conditions, ongoing oscillations reset by stimulus-related inputs modulate both excitatory and inhibitory components of local neuronal ensemble responses in A1. The second set of experiments demonstrates that this mechanism is utilized by attention to modulate and sharpen frequency tuning. Finally, we show that, as in humans, there appears to be a specialization of left A1 for temporal processing, as signified by greater temporal precision of neuronal oscillatory alignment. Taken together, these results underline the importance of neuronal oscillations in perceptual processes, and the validity of the macaque monkey as a model of human auditory processing.
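    Current source density (CSD) analysis, used above to index laminar synaptic activity, estimates net transmembrane current as the negative second spatial derivative of the LFP across equally spaced contacts. A minimal one-dimensional sketch, assuming constant tissue conductivity; the spacing and conductivity values are placeholders, not those of the actual recordings:

```python
import numpy as np

def csd_profile(lfp, spacing_um=100.0, sigma=0.3):
    """One-dimensional CSD: negative second spatial difference of the laminar
    LFP, scaled by tissue conductivity sigma (S/m). `lfp` has shape
    (contacts, time); the two edge contacts are lost to the difference."""
    h = spacing_um * 1e-6                          # contact spacing in metres
    d2 = lfp[:-2, :] - 2.0 * lfp[1:-1, :] + lfp[2:, :]
    return -sigma * d2 / h**2

# A quadratic depth profile has constant curvature, so its CSD is flat
depths = np.arange(5.0)
lfp = (depths ** 2)[:, None] * np.ones((1, 4))
```

    Current sinks and sources then appear as signed deflections of this profile across the cortical depth.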

    Neurophysiological assessments of low-level and high-level interdependencies between auditory and visual systems in the human brain

    This dissertation investigates the functional interplay between visual and auditory systems and its degree of experience-dependent plasticity. To function efficiently in everyday life, we must rely on our senses, building complex hierarchical representations of the environment. Early sensory deprivation, congenital (from birth) or within the first year of life, is a key model for studying sensory experience and the degree of compensatory reorganization (i.e., neuroplasticity). Neuroplasticity can be intramodal (within the sensory system) or crossmodal (the recruitment of deprived cortical areas for the remaining senses). However, the exact role of early sensory experience and the mechanisms guiding experience-driven plasticity need further investigation. To this aim, we performed three electroencephalographic studies, considering three aspects: 1) sensory modality (auditory/visual), 2) hierarchy of the brain's functional organization (low-/high-level), and 3) sensory deprivation (deprived/non-deprived cortices). The first study explored how early auditory experience affects low-level visual processing, using time-frequency analysis on the data of early deaf individuals and their hearing counterparts. The second study investigated experience-dependent plasticity in hierarchically organized face processing, applying fast periodic visual stimulation in congenitally deaf signers and their hearing controls. The third study assessed neural responses of blindfolded participants, using naturalistic stimuli together with the temporal response function, and evaluated neural tracking in hierarchically organized speech processing when retinal input is absent, focusing on the role of the visual cortex.
The results demonstrate the importance of atypical early sensory experience in shaping (via intra- and crossmodal changes) brain organization at various hierarchical stages of sensory processing, but also support the idea that some crossmodal effects emerge even with typical experience. This dissertation provides new insights into the functional interplay between visual and auditory systems and the mechanisms driving experience-dependent plasticity, and may contribute to the development of sensory restoration tools and rehabilitation strategies for sensory-typical and sensory-deprived populations.

    How does the brain extract acoustic patterns? A behavioural and neural study

    In complex auditory scenes the brain exploits statistical regularities to group sound elements into streams. Previous studies using tones that transition from being randomly drawn to regularly repeating have highlighted a network of brain regions involved in this process of regularity detection, including auditory cortex (AC) and hippocampus (HPC; Barascud et al., 2016). In this thesis, I seek to understand how neurons within AC and HPC detect and maintain a representation of deterministic acoustic regularity. I trained ferrets (n = 6) on a GO/NO-GO task to detect the transition from a random sequence of tones to a repeating pattern of tones, with increasing pattern lengths (3, 5 and 7 tones). All animals performed significantly above chance, with longer reaction times and declining performance as the pattern length increased. During performance of the behavioural task, or passive listening, I recorded from primary and secondary fields of AC with multi-electrode arrays (behaving: n = 3), or from AC and HPC using Neuropixels probes (behaving: n = 1; passive: n = 1). In the local field potential, I identified no differences in the evoked response between presentations of random and regular sequences. Instead, I observed significant increases in oscillatory power at the rate of the repeating pattern, and decreases at the tone presentation rate, during regularity. Neurons in AC, across the population, showed higher firing with more repetitions of the pattern and for shorter pattern lengths. Single units within AC showed higher precision in their firing when responding to their best frequency during regularity. Neurons in both AC and HPC entrained to the pattern rate during presentation of the regular sequence when compared to the random sequence. Lastly, the development of an optogenetic approach to inactivate AC in the ferret paves the way for future work to probe the causal involvement of these brain regions.
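    The spectral signature described above (power growing at the pattern repetition rate while shrinking at the tone presentation rate) can be illustrated on a synthetic response. The rates, amplitudes, and signal shape below are assumptions for illustration only, not the actual recording parameters:

```python
import numpy as np

def power_at(signal, fs, freq):
    """Power at `freq` (Hz) read from the discrete Fourier spectrum of
    `signal`, sampled at `fs` Hz (nearest frequency bin)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic response: tones presented at 8 Hz, with a 4-tone pattern
# repeating at 2 Hz; during regularity the pattern-rate component dominates
fs, dur = 200.0, 10.0
t = np.arange(0.0, dur, 1.0 / fs)
tone_rate, pattern_rate = 8.0, 2.0
response = (np.cos(2 * np.pi * tone_rate * t)
            + 2.0 * np.cos(2 * np.pi * pattern_rate * t))
```

    Comparing `power_at(response, fs, pattern_rate)` against `power_at(response, fs, tone_rate)` recovers the reported asymmetry.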

    The Human Auditory System

    This book presents the latest findings in clinical audiology, with a strong emphasis on new and emerging technologies that facilitate and optimize better assessment of the patient. The book has been edited with a strong educational perspective (all chapters include an introduction to their corresponding topic and a glossary of terms). It contains material suitable for graduate students in audiology, ENT, hearing science, and neuroscience.