Representation of statistical sound properties in human auditory cortex
The work carried out in this doctoral thesis investigated the representation of
statistical sound properties in human auditory cortex. It addressed four key aspects in
auditory neuroscience: the representation of different analysis time windows in
auditory cortex; mechanisms for the analysis and segregation of auditory objects;
information-theoretic constraints on pitch sequence processing; and the analysis of
local and global pitch patterns. The majority of the studies employed a parametric
design in which the statistical properties of a single acoustic parameter were altered
along a continuum, while keeping other sound properties fixed.
The thesis is divided into four parts. Part I (Chapter 1) examines principles of
anatomical and functional organisation that constrain the problems addressed. Part II
(Chapter 2) introduces approaches to digital stimulus design, principles of functional
magnetic resonance imaging (fMRI), and the analysis of fMRI data. Part III (Chapters
3-6) reports five experimental studies. Study 1 controlled the spectrotemporal
correlation in complex acoustic spectra and showed that activity in auditory
association cortex increases as a function of spectrotemporal correlation. Study 2
demonstrated a functional hierarchy of the representation of auditory object
boundaries and object salience. Studies 3 and 4 investigated cortical mechanisms for
encoding entropy in pitch sequences and showed that the planum temporale acts as a
computational hub, requiring more computational resources for sequences with high
entropy than for those with high redundancy. Study 5 provided evidence for a
hierarchical organisation of local and global pitch pattern processing in neurologically
normal participants. Finally, Part IV (Chapter 7) concludes with a general discussion
of the results and future perspectives.
Auditory edge detection: the dynamics of the construction of auditory perceptual representations
This dissertation investigates aspects of auditory scene analysis such as the detection of a new object in the environment. Specifically, I try to learn about these processes by studying the temporal dynamics of magnetic signals recorded from outside the scalp of human listeners, and comparing these dynamics with psychophysical measures. In total, nine behavioral and magnetoencephalography (MEG) brain-imaging experiments are reported. These studies relate to the extraction of tonal targets from background noise and the detection of change within ongoing sounds. The MEG deflections we observe between 50 and 200 ms post-transition reflect the first stages of perceptual organization. I interpret the temporal dynamics of these responses in terms of activation of cortical systems that participate in the detection of acoustic events and the discrimination of targets from backgrounds. The data shed light on the statistical heuristics with which our brains sample, represent, and detect changes in the world, including changes that are not the immediate focus of attention. In particular, the asymmetry of responses to transitions between 'order' and 'disorder' within a stimulus can be interpreted in terms of different requirements for temporal integration. The similarity of these transition responses to the commonly observed onset M50 and M100 auditory-evoked fields allows us to suggest a hypothesis as to their underlying functional significance, which so far has remained unclear. The comparison of MEG and psychophysics demonstrates a striking dissociation between higher-level mechanisms related to conscious detection and the lower-level, pre-attentive cortical mechanisms that subserve the early organization of auditory information. The implications of these data for the processes that underlie the creation of perceptual representations are discussed.
A comparison of the behavior of normal and dyslexic subjects in a tone-in-noise detection task revealed a general difficulty in extracting tonal objects from background noise, manifested as globally slower detection, associated with dyslexia. This finding may enable us to tease apart the physiological and behavioral corollaries of these early, pre-attentive processes. In conclusion, the sum of these results suggests that the combination of behavioral and MEG investigative tools can provide new insights into the processes by which perceptual representations emerge from sensory input.
Towards understanding the role of central processing in release from masking
People with normal hearing have the ability to listen to a desired target sound while filtering out unwanted sounds in the background. However, most patients with hearing impairment struggle in noisy environments, a perceptual deficit that current hearing aids and cochlear implants cannot resolve. Even though peripheral dysfunction of the ears undoubtedly contributes to this deficit, mounting evidence has implicated central processing in the inability to detect sounds in background noise. Therefore, it is essential to better understand the underlying neural mechanisms by which target sounds are dissociated from competing maskers. This research focuses on two phenomena that help suppress background sounds: 1) dip-listening, and 2) directional hearing.
When background noise fluctuates slowly over time, both humans and animals can listen in the dips of the noise envelope to detect target sound, a phenomenon referred to as dip-listening. Detection of target sound is facilitated by a central neuronal mechanism called envelope-locking suppression. At both positive and negative signal-to-noise ratios (SNRs), the presence of target energy can suppress the strength with which neurons in auditory cortex track background sound, at least in anesthetized animals. However, in humans and animals, most of the perceptual advantage gained by listening in the dips of fluctuating noise emerges when the target is softer than the background sound. This raises the possibility that SNR shapes the reliance on different processing strategies, a hypothesis tested here in awake behaving animals. Neural activity of Mongolian gerbils is measured with chronically implanted silicon probes in core auditory cortex. Using appetitive conditioning, gerbils detect target tones in the presence of temporally fluctuating amplitude-modulated background noise, called the masker. Comparing rate- vs. timing-based decoding strategies, analysis of single-unit activity shows that both mechanisms can be used for detecting tones at positive SNRs. However, only temporal decoding provides an SNR-invariant readout strategy that is viable at both positive and negative SNRs.
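The contrast between rate-based and timing-based readouts described above can be sketched with a toy simulation (purely illustrative; the Poisson spiking model, sinusoidal masker envelope, and all parameters are assumptions, not the thesis's actual analysis). Envelope locking is quantified here by vector strength:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spikes(rate_hz, envelope_locking, mod_hz=10.0, dur=5.0, dt=1e-3):
    """Inhomogeneous Poisson spike train whose instantaneous rate follows a
    sinusoidal masker envelope; `envelope_locking` (0-1) scales how strongly
    firing tracks that envelope."""
    t = np.arange(0.0, dur, dt)
    envelope = 1.0 + envelope_locking * np.sin(2 * np.pi * mod_hz * t)
    return t[rng.random(t.size) < rate_hz * dt * envelope]

def rate_readout(spikes, dur=5.0):
    """Rate code: mean firing rate over the trial (spikes/s)."""
    return len(spikes) / dur

def vector_strength(spikes, mod_hz=10.0):
    """Temporal code: phase-locking of spikes to the masker modulation
    (0 = no locking, 1 = perfect locking)."""
    phases = 2 * np.pi * mod_hz * spikes
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical scenario: adding a target suppresses envelope locking while
# leaving the overall firing rate roughly unchanged.
masker_alone = simulate_spikes(rate_hz=50, envelope_locking=0.9)
masker_plus_target = simulate_spikes(rate_hz=50, envelope_locking=0.2)

print(rate_readout(masker_alone), rate_readout(masker_plus_target))
print(vector_strength(masker_alone), vector_strength(masker_plus_target))
```

In this deliberately simplified scenario the mean rate barely changes while vector strength drops sharply, illustrating how a temporal readout can carry information that a rate readout misses.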
In addition to dip-listening, spatial cues can facilitate the dissociation of target sounds from background noise. Specifically, an important cue for computing sound direction is the difference in arrival time of acoustic energy at the two ears, called the interaural time difference (ITD). ITDs allow localization of low-frequency sounds from left to right inside the listener's head, also called sound lateralization. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here, two prevalent theories of sound localization are shown to make opposing predictions. The labelled-line model encodes location through representations tuned to spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. In this research, the computation of sound location from ITDs is tested through behavioral experiments on sound lateralization. Four groups of normally hearing listeners lateralize sounds based on ITDs as a function of sound intensity, exposure hemisphere, and stimulus history. Stimuli consist of low-frequency band-limited white noise. Statistical analyses, which partial out overall differences between listeners, are inconsistent with the place-coding scheme of sound localization and support the hypothesis that human sound localization is instead encoded through a population rate code.
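The opposing predictions of the two localization models can be made concrete with a minimal sketch (the tuning curves, gains, and units below are hypothetical illustrations, not the models fitted in this research):

```python
import numpy as np

itds_us = np.linspace(-500, 500, 201)  # candidate ITDs (microseconds)

def hemispheric_readout(itd_us, level_gain):
    """Hemispheric-difference code: two broadly tuned channels; perceived
    laterality is taken from the (unnormalised) firing-rate difference,
    which shrinks as overall level (gain) drops."""
    left = level_gain / (1 + np.exp(itd_us / 150))
    right = level_gain / (1 + np.exp(-itd_us / 150))
    return right - left  # arbitrary laterality units

def labelled_line_readout(itd_us, level_gain, width_us=100):
    """Labelled-line code: an array of narrowly tuned channels; perceived ITD
    is the preferred ITD of the most active channel, regardless of gain."""
    rates = level_gain * np.exp(-((itds_us - itd_us) ** 2) / (2 * width_us ** 2))
    return itds_us[np.argmax(rates)]

for gain in (1.0, 0.25):  # loud vs. soft sound
    print(gain, hemispheric_readout(300, gain), labelled_line_readout(300, gain))
```

Lowering the gain leaves the labelled-line estimate at the true ITD but shrinks the hemispheric rate difference, i.e. a medial bias at low sound levels, which is exactly the contrast the behavioral experiments exploit.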
On the role of neuronal oscillations in auditory cortical processing
Although it has been over 100 years since William James stated that "everyone knows what attention is", its underlying neural mechanisms are still being debated today. The goal of this research was to describe the physiological mechanisms of auditory attention using direct electrophysiological recordings in macaque primary auditory cortex (A1). A major focus of my research was on the role ongoing neuronal oscillations play in attentional modulation of auditory responses in A1.
For all studies, laminar profiles of synaptic activity (indexed by current source density analysis) and concomitant firing patterns of local neurons (multiunit activity) were acquired simultaneously via linear-array multielectrodes positioned in A1. The initial study of this dissertation examined the contribution of ongoing oscillatory activity to excitatory and inhibitory responses in A1 in passive (no-task) conditions. Next, the function of ongoing oscillations in modulating the frequency tuning of A1 during an intermodal selective attention oddball task was investigated. The last study aimed to establish whether there is a hemispheric asymmetry in the way neuronal oscillations are utilized by attention, corresponding to that noted in humans.
The results of the first study indicate that in passive conditions, ongoing oscillations reset by stimulus-related inputs modulate both excitatory and inhibitory components of local neuronal ensemble responses in A1. The second set of experiments demonstrates that this mechanism is utilized by attention to modulate and sharpen frequency tuning. Finally, we show that, as in humans, there appears to be a specialization of left A1 for temporal processing, as signified by greater temporal precision of neuronal oscillatory alignment. Taken together, these results underline the importance of neuronal oscillations in perceptual processes, and the validity of the macaque monkey as a model of human auditory processing.
Investigating the function of alpha frequency oscillatory activity
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. A fundamental challenge in modern neuroscience is to understand the role of synchronous oscillatory activity of groups of neurons in information processing. This thesis addressed the problem of how alpha frequency oscillatory activity might help control the flow of information from both the external world and from higher cognitive areas (responsible for inhibitory control, top-down and bottom-up information flow). A series of experiments investigated how alpha neuronal dynamics might aid and control cognition. To study the functional significance of alpha frequency oscillatory activity, the effects on cognitive-task performance of alpha activity directly elicited by photic stimulation were examined. Initially, we were interested in the role of alpha oscillations in information transfer across cortical areas, which was probed using a numerical Stroop task with every trial preceded by a flicker prime. The incongruent trials of the Stroop task introduce a conflict between competing responses, which makes people slower to respond than on congruent trials. That slower response has been related to increased communication between conflict-processing fronto-parietal and early somatosensory regions. If alpha oscillations improve communication efficiency across the cortex, it was predicted that inducing stronger alpha oscillations would affect performance (i.e. the Stroop cost would diminish). That hypothesis was tested in a series of three experiments. None of the manipulations (different frequencies, amplitudes induced, and alpha phases at which the Stroop task was initiated) showed that alpha oscillatory activity reduces the Stroop effect. However, the last task showed that people were faster when the task was preceded by an alpha frequency flicker prime, especially around 10 Hz.
The fourth experiment built on the well-established phenomenon that when alpha activity is elicited in a particular hemisphere it attenuates processing of sensory information in that hemisphere, while the opposite hemisphere is characterised by increased efficiency of information processing/flow. The study tested whether that could occur within a hemisphere by localised entrainment of part of the visual field. This hypothesis was tested by examining whether it could resolve differences in results previously published by Mathewson et al. (2012) and Spaak et al. (2014). In this study, a target circle was presented at time points after the offset of an alpha flicker prime, such that it was either in or out of phase with the prime. The target was displayed briefly, and then a masking ring appeared around the target location. There were two experimental conditions. In the first, priming occurred at the central target location, and this was expected to inhibit perception at that location (i.e. the target would be best detected at out-of-phase time points). In contrast, in the second condition, the target surround area (i.e. the mask location) was stimulated, and this was expected to inhibit perception at that location (i.e. the mask would be most effective at in-phase time points and so the target more easily detected). However, in both instances, target detection was best at in-phase time points and attenuated at out-of-phase time points, in line with Mathewson et al.'s (2012) results. This gives us some insight into the role of the alpha phase in allowing external stimuli to be perceived/detected. The fifth experiment tested whether the level of spatial uncertainty of a briefly presented target determines the alpha phase position for its best detection.
This task used a masked-circle paradigm similar to that of the fourth experiment, but the target could appear at one of two locations on either side of fixation, both preceded by a flicker prime (either alpha frequency or randomly jittered) and followed by masking rings. The hypothesis was that the optimal alpha phase for target detection depends on whether people are pre-guided (by an arrow cue) to the target location or uncued (a higher level of spatial uncertainty). This hypothesis was again tested by examining whether it could resolve differences in results previously published by Mathewson et al. (2012) and Spaak et al. (2014). This experiment showed that the level of spatial uncertainty of a briefly presented target determines the optimal alpha phase for its detection. Targets whose location was not pre-guided were most likely to be detected when presented at time points out of phase with the entrained alpha prime; targets whose location was pre-guided by a brief arrow were most likely to be detected when presented at time points in phase with the entrained alpha prime. The sixth experiment used EEG to investigate the neural dynamics underlying the behaviourally tested phenomenon of the previous experiment. Results showed that for targets with a high level of spatial uncertainty, the average alpha power peak was detected earlier in anterior electrodes than in posterior electrodes, which is consistent with a greater reliance on top-down alpha dynamics. In contrast, for targets at a spatially cued location, the average alpha power peak was detected earlier at posterior electrodes, which suggests a greater reliance on bottom-up alpha neuronal dynamics. In summary, this thesis confirmed that alpha phase determines the probability of detection of a briefly presented target. It also showed that the optimal alpha phase for detecting a briefly presented target differs depending on the level of spatial uncertainty of that target.
Targets at non-predictable locations are more likely to be detected at a trough in the phase of alpha activity, whilst those at cued locations are most likely to be detected in phase. Hence, perception depends not only on internal neuronal alpha dynamics but also on the type of visual percept. This difference may highlight the role of two different neuronal alpha sources which dominate in the different scenarios: when the target location is uncertain, top-down alpha dynamics dominate; when the target location is pre-guided, bottom-up alpha dynamics dominate.
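The summary above can be captured in a small phase model (a deliberately simplified sketch; the 10 Hz frequency, baseline detection rate, and modulation depth are illustrative assumptions, not fitted values): detection probability is modulated by the phase of the entrained alpha rhythm at target onset, with cued and uncued targets peaking half a cycle apart.

```python
import numpy as np

ALPHA_HZ = 10.0  # assumed entrained alpha frequency (100 ms cycle)

def alpha_phase_at(t_ms):
    """Phase (radians) of the entrained alpha rhythm t_ms after flicker offset,
    assuming the rhythm continues from zero phase at offset."""
    return (2 * np.pi * ALPHA_HZ * t_ms / 1000.0) % (2 * np.pi)

def detection_probability(t_ms, cued=True, base=0.5, depth=0.2):
    """Toy phase-dependent detection: cued targets peak in phase with the
    entrained rhythm; uncued targets peak half a cycle later (at the trough)."""
    shift = 0.0 if cued else np.pi
    return base + depth * np.cos(alpha_phase_at(t_ms) - shift)

# One full 100 ms alpha cycle after offset (in phase) vs. half a cycle
# later (out of phase):
print(detection_probability(100, cued=True))    # cued: maximal in phase
print(detection_probability(150, cued=False))   # uncued: maximal at trough
```

The model is only a compact restatement of the behavioural result: the same phase that maximises detection of cued targets minimises detection of uncued ones.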
Neurophysiological assessments of low-level and high-level interdependencies between auditory and visual systems in the human brain
This dissertation investigates the functional interplay between visual and auditory systems and its degree of experience-dependent plasticity. To function efficiently in everyday life, we must rely on our senses, building complex hierarchical representations of the environment. Early sensory deprivation, congenital (from birth) or within the first year of life, is a key model for studying sensory experience and the degree of compensatory reorganization (i.e., neuroplasticity). Neuroplasticity can be intramodal (within the sensory system) or crossmodal (the recruitment of deprived cortical areas by the remaining senses). However, the exact role of early sensory experience and the mechanisms guiding experience-driven plasticity need further investigation. To this aim, we performed three electroencephalographic studies, considering three aspects: 1) sensory modality (auditory/visual), 2) hierarchy of the brain's functional organization (low-/high-level), and 3) sensory deprivation (deprived/non-deprived cortices). The first study explored how early auditory experience affects low-level visual processing, using time-frequency analysis on the data of early deaf individuals and their hearing counterparts. The second study investigated experience-dependent plasticity in hierarchically organized face processing, applying fast periodic visual stimulation in congenitally deaf signers and their hearing controls. The third study assessed neural responses of blindfolded participants, using naturalistic stimuli together with temporal response functions, and evaluated neural tracking in hierarchically organized speech processing when retinal input is absent, focusing on the role of the visual cortex. The results demonstrate the importance of atypical early sensory experience in shaping (via intra- and crossmodal changes) brain organization at various hierarchical stages of sensory processing, but also support the idea that some crossmodal effects emerge even with typical experience. This dissertation provides new insights into the functional interplay between visual and auditory systems and the mechanisms driving experience-dependent plasticity, and may contribute to the development of sensory restoration tools and rehabilitation strategies for sensory-typical and sensory-deprived populations.
How does the brain extract acoustic patterns? A behavioural and neural study
In complex auditory scenes the brain exploits statistical regularities to group sound elements into streams. Previous studies using tones that transition from being randomly drawn to regularly repeating have highlighted a network of brain regions involved in this process of regularity detection, including auditory cortex (AC) and hippocampus (HPC; Barascud et al., 2016). In this thesis, I seek to understand how neurons within AC and HPC detect and maintain a representation of deterministic acoustic regularity.
I trained ferrets (n = 6) on a GO/NO-GO task to detect the transition from a random sequence of tones to a repeating pattern of tones, with increasing pattern lengths (3, 5 and 7). All animals performed significantly above chance, with longer reaction times and declining performance as the pattern length increased. During performance of the behavioural task, or passive listening, I recorded from primary and secondary fields of AC with multi-electrode arrays (behaving: n = 3), or AC and HPC using Neuropixels probes (behaving: n = 1; passive: n = 1).
In the local field potential, I identified no differences in the evoked response between presentations of random and regular sequences. Instead, I observed significant increases in oscillatory power at the rate of the repeating pattern, and decreases at the tone presentation rate, during regularity. Neurons in AC, across the population, showed higher firing with more repetitions of the pattern and for shorter pattern lengths. Single units within AC showed higher precision in their firing when responding to their best frequency during regularity. Neurons in both AC and HPC entrained to the pattern rate during presentation of the regular sequence compared with the random sequence. Lastly, the development of an optogenetic approach to inactivate AC in the ferret paves the way for future work to probe the causal involvement of these brain regions.
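The spectral signature reported above, increased power at the pattern repetition rate during regularity, can be illustrated with a toy response series (the tuning values, rates, and the 5-tone pattern below are invented for illustration, not the recorded data):

```python
import numpy as np

rng = np.random.default_rng(1)
tone_rate = 20.0      # tones per second
pattern_len = 5       # tones in the repeating pattern
n_tones = 500

# Hypothetical response amplitudes for 8 tone frequencies (a neuron's tuning):
amps = np.linspace(0.2, 1.0, 8)

pattern = np.array([0, 3, 1, 4, 2])                 # fixed 5-tone pattern
regular = np.tile(pattern, n_tones // pattern_len)  # REG: pattern repeats
random_seq = rng.integers(0, 8, n_tones)            # RAND: tones drawn i.i.d.

def power_at(seq, freq_hz):
    """Spectral power of the mean-subtracted tone-by-tone response series
    at `freq_hz`, via a single DFT coefficient."""
    x = amps[seq] - amps[seq].mean()
    t = np.arange(len(seq)) / tone_rate
    return np.abs(np.sum(x * np.exp(-2j * np.pi * freq_hz * t))) ** 2 / len(seq)

pattern_rate = tone_rate / pattern_len  # 4 Hz: one pattern cycle per 5 tones
print(power_at(regular, pattern_rate), power_at(random_seq, pattern_rate))
```

Because the regular sequence makes the response series periodic at the pattern rate, power concentrates at 4 Hz, whereas the random sequence spreads its power across frequencies; this is the sense in which entrainment to the pattern rate is visible spectrally.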
The Human Auditory System
This book presents the latest findings in clinical audiology, with a strong emphasis on new and emerging technologies that facilitate and optimize better assessment of the patient. The book has been edited with a strong educational perspective (all chapters include an introduction to their corresponding topic and a glossary of terms). It contains material suitable for graduate students in audiology, ENT, hearing science, and neuroscience.