Investigations of the effects of sequential tones on the responses of neurons in the guinea pig primary auditory cortex
The auditory system needs to be able to analyse complex acoustic waveforms. Many ecologically relevant sounds, for example speech and animal calls, vary over time. This thesis investigates how the auditory system processes sounds that occur sequentially. The focus is on how the responses of neurons in the primary auditory cortex ‘adapt’ when there are two or more tones.
When two sounds are presented in quick succession, the neural response to the second sound can decrease relative to when it is presented alone. Previous two-tone experiments have not determined whether the frequency tuning of cortical suppression is set by the receptive field of the neuron or by the exact relationship between the frequencies of the two tones. The first experiment shows that forward suppression does depend on the relationship between the two tones, confirming that cortical forward suppression is ‘frequency specific’ at the shortest possible timescale.
Sequences of interleaved tones with two different frequencies have been used to investigate the perceptual grouping of sequential sounds. A neural correlate of this auditory streaming has been demonstrated in awake monkeys, birds and bats. The second experiment investigates the responses of neurons in the primary auditory cortex of anaesthetised guinea pigs to alternating tone sequences. The responses are generally consistent with awake recordings, although adaptation was more rapid, and at fast presentation rates the responses were often poorly synchronised to the tones.
In the third experiment, the way in which responses to tone sequences build up is investigated by varying the number of tones presented before a probe tone. The suppression that is observed is again strongest when the frequencies of the two tones are similar. However, the frequencies to which a neuron preferentially responds remain the same irrespective of the frequency and number of preceding tones. This implies that, through frequency-specific adaptation, neurons become more selective to their preferred stimuli in the presence of a preceding stimulus.
Forward suppression in the auditory cortex is frequency-specific
We investigated how physiologically observed forward suppression interacts with stimulus frequency in neuronal responses in the guinea pig auditory cortex. The temporal order and frequency proximity of sounds influence both their perception and neuronal responses. Psychophysically, preceding sounds (conditioners) can make successive sounds (probes) harder to hear. These effects are larger when the two sounds are spectrally similar. Physiological forward suppression is usually maximal for conditioner tones near to a unit's characteristic frequency (CF), the frequency to which a neuron is most sensitive. However, in most physiological studies, the frequency of the probe tone and CF are identical, so the roles of unit CF and probe frequency cannot be distinguished. Here, we systematically varied the frequency of the probe tone, and found that the tuning of suppression was often more closely related to the frequency of the probe tone than to the unit's CF, i.e. suppressed tuning was specific to probe frequency. This relationship was maintained for all measured gaps between the conditioner and the probe tones. However, when the probe frequency and CF were similar, CF tended to determine suppressed tuning. In addition, the bandwidth of suppression was slightly wider for off-CF probes. Changes in tuning were also reflected in the firing rate in response to probe tones, which was maximally reduced when probe and conditioner tones were matched in frequency. These data are consistent with the idea that cortical neurons receive convergent inputs with a wide range of tuning properties that can adapt independently.
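The closing idea, that convergent inputs adapt independently, can be illustrated with a toy simulation. All tuning widths, the channel grid, and the adaptation strength below are invented for illustration; this is a sketch of the hypothesis, not the study's fitted model:

```python
import numpy as np

def channel_gain(channel_cf, tone_freq, bw=0.3):
    """Gaussian tuning of one input channel; frequencies in octaves re: unit CF."""
    return np.exp(-0.5 * ((tone_freq - channel_cf) / bw) ** 2)

def probe_response(probe_freq, conditioner_freq=None,
                   channels=np.linspace(-2.0, 2.0, 81), adapt_strength=0.8):
    """Summed response of many independently adapting input channels to a probe tone."""
    gains = channel_gain(channels, probe_freq)
    if conditioner_freq is not None:
        # each channel adapts in proportion to how strongly the conditioner drove it
        gains = gains * (1.0 - adapt_strength * channel_gain(channels, conditioner_freq))
    return gains.sum()

# Suppression of an off-CF probe (1 octave above the unit CF at 0) for three
# conditioner frequencies: in this scheme suppression tracks the probe
# frequency, not the unit's CF, reproducing probe-specific suppressed tuning.
probe = 1.0
suppression = {c: 1.0 - probe_response(probe, c) / probe_response(probe)
               for c in (-1.0, 0.0, 1.0)}
```

Because each channel adapts only to the extent that the conditioner drove it, the deepest suppression occurs when the conditioner matches the probe frequency, mirroring the firing-rate result reported above.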
Selective modulation of visual sensitivity during fixation
During periods of steady fixation, we make small-amplitude ocular movements, termed microsaccades, at a rate of 1–2 per second. Early studies provided evidence that visual sensitivity is reduced during microsaccades - akin to the well-established suppression associated with larger saccades. However, the results of more recent work suggest that microsaccades may alter retinal input in a manner that enhances visual sensitivity to some stimuli. Here, we parametrically varied the spatial frequency of a stimulus during a detection task and tracked contrast sensitivity as a function of time relative to microsaccades. Our data reveal two distinct modulations of sensitivity: suppression during the eye movement itself, and facilitation after the eye has stopped moving. The magnitude of suppression and facilitation of visual sensitivity is related to the spatial content of the stimulus: suppression is greatest for low spatial frequencies, while sensitivity is enhanced most for stimuli of 1-2 c/deg, spatial frequencies at which we are already most sensitive in the absence of eye movements. We present a model in which the tuning of suppression and facilitation is explained by delayed lateral inhibition between spatial frequency channels. Our data show that eye movements actively modulate visual sensitivity even during fixation: the detectability of images at different spatial scales can be increased or decreased depending on when the image occurs relative to a microsaccade.
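The biphasic time course can be sketched with toy dynamics. Every number here (burst size, rebound depth, delay, gain) is invented; the point is only to show how a delayed inhibitory signal from a low-spatial-frequency channel yields suppression during the movement and facilitation once it ends:

```python
import numpy as np

dt, T = 1.0, 400                    # milliseconds
t = np.arange(0, T, dt)
saccade = (t >= 100) & (t < 125)    # a 25 ms microsaccade

# Low-SF channel: tonic activity, a burst during the movement (retinal smear),
# then adaptation below its tonic level once the eye stops.
low_sf = np.full_like(t, 1.0)
low_sf[saccade] = 3.0
low_sf[(t >= 125) & (t < 250)] = 0.4

delay = 30                          # assumed lag on the lateral inhibitory path
inhibition = np.roll(low_sf, int(delay / dt))
inhibition[: int(delay / dt)] = 1.0  # pre-stimulus tonic level

# Mid-SF (1-2 c/deg) sensitivity: baseline minus the deviation of the delayed
# inhibition from its tonic level -> a dip during/just after the movement and
# a rebound above baseline afterwards.
sensitivity = 1.0 - 0.5 * (inhibition - 1.0)
```

The dip and rebound reproduce the suppression-then-facilitation signature qualitatively; tuning across spatial frequency would require a bank of such channels with frequency-dependent coupling.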
The interrelationship between the face and vocal tract configuration during audiovisual speech
It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur because both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal components analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A “Bubbles” technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
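The joint-PCA reconstruction idea can be sketched on synthetic data. Here a few shared latent "articulation" components drive both a face block and an MR block (all dimensions and the low-rank generative assumption are invented stand-ins for the real video and MR frames, and the paper's Bubbles analysis is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, face_dim, mr_dim, n_latent = 200, 50, 40, 3

# Synthetic shared articulation: the same latents drive both modalities.
latents = rng.standard_normal((n_frames, n_latent))
face = latents @ rng.standard_normal((n_latent, face_dim))
mr = latents @ rng.standard_normal((n_latent, mr_dim))

# Fit PCA jointly on stacked, time-aligned face+MR frames (PCA via SVD).
joint = np.hstack([face, mr])
mean = joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
components = vt[:n_latent]          # (n_latent, face_dim + mr_dim)

def reconstruct_mr(face_frame):
    """Fit latent scores using only the face block of the joint basis,
    then read out the MR block -- recovering vocal tract shape from the face."""
    face_basis = components[:, :face_dim]
    scores, *_ = np.linalg.lstsq(face_basis.T, face_frame - mean[:face_dim],
                                 rcond=None)
    return mean[face_dim:] + scores @ components[:, face_dim:]

recon = np.array([reconstruct_mr(f) for f in face])
```

With noiseless low-rank data the readout is essentially exact; on real frames, reconstruction fidelity instead quantifies how much vocal tract information the face actually carries.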
Learning to silence saccadic suppression
Perceptual stability is facilitated by a decrease in visual sensitivity during rapid eye movements, called saccadic suppression. While a large body of evidence demonstrates that saccadic programming is plastic, little is known about whether the perceptual consequences of saccades can be modified. Here, we demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced. Across a period of seven days, 44 participants were trained to detect brief, low contrast stimuli embedded within dynamic noise, while eye position was tracked. Although instructed to fixate, participants regularly made small fixational saccades. Data were accumulated over a large number of trials, allowing us to assess changes in performance as a function of the temporal proximity of stimuli and saccades. This analysis revealed that improvements in sensitivity over the training period were accompanied by a systematic change in the impact of saccades on performance - robust saccadic suppression on day 1 declined gradually over subsequent days until its magnitude became indistinguishable from zero. This silencing of suppression was not explained by learning-related changes in saccade characteristics and generalized to an untrained retinal location and stimulus orientation. Suppression was restored when learned stimulus timing was perturbed, consistent with the operation of a mechanism that temporarily reduces or eliminates saccadic suppression, but only when it is behaviorally advantageous to do so. Our results indicate that learning can circumvent saccadic suppression to improve performance, without compromising its functional benefits in other viewing contexts.
Fixational eye movements predict visual sensitivity
During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate—an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals’ psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals’ threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement.
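The classification step can be sketched with a linear SVM on synthetic rate traces. The biphasic "signature" shape, trial counts, and noise level below are invented stand-ins for the recorded saccade-rate data:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_bins = 400, 40
t = np.linspace(0, 1, n_bins)

# Biphasic signature: early inhibition of saccade rate, later rebound.
signature = (-0.8 * np.exp(-((t - 0.2) / 0.08) ** 2)
             + 0.5 * np.exp(-((t - 0.5) / 0.10) ** 2))

labels = rng.integers(0, 2, n_trials)               # 1 = stimulus present
traces = rng.standard_normal((n_trials, n_bins)) * 0.5
traces[labels == 1] += signature                    # signature on present trials

# Train a linear SVM to classify present vs absent from the rate trace alone.
acc = cross_val_score(LinearSVC(), traces, labels, cv=5).mean()
```

In the study, running this classification across a range of stimulus contrasts yields a psychometric-style accuracy curve from which an "oculomotor" contrast threshold can be read off and compared with the observer's explicit judgements.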
Stream segregation in the anesthetized auditory cortex
Auditory stream segregation describes the way that sounds are perceptually segregated into groups or streams on the basis of perceptual attributes such as pitch or spectral content. For sequences of pure tones, segregation depends on the tones' proximity in frequency and time. In the auditory cortex (and elsewhere) responses to sequences of tones are dependent on stimulus conditions in a similar way to the perception of these stimuli. However, although highly dependent on stimulus conditions, perception is also clearly influenced by factors unrelated to the stimulus, such as attention. Exactly how ‘bottom-up’ sensory processes and non-sensory ‘top-down’ influences interact is still not clear.
Here, we recorded responses to alternating tones (ABAB …) of varying frequency difference (FD) and rate of presentation (PR) in the auditory cortex of anesthetized guinea pigs. These data complement previous studies, in that top-down processing resulting from conscious perception should be absent or at least considerably attenuated.
Under anesthesia, the responses of cortical neurons to the tone sequences adapted rapidly, in a manner sensitive to both the FD and PR of the sequences. While the responses to tones at frequencies more distant from neuron best frequencies (BFs) decreased as the FD increased, the responses to tones near to BF increased, consistent with a release from adaptation, or forward suppression. Increases in PR resulted in reductions in responses to all tones, but the reduction was greater for tones further from BF. Although asymptotically adapted responses to tones showed behavior that was qualitatively consistent with perceptual stream segregation, responses reached asymptote within 2 s, and responses to all tones were very weak at high PRs (>12 tones per second).
A signal-detection model, driven by the cortical population response, made decisions that were dependent on both FD and PR in ways consistent with perceptual stream segregation. This included showing a range of conditions over which decisions could be made either in favor of perceptual integration or segregation, depending on the model ‘decision criterion’. However, the rate of ‘build-up’ was more rapid than seen perceptually, and at high PR responses to tones were sometimes so weak as to be undetectable by the model.
Under anesthesia, adaptation occurs rapidly, and at high PRs tones are generally poorly represented, which compromises the interpretation of the experiment. However, within these limitations, these results complement experiments in awake animals and humans. They generally support the hypothesis that ‘bottom-up’ sensory processing plays a major role in perceptual organization, and that processes underlying stream segregation are active in the absence of attention.
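The decision stage can be sketched in a few lines. The tuning bandwidth, adaptation constant, and criterion values below are invented; the sketch only shows how an FD- and PR-dependent response plus a movable criterion yields either an "integrated" or "segregated" decision, as in the model described above:

```python
import numpy as np

def response_to_b_at_a_site(fd_oct, pr_hz, bw=0.5):
    """Response to tone B at a recording site tuned to tone A: spectral
    overlap shrinks with frequency difference (FD), and adaptation deepens
    with presentation rate (PR)."""
    tuning = np.exp(-0.5 * (fd_oct / bw) ** 2)
    adaptation = 1.0 / (1.0 + 0.2 * pr_hz)
    return tuning * adaptation

def decision(fd_oct, pr_hz, criterion=0.1):
    """'segregated' when B barely drives the A site, else 'integrated'."""
    if response_to_b_at_a_site(fd_oct, pr_hz) < criterion:
        return "segregated"
    return "integrated"
```

Large FDs or fast rates push the response below criterion (segregation); and for intermediate conditions the verdict flips with the criterion alone, reproducing the ambiguous region in which either percept can be reported.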
Retuning of Inferior Colliculus Neurons Following Spiral Ganglion Lesions: A Single-Neuron Model of Converging Inputs
Lesions of spiral ganglion cells, representing a restricted sector of the auditory nerve array, produce immediate changes in the frequency tuning of inferior colliculus (IC) neurons. There is a loss of excitation at the lesion frequencies, yet responses to adjacent frequencies remain intact and new regions of activity appear. This leads to immediate changes in tuning and in tonotopic progression. Similar effects are seen after different methods of peripheral damage and in auditory neurons in other nuclei. The mechanisms that underlie these postlesion changes are unknown, but the acute effects seen in IC strongly suggest the “unmasking” of latent inputs by the removal of inhibition. In this study, we explore computational models of single neurons with a convergence of excitatory and inhibitory inputs from a range of characteristic frequencies (CFs), which can simulate the narrow prelesion tuning of IC neurons, and account for the changes in CF tuning after a lesion. The models can reproduce the data if inputs are aligned relative to one another in a precise order along the dendrites of model IC neurons. Frequency tuning in these neurons approximates that seen physiologically. Removal of inputs representing a narrow range of frequencies leads to unmasking of previously subthreshold excitatory inputs, which causes changes in CF. Conversely, if all of the inputs converge at the same point on the cell body, receptive fields are broad and unmasking rarely results in CF changes. However, if the inhibition is tonic with no stimulus-driven component, then unmasking can still produce changes in CF.
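A minimal numerical sketch of the unmasking idea follows. It compresses the paper's dendritic-alignment model into a one-line weight profile, and every bandwidth, weight, and threshold is invented: broad excitatory convergence plus laterally spreading feed-forward inhibition and a spike threshold keep prelesion tuning centred; silencing a sector removes both the excitation and the inhibition it recruits, so flanking inputs become suprathreshold and the CF shifts.

```python
import numpy as np

g = np.linspace(-2.0, 2.0, 401)     # input CF axis, octaves re: the unit's CF
dg = g[1] - g[0]

# Gaussian kernel for the lateral spread of feed-forward inhibition (bw 0.5 oct)
kernel = np.exp(-0.5 * (np.arange(-100, 101) * dg / 0.5) ** 2)
kernel /= kernel.sum()

def tuning_curve(afferents, w_inh=0.9, theta=0.05):
    """Thresholded excitation minus inhibition recruited by the same afferents."""
    inhibition = w_inh * np.convolve(afferents, kernel, mode="same")
    return np.maximum(afferents - inhibition - theta, 0.0)

excitation = np.exp(-0.5 * (g / 0.8) ** 2)   # broad excitatory convergence
pre = tuning_curve(excitation)

lesioned = excitation.copy()
lesioned[np.abs(g) < 0.4] = 0.0              # silence a sector of the nerve array
post = tuning_curve(lesioned)

cf_pre, cf_post = g[pre.argmax()], g[post.argmax()]
```

After the lesion, responses at the lesioned frequencies vanish, while previously masked flanking excitation crosses threshold and becomes the new CF, the immediate retuning the abstract describes.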