Spatial processing is frequency-specific in auditory cortex but not in the midbrain
The cochlea behaves like a bank of band-pass filters, segregating information into different frequency channels. Some aspects of perception reflect processing within individual channels, but others involve the integration of information across them. One instance of this is sound localization, which improves with increasing bandwidth. The processing of binaural cues for sound location has been extensively studied. However, while the advantage conferred by bandwidth is clear, we currently know little about how this additional information is combined to form our percept of space. We investigated the ability of cells in the auditory system of guinea pigs to compare interaural level differences (ILDs), a key localization cue, between tones of disparate frequencies in each ear. Cells in auditory cortex, believed to be integral to ILD processing (Excitatory from one ear, Inhibitory from the other: EI cells), separately compare ILDs over restricted frequency ranges, in a manner not consistent with their monaural tuning. In contrast, cortical EE (Excitatory from both ears) cells showed no evidence of frequency-specific processing. Both cell types are explained by a model in which ILDs are computed within separate frequency channels and subsequently combined in a single cortical cell. Interestingly, ILD processing in all inferior colliculus cell types (EE and EI) is largely consistent with processing within single matched frequency channels from each ear. Our data suggest a clear constraint on the way that localization cues are integrated: cortical ILD tuning to broadband sounds is a composite of separate frequency-specific binaurally sensitive channels. This frequency-specific processing appears after the midbrain.
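The combination model described in this abstract can be sketched in a few lines: each frequency channel computes its own sigmoidal rate-versus-ILD function, and a model cortical cell combines the channel outputs. This is a minimal illustration with made-up sigmoid parameters and weights, not the authors' fitted model:

```python
import numpy as np

def channel_ild_response(ild_db, midpoint_db=0.0, slope=0.2):
    """Sigmoidal rate-vs-ILD function for a single frequency channel
    (illustrative parameters; positive ILD favours the excitatory ear)."""
    return 1.0 / (1.0 + np.exp(-slope * (ild_db - midpoint_db)))

def cortical_response(ilds_per_channel, weights):
    """Model cortical cell: ILDs are evaluated separately within each
    frequency channel, and the channel outputs are then combined."""
    rates = [channel_ild_response(ild) for ild in ilds_per_channel]
    return float(np.dot(weights, rates))

# Tones of disparate frequencies carrying different ILDs in each channel
print(cortical_response([10.0, -10.0], weights=[0.5, 0.5]))
```

In this sketch, an EI-like cell inherits frequency-specific ILD tuning simply because each channel's sigmoid is applied before, not after, the cross-frequency summation.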
Forward suppression in the auditory cortex is frequency-specific
We investigated how physiologically observed forward suppression interacts with stimulus frequency in neuronal responses in the guinea pig auditory cortex. The temporal order and frequency proximity of sounds influence both their perception and neuronal responses. Psychophysically, preceding sounds (conditioners) can make successive sounds (probes) harder to hear. These effects are larger when the two sounds are spectrally similar. Physiological forward suppression is usually maximal for conditioner tones near a unit's characteristic frequency (CF), the frequency to which a neuron is most sensitive. However, in most physiological studies, the frequency of the probe tone and CF are identical, so the roles of unit CF and probe frequency cannot be distinguished. Here, we systematically varied the frequency of the probe tone, and found that the tuning of suppression was often more closely related to the frequency of the probe tone than to the unit's CF, i.e. suppressed tuning was specific to probe frequency. This relationship was maintained for all measured gaps between the conditioner and the probe tones. However, when the probe frequency and CF were similar, CF tended to determine suppressed tuning. In addition, the bandwidth of suppression was slightly wider for off-CF probes. Changes in tuning were also reflected in the firing rate in response to probe tones, which was maximally reduced when probe and conditioner tones were matched in frequency. These data are consistent with the idea that cortical neurons receive convergent inputs with a wide range of tuning properties that can adapt independently.
Changes in neuronal representations of consonants in the ascending auditory system and their role in speech recognition
A fundamental task of the ascending auditory system is to produce representations that facilitate the recognition of complex sounds. This is particularly challenging in the context of acoustic variability, such as that between different talkers producing the same phoneme. These representations are transformed as information is propagated through the ascending auditory system from the inner ear to the auditory cortex. Investigating these transformations and their role in speech recognition is key to understanding hearing impairment and the development of future clinical interventions. Here, we obtained neural responses to an extensive set of natural vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, in three stages of the auditory processing pathway. Auditory nerve (AN) representations were simulated using a model of the peripheral auditory system, and extracellular neuronal activity was recorded in the inferior colliculus (IC) and primary auditory cortex (AI) of anaesthetized guinea pigs. A classifier was developed to examine the efficacy of these representations for recognizing the speech sounds. Individual neurons convey progressively less information from AN to AI. Nonetheless, at the population level, representations are sufficiently rich to facilitate recognition of consonants with a high degree of accuracy at all stages, indicating a progression from a dense, redundant representation to a sparse, distributed one. We examined the timescale of the neural code for consonant recognition and found that optimal timescales increase throughout the ascending auditory system, from a few milliseconds in the periphery to several tens of milliseconds in the cortex. Despite these longer timescales, we found little evidence to suggest that representations up to the level of AI become increasingly invariant to across-talker differences. Instead, our results support the idea that the role of the subcortical auditory system is one of dimensionality expansion, which could provide a basis for flexible classification of arbitrary speech sounds.
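The timescale analysis described above can be illustrated schematically: spike trains are binned at a chosen resolution, and responses are classified against per-class templates. The spike times, labels, and nearest-centroid rule below are illustrative stand-ins, not the authors' classifier:

```python
import numpy as np

def bin_spikes(spike_times_ms, duration_ms, bin_ms):
    """Convert a list of spike times into a binned response vector;
    bin_ms sets the timescale of the neural code being tested."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts.astype(float)

def nearest_centroid_label(test_vec, centroids):
    """Classify a response by Euclidean distance to class centroids."""
    dists = {lab: np.linalg.norm(test_vec - c) for lab, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy training data: two 'consonants' with different temporal patterns
train = {"ba": [[5, 12, 40], [6, 13, 42]], "da": [[30, 60, 80], [28, 62, 78]]}
bin_ms, dur = 10.0, 100.0
centroids = {lab: np.mean([bin_spikes(t, dur, bin_ms) for t in trials], axis=0)
             for lab, trials in train.items()}
print(nearest_centroid_label(bin_spikes([7, 11, 41], dur, bin_ms), centroids))
```

Sweeping `bin_ms` from ~1 ms to ~50 ms and measuring classification accuracy at each setting is one way to ask which coding timescale best supports recognition at a given processing stage.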
Light adaptation mechanisms in the eye of the fiddler crab Afruca tangeri
A great diversity of adaptations is found among animals with compound eyes, and even closely related taxa can show variation in their light-adaptation strategies. A prime example of a visual system evolved to function in specific light environments is the fiddler crab, used widely as a model to research aspects of crustacean vision and neural pathways. However, questions remain regarding how their eyes respond to the changes in brightness, spanning many orders of magnitude, associated with their habitat and ecology. The fiddler crab Afruca tangeri forages at low tide on tropical and semi-tropical mudflats, under bright sunlight and on moonless nights, suggesting that their eyes undergo effective light adaptation. Using synchrotron X-ray tomography, light and transmission electron microscopy, and in vivo ophthalmoscopy, we describe the ultrastructural changes in the eye between day and night. Dark adaptation at dusk triggered extensive widening of the rhabdoms and crystalline cone tips. This doubled the ommatidial acceptance angles and increased the microvillar surface area for light capture in the rhabdom, theoretically boosting optical sensitivity 7.4 times. During daytime, only partial dark adaptation was achieved and rhabdoms remained narrow, indicating strong circadian control of the process. Bright light did not evoke changes in screening pigment distributions, suggesting a structural inability to adapt rapidly to the light-level fluctuations frequently experienced when entering their burrow to escape predators. This should enable fiddler crabs to shelter for several minutes without undergoing significant dark adaptation, their vision remaining effectively adapted for predator detection when surfacing again in bright light.
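The sensitivity boost from rhabdom widening can be related to the standard white-light eye-sensitivity equation, S = (π/4)² D² (d/f)² · kl/(2.3 + kl) (Land's formulation), where D is the facet aperture, d and l the rhabdom diameter and length, f the focal length, and k the absorption coefficient. The parameter values below are illustrative, not the measurements from this study:

```python
import math

def optical_sensitivity(aperture_um, rhabdom_diam_um, focal_len_um,
                        rhabdom_len_um, k_per_um=0.0067):
    """White-light sensitivity of a single ommatidium (um^2 sr):
    S = (pi/4)^2 * D^2 * (d/f)^2 * k*l / (2.3 + k*l)."""
    kl = k_per_um * rhabdom_len_um
    return ((math.pi / 4) ** 2 * aperture_um ** 2
            * (rhabdom_diam_um / focal_len_um) ** 2 * kl / (2.3 + kl))

# Illustrative values only: widening the rhabdom after dusk raises sensitivity
day = optical_sensitivity(25, 5, 500, 300)     # narrow daytime rhabdom
night = optical_sensitivity(25, 14, 500, 300)  # widened dark-adapted rhabdom
print(round(night / day, 1))
```

With only the rhabdom diameter changing, sensitivity scales with its square, which is why the roughly 2-3x widening reported for dark-adapted rhabdoms can translate into a several-fold gain.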
Mammalian behavior and physiology converge to confirm sharper cochlear tuning in humans
Frequency analysis of sound by the cochlea is the most fundamental property of the auditory system. Despite its importance, the resolution of this frequency analysis in humans remains controversial. The controversy persists because the methods used to estimate tuning in humans are indirect and have not all been independently validated in other species. Some data suggest that human cochlear tuning is considerably sharper than that of laboratory animals, while others suggest little or no difference between species. We show here in a single species (ferret) that behavioral estimates of tuning bandwidths obtained using perceptual masking methods, and objective estimates obtained using otoacoustic emissions, both also employed in humans, agree closely with direct physiological measurements from single auditory-nerve fibers. Combined with human behavioral data, this outcome indicates that the frequency analysis performed by the human cochlea is of significantly higher resolution than that found in common laboratory animals. This finding raises important questions about the evolutionary origins of human cochlear tuning, its role in the emergence of speech communication, and the mechanisms underlying our ability to separate and process natural sounds in complex acoustic environments.
Evolution of compound eye morphology underlies differences in vision between closely related Drosophila species
Background: Insects have evolved complex visual systems and display an astonishing range of adaptations for diverse ecological niches. Species of the Drosophila melanogaster subgroup exhibit extensive intra- and interspecific differences in compound eye size. These differences provide an excellent opportunity to better understand variation in insect eye structure and its impact on vision. Here we further explored the difference in eye size between D. mauritiana and its sibling species D. simulans. Results: We confirmed that D. mauritiana has rapidly evolved larger eyes, as a result of more and wider ommatidia than D. simulans, since the two species diverged approximately 240,000 years ago. The functional impact of eye size, and specifically ommatidial size, is often only estimated from the rigid surface morphology of the compound eye. Therefore, we used 3D synchrotron radiation tomography to measure optical parameters in 3D, predict optical capacity, and compare the modelled vision to in vivo optomotor responses. Our optical models predicted higher contrast sensitivity for D. mauritiana, which we verified by presenting sinusoidal gratings to tethered flies in a flight arena. Similarly, we confirmed the higher spatial acuity predicted for D. simulans with its smaller ommatidia, and found evidence for higher temporal resolution. Conclusions: Our study demonstrates that even subtle differences in ommatidial size between closely related Drosophila species can impact the vision of these insects. Therefore, further comparative studies of intra- and interspecific variation in eye morphology and its consequences for vision among other Drosophila species, other dipterans, and other insects are needed to better understand compound eye structure-function relationships and how the diversification of eye size, shape, and function has helped insects adapt to the vast range of ecological niches.
A Detailed Observational Analysis of V1324 Sco, the Most Gamma-Ray Luminous Classical Nova to Date
It has recently been discovered that some, if not all, classical novae emit GeV gamma rays during outburst, but the mechanisms involved in the production of the gamma rays are still not well understood. We present here a comprehensive multi-wavelength dataset, from radio to X-rays, for the most gamma-ray luminous classical nova to date, V1324 Sco. Using this dataset, we show that V1324 Sco is a canonical dusty Fe-II type nova, with a maximum ejecta velocity of 2600 km s⁻¹ and an ejecta mass of a few M⊙. There is also evidence for complex shock interactions, including a double-peaked radio light curve which shows high brightness temperatures at early times. To explore why V1324 Sco was so gamma-ray luminous, we present a model of the nova ejecta featuring strong internal shocks, and find that higher gamma-ray luminosities result from higher ejecta velocities and/or mass-loss rates. Comparison of V1324 Sco with other gamma-ray-detected novae does not show clear signatures of either, and we conclude that a larger sample of similarly well-observed novae is needed to understand the origin and variation of gamma rays in novae.
Identifying tinnitus in mice by tracking the motion of body markers in response to an acoustic startle
Rodent models of tinnitus are commonly used to study its mechanisms and potential treatments. Tinnitus can be identified by changes in the gap-induced prepulse inhibition of the acoustic startle (GPIAS), most commonly by using pressure detectors to measure the whole-body startle (WBS). Unfortunately, the WBS habituates quickly, the measuring system can introduce mechanical oscillations, and the response shows considerable variability. We have instead used a motion-tracking system to measure the localized motion of small reflective markers during the acoustic startle reflex in guinea pigs and mice. For guinea pigs, the pinna had the largest responses, both in terms of displacement between pairs of markers and in terms of the speed of the reflex movement. Smaller, but still reliable, responses were observed with markers on the thorax, abdomen and back. The peak speed of the pinna reflex was the most sensitive measure for calculating GPIAS in the guinea pig. Recording the pinna reflex in mice proved impractical due to removal of the markers during grooming. However, recordings from their back and tail allowed us to measure the peak speed and the twitch amplitude (area under the curve) of reflex responses, and both analysis methods showed robust GPIAS. When mice were administered high doses of sodium salicylate, which induces tinnitus in humans, there was a significant reduction in GPIAS, consistent with the presence of tinnitus. Thus, measurement of the peak speed or twitch amplitude of pinna, back and tail markers provides a reliable assessment of tinnitus in rodents.
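GPIAS itself reduces to a simple ratio: the fractional inhibition of the startle magnitude (here, for example, peak pinna speed) when a silent gap precedes the startle stimulus. A minimal sketch with hypothetical numbers, not the study's data:

```python
def gpias(startle_no_gap, startle_with_gap):
    """Gap-induced prepulse inhibition of the acoustic startle:
    the fractional reduction of the startle response when a silent
    gap in background noise precedes the startle pulse. Values near
    zero suggest the gap was not detected (tinnitus-like)."""
    return 1.0 - startle_with_gap / startle_no_gap

# Hypothetical peak-speed measurements (arbitrary units)
print(round(gpias(10.0, 4.0), 2))   # strong inhibition: gap detected
print(round(gpias(10.0, 9.5), 2))   # weak inhibition: gap masked by tinnitus
```

The same formula applies whichever startle measure is used (whole-body pressure, peak marker speed, or twitch amplitude); only the robustness of the underlying measurement changes.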
Retuning of Inferior Colliculus Neurons Following Spiral Ganglion Lesions: A Single-Neuron Model of Converging Inputs
Lesions of spiral ganglion cells, representing a restricted sector of the auditory nerve array, produce immediate changes in the frequency tuning of inferior colliculus (IC) neurons. There is a loss of excitation at the lesion frequencies, yet responses to adjacent frequencies remain intact and new regions of activity appear. This leads to immediate changes in tuning and in tonotopic progression. Similar effects are seen after different methods of peripheral damage and in auditory neurons in other nuclei. The mechanisms that underlie these postlesion changes are unknown, but the acute effects seen in IC strongly suggest the “unmasking” of latent inputs by the removal of inhibition. In this study, we explore computational models of single neurons with a convergence of excitatory and inhibitory inputs from a range of characteristic frequencies (CFs), which can simulate the narrow prelesion tuning of IC neurons and account for the changes in CF tuning after a lesion. The models can reproduce the data if inputs are aligned relative to one another in a precise order along the dendrites of model IC neurons. Frequency tuning in these neurons approximates that seen physiologically. Removal of inputs representing a narrow range of frequencies leads to unmasking of previously subthreshold excitatory inputs, which causes changes in CF. Conversely, if all of the inputs converge at the same point on the cell body, receptive fields are broad and unmasking rarely results in CF changes. However, if the inhibition is tonic with no stimulus-driven component, then unmasking can still produce changes in CF.
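The unmasking logic described above can be sketched with a toy convergence model: broadly tuned inhibitory inputs keep off-CF excitation subthreshold, and silencing one CF band removes that inhibition, so the cell's CF shifts. The weights and bandwidths here are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def tuning_curve(exc, inh, inh_bw=1, lesioned=()):
    """Response of a model IC neuron to a tone in each frequency channel.
    exc[k], inh[k]: strengths of excitatory/inhibitory inputs with CF k.
    A tone in channel k drives exc[k] plus all inhibitory inputs whose
    CFs lie within inh_bw channels (broadly tuned inhibition). Lesioning
    a channel silences both its excitatory and inhibitory inputs."""
    exc, inh = np.array(exc, float), np.array(inh, float)
    exc[list(lesioned)] = inh[list(lesioned)] = 0.0
    n = len(exc)
    resp = np.zeros(n)
    for k in range(n):
        lo, hi = max(0, k - inh_bw), min(n, k + inh_bw + 1)
        resp[k] = max(0.0, exc[k] - inh[lo:hi].sum())  # rectified drive
    return resp

exc, inh = [0.2, 1.0, 0.7, 0.1], [0.0, 0.6, 0.0, 0.0]
print(np.argmax(tuning_curve(exc, inh)))                # prelesion CF
print(np.argmax(tuning_curve(exc, inh, lesioned=[1])))  # CF after lesion
```

Before the lesion, inhibition centred on channel 1 suppresses the neighbouring excitation; removing channel 1 unmasks channel 2, which becomes the new CF, mirroring the acute tuning shifts the abstract describes.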
The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker’s voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded, and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues.
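The "optimal combination" model referred to above is commonly formalized as maximum-likelihood cue integration, in which each cue is weighted by its inverse variance. A generic sketch with made-up estimates and reliabilities, not the study's fitted model:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Maximum-likelihood cue combination: each cue is weighted by its
    reliability (inverse variance). The combined variance is lower than
    either cue's alone, which is the source of audio-visual benefit."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    combined = w_a * est_a + w_v * est_v
    combined_var = 1 / (1 / var_a + 1 / var_v)
    return combined, combined_var

# Degraded speech: the auditory cue is noisy (high variance), vision is not
est, var = mle_combine(est_a=0.4, var_a=4.0, est_v=0.8, var_v=1.0)
print(round(est, 2), round(var, 2))
```

Under this scheme, degrading the auditory signal (e.g. removing temporal fine structure) raises its variance, shifting weight toward vision, which is one way to account for the larger visual benefit observed with vocoded speech.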