109 research outputs found

    Zebra finches and Dutch adults exhibit the same cue weighting bias in vowel perception

    Vocal tract resonances, called formants, are the most important parameters in human speech production and perception. They encode linguistic meaning and have been shown to be perceived by a wide range of species. Songbirds are also sensitive to different formant patterns in human speech: they can categorize words differing only in their vowels, based on the formant patterns and independent of speaker identity, in a way comparable to humans. These results indicate that speech perception mechanisms are more similar between songbirds and humans than previously realized. One of the major questions regarding formant perception concerns the weighting of different formants in the speech signal (“acoustic cue weighting”) and whether this process is unique to humans. Using an operant Go/NoGo design, we trained zebra finches to discriminate syllables whose vowels differed in their first three formants. When the birds were subsequently tested with novel vowels that resembled the familiar vowels in either their first formant or their second and third formants, they weighted similarity in the higher formants much more strongly than similarity in the lower formant. Thus, zebra finches indeed exhibit a cue weighting bias. Interestingly, we also found that Dutch speakers, when tested with the same paradigm, exhibit the same cue weighting bias. This, together with earlier findings, supports the hypothesis that human speech evolution might have exploited general properties of the vertebrate auditory system.
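    The logic of such a cue-weighting test can be illustrated with a toy computation (a minimal sketch: the formant values and the weighted-distance classifier below are illustrative assumptions, not the study's stimuli or analysis):

```python
import numpy as np

# Hypothetical formant values in Hz (illustrative, not the study's stimuli).
go_vowel = np.array([500.0, 1500.0, 2500.0])    # trained Go vowel (F1, F2, F3)
nogo_vowel = np.array([700.0, 1100.0, 2900.0])  # trained NoGo vowel

# A probe that matches the Go vowel in F1 but the NoGo vowel in F2/F3.
probe = np.array([500.0, 1100.0, 2900.0])

def weighted_distance(x, y, w):
    """Weighted Euclidean distance in formant space."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

# Two candidate cue-weighting schemes.
f1_bias = np.array([1.0, 0.1, 0.1])    # lower formant dominates
high_bias = np.array([0.1, 1.0, 1.0])  # higher formants dominate

for name, w in [("F1-biased", f1_bias), ("F2/F3-biased", high_bias)]:
    d_go = weighted_distance(probe, go_vowel, w)
    d_nogo = weighted_distance(probe, nogo_vowel, w)
    label = "Go" if d_go < d_nogo else "NoGo"
    print(f"{name}: d(Go)={d_go:.0f}, d(NoGo)={d_nogo:.0f} -> responds {label}")
```

    The two weighting schemes predict opposite responses to the same probe, which is why responses to such test vowels reveal the listener's cue weights; the pattern reported above corresponds to the F2/F3-biased case for both zebra finches and Dutch listeners.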

    Higher songs of city birds may not be an individual response to noise

    It has been observed in many songbird species that populations in noisy urban areas sing with a higher minimum frequency than do matched populations in quieter, less developed areas. However, why and how this divergence occurs is not yet understood. We experimentally tested whether chronic noise exposure during vocal learning results in songs with higher minimum frequencies in great tits (Parus major), the first species for which a correlation between anthropogenic noise and song frequency was observed. We also tested the vocal plasticity of adult great tits by measuring song frequency and amplitude as we changed their background noise conditions. We show that noise exposure during ontogeny did not result in songs with higher minimum frequencies. In addition, adult birds did not make any frequency or song usage adjustments when their background noise conditions were changed after song crystallization. These results challenge the common view of vocal adjustments by city birds, as they suggest either that noise itself is not the causal force driving the divergence of song frequency between urban and forest populations, or that noise induces population-wide changes over a time scale of several generations rather than changes in individual behaviour.

    How Noisy Does a Noisy Miner Have to Be? Amplitude Adjustments of Alarm Calls in an Avian Urban ‘Adapter’

    Background: Urban environments generate constant loud noise, which creates a formidable challenge for many animals relying on acoustic communication. Some birds make vocal adjustments that reduce auditory masking by altering, for example, the frequency (kHz) or timing of vocalizations. Another adjustment, well documented for birds under laboratory and natural field conditions, is a noise level-dependent change in sound signal amplitude (the ‘Lombard effect’). To date, however, field research on amplitude adjustments in urban environments has focused exclusively on bird song. Methods: We investigated amplitude regulation of alarm calls using, as our model, a successful urban ‘adapter’ species, the Noisy Miner, Manorina melanocephala. We compared several different alarm calls under contrasting noise conditions. Results: Individuals at noisier locations (arterial roads) alarm-called significantly more loudly than those at quieter locations (residential streets). Other mechanisms known to improve sound signal transmission in ‘noise’, namely use of higher perches and in-flight calling, did not differ between site types. Intriguingly, the observed preferential use of different alarm calls by Noisy Miners inhabiting arterial roads and residential streets was unlikely to have constituted a vocal modification made in response to sound-masking in the urban environment, because the calls involved fell within the main frequency range of background anthropogenic noise. Conclusions: The results of our study suggest that a species, which has the ability to adjust the amplitude of its signals

    Auditory temporal resolution of a wild white-beaked dolphin (Lagenorhynchus albirostris)

    Adequate temporal resolution is required across taxa to properly utilize amplitude-modulated acoustic signals. Among mammals, odontocete marine mammals are considered to have relatively high temporal resolution, which is a selective advantage when processing fast-traveling underwater sound. However, the multiple methods used to estimate auditory temporal resolution have left comparisons among odontocetes and other mammals somewhat vague. Here we present the estimated auditory temporal resolution of an adult male white-beaked dolphin (Lagenorhynchus albirostris), using auditory evoked potentials and click stimuli. Ours is the first such study performed on a wild dolphin in a capture-and-release scenario. The white-beaked dolphin followed rhythmic clicks up to a rate of approximately 1125–1250 Hz, after which the modulation rate transfer function (MRTF) cut off steeply. However, 10% of the maximum response was still found at 1450 Hz, indicating high temporal resolution. The MRTF was similar in shape and bandwidth to that of other odontocetes. The estimated maximal temporal resolution of white-beaked dolphins and other odontocetes was approximately twice that of pinnipeds and manatees, and more than ten times faster than that of humans and gerbils. The exceptionally high temporal resolution abilities of odontocetes are likely due primarily to echolocation capabilities that require rapid processing of acoustic cues.
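    The 10%-of-maximum criterion can be read off an MRTF numerically; the sketch below illustrates the idea (the response amplitudes are invented for illustration and are not the published measurements):

```python
import numpy as np

# Hypothetical MRTF: normalized evoked-response amplitude vs. click rate (Hz).
# Values are illustrative, not the published data.
rates = np.array([100, 200, 400, 600, 800, 1000, 1125, 1250, 1450, 1600])
amplitude = np.array([0.9, 1.0, 0.95, 0.8, 0.6, 0.45, 0.35, 0.2, 0.1, 0.02])

amp_norm = amplitude / amplitude.max()
threshold = 0.10  # 10% of maximum response

# Highest click rate at which the response still reaches the threshold,
# interpolating linearly between sampled rates where needed.
above = np.where(amp_norm >= threshold)[0]
i = above[-1]
if i < len(rates) - 1:
    # Interpolate between the last point above and the first below threshold.
    r = np.interp(threshold, [amp_norm[i + 1], amp_norm[i]], [rates[i + 1], rates[i]])
else:
    r = rates[i]
print(f"Estimated upper limit of temporal resolution: ~{r:.0f} Hz")
```

    With these made-up values the estimate lands at ~1450 Hz, mirroring the criterion described in the abstract.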

    Neuronal precision and the limits for acoustic signal recognition in a small neuronal network

    Recognition of acoustic signals may be impeded by two factors: extrinsic noise, which degrades sounds before they arrive at the receiver’s ears, and intrinsic neuronal noise, which reveals itself in the trial-to-trial variability of responses to identical sounds. Here we analyzed how these two noise sources affect the recognition of acoustic signals from potential mates in grasshoppers. By progressively corrupting the envelope of a female song, we determined, in behavioral experiments, the critical degradation level at which males failed to recognize a courtship call. Using the same stimuli, we recorded intracellularly from auditory neurons at three different processing levels and quantified the corresponding changes in spike train patterns with a spike train metric, which assigns a distance between spike trains. Unexpectedly, for most neurons, intrinsic variability accounted for the main part of the metric distance between spike trains, even at the strongest degradation levels. At consecutive levels of processing, intrinsic variability increased while sensitivity to external noise decreased. We followed two approaches to determine critical degradation levels from spike train dissimilarities, and compared the results with the limits of signal recognition measured in behaving animals.
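    One widely used metric of this kind is the van Rossum distance; the sketch below shows the idea (a generic implementation with invented spike times, not necessarily the exact metric or parameters the authors used):

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=0.005, dt=0.0001, t_max=1.0):
    """van Rossum spike train distance: convolve each train with an
    exponential kernel of time constant tau, then take the L2 norm of
    the difference between the two filtered traces."""
    t = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        trace = np.zeros_like(t)
        for s in spikes:
            trace += (t >= s) * np.exp(-(t - s) / tau)
        return trace

    diff = filtered(spikes_a) - filtered(spikes_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

# Trial-to-trial (intrinsic) variability: distance between two responses
# to the identical stimulus (spike times in seconds, invented)...
resp_trial1 = [0.012, 0.055, 0.130, 0.300]
resp_trial2 = [0.014, 0.060, 0.128, 0.305]
# ...versus the distance to a response to a degraded stimulus.
resp_degraded = [0.020, 0.090, 0.210, 0.350]

d_intrinsic = van_rossum_distance(resp_trial1, resp_trial2)
d_stimulus = van_rossum_distance(resp_trial1, resp_degraded)
print(f"intrinsic: {d_intrinsic:.3f}, stimulus-driven: {d_stimulus:.3f}")
```

    Comparing the distance between responses to the identical stimulus (intrinsic variability) with the distance between responses to original and degraded stimuli separates the two noise sources, in the spirit of the analysis described above.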

    A Potential Neural Substrate for Processing Functional Classes of Complex Acoustic Signals

    Categorization is essential to all cognitive processes, but identifying the neural substrates underlying categorization processes is a real challenge. Among animals that have been shown to be capable of categorization, songbirds are particularly interesting because they provide researchers with clear examples of categories of acoustic signals allowing different levels of recognition, and they possess a system of specialized brain structures found only in birds that learn to sing: the song system. Moreover, an avian brain nucleus that is analogous to the mammalian secondary auditory cortex (the caudomedial nidopallium, or NCM) has recently emerged as a plausible site for sensory representation of birdsong, and appears to be a well-positioned brain region for the categorization of songs. Hence, we tested responses in this non-primary, associative area to clear and distinct classes of songs with different functions and social values, and looked for a correspondence between these responses and the functional aspects of songs, in a highly social songbird species: the European starling. Our results clearly show differential neuronal responses to the ethologically defined classes of songs, both in the number of neurons responding and in the response magnitude of these neurons. Most importantly, these differential responses corresponded to the functional classes of songs, with increasing activation from non-specific to species-specific and from species-specific to individual-specific sounds. These data therefore suggest a potential neural substrate for sorting natural communication signals into categories, and for individual vocal recognition of same-species members. Given the many parallels that exist between birdsong and speech, these results may contribute to a better understanding of the neural bases of speech.

    Neural mechanisms of interstimulus interval-dependent responses in the primary auditory cortex of awake cats

    Background: Primary auditory cortex (AI) neurons show qualitatively distinct response features to successive acoustic signals depending on the inter-stimulus interval (ISI). Such ISI-dependent AI responses are believed to underlie, at least partially, categorical perception of click trains (elemental vs. fused quality) and of stop consonant-vowel syllables (e.g., the /da/–/ta/ continuum). Methods: Single-unit recordings were conducted on 116 AI neurons in awake cats. Rectangular clicks were presented either alone (single-click paradigm) or in trains with variable ISI (2–480 ms) (click-train paradigm). Response features of AI neurons were quantified as a function of ISI: one measure was related to the degree of stimulus locking (temporal modulation transfer function, tMTF) and another was based on firing rate (rate modulation transfer function, rMTF). An additional modeling study was performed to gain insight into the neurophysiological bases of the observed responses. Results: In the click-train paradigm, the majority of the AI neurons ("synchronization type"; n = 72) showed stimulus-locking responses at long ISIs. The shortest ISI supporting stimulus-locking responses was on average ~30 ms and was level tolerant, in accordance with the perceptual boundary of click trains and of consonant-vowel syllables. The tMTF of these neurons was either band-pass or low-pass in shape. The single-click paradigm revealed, at maximum, four response periods in the following order: 1st excitation, 1st suppression, 2nd excitation, then 2nd suppression. The 1st excitation and 1st suppression were found exclusively in the synchronization type, implying that the temporal interplay between excitation and suppression underlies stimulus-locking responses. Among these neurons, those showing the 2nd suppression had band-pass tMTFs, whereas those with low-pass tMTFs never showed the 2nd suppression, implying that tMTF shape is mediated through the 2nd suppression. The recovery time course of excitability suggested the involvement of short-term plasticity. The observed phenomena were well captured by a single-cell model which incorporated AMPA, GABA-A, NMDA and GABA-B receptors as well as short-term plasticity of thalamocortical synaptic connections. Conclusion: Overall, the results suggest that the ISI-dependent responses of the majority of AI neurons are configured through the temporal interplay of excitation and suppression (inhibition), along with short-term plasticity.
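    The two response measures can be made concrete with a small sketch (illustrative only: the spike times are invented, and vector strength is one standard way to quantify stimulus locking; the study's exact formula is not specified here):

```python
import numpy as np

def vector_strength_and_rate(spike_times, isi, duration):
    """Stimulus locking (vector strength) and firing rate for a periodic
    click train with inter-stimulus interval `isi` (in seconds).
    Vector strength: 1 = perfect locking to the click period, 0 = none."""
    spike_times = np.asarray(spike_times)
    phases = 2.0 * np.pi * (spike_times % isi) / isi
    vs = np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)
    rate = len(spike_times) / duration
    return vs, rate

rng = np.random.default_rng(0)

# Hypothetical responses (spike times in seconds): tight locking at a long
# ISI, and unlocked firing when the ISI is too short to follow.
locked = np.arange(0.0, 0.48, 0.060) + 0.008   # one spike per click, ISI 60 ms
unlocked = np.sort(rng.uniform(0.0, 0.48, 8))  # no locking, ISI 10 ms train

for name, spikes, isi in [("ISI 60 ms", locked, 0.060),
                          ("ISI 10 ms", unlocked, 0.010)]:
    vs, rate = vector_strength_and_rate(spikes, isi, duration=0.48)
    print(f"{name}: vector strength = {vs:.2f}, rate = {rate:.1f} spikes/s")
```

    A tMTF then plots vector strength against ISI, while the rMTF plots firing rate against ISI; a band-pass tMTF corresponds to locking that falls off at both very long and very short ISIs.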

    Incidental sounds of locomotion in animal cognition

    The highly synchronized formations that characterize schooling in fish and the flight of certain bird groups have frequently been explained as reducing energy expenditure. I present an alternative, or complementary, hypothesis: that synchronization of group movements may improve hearing perception. Although incidental sounds produced as a by-product of locomotion (ISOL) are an almost constant presence for most animals, their impact on perception and cognition has been little discussed. A consequence of ISOL may be masking of critical sound signals in the surroundings. Birds in flight may generate significant noise; some produce wing beats that are readily heard on the ground at some distance from the source. Synchronization of group movements might reduce auditory masking through periods of relative silence and facilitate auditory grouping processes. Respiratory locomotor coupling and intermittent flight may be other means of reducing masking and improving hearing perception. A distinct border between ISOL and communicative signals is difficult to delineate. ISOL seems to be used by schooling fish as an aid to staying in formation and avoiding collisions. Bird and bat flocks may use ISOL in an analogous way. ISOL and its interaction with animal perception, cognition, and synchronized behavior provide an interesting area for future study.

    Somatic mutations affect key pathways in lung adenocarcinoma

    Determining the genetic basis of cancer requires comprehensive analyses of large collections of histopathologically well-classified primary tumours. Here we report the results of a collaborative study to discover somatic mutations in 188 human lung adenocarcinomas. DNA sequencing of 623 genes with known or potential relationships to cancer revealed more than 1,000 somatic mutations across the samples. Our analysis identified 26 genes that are mutated at significantly high frequencies and thus are probably involved in carcinogenesis. The frequently mutated genes include tyrosine kinases, among them the EGFR homologue ERBB4; multiple ephrin receptor genes, notably EPHA3; the vascular endothelial growth factor receptor KDR; and NTRK genes. These data provide evidence of somatic mutations in primary lung adenocarcinoma for several tumour suppressor genes involved in other cancers - including NF1, APC, RB1 and ATM - and for sequence changes in PTPRD as well as the frequently deleted gene LRP1B. The observed mutational profiles correlate with clinical features, smoking status and DNA repair defects. These results are reinforced by data integration, including single nucleotide polymorphism array and gene expression array data. Our findings shed further light on several important signalling pathways involved in lung adenocarcinoma, and suggest new molecular targets for treatment.

    Universal mechanisms of sound production and control in birds and mammals

    As animals vocalize, their vocal organ transforms motor commands into vocalizations for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty of obtaining in vivo measurements. Here, we introduce an ex vivo preparation of the avian vocal organ that allows simultaneous high-speed imaging, muscle stimulation, and kinematic and acoustic analyses to reveal the mechanisms of vocal production in birds across a wide range of taxa. Remarkably, we show that all species tested employ the myoelastic-aerodynamic (MEAD) mechanism, the same mechanism used to produce human speech. Furthermore, we show substantial redundancy in the control of key vocal parameters ex vivo, suggesting that in vivo vocalizations may also not be specified by unique motor commands. We propose that such motor redundancy can aid vocal learning and is common to MEAD sound production across birds and mammals, including humans.