4 research outputs found

    Whispering to the Deaf: Communication by a Frog without External Vocal Sac or Tympanum in Noisy Environments

    Atelopus franciscus is a diurnal bufonid frog that lives in South American tropical rain forests. As in many other frogs, males produce calls to defend their territories and attract females. However, this species is a so-called “earless” frog lacking an external tympanum and is thus anatomically deaf. Moreover, A. franciscus has no external vocal sac and lives in a sound-constraining environment along river banks, where it competes with other calling frogs. Despite these constraints, male A. franciscus reply acoustically to the calls of conspecifics in the field. To resolve this apparent paradox, we studied the vocal apparatus and middle ear, analysed the signal content of the calls, examined sound and signal-content propagation in its natural habitat, and performed playback experiments. We show that, lacking an external vocal sac, A. franciscus males can produce only low-intensity calls that propagate a short distance (<8 m). The species-specific coding of the signal is based on pulse duration, a simple coding that is efficient because it allows discrimination from the calls of sympatric frogs. Moreover, the signal is redundant and consequently adapted to noisy environments. As such a coding system can be efficient only at short range, territory holders establish themselves at short distances from each other. Finally, we show that the middle ear of A. franciscus presents no particular adaptations to compensate for the lack of an external tympanum, suggesting the existence of extra-tympanic pathways for sound propagation.
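The combination described above, a simple pulse-duration code plus redundancy across repeated pulses, can be illustrated with a minimal sketch. All numbers here (durations, threshold, noise level) are invented for illustration, not measurements from the study:

```python
import random

# Hypothetical pulse durations (ms) for the focal species vs. a sympatric
# caller; the midpoint serves as a toy decision boundary.
OWN_SPECIES_MS = 120.0
OTHER_SPECIES_MS = 60.0
THRESHOLD_MS = (OWN_SPECIES_MS + OTHER_SPECIES_MS) / 2  # 90 ms

def classify_pulse(duration_ms):
    """Classify a single pulse by its duration alone."""
    return "conspecific" if duration_ms > THRESHOLD_MS else "heterospecific"

def classify_call(pulse_durations_ms):
    """Redundant coding: a majority vote over all pulses in a call, so an
    occasional noise-corrupted pulse does not flip the overall decision."""
    votes = [classify_pulse(d) for d in pulse_durations_ms]
    return max(set(votes), key=votes.count)

random.seed(0)
# A call of 9 redundant pulses, each corrupted by additive noise.
noisy_call = [OWN_SPECIES_MS + random.gauss(0, 25) for _ in range(9)]
print(classify_call(noisy_call))
```

The point of the sketch is only that repeating the same information across pulses makes a one-dimensional duration code robust to noise, which is why such a code remains usable in the riverbank soundscape despite its short range.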

    Stridulations Reveal Cryptic Speciation in Neotropical Sympatric Ants

    The taxonomic challenge posed by cryptic species underlines the importance of using multiple criteria in species delimitation. In the current paper we tested the use of acoustic analysis as a tool to assess the real diversity in a cryptic species complex of Neotropical ants. To understand the potential of acoustics and to check the consistency of conclusions across approaches, the phylogenetic relationships of all the morphs considered were assessed by analysing a fragment of the mitochondrial cytochrome b gene. We observed that each of the cryptic morphs studied presents a morphologically distinct stridulatory organ and that all sympatric morphs produce distinctive stridulations. This is the first evidence of such a degree of specialization in the acoustic organ and signals in ants, which suggests that stridulations may be among the cues used by these ants during inter-specific interactions. Mitochondrial DNA variation corroborated the acoustic differences observed, confirming acoustics as a helpful tool for delimiting cryptic species in this group of ants, and possibly in stridulating ants in general. Congruent morphological, acoustic and genetic results constitute sufficient evidence to propose each morph studied here as a valid new species, suggesting that P. apicalis is a complex of at least 6 to 9 species, albeit with different levels of divergence. Finally, our results highlight that ant stridulations may be much more informative than hitherto thought, both for ant communication and for integrative taxonomy.

    How a simple and stereotyped acoustic signal transmits individual information: the song of the White-browed Warbler Basileuterus leucoblepharus

    The White-browed Warbler Basileuterus leucoblepharus, a common bird of the Brazilian Atlantic forest, emits only one distinct song type in the context of territorial defense. Individual or neighbor-stranger recognition may be more difficult when birds share similar songs. In fact, analysis of the songs of different individuals reveals slight differences in the temporal and frequency domains. A careful examination of the signals of 21 individuals by 5 complementary methods of analysis reveals, first, that one or two frequency gaps occur between two successive notes at different moments of the song and, second, that the temporal and frequency positions of these gaps are stereotyped for each individual. Playback experiments confirm these findings. Through propagation experiments, we show that this individual information can be transmitted only at short range (<100 m) in the forest. Given the size and distribution of the territories, this communication process appears efficient and adaptive.
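The individual signature described above, stereotyped frequency gaps between successive notes, amounts to scanning a note sequence for jumps larger than some threshold and recording where they fall. A minimal sketch, with invented note onsets, frequencies, and gap threshold (the study's actual values are not given here):

```python
def find_frequency_gaps(notes, min_gap_hz=500.0):
    """Return (note_index, onset_s, gap_hz) for each pair of successive
    notes whose peak-frequency difference exceeds min_gap_hz -- the kind
    of stereotyped gap that can serve as an individual signature."""
    gaps = []
    for i in range(1, len(notes)):
        gap = notes[i][1] - notes[i - 1][1]
        if abs(gap) >= min_gap_hz:
            gaps.append((i, notes[i][0], gap))
    return gaps

# Hypothetical descending song: (onset_s, peak_frequency_hz) per note.
song = [(0.0, 6200), (0.4, 6000), (0.8, 5300),
        (1.2, 5150), (1.6, 4300), (2.0, 4200)]
print(find_frequency_gaps(song))  # two gaps, at notes 2 and 4
```

Because both the position in time and the size of each gap are stereotyped per individual, comparing this short list of tuples across recordings is enough to separate individuals sharing the same overall song type.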

    Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word-form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.