
    Neural control of vocalization in bats: mapping of brainstem areas with electrical microstimulation eliciting species-specific echolocation calls in the rufous horseshoe bat

    1. The functional role of brainstem structures in the emission of echolocation calls was investigated in the rufous horseshoe bat, Rhinolophus rouxi, with low-current electrical microstimulation. 2. Vocalizations without temporal and/or spectral distortions could be consistently elicited at low threshold currents (typically below 10 µA) within three clearly circumscribed brainstem areas, namely the deep layers and ventral parts of the intermediate layers of the superior colliculus (SC), the deep mesencephalic nucleus (NMP) in the dorsal and lateral midbrain reticular formation, and a distinct area medial to the rostral parts of the dorsal nucleus of the lateral lemniscus. The mean latencies between the start of the electrical stimulus and the elicited vocalizations were 47 msec, 38 msec and 31 msec in the three vocal areas, respectively. 3. In pontine regions and the cuneiform nucleus adjacent to these three vocal areas, thresholds for eliciting vocalizations were also low, but the vocalizations showed temporal and/or spectral distortions and were often accompanied or followed by arousal of the animal. 4. Stimulus intensity systematically influenced vocalization parameters at only a few brain sites. In the caudo-ventral portions of the deep superior colliculus, the sound pressure level of the vocalizations increased systematically with stimulus intensity. Bursts of multiple vocalizations were induced at locations ventral to the rostral parts of the cuneiform nucleus. No stimulus-intensity-dependent frequency changes of the emitted vocalizations were observed. 5. The respiratory cycle was synchronized to the electrical stimuli in all regions where vocalizations could be elicited, as well as in more ventrally and medially adjacent areas not yielding vocalizations on stimulation. 6. The possible functional involvement of these vocal structures in the audio-vocal feedback system of the Doppler-compensating horseshoe bat is discussed.

    Automatic Classification and Speaker Identification of African Elephant (Loxodonta africana) Vocalizations

    A hidden Markov model (HMM) system is presented for automatically classifying African elephant vocalizations. The development of the system is motivated by successful models from human speech analysis and recognition. Classification features include frequency-shifted Mel-frequency cepstral coefficients (MFCCs) and log energy, spectrally motivated features commonly used in human speech processing. Experiments in vocalization type classification and speaker identification are performed on vocalizations collected from captive elephants in a naturalistic environment. The system classified vocalizations with accuracies of 94.3% and 82.5% in the type classification and speaker identification experiments, respectively. Classification accuracy, statistical significance tests on the model parameters, and qualitative analysis support the effectiveness and robustness of this approach for vocalization analysis in nonhuman species.
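
    For readers who want to experiment with this style of pipeline, a minimal sketch follows, assuming Python with librosa and hmmlearn (tooling not named in the paper). The file lists, class labels, and the fmin/fmax values used to shift the mel filterbank toward the elephants' low-frequency band are hypothetical placeholders, not the authors' settings.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def extract_features(path, n_mfcc=12, fmin=5.0, fmax=150.0):
    """MFCCs plus log energy, framewise (T x n_features).

    fmin/fmax pull the mel filterbank down toward the infrasonic band
    (hypothetical values; the paper's exact frequency shift differs).
    """
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, fmin=fmin, fmax=fmax)
    log_e = np.log(librosa.feature.rms(y=y) + 1e-10)
    return np.vstack([mfcc, log_e]).T

def train_models(files_by_label, n_states=5):
    """Fit one ergodic Gaussian HMM per class label."""
    models = {}
    for label, paths in files_by_label.items():
        feats = [extract_features(p) for p in paths]
        X = np.concatenate(feats)
        lengths = [len(f) for f in feats]
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(path, models):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    X = extract_features(path)
    return max(models, key=lambda label: models[label].score(X))
```

    The same scheme covers both experiments: the labels are call types for type classification and individual animals for speaker identification.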

    Vocal Classification of Vocalizations of a Pair of Asian Small-Clawed Otters to Determine Stress

    Asian Small-Clawed Otters (Aonyx cinerea) are a small, protected but threatened freshwater species. They are gregarious, live in monogamous pairs for their lifetimes, and communicate via scent and acoustic vocalizations. This study utilized a hidden Markov model (HMM) to classify stress versus non-stress calls from a sibling pair under professional care. Vocalizations were expertly annotated by keepers into seven contextual categories. Four of these (aggression, separation anxiety, pain, and prefeeding) were identified as stressful contexts, and three (feeding, training, and play) as non-stressful contexts. The vocalizations were segmented, manually categorized into broad call types, and analyzed to determine signal-to-noise ratios. From this information, vocalizations from the most common contextual categories were used in HMM-based automatic classification experiments covering individual identification, stress vs. non-stress classification, and individual context classification. Results indicate that both individual identity and stress vs. non-stress were distinguishable, with accuracies above 90%, but that individual contexts within the stress category were not easily separable.
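
    The abstract does not give its signal-to-noise formula. A common choice, sketched below in Python with numpy (an assumption, not the authors' code), compares mean power inside an annotated call to mean power in an adjacent stretch of background noise, and could serve to screen segmented calls before HMM training.

```python
import numpy as np

def snr_db(call, noise):
    """SNR in dB: mean power of an annotated call segment over
    mean power of nearby background noise (both 1-D sample arrays)."""
    p_call = np.mean(np.square(call.astype(np.float64)))
    p_noise = np.mean(np.square(noise.astype(np.float64)))
    return 10.0 * np.log10(p_call / p_noise)

# Hypothetical screening step: keep only reasonably clean calls.
# clean_calls = [c for c, n in segments if snr_db(c, n) >= 10.0]
```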

    Interaction of the phencyclidine model of schizophrenia and nicotine on total and categorized ultrasonic vocalizations in rats

    Patients with schizophrenia smoke cigarettes at a higher rate than the general population. We hypothesized that a factor in this comorbidity is sensitivity to the reinforcing and reinforcement-enhancement effects of nicotine. Phencyclidine (PCP) was used to model behavioral changes resembling the negative symptoms of schizophrenia in rats. Ultrasonic vocalizations (USVs) in rats have been used to measure emotional states, with 50 kHz USVs indicating positive states and 22 kHz USVs indicating negative states. Total and categorized numbers of 22 and 50 kHz USVs, as well as USVs during a visual stimulus (a potential measure of reinforcement-enhancement), were examined in rats following injection of PCP (2.0 mg/kg) and/or nicotine (0.2 or 0.4 mg/kg) daily for 7 days. PCP was then discontinued, and all rats received nicotine (0.2 mg/kg and 0.4 mg/kg) and PCP (2.0 mg/kg) on 3 challenge days. PCP acutely decreased 50 kHz vocalizations, while repeated nicotine potentiated rates of vocalizations, with similar patterns during light presentations. Rats in the PCP and nicotine combination groups made more 50 kHz vocalizations than control groups on challenge days. We conclude that PCP may produce a reward deficit, reflected in decreased 50 kHz USVs, and that behavior after PCP exposure may best model the comorbidity between schizophrenia and nicotine use.

    Categories, concepts, and calls: auditory perceptual mechanisms and cognitive abilities across different types of birds.

    Although involving different animals, preparations, and objectives, our laboratories (Sturdy's and Cook's) are mutually interested in category perception and concept formation. The Sturdy laboratory has a history of studying perceptual categories in songbirds, while the Cook laboratory has a history of studying abstract concept formation in pigeons. Recently, we undertook a suite of collaborative projects combining these investigations to examine abstract concept formation in songbirds and the perception of songbird vocalizations by pigeons. This talk will include our recent findings on songbird category perception, songbird abstract concept formation (same/different task), and early results from pigeons' processing of songbird vocalizations in a same/different task. Our findings indicate that (1) categorization in birds seems to be most heavily influenced by acoustic rather than genetic or experiential factors, (2) songbirds treat their vocalizations as perceptual categories, both at the level of the note and of the species/whole call, (3) chickadees, like pigeons, can perceive abstract same/different relations, and (4) pigeons are not as good as songbirds (chickadees and finches) at discriminating chickadee vocalizations. Our findings suggest that although there are commonalities in complex auditory processing among birds, there are potentially important comparative differences between songbirds and non-songbirds in their treatment of certain types of auditory objects.

    A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models

    Using Hidden Markov Models (HMMs) as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extensibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species, and HMM network topologies suited to each task. This experimental work includes a simple call-type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress-stimulus differentiation task in poultry vocalizations using a non-sequential model, i.e., a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.
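
    The poultry task illustrates the topology flexibility: a one-state HMM with Gaussian mixture emissions discards temporal ordering and reduces to a plain GMM over the feature frames, while syllable models need a sequential, left-to-right structure. A sketch of both topologies with hmmlearn (an assumed toolkit; the paper does not name its implementation):

```python
import numpy as np
from hmmlearn import hmm

# Non-sequential model: one state with K Gaussian mixtures.
# With a single state there are no transitions to learn, so the model
# scores only the overall distribution of frames -- equivalent to a GMM.
bag_of_frames = hmm.GMMHMM(n_components=1, n_mix=8, covariance_type="diag")

# Sequential syllable model: left-to-right topology. Zero entries in the
# transition matrix stay zero under EM re-estimation, so each state can
# only self-loop or advance to the next.
n_states = 4
transmat = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    transmat[i, i] = transmat[i, i + 1] = 0.5
transmat[-1, -1] = 1.0

syllable = hmm.GaussianHMM(
    n_components=n_states,
    covariance_type="diag",
    init_params="mc",   # let fit() initialize only means/covariances
    params="tmc",       # re-estimate transitions, means, covariances
)
syllable.startprob_ = np.eye(n_states)[0]  # always start in the first state
syllable.transmat_ = transmat
```

    Under this framing, adapting the system to a new species or task mostly reduces to choosing the feature band and the network topology; the training and scoring machinery stays the same.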

    Evidence to Suggest that Copulatory Vocalizations in Women Are Not a Reflexive Consequence of Orgasm

    The current studies were conducted to investigate the phenomenon of copulatory vocalizations and their relationship to orgasm in women. Data were collected from 71 sexually active heterosexual women (mean age = 21.68 ± 0.52 years) recruited from the local community through opportunity sampling. The studies revealed that orgasm was most frequently reported by women following self-manipulation of the clitoris, manipulation by the partner, and oral sex delivered to the woman by a man, and least frequently during vaginal penetration. More detailed examination of responses during intercourse revealed that, while female orgasms were most commonly experienced during foreplay, copulatory vocalizations were reported to be made most often before and simultaneously with male ejaculation. Together these data demonstrate a dissociation between the timing of women experiencing orgasm and their making copulatory vocalizations, and indicate that at least an element of these responses is under conscious control, providing women with an opportunity to manipulate male behavior to their advantage.

    Ultrasonic Songs of Male Mice

    It was previously shown that male mice, when they encounter female mice or their pheromones, emit ultrasonic vocalizations with frequencies ranging from 30 to 110 kHz. Here, we show that these vocalizations have the characteristics of song, consisting of several different syllable types whose temporal sequencing includes the utterance of repeated phrases. Individual males produce songs with characteristic syllabic and temporal structure. This study provides an initial quantitative description of male mouse songs and opens the possibility of studying song production and perception in an established genetic model organism.