43,091 research outputs found
Laryngeal Nerve Activity During Pulse Emission in the CF-FM Bat, Rhinolophus ferrumequinum. II. The Recurrent Laryngeal Nerve
The activity of the recurrent laryngeal nerve (RLN) was recorded in the greater horseshoe bat, Rhinolophus ferrumequinum. Respiration, vocalization and nerve discharges were monitored while vocalizations were elicited by stimulation of the central gray matter. This stimulation evoked either expiration alone or expiration plus vocalization, depending on the stimulus strength. When vocalization occurred, it always took place during expiration.
Recordings from the RLN during respiration showed activity during the inspiration phase, but when vocalization occurred there was activity during both inspiration and expiration. These results are consistent with the view that the RLN innervates the muscles controlling the opening and closing of the glottis. During vocalization the vocal folds are closely approximated, and the discharge patterns of the nerve suggest that it controls the muscles which start and end each pulse.
Vocalization Influences Auditory Processing in Collicular Neurons of the CF-FM-Bat, Rhinolophus ferrumequinum
1. In awake Greater Horseshoe bats (Rhinolophus ferrumequinum) the responses of 64 inferior colliculus neurons to electrically elicited vocalizations (VOC) and combinations of these with simulated echoes (AS: pure tones and AS(FM): sinusoidally frequency-modulated tones mimicking echoes from wing beating insects) were recorded.
2. The neurons responding to the species-specific echolocation sound elicited by electrical stimulation of the central grey matter had best frequencies between 76 and 86 kHz. The response patterns to the invariable echolocation sound varied from unit to unit (Fig. 1).
3. In 26 neurons the responses to vocalized echolocation sounds differed markedly from those to identical artificial sounds copying the CF portion of the vocalized sound (AS). These neurons responded differently to the same pure tone depending on whether it was presented artificially or vocalized by the bat (Fig. 2). In these neurons vocal activity qualitatively altered the responsiveness to the stimulus parameters of the echoes.
4. A few neurons neither responded to vocalization nor to an identical pure tone but discharged when vocalization and pure tone were presented simultaneously.
5. In 2 neurons synchronized encoding of small frequency modulations of the pure tone (mimicking an echo returning from a wing-beating prey) occurred only during vocalization. Without vocalization the neurons did not respond to the identical stimulus set (Fig. 3). In these neurons vocal activity enhanced FM-encoding capabilities that were otherwise absent.
6. FM encoding depended on the timing between vocalization and the frequency-modulated signal (echo). As soon as vocalization and FM signal no longer overlapped, or at the latest 60–80 ms after vocalization onset, synchronized firing to the FM was lost (4 neurons) (Fig. 4).
7. 4 neurons responded weakly to playbacks of the bat's own vocalization presented 1 ms after vocalization onset. But when the playback frequency was shifted upward by more than 400 Hz, the neurons changed their firing patterns and the latency of the first response peak (Fig. 5). These neurons, sensitive to frequency shifts in the echoes returning during vocalization, may be relevant to the Doppler-shift compensation mechanism in Greater Horseshoe bats.
Role of N-methyl-D-aspartate receptors in action-based predictive coding deficits in schizophrenia
Published in final edited form as: Biol Psychiatry. 2017 March 15; 81(6): 514–524. doi:10.1016/j.biopsych.2016.06.019.
BACKGROUND: Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist ketamine on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia.
METHODS: In two separate studies, the N1 components of the event-related potentials elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared.
RESULTS: N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen’s d = 1.14) and schizophrenia (Cohen’s d = .85).
CONCLUSIONS: Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. This work was supported by AstraZeneca for an investigator-initiated study (DHM) and the National Institute of Mental Health Grant Nos. R01 MH-58262 (to JMF) and T32 MH089920 (to NSK). JHK was supported by the Yale Center for Clinical Investigation Grant No. UL1RR024139 and the US National Institute on Alcohol Abuse and Alcoholism Grant No. P50AA012879.
Memory-based vocalization of Arabic
The problem of vocalization, or diacritization, is essential to many tasks in Arabic NLP. Arabic is generally written without the short vowels, which leads to one written form having several pronunciations, each carrying its own meaning(s). In the experiments reported here, we define vocalization as a classification problem in which we decide for each character in the unvocalized word whether it is followed by a short vowel. We investigate the importance of different types of context. Our results show that the combination of using memory-based learning with only a word-internal context leads to a word error rate of 6.64%. If a lexical context is added, the results deteriorate slowly.
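The per-character formulation described in this abstract can be sketched as follows. Each character of an unvocalized word becomes one training instance, its features are a window of surrounding characters (the word-internal context), and the label says whether a short vowel follows. Memory-based learning is approximated here with 1-nearest-neighbour classification; the toy romanized data, window radius, and feature names are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: vocalization as per-character classification with
# memory-based (nearest-neighbour) learning over character windows.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier

def char_windows(word, radius=2):
    """Yield one feature dict per character: the character itself plus
    its neighbours within `radius` positions (word-internal context only)."""
    pad = "_" * radius
    padded = pad + word + pad
    for i in range(len(word)):
        j = i + radius
        yield {f"c{k}": padded[j + k] for k in range(-radius, radius + 1)}

# Toy training data (illustrative): (unvocalized word, label per character),
# where label 1 means "followed by a short vowel".
train = [("ktb", [1, 1, 0]), ("drs", [1, 1, 0]), ("klb", [1, 0, 0])]

X, y = [], []
for word, labels in train:
    for feats, label in zip(char_windows(word), labels):
        X.append(feats)
        y.append(label)

vec = DictVectorizer()                      # one-hot encodes the windows
clf = KNeighborsClassifier(n_neighbors=1)   # memory-based: store and look up
clf.fit(vec.fit_transform(X), y)

# Predict a vowel/no-vowel decision for each character of a word.
pred = clf.predict(vec.transform(list(char_windows("ktb"))))
```

A word error rate as reported in the abstract would then count a word as wrong if any of its per-character decisions is wrong.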
REPRODUCTION AND HABITAT OF TEN BRAZILIAN FROGS (ANURA)
Basic data on habitat, behavior, and reproduction are lacking for most Neotropical frog species and even higher taxonomic groups (Crump 1974; Haddad and Prado 2005), particularly for those restricted to the Atlantic Forest. Basic reproductive features are the basis of comparative studies on the evolution of major natural history features (Harvey and Pagel 1998), such as the interspecific relationship between body size and egg number/size (Salthe and Duellman 1973; Crump 1974; Stearns 1992). Here, we present data on habitat, reproductive behavior and quantitative parameters such as adult sizes and egg numbers/sizes of ten sympatric frogs of an altitudinal Atlantic Forest site in Southeastern Brazil.
A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models
Using Hidden Markov Models (HMMs) as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extensibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species and HMM network topologies suited to each task. This experimental work includes a simple call-type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.
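The non-sequential model mentioned at the end of this abstract (a one-state HMM with Gaussian mixture emissions) is worth unpacking: with a single state there are no meaningful transitions, so a call's log-likelihood reduces to the sum of per-frame mixture log-likelihoods, i.e. a plain GMM classifier over frames. A minimal two-class sketch of that equivalence, using synthetic frames in place of real spectral features (the class names, feature dimensions, and mixture size are illustrative assumptions):

```python
# One-state HMM with Gaussian mixture emissions == per-frame GMM scoring.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training data: each call is a (frames x features) array.
stress_calls = [rng.normal(3.0, 1.0, size=(50, 4)) for _ in range(5)]
calm_calls = [rng.normal(0.0, 1.0, size=(50, 4)) for _ in range(5)]

def fit_class_model(calls, n_mix=2):
    """Pool all frames of one class and fit a Gaussian mixture --
    the emission density of a one-state HMM."""
    return GaussianMixture(n_components=n_mix, random_state=0).fit(
        np.vstack(calls))

models = {"stress": fit_class_model(stress_calls),
          "calm": fit_class_model(calm_calls)}

def classify(call):
    """Label a call by the class whose model gives the highest total
    frame log-likelihood (no temporal structure is used)."""
    return max(models, key=lambda c: models[c].score_samples(call).sum())

label = classify(rng.normal(3.0, 1.0, size=(50, 4)))
```

The multi-state models used for the elephant and bunting tasks add exactly what this sketch lacks: state transitions that capture the temporal ordering of call segments.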
The Interaction of Yer Deletion and Nasal Assimilation in Optimality Theory
The problem of opacity presents a challenge for generative phonology. This paper examines the process of Nasal Assimilation in Polish rendered opaque by the process of Vowel Deletion in Optimality Theory (Prince & Smolensky, 1993), which is currently a dominant model for phonological analysis. The opaque interaction of the two processes exposes the inadequacy of standard Optimality Theory arising from the fact that standard OT is a non-derivational theory. It is argued that only by introducing intermediate levels can Optimality Theory deal with complex cases of opaque interactions.
Control of echolocation pulses by neurons of the nucleus ambiguus in the rufous horseshoe bat, Rhinolophus rouxi
1. Horseradish peroxidase was applied by iontophoretic injections to physiologically identified regions of the laryngeal motor nucleus, the nucleus ambiguus, in the CF/FM bat Rhinolophus rouxi.
2. The connections of the nucleus ambiguus were analysed with regard to their possible functional significance in the vocal control system, in the respiration control system, and in mediating information from the central auditory system.
3. The nucleus ambiguus is reciprocally interconnected with nuclei involved in the generation of the vocal motor pattern, i.e., the homonymous contralateral nucleus and the area of the lateral reticular formation. Similarly, reciprocal connections are found with the nuclei controlling the rhythm of respiration, i.e., medial parts of the medulla oblongata and the parabrachial nuclei.
4. Afferents to the nucleus ambiguus derive from nuclei of the descending vocalization system (periaqueductal gray and cuneiform nuclei) and from motor control centers (red nucleus and frontal cortex).
5. Afferents to the nucleus ambiguus, possibly mediating auditory influence on the motor control of vocalization, come from the superior colliculus and from the pontine nuclei. The efferents from the pontine nuclei are restricted to rostral parts of the nucleus ambiguus, which host the motoneurons of the cricothyroid muscle controlling the call frequency.
The self-organization of combinatoriality and phonotactics in vocalization systems
This paper shows how a society of agents can self-organize a shared vocalization system that is discrete, combinatorial and has a form of primitive phonotactics, starting from holistic inarticulate vocalizations. The originality of the system is that: (1) it does not include any explicit pressure for communication; (2) agents do not possess capabilities of coordinated interactions, in particular they do not play language games; (3) agents possess no specific linguistic capacities; and (4) initially there exists no convention that agents can use. As a consequence, the system shows how a primitive speech code may bootstrap in the absence of a communication system between agents, i.e. before the appearance of language.
