A Bird’s Eye View of Human Language Evolution
Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that these serve as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for the structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, which arises from the composition of smaller forms like words and phrases into larger ones.
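The claim that birdsong can be captured by a regular language can be illustrated with a toy finite-state acceptor. This is a hypothetical sketch: the states and syllable labels (a, b, c) are invented for illustration and do not describe any real species' song.

```python
# Toy finite-state acceptor: a regular-language model of a song built from
# syllables. Invented states/syllables for illustration only.

# Transition table: state -> {syllable: next_state}
TRANSITIONS = {
    "start": {"a": "intro"},
    "intro": {"a": "intro", "b": "motif"},
    "motif": {"b": "motif", "c": "motif", "a": "end"},
}
ACCEPTING = {"end"}

def accepts(song):
    """Return True if the syllable sequence is generated by the acceptor."""
    state = "start"
    for syllable in song:
        nxt = TRANSITIONS.get(state, {}).get(syllable)
        if nxt is None:
            return False
        state = nxt
    return state in ACCEPTING

# A song with repeated introductory notes is accepted; a mis-ordered one is not.
print(accepts(["a", "a", "b", "c", "a"]))  # True
print(accepts(["b", "a"]))                 # False
```

Such finite-state structure contrasts with the nested, compositional phrase structure of human syntax, which cannot in general be expressed by any such transition table.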
Neurophysiology of Avian Sleep: Comparing Natural Sleep and Isoflurane Anesthesia
Propagating slow-waves in electroencephalogram (EEG) or local field potential (LFP) recordings occur during non-rapid eye-movement (NREM) sleep in both mammals and birds. Moreover, in both, input from the thalamus is thought to contribute to the genesis of NREM sleep slow-waves. Interestingly, the general features of slow-waves are also found under isoflurane anesthesia. However, it is unclear to what extent these slow-waves reflect the same processes as those giving rise to NREM sleep slow-waves. Similar slow-wave spatio-temporal properties during NREM sleep and isoflurane anesthesia would suggest that both types of slow-waves are based on related processes. We used a 32-channel silicon probe connected to a transmitter to make intra-cortical recordings of the visual hyperpallium in naturally sleeping and isoflurane-anesthetized pigeons (Columba livia) using a within-bird design. Under anesthesia, the amplitude of LFP slow-waves was higher than during NREM sleep. Spectral power density across all frequencies (1.5–100 Hz) was also elevated. In addition, slow-wave coherence between electrode sites was higher under anesthesia, indicating greater synchrony than during NREM sleep. Nonetheless, the spatial distribution of slow-waves under anesthesia was more comparable to NREM sleep than to wakefulness or REM sleep. Similar to NREM sleep, slow-wave propagation under anesthesia occurred mainly in the thalamic input layers of the hyperpallium, regions which also showed the greatest slow-wave power during both recording conditions. This suggests that the thalamus could be involved in the genesis of slow-waves under both conditions. Taken together, although slow-waves under isoflurane anesthesia are stronger, they share spatio-temporal activity characteristics with slow-waves during NREM sleep.
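The synchrony comparison above rests on quantifying how strongly two electrode sites co-vary. As a minimal sketch (using Pearson correlation as a crude synchrony proxy rather than the frequency-resolved coherence used in such studies), two synthetic channels that share a 2 Hz slow-wave are far more correlated than two unrelated channels; all signals and parameters here are invented:

```python
import numpy as np

fs = 500.0                        # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)      # 10 s of synthetic data
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 2.0 * t)              # common 2 Hz "slow-wave"
ch1 = shared + 0.5 * rng.standard_normal(t.size)  # site 1: slow-wave + noise
ch2 = shared + 0.5 * rng.standard_normal(t.size)  # site 2: slow-wave + noise
ch3 = 0.5 * rng.standard_normal(t.size)           # unrelated control channel

r_sync = np.corrcoef(ch1, ch2)[0, 1]  # high: both carry the shared slow-wave
r_none = np.corrcoef(ch1, ch3)[0, 1]  # near zero: no shared signal
print(f"synchronous pair r = {r_sync:.2f}, unrelated pair r = {r_none:.2f}")
```

Proper coherence analysis would additionally resolve this synchrony by frequency band (e.g. the 1.5–4 Hz slow-wave range), which a single correlation coefficient cannot do.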
Neural Processing of Short-Term Recurrence in Songbird Vocal Communication
BACKGROUND: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to the short-term delivery statistics of artificial stimuli, but it is unknown whether this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication. METHODOLOGY/PRINCIPAL FINDINGS: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area. CONCLUSIONS/SIGNIFICANCE: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.
An in-depth view of avian sleep
Brain rhythms occurring during sleep are implicated in processing information acquired during wakefulness, but this phenomenon has almost exclusively been studied in mammals. In this review, we discuss the potential value of utilizing birds to elucidate the functions and underlying mechanisms of such brain rhythms. Birds are of particular interest from a comparative perspective because, even though neurons in the avian brain homologous to mammalian neocortical neurons are arranged in a nuclear rather than a laminar manner, the avian brain generates mammalian-like sleep states and associated brain rhythms. Nonetheless, until recently, this nuclear organization also posed technical challenges, as the standard surface EEG recording methods used to study the neocortex provide only a superficial view of the sleeping avian brain. The recent development of high-density multielectrode recording methods now provides access to sleep-related brain activity occurring deep in the avian brain. Finally, we discuss how intracerebral electrical imaging based on this technique can be used to elucidate the systems-level processing of hippocampal-dependent and imprinting memories in birds.
Response strengths decrease with calling rate and are partly stimulus-specific.
<p>The responses are standardized (<i>z</i>-scores), but note that all statistical tests in this study are based on absolute response levels. Shown is the mean of these values over birds (± standard error of the mean as shaded color), binned per 100 sequential call events and split between common and rare calls. (A) Mean (<i>N</i> = 9 birds) AMUA response strength at primary auditory sites. These sites have been classified as ‘primary’ based on their stimulus-locked, stereotypic response characteristics only; such sites cluster in a shape that corresponds to the anatomical area L2. (B) Mean (<i>N</i> = 12 birds) AMUA response strength at secondary auditory sites, i.e. sites whose auditory responses are not stereotypic and that surround L2 (i.e. L1, L3, NCM and CMM). (C) Mean (<i>N</i> = 9 birds) LFP response strength, which is not split between primary and secondary sites because local field potentials may not originate from the immediate vicinity of the site at which they are recorded.</p>
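The standardization and binning described in the caption can be sketched as follows. This is a minimal illustration with simulated response strengths (an exponential adaptation curve plus noise); the actual analysis used absolute response levels for statistics, as noted above.

```python
import numpy as np

def zscore(x):
    """Standardize values to zero mean and unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def bin_means(x, bin_size=100):
    """Mean z-scored response per bin of `bin_size` sequential call events."""
    z = zscore(x)
    n_bins = len(z) // bin_size
    return z[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)

rng = np.random.default_rng(1)
# Simulated adaptation: response strength decays over 1000 call events.
responses = np.exp(-np.arange(1000) / 400) + 0.1 * rng.standard_normal(1000)
binned = bin_means(responses)
print(binned)  # early bins sit above zero, late bins below
```

In the real analysis this binning would be done separately for common and rare calls before averaging across birds.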
Schematic representation of the silicon multi-electrode array situated inside the auditory forebrain.
<p>(A) The four equidistant and parallel shanks of the array are situated in a parasagittal plane in the medio-caudal forebrain. (B) Each shank contains eight electrodes (‘sites’). (C) The matrix of 32 sites covers a relatively large area from which neural responses can be recorded in parallel, including the anatomical Field L, consisting of subfields L1, L2, and L3, and NCM and CMM. The black spots represent electrode sites, while the orange circles indicate that recorded potentials may originate from a field around these sites. Hp: Hippocampus, Cb: Cerebellum, NCM: caudomedial nidopallium, CMM: caudomedial mesopallium, L1, L2, L3: subdivisions of Field L; LaM: lamina mesopallialis <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#pone.0011129-Fortune1" target="_blank">[11]</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#pone.0011129-Vates1" target="_blank">[34]</a>.</p>
Example of AMUA responses to common and rare calls in the 625 ms sequence (Bird 3, calls K and L, respectively).
<p>(A) Common calls (900, black marks) are randomly interspersed with rare calls (100, orange marks). Blue marks indicate the 100 calls (50 per type) that have been randomly selected to be shown in subfigures B and C. (B) Call stimuli are recorded synchronously with the electrophysiological signals to verify correct alignment of measurement episodes in our analyses. (C) Raster plots of AMUA signals in response to randomly selected sets of 50 common calls and 50 rare calls. Common and rare calls are shown separately although they have been presented to the bird in a random mixture (see A). Color represents AMUA amplitude, scaled per site, and clipped to 25% and 75% of the total signal range for visual presentation only.</p>
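The stimulus design in panel A (900 presentations of a common call randomly interspersed with 100 of a rare call) can be sketched as below. The call labels are placeholders for the actual recorded stimuli, and the seed is arbitrary:

```python
import random

def make_sequence(common="K", rare="L", n_common=900, n_rare=100, seed=3):
    """Build a randomly shuffled oddball sequence of call labels."""
    seq = [common] * n_common + [rare] * n_rare
    random.Random(seed).shuffle(seq)  # random interspersion of rare calls
    return seq

sequence = make_sequence()
print(len(sequence), sequence.count("K"), sequence.count("L"))  # 1000 900 100
```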
Significance of factors that explain neural response strength in a linear mixed regression model.
<p><sup>a</sup> Interactions are denoted with “:”.</p>
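The rationale for a linear mixed regression here (repeated measurements within birds) can be illustrated with a numpy-only simulation: responses are generated with a bird-specific random intercept plus a fixed effect of call type, and centering within birds (the intuition behind treating bird as a random effect) recovers the call-type effect. All numbers are invented; the study itself fitted a proper mixed model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_birds, n_events = 12, 200
true_rare_effect = 0.8                      # rare calls evoke stronger responses
bird_intercepts = rng.normal(0.0, 2.0, n_birds)  # bird-level random intercepts

per_bird_effects = []
for b in range(n_birds):
    is_rare = rng.random(n_events) < 0.1    # ~10% rare calls, as in the design
    resp = (bird_intercepts[b]
            + true_rare_effect * is_rare
            + 0.5 * rng.standard_normal(n_events))
    centered = resp - resp.mean()           # remove the bird-level intercept
    per_bird_effects.append(centered[is_rare].mean() - centered[~is_rare].mean())

estimate = float(np.mean(per_bird_effects))
print(f"estimated rare-call effect: {estimate:.2f}")  # close to 0.8
```

Ignoring the bird grouping would instead mix the large between-bird intercept variance into the error term, which is precisely what the random-effect structure of a mixed model avoids.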
Call-event related responses can last for a long time after a call has finished and responses of calls may overlap.
<p>This is shown here using an example of LFP recordings in Bird 1 at two different sites (NCM: top two rows, L2: bottom two rows) and two different recurrence rates (left column: 5000 ms series, right column: 313 ms series). In the slow 5000 ms series, LFP responses to both common calls (random 25 events) and rare calls (random 25 events) can be seen to last up to seconds after the call event in both brain areas. In the fast 313 ms series, responses to common calls at the NCM site are almost completely absent, while those to rare calls are still visible. At the L2 site, responses to common calls have not disappeared but are clearly reduced. Importantly, in the 313 ms rate series, responses to rare calls can be seen to continue during the presentation of a sequence of four subsequent common calls. Note that the actual common and rare call stimuli in the 5000 ms and 313 ms series are different (I/J and C/D of <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#pone-0011129-g003" target="_blank">Figure 3</a>, respectively). The jitter that is visible in the responses in L2 to subsequent calls, relative to the first one, is due to a small amount of deterministic jitter that we applied to the delivery of stimuli (see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#s4" target="_blank">Materials and Methods</a>).</p>
Spectrographic representation of the 12 male zebra-finch short-range contact calls used as stimuli in this study.
<p>Calls presented in columns are matched for duration. Shown are spectrograms (light bands) that have been calculated with a short-time Fourier transform, superimposed with a reassignment-based sparse time-frequency representation (dark lines; settings: 23 ms Gaussian analysis window, consensus of σ range 0.8–3.5, 0.25 ms step duration, 25 dB dynamic range; <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#pone.0011129-Gardner1" target="_blank">[55]</a>). Call parameters (duration/mean fundamental frequency): A: 89 ms/784 Hz, B: 89 ms/528 Hz, C: 128 ms/452 Hz, D: 128 ms/591 Hz, E: 127 ms/551 Hz, F: 127 ms/433 Hz, G: 83 ms/413 Hz, H: 83 ms/570 Hz, I: 57 ms/470 Hz, J: 57 ms/530 Hz, K: 101 ms/564 Hz, L: 101 ms/470 Hz. Fundamental frequency was determined with an autocorrelation algorithm <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0011129#pone.0011129-Boersma1" target="_blank">[48]</a>.</p>
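The general idea of autocorrelation-based fundamental-frequency estimation cited in the caption can be sketched as below. This toy version simply picks the strongest autocorrelation peak within a plausible period range (Boersma's published algorithm is considerably more refined); the signal is a synthetic harmonic tone, not a recorded call.

```python
import numpy as np

def estimate_f0(signal, fs, fmin=300.0, fmax=1000.0):
    """Estimate F0 (Hz) from the autocorrelation peak within [fmin, fmax]."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    # One-sided autocorrelation: index = lag in samples.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(fs / fmax)               # shortest plausible period
    lag_max = int(fs / fmin)               # longest plausible period
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return fs / best_lag

fs = 44100.0
t = np.arange(0, 0.1, 1 / fs)
# Synthetic harmonic "call" with a 528 Hz fundamental (cf. call B above).
tone = np.sin(2 * np.pi * 528 * t) + 0.5 * np.sin(2 * np.pi * 1056 * t)
print(round(estimate_f0(tone, fs)))
```

Restricting the lag search window keeps the estimator from locking onto harmonics or sub-harmonics, which is why the fmin/fmax bounds matter.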