Spring School on Language, Music, and Cognition: Organizing Events in Time
The interdisciplinary spring school “Language, music, and cognition: Organizing events in time” was held from February 26 to March 2, 2018 at the Institute of Musicology of the University of Cologne. Language, speech, and music as events in time were explored from different perspectives including evolutionary biology, social cognition, developmental psychology, cognitive neuroscience of speech, language, and communication, as well as computational and biological approaches to language and music. There were 10 lectures, 4 workshops, and 1 student poster session.
Overall, the spring school investigated language and music as neurocognitive systems and focused on a mechanistic approach exploring the neural substrates underlying musical, linguistic, social, and emotional processes and behaviors. In particular, researchers approached questions concerning cognitive processes, computational procedures, and neural mechanisms underlying the temporal organization of language and music, mainly from two perspectives: one was concerned with syntax or structural representations of language and music as neurocognitive systems (i.e., an intrapersonal perspective), while the other emphasized social interaction and emotions in their communicative function (i.e., an interpersonal perspective). The spring school not only acted as a platform for knowledge transfer and exchange but also generated a number of important research questions as challenges for future investigations.
A Bird’s Eye View of Human Language Evolution
Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that this serves as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, arising from the composition of smaller forms like words and phrases into larger ones.
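As an illustrative aside, the "regular language" characterization mentioned in the abstract can be sketched with a toy example. The syllable inventory and song pattern below are invented for illustration, not taken from any birdsong study; the point is only that regular expressions recognize exactly the regular languages, so any song structure of this concatenative kind can be captured by one.

```python
import re

# Hypothetical song schema: an intro syllable "i", one or more motifs
# consisting of "a", a run of one or more "b"s, and "c", then an
# ending syllable "e". This is a regular language by construction.
SONG = re.compile(r"^i(ab+c)+e$")

print(bool(SONG.match("iabbce")))   # well-formed song
print(bool(SONG.match("iabe")))     # motif broken off: rejected

# By contrast, nested human-language structure (e.g. centre-embedding,
# schematically a^n b^n) cannot be captured by any regular expression,
# which is the kind of gap the abstract alludes to.
```

A finite-state automaton over syllable sequences would make the same point; the regex form is just the most compact way to write it down.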
Testing the Template Hypothesis of Vocal Learning in Songbirds.
The auditory forebrain regions NCM and CMM of songbirds are associated with perception and complex auditory processing. Expression of the immediate-early gene ZENK varies in response to different sounds. Two hypotheses are proposed for this. First, ZENK may reflect access to a representation of song memories. Second, ZENK may reflect attention. I tested these hypotheses by measuring ZENK in response to tutored heterospecific or isolate songs compared to non-tutored wild-type song. Young zebra finch females were exposed to different tutoring conditions and later exposed to different playbacks, and the expression of ZENK in CMM and NCM was measured. ZENK responses varied across playback stimuli in some brain regions, but did not interact with tutoring conditions. These results do not support the hypothesis that ZENK activation reflects auditory memories.
How Could Language Have Evolved?
The evolution of the faculty of language largely remains an enigma. In this essay, we ask why. Language's evolutionary analysis is complicated because it has no equivalent in any nonhuman species. There is also no consensus regarding the essential nature of the language “phenotype.” According to the “Strong Minimalist Thesis,” the key distinguishing feature of language (and what evolutionary theory must explain) is hierarchical syntactic structure. The faculty of language is likely to have emerged quite recently in evolutionary terms, some 70,000–100,000 years ago, and does not seem to have undergone modification since then, though individual languages do of course change over time, operating within this basic framework. The recent emergence of language and its stability are both consistent with the Strong Minimalist Thesis, which has at its core a single repeatable operation that takes exactly two syntactic elements a and b and assembles them to form the set {a, b}.
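The set-forming operation described at the end of the abstract (often called Merge) is simple enough to sketch directly. The example words below are invented for illustration; `frozenset` stands in for the unordered set {a, b} so that merged objects can themselves be merged (ordinary Python sets are unhashable).

```python
def merge(a, b):
    """Combine exactly two syntactic objects into the unordered set {a, b}."""
    return frozenset({a, b})

# Hierarchical structure arises from repeated application of the one
# operation, nesting earlier outputs inside later ones:
np = merge("the", "apple")      # {the, apple}
vp = merge("ate", np)           # {ate, {the, apple}}
s  = merge("birds", vp)         # {birds, {ate, {the, apple}}}
print(s)
```

Note that the output is a set, not a sequence: Merge as characterized here yields unordered hierarchical structure, with linear order imposed only at externalization.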
RoboFinch: A versatile audio-visual synchronised robotic bird model for laboratory and field research on songbirds
1. Singing in birds is accompanied by beak, head and throat movements. The role of these visual cues has long been hypothesised to be an important facilitator in vocal communication, including social interactions and song acquisition, but has seen little experimental study.
2. To address whether audio-visual cues are relevant for birdsong we used high-speed video recording, 3D scanning, 3D printing technology and colour-realistic painting to create RoboFinch, an open source adult-mimicking robot that matches the temporal and chromatic properties of songbird vision. We exposed several groups of juvenile zebra finches during their song developmental phase to one of six singing robots that moved their beaks synchronised to their song, and compared them with birds in a non-synchronised treatment and two control treatments.
3. Juveniles in the synchronised treatment approached the robot setup from the start of the experiment and progressively increased the time they spent singing, in contrast to the other treatment groups. Interestingly, birds in the synchronised group seemed to actively listen during tutor song playback, as they sang less during the actual song playback compared to the birds in the asynchronous and audio-only control treatments.
4. Our open source RoboFinch setup thus provides an unprecedented tool for systematic study of the functionality and integration of audio-visual cues associated with song behaviour. Realistic head and beak movements aligned to specific song elements may allow future studies to assess the importance of multisensory cues during song development, sexual signalling and social behaviour. All software and assembly instructions are open source, and the robot can be easily adapted to other species. Experimental manipulations of stimulus combinations and synchronisation can further elucidate how audio-visual cues are integrated by receivers and how they may enhance signal detection, recognition, learning and memory.
Social Cognition and the Evolution of Language: Constructing Cognitive Phylogenies
Human language and social cognition are closely linked: advanced social cognition is necessary for children to acquire language, and language allows forms of social understanding (and, more broadly, culture) that would otherwise be impossible. Both “language” and “social cognition” are complex constructs, involving many independent cognitive mechanisms, and the comparative approach provides a powerful route to understanding the evolution of such mechanisms. We provide a broad comparative review of mechanisms underlying social intelligence in vertebrates, with the goal of determining which human mechanisms are broadly shared, which have evolved in parallel in other clades, and which, potentially, are uniquely developed in our species. We emphasize the importance of convergent evolution for testing hypotheses about neural mechanisms and their evolution.
No need to Talk, I Know You: Familiarity Influences Early Multisensory Integration in a Songbird's Brain
It is well known that visual information can affect auditory perception, as in the famous “McGurk effect,” but little is known concerning the processes involved. To address this issue, we used the best-developed animal model to study language-related processes in the brain: songbirds. European starlings were exposed to audiovisual compared to auditory-only playback of conspecific songs, while electrophysiological recordings were made in their primary auditory area (Field L). The results show that the audiovisual condition modulated the auditory responses. Enhancement and suppression were both observed, depending on the stimulus familiarity. Seeing a familiar bird led to suppressed auditory responses while seeing an unfamiliar bird led to response enhancement, suggesting that unisensory perception may be enough if the stimulus is familiar while redundancy may be required for unfamiliar items. This is to our knowledge the first evidence that multisensory integration may occur in a low-level, putatively unisensory area of a non-mammalian vertebrate brain, and also that familiarity of the stimuli may influence modulation of auditory responses by vision.
Public Engagement Technology for Bioacoustic Citizen Science
Inexpensive mobile devices offer new capabilities for non-specialist use in the field for the purpose of conservation. This thesis explores the potential for such devices to be used by citizen scientists interacting with bioacoustic data such as birdsong. It describes design research and field evaluation conducted in collaboration with conservationists and educators, and presents technological artefacts implemented as mobile applications for interactive educational gaming and creative composition.
Taking a participant-centric collaborative design approach, this thesis considers conservationists' demand for interactive artefacts that motivate engagement in citizen science through gameful and playful interactions. Drawing on theories of motivation, frequently applied to the study of Human-Computer Interaction (HCI), and on approaches to designing for motivational engagement, this thesis introduces a novel pair of frameworks for the analysis of technological artefacts and for assessing participant engagement with bioacoustic citizen science from both game interaction design and citizen science project participation perspectives. This thesis reviews current theories of playful and gameful interaction developed for collaborative learning, data analysis, and ground-truth development, describes a process for design and analysis of motivational mobile games and toys, and explores the affordances of various game elements and mechanics for engaging participation in bioacoustic citizen science.
This thesis proposes research into progressions for scaffolding engagement with citizen science projects where participants interact with data collection and analysis artefacts. The research process includes the development of multiple designs, analyses of which explore the efficacy of game interactions to motivate engagement through interaction progressions, given proposed analysis frameworks. This thesis presents analysed results of experiments examining the usability of, and data-quality from, several prototypes and software artefacts, in both laboratory conditions and the field. This thesis culminates with an assessment of the efficacy of proposed design analysis frameworks, an analysis of designed artefacts, and a discussion of how these designs increase intrinsic and extrinsic motivation for participant engagement and affect resultant bioacoustic citizen science data quantity and quality.