    Categories, concepts, and calls: auditory perceptual mechanisms and cognitive abilities across different types of birds.

    Although involving different animals, preparations, and objectives, our laboratories (Sturdy's and Cook's) are mutually interested in category perception and concept formation. The Sturdy laboratory has a history of studying perceptual categories in songbirds, while the Cook laboratory has a history of studying abstract concept formation in pigeons. Recently, we undertook a suite of collaborative projects combining our investigations to examine abstract concept formation in songbirds and the perception of songbird vocalizations in pigeons. This talk will cover our recent findings on songbird category perception, songbird abstract concept formation (same/different task), and early results from pigeons' processing of songbird vocalizations in a same/different task. Our findings indicate that (1) categorization in birds seems to be most heavily influenced by acoustic, rather than genetic or experiential, factors, (2) songbirds treat their vocalizations as perceptual categories, both at the level of the note and of the species/whole call, (3) chickadees, like pigeons, can perceive abstract same/different relations, and (4) pigeons are not as good as songbirds (chickadees and finches) at discriminating chickadee vocalizations. Our findings suggest that although there are commonalities in complex auditory processing among birds, there are potentially important comparative differences between songbirds and non-songbirds in their treatment of certain types of auditory objects.

    The neural basis of audiovisual integration

    Our perception is continuous and unified. Yet, sensory information reaches our brains through different senses and needs to be processed in order to create that unified percept. Interactions between sensory modalities already occur at primary cortical levels. The purpose of such interactions, and what kind of information they transmit, is still largely unknown. The current thesis aimed to reveal the interactions between auditory pitch and visual size in polar coordinates, two modality-specific stimulus features that have robust topographic representations in the human brain. In Chapter 1, I present the background of cross-modal interactions in early sensory cortices and of the pitch-size relationship. In Chapter 2, we explored the pitch-size relationship in a speeded classification task and, in Chapter 3, at the level of functional Magnetic Resonance Imaging activation patterns. In Chapter 4, we investigated the effects of actively learning a specific pitch-size mapping during one session on the speeded classification task. In Chapter 5, we extended learning over multiple sessions and examined learning effects with behavioral and neural measures. Finally, in Chapter 6, I summarize the findings of the thesis and its contributions to the literature, and outline directions for future research.

    Ultrasonic Songs of Male Mice

    Previously it was shown that male mice, when they encounter female mice or their pheromones, emit ultrasonic vocalizations with frequencies ranging from 30 to 110 kHz. Here, we show that these vocalizations have the characteristics of song, consisting of several different syllable types whose temporal sequencing includes the utterance of repeated phrases. Individual males produce songs with characteristic syllabic and temporal structure. This study provides an initial quantitative description of male mouse songs and opens the possibility of studying song production and perception in an established genetic model organism.
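
    A minimal sketch, assuming syllables have already been segmented and classified into discrete types, of how syllable sequencing of this kind could be quantified; the syllable labels and the example song are hypothetical, and a first-order transition matrix is only one of several reasonable summaries of repeated-phrase structure.

        import numpy as np

        def transition_matrix(syllables, types):
            """First-order transition probabilities between syllable types."""
            idx = {t: i for i, t in enumerate(types)}
            counts = np.zeros((len(types), len(types)))
            for a, b in zip(syllables, syllables[1:]):
                counts[idx[a], idx[b]] += 1
            # Normalize each row to probabilities (rows with no outgoing transitions stay zero)
            row_sums = counts.sum(axis=1, keepdims=True)
            return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

        # Hypothetical song with four syllable types and a repeated "phrase" A-B-C
        song = list("ABCABCDABCD")
        types = sorted(set(song))
        print(types)
        print(transition_matrix(song, types))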

    Speaker-normalized sound representations in the human auditory cortex

    The acoustic dimensions that distinguish speech sounds (like the vowel differences in “boot” and “boat”) also differentiate speakers’ voices. Therefore, listeners must normalize across speakers without losing linguistic information. Past behavioral work suggests an important role for auditory contrast enhancement in normalization: preceding context affects listeners’ perception of subsequent speech sounds. Here, using intracranial electrocorticography in humans, we investigate whether and how such context effects arise in auditory cortex. Participants identified speech sounds that were preceded by phrases from two different speakers whose voices differed along the same acoustic dimension as target words (the lowest resonance of the vocal tract). In every participant, target vowels evoke a speaker-dependent neural response that is consistent with the listener’s perception, and which follows from a contrast enhancement model. Auditory cortex processing thus displays a critical feature of normalization, allowing listeners to extract meaningful content from the voices of diverse speakers
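
    A minimal sketch of a contrast-enhancement account of the kind described above, not the authors' actual model: the target's first formant (F1, the lowest vocal-tract resonance) is shifted away from the mean F1 of the preceding context, so an identical target is perceived differently after a low-F1 versus a high-F1 speaker. The weighting term and all values are illustrative assumptions.

        import numpy as np

        def contrast_enhanced_f1(target_f1_hz, context_f1_hz, weight=0.5):
            """Shift the target F1 away from the mean F1 of the preceding context.

            weight controls how strongly the context is subtracted (assumed value).
            """
            context_mean = np.mean(context_f1_hz)
            return target_f1_hz + weight * (target_f1_hz - context_mean)

        # Same 500 Hz target after a low-F1 vs. a high-F1 speaker context (hypothetical values)
        low_context = [350, 380, 360]    # Hz
        high_context = [650, 700, 680]   # Hz
        print(contrast_enhanced_f1(500, low_context))   # pushed upward: perceived as a higher-F1 vowel
        print(contrast_enhanced_f1(500, high_context))  # pushed downward: perceived as a lower-F1 vowel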

    Comparison of input devices in an ISEE direct timbre manipulation task

    The representation and manipulation of sound within multimedia systems is an important and currently under-researched area. The paper gives an overview of the authors' work on the direct manipulation of audio information, and describes a solution based upon the navigation of four-dimensional scaled timbre spaces. Three hardware input devices were experimentally evaluated for use in a timbre space navigation task: the Apple Standard Mouse, Gravis Advanced Mousestick II joystick (absolute and relative) and the Nintendo Power Glove. Results show that the usability of these devices significantly affected the efficacy of the system, and that conventional low-cost, low-dimensional devices provided better performance than the low-cost, multidimensional dataglove
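
    A minimal sketch of how performance in a four-dimensional timbre-space navigation task could be scored, assuming each trial ends at some final position in the scaled space; the Euclidean distance-to-target metric, the coordinates, and the device labels are illustrative assumptions, not the paper's actual measures.

        import numpy as np

        def navigation_error(final_positions, targets):
            """Mean Euclidean distance between final and target positions in a 4-D timbre space."""
            final_positions = np.asarray(final_positions, dtype=float)
            targets = np.asarray(targets, dtype=float)
            return np.linalg.norm(final_positions - targets, axis=1).mean()

        # Hypothetical trials: rows are (dim1, dim2, dim3, dim4) coordinates in the scaled space
        mouse_trials = [[0.2, 0.5, 0.1, 0.8], [0.4, 0.4, 0.3, 0.6]]
        glove_trials = [[0.1, 0.7, 0.4, 0.9], [0.6, 0.2, 0.5, 0.4]]
        targets      = [[0.3, 0.5, 0.2, 0.7], [0.3, 0.5, 0.2, 0.7]]
        print("mouse:", navigation_error(mouse_trials, targets))
        print("glove:", navigation_error(glove_trials, targets))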

    Representation of statistical sound properties in human auditory cortex

    The work carried out in this doctoral thesis investigated the representation of statistical sound properties in human auditory cortex. It addressed four key aspects in auditory neuroscience: the representation of different analysis time windows in auditory cortex; mechanisms for the analysis and segregation of auditory objects; information-theoretic constraints on pitch sequence processing; and the analysis of local and global pitch patterns. The majority of the studies employed a parametric design in which the statistical properties of a single acoustic parameter were altered along a continuum, while keeping other sound properties fixed. The thesis is divided into four parts. Part I (Chapter 1) examines principles of anatomical and functional organisation that constrain the problems addressed. Part II (Chapter 2) introduces approaches to digital stimulus design, principles of functional magnetic resonance imaging (fMRI), and the analysis of fMRI data. Part III (Chapters 3-6) reports five experimental studies. Study 1 controlled the spectrotemporal correlation in complex acoustic spectra and showed that activity in auditory association cortex increases as a function of spectrotemporal correlation. Study 2 demonstrated a functional hierarchy of the representation of auditory object boundaries and object salience. Studies 3 and 4 investigated cortical mechanisms for encoding entropy in pitch sequences and showed that the planum temporale acts as a computational hub, requiring more computational resources for sequences with high entropy than for those with high redundancy. Study 5 provided evidence for a hierarchical organisation of local and global pitch pattern processing in neurologically normal participants. Finally, Part IV (Chapter 7) concludes with a general discussion of the results and future perspectives
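
    A minimal illustration of the entropy manipulation underlying Studies 3 and 4, under the assumption that entropy is computed over the distribution of pitches within a sequence (the thesis's actual stimulus construction is not specified here): a redundant sequence scores low, while a sequence drawing uniformly on many pitches scores high.

        from collections import Counter
        import math

        def pitch_entropy(sequence):
            """Shannon entropy (bits) of the pitch distribution in a sequence."""
            counts = Counter(sequence)
            n = len(sequence)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Hypothetical eight-note sequences
        high_redundancy = ["C4", "C4", "C4", "G4", "C4", "C4", "G4", "C4"]
        high_entropy    = ["C4", "D4", "F#4", "A4", "B3", "E4", "G#4", "C5"]
        print(pitch_entropy(high_redundancy))  # low entropy
        print(pitch_entropy(high_entropy))     # high entropy (3 bits for 8 distinct pitches)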

    On the mechanism of response latencies in auditory nerve fibers

    Despite the structural differences of the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with markedly different cochleae, or without a basilar membrane at all. This stimulus-, neuron-, and species-independent similarity of latency cannot be simply explained by the concept of cochlear traveling waves that is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined, intended to characterize a feature of temporal processing, specifically phase encoding, that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoidal component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Phase encoding and stimulus parameters were manipulated to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimulus were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in its Fourier analysis, distributing specific spikes among auditory nerve fibers and within a single unit. This phase-encoding mechanism in Fourier analysis is proposed to be the common mechanism that, in the face of species differences in peripheral auditory hardware, accounts for the considerable similarities across species in their latency-by-frequency functions, in turn assuring optimal phase encoding across species. The mechanism also has the potential to improve phase encoding in cochlear implants.
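
    A minimal sketch of the Fourier-pattern construction as described above: for each sinusoidal component of the stimulus, mark the time of its first amplitude maximum, which for sin(2*pi*f*t + phi) is the first moment the phase reaches pi/2. The component frequencies and starting phases below are hypothetical.

        import numpy as np

        def first_amplitude_maxima(freqs_hz, phases_rad):
            """Time (s) of the first amplitude maximum of each sinusoidal component.

            For sin(2*pi*f*t + phi), the first maximum occurs when the phase first reaches pi/2.
            """
            freqs = np.asarray(freqs_hz, dtype=float)
            phases = np.asarray(phases_rad, dtype=float)
            # Phase still needed to reach pi/2, wrapped into [0, 2*pi)
            remaining = np.mod(np.pi / 2 - phases, 2 * np.pi)
            return remaining / (2 * np.pi * freqs)

        # Hypothetical components of a complex tone, all starting at zero (sine) phase
        freqs = [250, 500, 1000, 2000]   # Hz
        phases = [0.0, 0.0, 0.0, 0.0]    # radians
        print(first_amplitude_maxima(freqs, phases))  # lower-frequency components peak later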

    Music adapting to the brain: From diffusion chains to neurophysiology

    During the last decade, the use of experimental approaches in cultural evolution research has provided novel insights into, and supported theoretical predictions about, the principles driving the evolution of human cultural systems. Laboratory simulations of language evolution have shown how domain-general constraints on learning, in addition to pressures for language to be expressive, may be responsible for the emergence of linguistic structure. Languages change when culturally transmitted, adapting to fit, among other things, the cognitive abilities of their users. As a result, they become regular and compressed, easier to acquire and reproduce. Although a similar theory has recently been extended to the musical domain, empirical investigation in this field is still scarce. In addition, no study to our knowledge has directly addressed the role of cognitive constraints in cultural transmission using neurophysiological measures. In my thesis I addressed both of these issues with a combination of behavioral and neurophysiological methods, in three experimental studies. In study 1 (Chapter 2), I examined the evolution of structural regularities in artificial melodic systems while they were being transmitted across individuals via coordination and alignment. To this end I used a new laboratory model of music transmission: the multi-generational signaling games (MGSGs), a variant of the signaling games. This model combines classical aspects of lab-based semiotic models of communication, coordination, and interaction (horizontal transmission) with the transmission across generations of the iterated learning model (vertical transmission). Here, two-person signaling games are organized in diffusion chains of several individuals (generations). In each game, the two players (a sender and a receiver) must agree on a common code, here a miniature system in which melodic riffs refer to emotions. The receiver in one game becomes the sender in the next game, possibly retransmitting the previously learned code to another generation of participants, and so on until the diffusion chain is complete. I observed the gradual evolution of several structural features of musical phrases over generations: proximity, continuity, symmetry, and melodic compression. Crucially, these features are found in most musical cultures of the world. I argue that we tapped into universal mechanisms of structured sequence processing, possibly at work in the evolution of real music. In study 2 (Chapter 3), I explored the link between cultural adaptation and neural information processing. To this end, I combined a behavioral and an EEG study on two successive days. I show that the latency of the mismatch negativity (MMN), recorded in a pre-attentive auditory sequence processing task on day 1, predicts how well participants learn and transmit an artificial tone system with affective semantics in two signaling games on day 2. Notably, MMN latencies also predict which structural changes participants introduce into the artificial tone system. In study 3 (Chapter 4), I replicated and extended the behavioral and neurophysiological findings in the temporal domain of music, with two independent experiments. In the first experiment, I used MGSGs as a laboratory model of the cultural evolution of rhythmic equitone patterns referring to distinct emotions. As a result of transmission, rhythms developed a universal property of musical structure, namely temporal regularity (or isochronicity).
In the second experiment, I grounded this result in neural predictors. I showed that the neural information-processing capabilities of individuals, as measured with the MMN on day 1, can predict the learning, transmission, and regularization of rhythmic patterns in signaling games on day 2. In agreement with study 2, I observe that MMN brain timing may reflect the efficiency with which sensory systems process auditory patterns. Functional differences in those systems across individuals may produce different sensitivities to pressures for regularity in the cultural system. Finally, I argue that neural variability can be an important source of variability of cultural traits in a population. My work is the first to systematically describe the emergence of structural properties of melodic and rhythmic systems in the laboratory, using an explicit game-theoretic model of cultural transmission in which agents freely interact and exchange information. Critically, it provides the first demonstration that social learning, transmission, and cultural adaptation are constrained and driven by individual differences in the functional organization of sensory systems.
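
    A minimal structural sketch, purely illustrative and not the thesis's implementation, of how a multi-generational signaling game chain passes a signal-meaning code along: the receiver's learned mapping in one game becomes the sender's code in the next. The learning probability, meanings, and riff labels are assumptions.

        import random

        def play_game(sender_code, meanings, learn_prob=0.8):
            """One signaling game: the receiver acquires each signal-meaning pair with some
            probability, guessing among existing signals otherwise (learn_prob is an assumed value)."""
            receiver_code = {}
            for meaning in meanings:
                if random.random() < learn_prob:
                    receiver_code[meaning] = sender_code[meaning]  # pair acquired faithfully
                else:
                    receiver_code[meaning] = random.choice(list(sender_code.values()))  # transmission error
            return receiver_code

        def diffusion_chain(initial_code, meanings, generations=5):
            """The receiver in game g becomes the sender in game g+1."""
            code = dict(initial_code)
            history = [code]
            for _ in range(generations):
                code = play_game(code, meanings)
                history.append(code)
            return history

        meanings = ["joy", "fear", "sadness"]                                # hypothetical affective meanings
        initial = {"joy": "riff_A", "fear": "riff_B", "sadness": "riff_C"}   # hypothetical melodic riffs
        for gen, code in enumerate(diffusion_chain(initial, meanings)):
            print(gen, code)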

    How musical rhythms entrain the human brain: clarifying the neural mechanisms of sensory-motor entrainment to rhythms

    When listening to music, people across cultures tend to spontaneously perceive and move the body along a periodic pulse-like meter. Increasing evidence suggests that this ability is supported by neural mechanisms that selectively amplify periodicities corresponding to the perceived metric pulses. However, the nature of these neural mechanisms, i.e., the endogenous or exogenous factors that may selectively enhance meter periodicities in brain responses to rhythm, remains largely unknown. This question was investigated in a series of studies in which the electroencephalogram (EEG) of healthy participants was recorded while they listened to musical rhythm. From this EEG, selective contrast at meter periodicities in the elicited neural activity was captured using frequency-tagging, a method allowing direct comparison of this contrast between the sensory input, EEG response, biologically-plausible models of auditory subcortical processing, and behavioral output. The results show that the selective amplification of meter periodicities is shaped by a continuously updated combination of factors including sound spectral content, long-term training and recent context, irrespective of attentional focus and beyond auditory subcortical nonlinear processing. Together, these observations demonstrate that perception of rhythm involves a number of processes that transform the sensory input via fixed low-level nonlinearities, but also through flexible mappings shaped by prior experience at different timescales. These higher-level neural mechanisms could represent a neurobiological basis for the remarkable flexibility and stability of meter perception relative to the acoustic input, which is commonly observed within and across individuals. Fundamentally, the current results add to the evidence that evolution has endowed the human brain with an extraordinary capacity to organize, transform, and interact with rhythmic signals, to achieve adaptive behavior in a complex dynamic environment
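
    A minimal sketch of the frequency-tagging logic described above (preprocessing, exact frequencies, and statistics are not specified here): take the amplitude spectrum of the EEG and compare the amplitude at meter-related frequencies against the amplitude summed over all rhythm-related frequencies. The sampling rate, frequencies, and simulated signal are assumptions.

        import numpy as np

        def meter_contrast(signal, fs, meter_freqs, all_freqs, tol=0.05):
            """Share of spectral amplitude at meter-related frequencies among all rhythm frequencies."""
            spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

            def amp_at(f):
                # Peak amplitude within a small tolerance window around the target frequency
                return spectrum[np.abs(freqs - f) < tol].max()

            meter_amp = sum(amp_at(f) for f in meter_freqs)
            total_amp = sum(amp_at(f) for f in all_freqs)
            return meter_amp / total_amp

        # Hypothetical example: 60 s "EEG" containing a 1.25 Hz (meter) and a 5 Hz (non-meter) component plus noise
        fs = 250.0
        t = np.arange(0, 60, 1 / fs)
        eeg = np.sin(2 * np.pi * 1.25 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * np.random.randn(t.size)
        print(meter_contrast(eeg, fs, meter_freqs=[1.25], all_freqs=[1.25, 2.5, 5.0]))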