150 research outputs found

    Micro-timing of backchannels in human-robot interaction

    Get PDF
    Inden B, Malisz Z, Wagner P, Wachsmuth I. Micro-timing of backchannels in human-robot interaction. Presented at the Timing in Human-Robot Interaction: Workshop in Conjunction with the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI2014), Bielefeld, Germany

    Temporal entrainment in overlapping speech

    Get PDF
    Wlodarczak M. Temporal entrainment in overlapping speech. Bielefeld: Bielefeld University; 2014

    Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience

    Get PDF
    Neural activity in the auditory system synchronizes to sound rhythms, and brain–environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound’s amplitude envelope. We hypothesized that – especially for music – the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) tempo-dependence of neural synchronization, and (3) dependence of synchronization on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1–4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of music – as opposed to the amplitude envelope – evoked the strongest neural synchronization. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations in music for driving neural synchronization, and highlight the sensitivity of this synchronization to musical tempo, familiarity, and beat salience
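
The spectral-flux-versus-envelope contrast in this abstract can be illustrated with a minimal sketch. The window length, hop size, and RMS envelope definition below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def spectral_flux(x, n_fft=1024, hop=512):
    """Spectral flux: frame-to-frame sum of positive changes in the
    magnitude spectrum (hypothetical parameter choices)."""
    n_frames = 1 + (len(x) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))      # magnitude spectrogram
    diff = np.diff(mag, axis=0)                    # change between frames
    return np.sum(np.maximum(diff, 0.0), axis=1)   # half-wave rectified sum

def envelope(x, n_fft=1024, hop=512):
    """Amplitude envelope, for comparison: framewise RMS of the waveform."""
    n_frames = 1 + (len(x) - n_fft) // hop
    return np.array([np.sqrt(np.mean(x[i * hop : i * hop + n_fft] ** 2))
                     for i in range(n_frames)])
```

The intuition behind the finding: spectral flux responds to changes in spectral content (e.g., a new note at the same loudness), which the envelope can miss entirely.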

    Neural entrainment to acoustic edges in speech

    Get PDF

    Advances in the neurocognition of music and language

    Get PDF

    Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    Get PDF
    Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. 
These results support the hypothesis that behavioral differences among listeners with normal hearing thresholds can arise from both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands
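
The abstract's subcortical coding-strength metric is described as a comparison of EFR magnitude under full versus partial masking. A minimal sketch of that idea, under assumed inputs (a trials × samples array, a known modulation frequency), not the study's exact pipeline:

```python
import numpy as np

def efr_strength(trials, sr, f0_hz):
    """EFR strength: spectral amplitude of the trial-averaged response at
    the stimulus modulation frequency f0_hz (hypothetical summary measure)."""
    avg = trials.mean(axis=0)                      # average across trials
    spectrum = np.abs(np.fft.rfft(avg)) / len(avg)
    freqs = np.fft.rfftfreq(len(avg), 1 / sr)
    return spectrum[np.argmin(np.abs(freqs - f0_hz))]

def coding_strength(partial_trials, full_trials, sr, f0_hz):
    """How much the EFR degrades under full vs. partial masking:
    a ratio below 1 indicates degradation (illustrative metric)."""
    return efr_strength(full_trials, sr, f0_hz) / efr_strength(partial_trials, sr, f0_hz)
```

The ratio form factors out overall response amplitude, which varies across subjects for reasons (e.g., electrode placement) unrelated to coding fidelity.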

    Computational modelling of neural mechanisms underlying natural speech perception

    Get PDF
    Humans are highly skilled at the analysis of complex auditory scenes. In particular, the human auditory system is characterized by incredible robustness to noise and can nearly effortlessly isolate the voice of a specific talker from even the busiest of mixtures. However, the neural mechanisms underlying these remarkable properties remain poorly understood. This is mainly due to the inherent complexity of speech signals and the multi-stage, intricate processing performed in the human auditory system. Understanding these neural mechanisms underlying speech perception is of interest for clinical practice, brain-computer interfacing and automatic speech processing systems. In this thesis, we developed computational models characterizing neural speech processing across different stages of the human auditory pathways. In particular, we studied the active role of slow cortical oscillations in speech-in-noise comprehension through a spiking neural network model for encoding spoken sentences. The neural dynamics of the model during noisy speech encoding reflected speech comprehension of young, normal-hearing adults. The proposed theoretical model was validated by predicting the effects of non-invasive brain stimulation on speech comprehension in an experimental study involving a cohort of volunteers. Moreover, we developed a modelling framework for detecting the early, high-frequency neural response to uninterrupted speech in non-invasive neural recordings. We applied the method to investigate top-down modulation of this response by the listener's selective attention and by linguistic properties of different words from a spoken narrative. We found that in both cases the detected responses, of predominantly subcortical origin, were significantly modulated, which supports a functional role for feedback between higher and lower stages of the auditory pathways in speech perception.
The proposed computational models shed light on some of the poorly understood neural mechanisms underlying speech perception. The developed methods can be readily employed in future studies involving a range of experimental paradigms beyond those considered in this thesis.

    Slow phase-locked modulations support selective attention to sound

    Get PDF
    To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with rate, while the passive response did not. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli
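
The finding that attentional modulation was "roughly sinusoidal and scaled with rate" suggests quantifying it by fitting a sinusoid at the presentation rate to the attended-minus-passive difference waveform. A least-squares sketch of that idea (an illustrative analysis, not the paper's exact method):

```python
import numpy as np

def rate_amplitude(diff_wave, sr, rate_hz):
    """Least-squares fit of a sinusoid at the stimulus presentation rate
    to a difference waveform (e.g., attended minus passive); returns the
    fitted amplitude at rate_hz. Hypothetical analysis sketch."""
    t = np.arange(len(diff_wave)) / sr
    X = np.column_stack([np.sin(2 * np.pi * rate_hz * t),
                         np.cos(2 * np.pi * rate_hz * t),
                         np.ones_like(t)])         # sine, cosine, DC offset
    coef, *_ = np.linalg.lstsq(X, diff_wave, rcond=None)
    return np.hypot(coef[0], coef[1])              # amplitude at rate_hz
```

Fitting sine and cosine terms jointly makes the estimate phase-invariant, so the same measure applies across conditions with different stimulus timing.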

    Multiple Approaches to Auditory Rhythm: Development of Sustained Musical Beat and the Relation to Language, Development of Rhythmic Categories Via Iterated Production, and a Meta-Analytic Study of Neural Entrainment to Beat

    Full text link
    Rhythm is ubiquitous to human communication, coordination, and the experience of music. In this dissertation, I address three empirical questions through three different methodologies, all of which contribute to the growing body of literature on human auditory rhythm processing. In Chapter 2, I present a registered report detailing the results of independent conceptual replications of Nozaradan, Peretz, Missal, & Mouraux (2011), all using the same vetted protocol. Listeners performed the same tasks as in Nozaradan et al. (2011), with the addition of behavioral measures of perception. In neuroscience, neural correlates of musical beat perception have been identified, yet seminal findings in this area have seen little to no replication, leaving room for error. Meta-analyses will estimate the true population-level effect size and examine potential moderating variables. In Chapter 3, I examine the developmental trajectory of sustained musical beat perception and its relation to phonology in children. While some prior research has suggested that the beat (periodic pulse) in music can be perceived by humans as early as days after birth, more recent work (including that presented here) suggests a more elongated trajectory of attainment for this capacity, through adolescence. In this study, participants aged 4-23 years completed a musical beat discrimination task and a phonology task. Results suggest that subjective musical beat perception improves throughout middle childhood (8-9 years) and does not reach adult-like levels until adolescence (12-14 years). Furthermore, scores on the subjective beat task were positively related to phonology. Finally, in Chapter 4, I investigate whether children assimilate rhythms to culture-specific structures, as previously shown with adults.
Previous studies show that both adults and children can entrain their movements to a perceived musical beat, but children tend to perform much worse than adults, and whether children assimilate their tapping behavior to predictable rhythmic structures remains poorly understood. In this study, children aged 6-11 years completed a rhythm perception task and a rhythm production task. In the perception task, children showed greater sensitivity to rhythmic disruptions of culturally familiar simple-meter songs than of unfamiliar complex-meter songs. Overall, the results of these three studies demonstrate one of the first pre-registered EEG replication studies in the field of auditory neuroscience, that sustained musical beat perception develops gradually throughout childhood, and that children’s tapping behavior demonstrates enculturation to rhythm as early as 6 years of age