
    The influence of external and internal motor processes on human auditory rhythm perception

    Musical rhythm is composed of organized temporal patterns, and the processes underlying rhythm perception engage both auditory and motor systems. Although behavioral and neuroscientific evidence converges on this audio-motor interaction, relatively little is known about the effect of specific motor processes on auditory rhythm perception. This doctoral thesis investigated the influence of both external and internal motor processes on the way we perceive an auditory rhythm. The first half of the thesis sought to establish whether overt body movement has a facilitatory effect on our ability to perceive auditory rhythmic structure, and whether this effect is modulated by musical training. To this end, musicians and non-musicians performed a pulse-finding task either using natural body movement or through listening only, and produced their identified pulse by finger tapping. The results showed that overt movement benefited rhythm (pulse) perception, especially for non-musicians, confirming the facilitatory role of external motor activity in hearing rhythm, as well as its interaction with musical training. The second half of the thesis tested the idea that indirect, covert motor input, such as that derived from visual stimuli, can influence the perceived structure of an auditory rhythm. Three experiments examined the subjectively perceived tempo of an auditory sequence under different visual motion stimuli, with the auditory and visual streams presented independently of each other. The results revealed that perceived auditory tempo was influenced by the concurrent visual motion conditions, and the effect tracked increases and decreases in visual motion speed. This supported the hypothesis that internal motor information extracted from visuomotor stimulation can be incorporated into the percept of an auditory rhythm.
    Taken together, the present thesis concludes that, rather than merely reacting to the given auditory input, our motor system plays an important role in the perceptual processing of auditory rhythm. This can occur via both external and internal motor activity, and may not only influence how we hear a rhythm but also, under some circumstances, improve our ability to hear it.
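    As a purely illustrative side note on the pulse-finding measure above (this is not the thesis's actual analysis pipeline): a tapped pulse is commonly summarized by its inter-tap intervals. The sketch below, with the hypothetical helper `tapped_tempo`, assumes tap onset times in seconds.

```python
import numpy as np

def tapped_tempo(tap_times):
    """Estimate tempo (BPM) and tapping variability from tap onset times in seconds."""
    iti = np.diff(np.sort(np.asarray(tap_times, dtype=float)))  # inter-tap intervals
    mean_iti = iti.mean()
    cv = iti.std() / mean_iti  # coefficient of variation: lower = steadier pulse
    return 60.0 / mean_iti, cv
```

    For taps every 0.5 s this yields 120 BPM with a coefficient of variation of 0; less steady tapping raises the CV.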

    The Developmental Trajectory of Contour Integration in Autism Spectrum Disorders

    Sensory input is inherently ambiguous and complex, so perception is believed to be achieved by combining incoming sensory information with prior knowledge. One model envisions the grouping of sensory features (the local dimensions of stimuli) as the outcome of a predictive process that relies on prior experience (the global dimension of stimuli) to disambiguate the possible configurations those elements could take. Contour integration, the linking of aligned but separate visual elements, is one example of perceptual grouping. Kanizsa-type illusory contour (IC) stimuli have been widely used to explore contour integration processing. These stimuli comprise two conditions that differ only in the alignment of their inducing elements: one induces the experience of a shape apparently defined by a contour, and the other does not. This contour has no counterpart in actual visual space – it is the visual system that fills in the gap between inducing elements. A well-tested electrophysiological index associated with this process (the IC-effect) provided us with a metric of the visual system’s contribution to contour integration. Using visually evoked potentials (VEPs), we began by probing the sensitivity of this metric to three manipulations of contour parameters previously shown to affect the subjective experience of illusion strength. Next, we detailed the developmental trajectory of contour integration processes over childhood and adolescence. Finally, because persons with autism spectrum disorders (ASDs) have demonstrated an altered balance of global and local processing, we hypothesized that contour integration may be atypical in this group. We compared typical development to development in persons with ASDs to reveal possible mechanisms underlying this processing difference. Our manipulations resulted in no differences in the strength of the IC-effect in adults or children in either group.
However, the timing of the IC-effect was delayed in two instances: 1) peak latency was delayed by increasing the extent of contour to be filled in relative to overall IC size, and 2) onset latency was delayed in participants with ASDs relative to their neurotypical counterparts.

    Phase entrainment and perceptual cycles in audition and vision

    Recent research indicates fundamental differences between the auditory and visual systems: whereas the visual system seems to sample its environment, cycling between "snapshots" at discrete moments in time (creating perceptual cycles), most attempts at discovering discrete perception in the auditory system have failed. Here, we show in two psychophysical experiments that subsampling the very input to the visual and auditory systems is indeed more disruptive for audition; however, perceptual cycles in the auditory system are possible if they operate at a relatively high level of auditory processing.
Moreover, we suggest that the auditory system, owing to the rapidly fluctuating nature of its input, might rely to a particularly strong degree on phase entrainment, the alignment between neural activity and the rhythmic structure of its input: by using the low and high excitability phases of neural oscillations, the auditory system might actively control the timing of its "snapshots", amplifying relevant information while suppressing irrelevant events. Not only do our results suggest that oscillatory phase has important consequences for how simultaneous auditory inputs are perceived; we can also show that phase entrainment to speech sounds entails an active, high-level mechanism. We do so by using specifically constructed speech/noise sounds in which fluctuations in low-level features of speech (amplitude and spectral content) have been removed, but intelligibility and high-level features (including, but not restricted to, phonetic information) have been conserved. We demonstrate, in several experiments, that the auditory system can entrain to these stimuli, as both perception (the detection of a click embedded in the speech/noise stimuli) and neural oscillations (measured with electroencephalography, EEG, and in intracranial recordings in the primary auditory cortex of the monkey) follow the conserved "high-level" rhythm of speech. Taken together, the results presented here suggest that, not only in vision but also in audition, neural oscillations are an important tool for the discretization and processing of the brain's input. However, there seem to be fundamental differences between the two systems: in contrast to the visual system, it is critical for the auditory system to adapt (via phase entrainment) to its environment, and input subsampling is most likely done at a hierarchically high level of stimulus processing.
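    Phase entrainment of the kind described above is often quantified with a phase-locking value: the concentration of instantaneous oscillatory phase at stimulus event times. The sketch below illustrates the general measure (not necessarily the analysis used in this work), computing the analytic signal via an FFT-based discrete Hilbert transform.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (discrete Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)           # spectral weights: keep DC, double positive freqs
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_locking_value(x, event_idx):
    """Concentration of oscillatory phase at event samples: 1 = perfect entrainment."""
    phases = np.angle(analytic_signal(x))[event_idx]
    return np.abs(np.mean(np.exp(1j * phases)))
```

    For a pure 3 Hz oscillation with one event per cycle the value approaches 1; for events scattered at random times it falls toward 0.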

    When Melody and Words Come Together

    For the past decades, the study of the pairing of music and language has been of interest in several branches of the cognitive sciences, including psychology, linguistics, anthropology, musicology, cognitive neuroscience, and education. Undoubtedly, songs are the perfect medium for studying the relationship between the two domains. This article explores some contributions from the neurosciences that could be of interest to the field of music education, focusing on the relationship between melody and words in songs. The influence of these components on song perception and production is an ongoing matter of debate both in the neurosciences and in music education. The background for this discussion is set by first mentioning the evolutionary commonalities between music and language, followed by a discussion of the shared learning mechanisms for music and language. Since pitch and rhythm are important components of songs, comparative research on these elements across music and language is also approached. In the intertwining of the two fields, a special focus is given to Music Learning Theory, a framework proposed by Edwin Gordon, who advocates the use of songs presented both with text and with neutral syllables from infancy. Considering that songs are one of the most used resources in music education, we ask whether scientific advances in the neurosciences can inform musical pedagogy, and new paths of investigation are suggested at the intersection of the two disciplines. (Funding: UIDB/00693/2020, UIDP/00693/2020.)

    Cue-dependent circuits for illusory contours in humans.

    Objects' borders are readily perceived despite absent contrast gradients, e.g. due to poor lighting or occlusion. In humans, a visual evoked potential (VEP) correlate of illusory contour (IC) sensitivity, the "IC effect", has been identified with an onset at ~90 ms and generators within the bilateral lateral occipital cortices (LOC). The IC effect is observed across a wide range of stimulus parameters, though until now it has always involved high-contrast achromatic stimuli. Whether IC perception and its brain mechanisms differ as a function of the type of stimulus cue remains unknown. Resolving this will provide insight into whether the brain has a single solution or multiple solutions for binding spatially fractionated information into a cohesive percept. Here, participants discriminated IC from no-contour (NC) control stimuli that comprised either low-contrast achromatic stimuli or isoluminant chromatic-contrast stimuli (presumably biasing processing to the magnocellular and parvocellular pathways, respectively) on separate blocks of trials. Behavioural analyses revealed that ICs were readily perceived independently of the stimulus cue, i.e. when defined by either chromatic or luminance contrast. VEPs were analysed within an electrical neuroimaging framework and revealed a generally similar timing of IC effects across both stimulus contrasts (i.e. at ~90 ms). Additionally, an overall phase shift of the VEP on the order of ~30 ms was consistently observed in response to chromatic vs. luminance contrast, independently of the presence/absence of ICs. Critically, topographic differences in the IC effect were observed over the ~110-160 ms period; different configurations of intracranial sources contributed to IC sensitivity as a function of stimulus contrast. Distributed source estimations localized these differences to the LOC as well as to V1/V2. The present data expand current models by demonstrating the existence of multiple, cue-dependent circuits in the brain for generating perceptions of illusory contours.

    The Stability of the Speech-to-Song Illusion and Individual Differences

    Music and language are easily distinguishable for the average listener despite sharing many structural acoustic similarities. The Speech-to-Song (STS) illusion can give rise to both musical and linguistic percepts by inducing a perceptual switch after listening to multiple repetitions of a natural spoken utterance. As such, it has been used as a tool to control for low-level acoustic characteristics previously shown to drive lateralized brain responses regardless of domain, helping to disambiguate the contributions of high- versus low-level processes in both music and speech perception. However, little research exists on how large a role individual differences such as musical ability, tonal enculturation, sensitivity to speech prosody, and attention to lyrical content play in the elicitation and long-term stability of the STS illusion, which limits our understanding of how top-down musical and linguistic knowledge modulate perception. In our study, we measured the STS illusion by presenting listeners with excerpts known to elicit it and asking them to rate the degree to which each repetition sounded song-like, across delays of 0 to 56 days. To measure individual differences, we administered the Goldsmiths Musical Sophistication Index (Gold-MSI), a speech prosody test (PEPS-C), and a tonality test (from Corrigall & Trainor, 2015). Our results indicate that the STS illusion increases in strength and is more readily elicited over delays, and they empirically validate anecdotal evidence that the STS illusion is temporally stable. Moreover, STS elicitation and the consistency of STS excerpt ratings across sessions were predicted by many of our individual difference measures. This work holds important implications for understanding music and language processing, as well as memory for auditory stimuli.

    Precursors to language development in typically and atypically developing infants and toddlers: the importance of embracing complexity

    In order to understand how language abilities emerge in typically and atypically developing infants and toddlers, it is important to embrace complexity in development. In this paper, we describe evidence that early language development is an experience-dependent process, shaped by diverse, interconnected, interdependent developmental mechanisms, processes, and abilities (e.g. statistical learning, sampling, functional specialization, visual attention, social interaction, motor ability). We also present evidence from our studies on neurodevelopmental disorders (e.g. Down syndrome, fragile X syndrome, Williams syndrome) that variations in these factors significantly contribute to language delay. Finally, we discuss how embracing complexity, which involves integrating data from different domains and levels of description across developmental time, may lead to a better understanding of language development and, critically, to more effective interventions for cases in which language develops atypically.

    The perceptual timescape: Perceptual history on the sub-second scale

    There is a high-capacity store with a brief time span (~1000 ms) that information enters from perceptual processing, often called iconic memory or sensory memory. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time distance information, which marks every item of information in the perceptual timescape according to how far in the past it occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.

    Spontaneous brain activity underlying auditory hallucinations in the hearing-impaired

    Auditory hallucinations, the perception of a sound without a corresponding source, are common in people with hearing impairment. Two forms can be distinguished: simple (i.e., tinnitus) and complex hallucinations (speech and music). Little is known about the precise mechanisms underlying these types of hallucinations. Here we tested the assumption that spontaneous activity in the auditory pathways, following deafferentation, underlies these hallucinations and is related to their phenomenology. By extracting (fractional) Amplitude of Low Frequency Fluctuation [(f)ALFF] scores from resting-state fMRI of 18 hearing-impaired patients with complex hallucinations (voices or music), 18 hearing-impaired patients with simple hallucinations (tinnitus or murmuring), and 20 controls with normal hearing, we investigated differences in spontaneous brain activity between these groups. Spontaneous activity in the anterior and posterior cingulate cortex of the hearing-impaired groups was significantly higher than in the controls. The group with complex hallucinations showed elevated activity in the bilateral temporal cortex including Wernicke's area, while spontaneous activity in the group with simple hallucinations was mainly located in the cerebellum. These results suggest a decrease in error monitoring in both hearing-impaired groups. That spontaneous activity arose in language-related areas only in complex hallucinations suggests that the location of the spontaneous activity reflects the phenomenology of the hallucination. The link between cerebellar activity and simple hallucinations, such as tinnitus, is new and may have consequences for treatment. (C) 2020 The Author(s). Published by Elsevier Ltd.
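    For reference, (f)ALFF has a standard definition: ALFF is the BOLD signal's spectral amplitude summed over a low-frequency band (conventionally 0.01-0.08 Hz), and fALFF is that amount as a fraction of the amplitude over the whole spectrum. A minimal single-time-series sketch, assuming the conventional band (published implementations differ in detrending and normalization details):

```python
import numpy as np

def falff(ts, fs, band=(0.01, 0.08)):
    """Return (ALFF, fALFF) for one voxel time series sampled at fs Hz.

    ALFF  = summed spectral amplitude within the low-frequency band.
    fALFF = that amount divided by the summed amplitude of the whole spectrum.
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                            # remove the DC offset
    freqs = np.fft.rfftfreq(len(ts), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(ts))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].sum()
    return alff, alff / amp[freqs > 0].sum()
```

    A signal dominated by fluctuations inside the band yields an fALFF near 1; one dominated by faster fluctuations yields a value near 0.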

    Do we enjoy what we sense and perceive?:A dissociation between aesthetic appreciation and basic perception of environmental objects or events

    This integrative review rearticulates the notion of human aesthetics by critically appraising the conventional definitions, offering a new, more comprehensive definition, and identifying the fundamental components associated with it. It intends to advance a holistic understanding of the notion by differentiating aesthetic perception from basic perceptual recognition, and by characterizing these concepts from the perspective of information processing in both visual and nonvisual modalities. To this end, we analyze the dissociative nature of information processing in the brain, introducing a novel local-global integrative model that differentiates aesthetic processing from basic perceptual processing. This model builds on the current state of the art in visual aesthetics as well as newer propositions about nonvisual aesthetics. The model comprises two analytic channels: an aesthetics-only channel and a perception-to-aesthetics channel. The aesthetics-only channel primarily involves restricted local processing for the analysis of quality or richness (e.g., attractiveness, beauty/prettiness, elegance, sublimeness, catchiness, hedonic value), whereas the perception-to-aesthetics channel involves global/extended local processing for basic feature analysis, followed by restricted local processing for quality or richness analysis. We contend that aesthetic processing operates independently of basic perceptual processing, but not independently of cognitive processing. We further conjecture that there might be a common faculty, labeled the aesthetic cognition faculty, in the human brain for all sensory aesthetics, although other parts of the brain can also be activated because of basic sensory processing prior to aesthetic processing, particularly during the operation of the second channel. This generalized model can account not only for simple and pure aesthetic experiences but also for partial and complex aesthetic experiences.