
    Experience-based Auditory Predictions Modulate Brain Activity to Silence as do Real Sounds.

    Interactions between the acoustic features of stimuli and experience-based internal models of the environment enable listeners to compensate for the disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds, and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. Brain responses to different expected, but omitted, tones could be decoded above chance in both passive and active listening conditions, based on the responses to real sounds recorded during active listening. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit electrophysiological activity that differs from noise, and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after the onset of omissions. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as do real sounds, indicating that auditory filling-in can be accounted for by predictive activity.
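    A minimal sketch of the cross-condition decoding logic described above, assuming a scikit-learn-style pipeline: a classifier is trained on single-trial responses to real tones and then scored on omission trials labelled by the expected tone. All array shapes, labels, and data are hypothetical placeholders, not the authors' pipeline.

```python
# Cross-condition decoding sketch: train on real-sound epochs, test on
# omission epochs. Random data stands in for preprocessed EEG features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: n_trials x (n_channels * n_times).
X_real = rng.standard_normal((200, 64 * 50))      # responses to real tones
y_real = rng.integers(0, 2, 200)                  # tone identity labels
X_omit = rng.standard_normal((100, 64 * 50))      # responses to omitted tones
y_expected = rng.integers(0, 2, 100)              # identity of the expected tone

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_real, y_real)                           # learn from real sounds only

# If internal models shape omission responses as real sounds do, accuracy on
# omission trials should exceed the 0.5 chance level (here it will not,
# because the placeholder data are pure noise).
print(f"cross-condition accuracy: {clf.score(X_omit, y_expected):.2f}")
```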

    Resonant Neural Dynamics of Speech Perception

    What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information across hundreds of milliseconds, even backwards in time, into coherent representations of syllables and words? What sorts of brain mechanisms encode the correct temporal order, despite such backwards effects, during speech perception? How does the brain extract rate-invariant properties of variable-rate speech? This article describes an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about audition, speech and word recognition. This model includes bottom-up filtering, horizontal competitive, and top-down attentional interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave of activation across such a network, and a percept of silence is proposed to be a temporal discontinuity in the rate with which such a resonant wave evolves. Properties of these resonant waves can be traced to the brain mechanisms whereby auditory, speech, and language representations are learned in a stable way through time. Because resonances are proposed to control stable learning, the model is called an Adaptive Resonance Theory, or ART, model.

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-01-1-0624)
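    A toy sketch of the circuit just described, under illustrative parameters of my own choosing rather than the model's fitted values: a working-memory layer with a primacy gradient feeds list-chunk cells, the chunks compete through lateral inhibition, and the winning chunk feeds activity back to its items, crudely mimicking a resonant loop.

```python
# Toy bottom-up / competitive / top-down loop; all weights and rate
# constants are illustrative, not ART model parameters.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_chunks = 5, 3
W_up = np.abs(rng.standard_normal((n_chunks, n_items)))  # item -> chunk weights
W_down = W_up.T                                          # top-down pathways

items = np.array([1.0, 0.8, 0.6, 0.0, 0.0])  # primacy-gradient storage
chunks = np.zeros(n_chunks)
dt = 0.1

for _ in range(500):
    bottom_up = W_up @ items
    inhibition = chunks.sum() - chunks                   # lateral competition
    chunks += dt * (-chunks + bottom_up - 1.5 * inhibition)
    chunks = np.clip(chunks, 0.0, 1.0)                   # shunting-like bounds
    items += dt * (-0.1 * items + 0.2 * (W_down @ chunks))  # resonant feedback
    items = np.clip(items, 0.0, 1.0)

print("chunk activities:", np.round(chunks, 2))  # dominant chunk ~ the grouping
```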

    An Integrative Tinnitus Model Based on Sensory Precision.

    Tinnitus is a common disorder that often complicates hearing loss. Its mechanisms are incompletely understood. Current theories proposing pathophysiology from the ear to the cortex cannot individually, or collectively, explain the range of experimental evidence available. We propose a new framework, based on predictive coding, in which spontaneous activity in the subcortical auditory pathway constitutes a 'tinnitus precursor' which is normally ignored as imprecise evidence against the prevailing percept of 'silence'. Extant models feature as contributory mechanisms, acting to increase either the intensity of the precursor or its precision. If precision (i.e., postsynaptic gain) rises sufficiently, then tinnitus is perceived. Perpetuation arises through focused attention, which further increases the precision of the precursor, and through resetting of the default prediction to expect tinnitus.
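    The precision claim lends itself to a one-line Bayesian sketch: perception as a precision-weighted average of the default prediction ('silence') and the tinnitus precursor, so that raising the precursor's precision (postsynaptic gain) pulls the percept away from silence. The numbers below are illustrative assumptions, not parameters from the paper.

```python
# Precision-weighted combination of two Gaussian sources of evidence.
def perceived(prior_mean, prior_prec, precursor_mean, precursor_prec):
    w = precursor_prec / (precursor_prec + prior_prec)  # weight on precursor
    return (1 - w) * prior_mean + w * precursor_mean

silence, precursor = 0.0, 1.0
print(perceived(silence, 10.0, precursor, 0.5))   # low gain: precursor ignored
print(perceived(silence, 10.0, precursor, 50.0))  # high gain: tinnitus heard
```

    Resetting the default prediction to expect tinnitus, as in the perpetuation stage, would correspond to moving prior_mean toward the precursor.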

    Brain Learning, Attention, and Consciousness

    The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states, and that these resonant states trigger learning of sensory and cognitive representations. The models which summarize these concepts are therefore called Adaptive Resonance Theory, or ART, models. Psychophysical and neurobiological data in support of ART are presented from early vision, visual object recognition, auditory streaming, variable-rate speech perception, somatosensory perception, and cognitive-emotional interactions, among others. It is noted that ART mechanisms seem to be operative at all levels of the visual system, and it is proposed how these mechanisms are realized by known laminar circuits of visual cortex. It is predicted that the same circuit realization of ART mechanisms will be found in the laminar circuits of all sensory and cognitive neocortex. Concepts and data are summarized concerning how some visual percepts may be visibly, or modally, perceived, whereas amodal percepts may be consciously recognized even though they are perceptually invisible. It is also suggested that sensory and cognitive processing in the What processing stream of the brain obey top-down matching and learning laws that are often complementary to those used for spatial and motor processing in the brain's Where processing stream. This enables our sensory and cognitive representations to maintain their stability as we learn more about the world, while allowing spatial and motor representations to forget learned maps and gains that are no longer appropriate as our bodies develop and grow from infanthood to adulthood. Procedural memories are proposed to be unconscious because the inhibitory matching process that supports these spatial and motor processes cannot lead to resonance.

    Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657); National Science Foundation (IRI-97-20333)

    Echoes of the spoken past: how auditory cortex hears context during speech perception.

    What do we hear when someone speaks, and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk, so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC was least active, and sometimes deactivated, when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism in which speech production (SP) regions are neurally re-used to predict auditory objects associated with the available context. On this model, more AC activity for less meaningful sounds occurs because predictions from context are less successful, requiring that further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words, compared to the same words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than from our ears, and that the function of AC is to confirm or deny internal predictions about the identity of sounds.

    Cortico-muscular coherence in sensorimotor synchronisation

    This thesis sets out to investigate the neuro-muscular control mechanisms underlying the ubiquitous phenomenon of sensorimotor synchronisation (SMS). SMS is the coordination of movement to external rhythms, and is commonly observed in everyday life. A large body of research addresses the processes underlying SMS at the levels of behaviour and brain. Comparatively little is known about the coupling between neural and behavioural processes, i.e. neuro-muscular processes. Here, the neuro-muscular processes underlying SMS were investigated in the form of cortico-muscular coherence (CMC), measured from electroencephalography (EEG) and electromyography (EMG) recordings in healthy human participants. These neuro-muscular processes were investigated at three levels of engagement: passive listening and observation of rhythms in the environment, imagined SMS, and executed SMS. This resulted in the testing of three hypotheses: (i) rhythms in the environment, such as music, spontaneously modulate cortico-muscular coupling; (ii) movement intention modulates cortico-muscular coupling; and (iii) cortico-muscular coupling is dynamically modulated during SMS, time-locked to the stimulus rhythm. These three hypotheses were tested through two studies that used EEG and EMG recordings to measure CMC. First, CMC was tested during passive music listening, to assess whether temporal and spectral properties of music stimuli known to induce groove, i.e. the subjective experience of wanting to move, can spontaneously modulate the overall strength of the communication between the brain and the muscles. Second, imagined and executed movement synchronisation was used to investigate the role of movement intention and dynamics in CMC. The two studies indicate that both top-down and somatosensory and/or proprioceptive processes modulate CMC during SMS tasks. Although CMC dynamics might be linked to movement dynamics, no direct correlation between movement performance and CMC was found. Furthermore, purely passive auditory or visual rhythmic stimulation did not affect CMC. Together, these findings indicate that movement intention and active engagement with rhythms in the environment might be critical in modulating CMC. Further investigation of the mechanisms and function of CMC is necessary, as it could have important implications for clinical and elderly populations, as well as athletes, where optimisation of motor control is necessary to compensate for impaired movement or to achieve elite performance.
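    For reference, the CMC measure itself is standard signal processing: magnitude-squared coherence between an EEG channel and (typically rectified) EMG, estimated from Welch spectra. The sketch below uses synthetic signals with a shared 20 Hz drive; the sampling rate and window length are arbitrary choices, not the thesis' analysis settings.

```python
# Magnitude-squared coherence between synthetic EEG and rectified EMG.
import numpy as np
from scipy.signal import coherence

fs = 1000                                   # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

drive = np.sin(2 * np.pi * 20 * t)          # common beta-band (20 Hz) drive
eeg = drive + rng.standard_normal(t.size)
emg = 0.5 * drive + rng.standard_normal(t.size)

f, cxy = coherence(eeg, np.abs(emg), fs=fs, nperseg=2048)
beta = (f >= 13) & (f <= 30)
print(f"peak beta-band CMC: {cxy[beta].max():.2f}")
```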

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can hereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances.

    Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)
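    The habituative gate invoked in the last sentence can be sketched as a single differential equation: transmitter z recovers toward 1 while being depleted in proportion to the signal it gates, so a sustained resonance undermines its own support. The rate constants below are illustrative, not the ARTWORD parameters.

```python
# Habituative transmitter gate: dz/dt = eps * (1 - z) - lam * S * z.
dt, eps, lam = 0.001, 0.5, 5.0
z, S = 1.0, 1.0                   # full transmitter, constant resonant drive
gated = []
for _ in range(5000):
    z += dt * (eps * (1.0 - z) - lam * S * z)   # recovery minus depletion
    gated.append(S * z)                          # signal sustaining resonance

print(f"gated signal: initial {gated[0]:.2f}, after depletion {gated[-1]:.2f}")
```

    As z is depleted, the gated signal collapses toward eps / (eps + lam * S) of its initial value, freeing the network to form the next resonance.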

    Electrophysiological markers of predictive coding in multisensory integration and autism spectrum disorder

    Lay summary: The way we perceive the world around us is based not only on information we receive through our senses, but is also shaped by our past experiences. A recently introduced theory about the processing and integration of sensory information and prior experience, the so-called predictive coding theory, holds that our brain continuously generates an internal predictive model of the world around us, based on information received through our senses and on events we have experienced in the past. Being able to predict what we will see, hear, feel, smell, and taste in particular situations allows us to anticipate sensory stimuli. For this reason, we often respond faster and more accurately to predictable sensory signals. At the neural level, evidence for the presence of an internal predictive model has also been found. After hearing a sound, for example a car horn, our brain automatically generates electrical activity that can be measured with electroencephalography (EEG). When we initiate that same sound ourselves, for example by pressing the horn, we can better predict when the sound will occur and roughly how it will sound. This increase in the predictability of the sound is reflected in a reduction of the EEG signal. When we listen to a series of predictable sounds in which a sound is unexpectedly omitted, the brain also generates a clear electrical signal, a so-called prediction error, that can be measured with EEG. The strength of this signal is thought to reflect the amount of cognitive resources allocated to the unexpected violation of the prediction. The findings described in this thesis show that, in people with autism spectrum disorder (ASD), self-initiation of a sound does not automatically result in a reduction of electrical brain activity. It was also found that a sudden disruption of sensory stimulation can result in increased electrical brain activity in people with ASD. These findings suggest that people with ASD appear to be less able to anticipate sensory stimuli, and may have more difficulty processing unexpected disruptions of sensory stimulation. A reduced ability to anticipate sensory stimuli and to cope with unexpected disruptions of sensory stimulation can not only lead to atypical behavioural responses, including hypo- and hypersensitivity to sensory stimuli (symptoms that are common in ASD), but may also have consequences for social cognitive skills. In social situations, being able to anticipate what another person says or does is of crucial importance. Understanding sarcasm, for example, requires the integration of subtle differences in auditory information (pitch and prosody) and visual information (facial expressions, body language). Correctly interpreting such ambiguous social signals is often difficult for people with ASD. The findings described in this thesis suggest that the cause may lie in disruptions of the ability to anticipate sensory stimuli.
Future research should determine whether people with ASD also have more difficulty anticipating sensory stimuli and processing unexpected disruptions in other sensory domains. Besides increasing scientific knowledge about sensory information processing in ASD, further research into the neural mechanisms of sensory anticipation could potentially lead to an electrophysiological marker for ASD that could be applied as a diagnostic tool. Particularly for people whose behavioural characteristics are not always easy to assess, such a biomarker could potentially serve as an objective measurement instrument in clinical practice

    Test-retest reliability of the magnetic mismatch negativity response to sound duration and omission deviants

    Mismatch negativity (MMN) is a neurophysiological measure of auditory novelty detection that could serve as a translational biomarker of psychiatric disorders, such as schizophrenia. However, the replicability of its magnetoencephalographic (MEG) counterpart (MMNm) has been insufficiently addressed. In the current study, test-retest reliability of the MMNm response to both duration and omission deviants was evaluated over two MEG sessions in 16 healthy adults. MMNm amplitudes and latencies were obtained at both sensor and source level using a cortically-constrained minimum-norm approach. Intraclass correlations (ICC) were derived to assess the stability of MEG responses over time. In addition, signal-to-noise ratios (SNR) and within-subject statistics were obtained in order to determine MMNm detectability in individual participants. ICCs revealed robust values at both sensor and source level for both duration and omission MMNm amplitudes (ICC = 0.81-0.90), in particular in the right hemisphere, while moderate to strong values were obtained for duration and omission MMNm peak latencies (ICC = 0.74-0.88). The duration MMNm was robustly identified in individual participants with high SNR, whereas omission MMNm responses were only observed in half of the participants. Our data indicate that MMNm responses to unexpected duration changes and omitted sounds are highly reproducible, providing support for the use of MEG parameters in basic and clinical research.
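    The reliability statistic reported here is easy to make concrete. A common test-retest form is ICC(3,1) from a two-way ANOVA without replication; the sketch below implements that formula on a random stand-in matrix (rows = subjects, columns = sessions). The choice of ICC variant is an assumption, since the abstract does not specify one.

```python
# ICC(3,1): (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error).
import numpy as np

def icc_3_1(x):
    """x: (n_subjects, k_sessions) matrix of amplitudes or latencies."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(3)
trait = rng.standard_normal((16, 1))                  # stable subject effect
sessions = trait + 0.3 * rng.standard_normal((16, 2))  # two noisy sessions
print(f"ICC(3,1) = {icc_3_1(sessions):.2f}")
```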