145 research outputs found

    Attentional Enhancement of Auditory Mismatch Responses: a DCM/MEG Study.

    Despite similar behavioral effects, attention and expectation influence evoked responses differently: attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and computational modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. The mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling (DCM), comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While the MMN was explained by a recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its role in modulating sensory precision.
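    The precision-weighting idea in this abstract can be caricatured in a few lines (a minimal sketch under our own assumptions, not the study's DCM): attention is cast as the precision, i.e. gain, applied to a prediction-error unit, so the same physical mismatch drives a larger weighted error, and a larger belief update, when attended.

```python
def precision_weighted_update(prediction, sensory_input, precision, lr=0.1):
    """One belief-update step on a precision-weighted prediction error.

    Illustrative only: `precision` plays the role of attentional gain,
    so high-precision errors drive larger updates of the prediction.
    """
    error = sensory_input - prediction      # raw prediction error
    weighted_error = precision * error      # precision = attentional gain
    return prediction + lr * weighted_error, weighted_error

# Identical surprising input, attended (high precision) vs unattended (low)
pred = 0.0
_, err_attended = precision_weighted_update(pred, 1.0, precision=2.0)
_, err_unattended = precision_weighted_update(pred, 1.0, precision=0.5)
assert abs(err_attended) > abs(err_unattended)  # attention boosts the mismatch signal
```

    In this toy picture the enhanced MMN under attention corresponds to the larger weighted error for the same deviant input.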

    Repetition suppression and its contextual determinants in predictive coding

    This paper presents a review of theoretical and empirical work on repetition suppression in the context of predictive coding. Predictive coding is a neurobiologically plausible scheme explaining how biological systems might perform perceptual inference and learning. From this perspective, repetition suppression is a manifestation of minimising prediction error through adaptive changes in predictions about the content and precision of sensory inputs. Simulations of artificial neural hierarchies provide a principled way of understanding how repetition suppression - at different time scales - can be explained in terms of inference and learning implemented under predictive coding. This formulation of repetition suppression is supported by the results of numerous empirical studies of repetition suppression and its contextual determinants.
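    The core suppression mechanism described above can be sketched as a toy predictive-coding learner (our own illustrative reduction, not the paper's hierarchical simulations): with each repetition the prediction converges on the stimulus, so the prediction error, a stand-in for the evoked response, shrinks.

```python
def repetition_suppression(stimulus, n_reps, lr=0.3):
    """Toy one-unit predictive-coding learner (illustrative sketch).

    Returns the absolute prediction error on each repetition; learning
    moves the prediction toward the stimulus, suppressing future errors.
    """
    prediction = 0.0
    errors = []
    for _ in range(n_reps):
        error = stimulus - prediction
        errors.append(abs(error))
        prediction += lr * error    # adaptive change in the prediction
    return errors

errors = repetition_suppression(stimulus=1.0, n_reps=5)
assert all(later < earlier for earlier, later in zip(errors, errors[1:]))
```

    The monotonically shrinking error is the single-unit analogue of the repetition-suppressed response; different learning rates would correspond to suppression at different time scales.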

    Task relevance modulates the behavioural and neural effects of sensory predictions

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants' brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling.
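    The ideal-observer estimation step can be sketched as a Beta-Bernoulli learner (a minimal assumption about the model class on our part; the study's actual observer model may differ): trial-wise predictions and prediction errors of the kind entered into the neural analysis fall out of a simple Bayesian update.

```python
def ideal_observer(outcomes, a=1.0, b=1.0):
    """Beta-Bernoulli ideal observer (illustrative sketch).

    Tracks the predicted probability that a cue is valid; the trial-wise
    prediction error is the difference between outcome and prediction.
    """
    predictions, prediction_errors = [], []
    for o in outcomes:                  # o = 1 if the cue was valid on this trial
        p = a / (a + b)                 # current prediction (posterior mean)
        predictions.append(p)
        prediction_errors.append(o - p)
        a, b = a + o, b + (1 - o)       # Bayesian update of Beta(a, b)
    return predictions, prediction_errors

preds, pes = ideal_observer([1, 1, 1, 0, 1, 1])
assert preds[0] == 0.5          # flat prior before any evidence
assert preds[-1] > preds[0]     # mostly valid cues raise the prediction
```

    Time-series like `preds` and `pes` are the kind of model-based regressors that can be correlated with band-limited MEG activity.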

    Multimodal acoustic-electric trigeminal nerve stimulation modulates conscious perception

    Multimodal stimulation can reverse pathological neural activity and improve symptoms in neuropsychiatric diseases. Recent research shows that multimodal acoustic-electric trigeminal-nerve stimulation (TNS) (i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) can improve consciousness in patients with disorders of consciousness. However, the reliability and mechanism of this novel approach remain largely unknown. We explored the effects of multimodal acoustic-electric TNS in healthy human participants by assessing conscious perception before and after stimulation using behavioral and neural measures in tactile and auditory target-detection tasks. To explore the mechanisms underlying the putative effects of acoustic-electric stimulation, we fitted a biologically plausible neural network model to the neural data using dynamic causal modeling. We observed that (1) acoustic-electric stimulation improves conscious tactile perception without a concomitant change in auditory perception, (2) this improvement is caused by the interplay of the acoustic and electric stimulation rather than by either unimodal stimulation alone, and (3) the effect of acoustic-electric stimulation on conscious perception correlates with inter-regional connection changes in a recurrent neural processing model. These results provide evidence that acoustic-electric TNS can promote conscious perception. Alterations in inter-regional cortical connections might be the mechanism by which acoustic-electric TNS achieves its consciousness benefits.

    Active inference as a computational framework for consciousness

    Recently, the mechanistic framework of active inference has been put forward as a principled foundation to develop an overarching theory of consciousness which would help address conceptual disparities in the field (Wiese 2018; Hohwy and Seth 2020). For that promise to bear out, we argue that current proposals resting on the active inference scheme need refinement to become a process theory of consciousness. One way of improving a theory in mechanistic terms is to use formalisms such as computational models that implement, attune and validate the conceptual notions put forward. Here, we examine how computational modelling approaches have been used to refine the theoretical proposals linking active inference and consciousness, with a focus on the extent to which, and how successfully, they have been developed to accommodate different facets of consciousness and experimental paradigms, as well as how simulations and empirical data have been used to test and improve these computational models. While current attempts using this approach have shown promising results, we argue they remain preliminary in nature. To refine their predictive and structural validity, testing those models against empirical data, i.e., new and unobserved neural data, is needed. A remaining challenge for active inference to become a theory of consciousness is to generalize the model to accommodate the broad range of consciousness explananda, and in particular to account for the phenomenological aspects of experience. Notwithstanding these gaps, this approach has proven to be a valuable avenue for theory advancement and holds great potential for future research.

    Prediction and memory: A predictive coding account

    The hippocampus is crucial for episodic memory, but it is also involved in online prediction. Evidence suggests that a unitary hippocampal code underlies both episodic memory and predictive processing, yet within a predictive coding framework the hippocampal-neocortical interactions that accompany these two phenomena are distinct and opposing. Namely, during episodic recall, the hippocampus is thought to exert an excitatory influence on the neocortex, to reinstate activity patterns across cortical circuits. This contrasts with empirical and theoretical work on predictive processing, where descending predictions suppress prediction errors to ‘explain away’ ascending inputs via cortical inhibition. In this hypothesis piece, we attempt to dissolve this previously overlooked dialectic. We consider how the hippocampus may facilitate both prediction and memory, respectively, by inhibiting neocortical prediction errors or increasing their gain. We propose that these distinct processing modes depend upon the neuromodulatory gain (or precision) ascribed to prediction error units. Within this framework, memory recall is cast as arising from fictive prediction errors that furnish training signals to optimise generative models of the world, in the absence of sensory data.

    Do auditory mismatch responses differ between acoustic features?

    Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by expected standard stimuli from responses evoked by unexpected deviant stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether a sudden stimulus change occurs in pitch, duration, location or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard, and then suddenly replaced with a deviant stimulus which differs from the standard. Here, deviants differed from preceding standards along one of four features (pitch, duration, vowel or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
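    The difference-wave computation defining the MMN can be written down directly (a sketch with assumed array shapes, not the study's preprocessing pipeline):

```python
import numpy as np

def mismatch_response(deviant_trials, standard_trials):
    """Difference wave: MMN ~ mean deviant ERP minus mean standard ERP.

    Inputs are (n_trials, n_timepoints) arrays of epoched EEG for one
    channel; names and shapes are illustrative assumptions.
    """
    deviant_erp = np.mean(deviant_trials, axis=0)
    standard_erp = np.mean(standard_trials, axis=0)
    return deviant_erp - standard_erp   # negative deflection = mismatch negativity

# Synthetic demo: deviants evoke an extra negativity relative to standards
standards = np.zeros((10, 6))
deviants = np.full((10, 6), -1.0)
mmn = mismatch_response(deviants, standards)
```

    In practice this subtraction is done per channel, which is what allows the multivariate (topographic) decoding analysis to pool information across the whole sensor array.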

    Dissociable neural correlates of multisensory coherence and selective attention

    Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using electroencephalography (EEG) while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disc was manipulated to control the audiovisual coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related potential (ERP) evoked by the transient deviants, independently of AV coherence. Finally, in an exploratory analysis, we identified a spatiotemporal component of the ERP in which temporal coherence enhanced the deviant-evoked responses only in the unattended stream. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.
    Significance Statement: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate AV coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on AV object formation.

    Learning boosts the decoding of sound sequences in rat auditory cortex

    Continuous acoustic streams, such as speech signals, can be chunked into segments containing reoccurring patterns (e.g., words). Noninvasive recordings of neural activity in humans suggest that chunking is underpinned by low-frequency cortical entrainment to the segment presentation rate, and modulated by prior segment experience (e.g., words belonging to a familiar language). Interestingly, previous studies suggest that primates and rodents may also be able to chunk acoustic streams. Here, we test whether neural activity in the rat auditory cortex is modulated by previous segment experience. We recorded subdural responses using electrocorticography (ECoG) from the auditory cortex of 11 anesthetized rats. Prior to recording, four rats were trained to detect familiar triplets of acoustic stimuli (artificial syllables), three were passively exposed to the triplets, while another four rats had no training experience. While low-frequency neural activity peaks were observed at the syllable level, no triplet-rate peaks were observed. Notably, in trained rats (but not in passively exposed and naïve rats), familiar triplets could be decoded more accurately than unfamiliar triplets based on neural activity in the auditory cortex. These results suggest that rats process acoustic sequences, and that their cortical activity is modulated by the training experience even under subsequent anesthesia.
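    The decoding logic behind this result can be illustrated with a toy multivariate classifier on synthetic data (a nearest-centroid sketch; the study's actual decoder, features, and cross-validation scheme are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_decode(train_X, train_y, test_X):
    """Assign each test pattern to the class with the nearest mean pattern.

    A deliberately simple stand-in for the decoding analysis: more
    separable neural responses yield higher decoding accuracy.
    """
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    return np.array([
        min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
        for x in test_X
    ])

# Synthetic "neural" patterns: one class shifted relative to the other,
# mimicking more distinctive responses to familiar triplets
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)), rng.normal(1.5, 1.0, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
acc = np.mean(nearest_centroid_decode(X, y, X) == y)  # training accuracy, for brevity
```

    In the study's logic, higher accuracy for familiar than unfamiliar triplets (in trained rats only) indicates that training sharpened the cortical representation of the learned sequences.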