Brain activity from stimuli that are not perceived: Visual mismatch negativity during binocular rivalry suppression

Abstract

Predictive coding explains visual perception as the result of an interaction between bottom-up sensory input and top-down generative models at each level of the visual hierarchy. Evidence for this comes from the visual mismatch negativity (vMMN): a more negative ERP for rare, unpredictable visual stimuli (deviants) than for frequent, predictable visual stimuli (standards). Here, we show that the vMMN does not require conscious experience. We measured the vMMN from monocular luminance-decrement deviants that were either perceived or not perceived during binocular rivalry dominance or suppression, respectively. We found that both sorts of deviants elicited the vMMN at about 250 ms after stimulus onset, with perceived deviants eliciting a larger vMMN than not-perceived deviants. These results show that the vMMN occurs in the absence of consciousness, and that consciousness enhances the processing underlying the vMMN. We conclude that generative models of visual perception are tested even when the sensory input for those models is not perceived.

This paper was published in Research Repository.
