Predictive coding explains visual perception as the result of an interaction between bottom-up sensory input and top-down generative models at each level of the visual hierarchy. Evidence for this comes from the visual mismatch negativity (vMMN): a more negative ERP for rare, unpredictable visual stimuli (deviants) than for frequent, predictable visual stimuli (standards). Here, we show that the vMMN does not require conscious experience. We measured the vMMN from monocular luminance-decrement deviants that were either perceived or not perceived during binocular rivalry dominance or suppression, respectively. We found that both sorts of deviants elicited the vMMN at about 250 ms after stimulus onset, with perceived deviants eliciting a larger vMMN than not-perceived deviants. These results show that the vMMN occurs in the absence of consciousness, and that consciousness enhances the processing underlying the vMMN. We conclude that generative models of visual perception are tested even when the sensory input for those models is not perceived.