
    Behavioral and Neural Indices of Metacognitive Sensitivity in Preverbal Infants

    Humans adapt their behavior not only by observing the consequences of their actions but also by internally monitoring their own performance. This capacity, termed metacognitive sensitivity [1; 2], has traditionally been denied to young children because of their poor capacity to verbally report their own mental states [3; 4; 5]. Yet these observations might reflect children’s limited capacity for explicit self-report rather than a limitation in metacognition per se. Indeed, metacognitive sensitivity has been shown to reflect simple computational mechanisms [1; 6; 7; 8] and can be found in various non-verbal species [7; 8; 9; 10]. It might therefore be that this faculty is present early in development, although discernible through implicit behaviors and neural indices rather than explicit self-reports. Here, relying on such non-verbal indices, we show that 12- and 18-month-old infants internally monitor the accuracy of their own decisions. At the behavioral level, infants showed increased persistence in their initial choice after making a correct as compared to an incorrect response, evidencing an appropriate evaluation of decision confidence. Moreover, infants were able to use decision confidence adaptively to either confirm their initial choice or change their mind. At the neural level, we found that a well-established electrophysiological signature of error monitoring in adults, the error-related negativity, is similarly elicited when infants make an incorrect choice. Hence, although explicit forms of metacognition mature later in childhood, infants already estimate decision confidence, monitor their errors, and use these metacognitive evaluations to regulate subsequent behavior.
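As an illustrative sketch only (not the authors' analysis pipeline), the "simple computational mechanisms" behind metacognitive sensitivity can be captured by a standard non-parametric measure: the area under the type-2 ROC, i.e. the probability that confidence on a correct trial exceeds confidence on an error trial. The trial data below are invented for illustration.

```python
import numpy as np

def type2_auc(accuracy, confidence):
    """Type-2 AUC: P(confidence on a correct trial > confidence on an
    error trial), counting ties as 0.5. A value of 0.5 means confidence
    carries no information about accuracy (no metacognitive sensitivity)."""
    accuracy = np.asarray(accuracy, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    conf_correct = confidence[accuracy]
    conf_error = confidence[~accuracy]
    # Pairwise differences between every correct-trial and error-trial confidence
    diff = conf_correct[:, None] - conf_error[None, :]
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# Hypothetical trials: 1 = correct, 0 = error; confidence rated 1-4
acc  = [1, 1, 1, 0, 0, 1, 0, 1]
conf = [4, 3, 2, 3, 1, 3, 2, 4]
print(round(type2_auc(acc, conf), 3))  # → 0.833, i.e. confidence tracks accuracy
```

The same logic extends to non-verbal proxies for confidence, such as the persistence measure used with infants: any graded behavior can stand in for the confidence rating.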

    Sustained invisibility through crowding and continuous flash suppression: a comparative review

    The study of non-conscious vision benefits from several alternative methods that allow the suppression of an image from awareness. Here, we present and compare two of them that are particularly well suited for creating sustained periods of invisibility, namely visual crowding and continuous flash suppression (CFS). In visual crowding, a peripheral image surrounded by similar flankers becomes impossible to discriminate. In CFS, an image presented to one eye becomes impossible to detect when rapidly changing patterns are presented to the other eye. After discussing the experimental specificities of each method, we give a comparative overview of the main empirical results derived from them, from the mere analysis of low-level features to the extraction of semantic contents. We conclude by proposing practical guidelines and future directions to obtain more quantitative and systematic measures of non-conscious processes under prolonged stimulation.
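The "rapidly changing patterns" used in CFS are typically Mondrian images: dense collages of random rectangles refreshed around 10 Hz. A minimal sketch of generating such masks, assuming a grayscale stimulus in [0, 1] (the sizes and rectangle counts here are arbitrary choices, not calibrated stimulus parameters):

```python
import numpy as np

def mondrian_mask(size=256, n_rects=200, rng=None):
    """One grayscale Mondrian pattern: random rectangles of random
    luminance drawn over a mid-gray background. In CFS, a fresh mask
    like this is flashed to the dominant eye on every refresh."""
    rng = np.random.default_rng(rng)
    img = np.full((size, size), 0.5)              # mid-gray background
    for _ in range(n_rects):
        w, h = rng.integers(size // 16, size // 4, size=2)
        x, y = rng.integers(0, size - 1, size=2)
        img[y:y + h, x:x + w] = rng.random()      # uniform random luminance
        # rectangles clipped at the array edge are fine for a dense mask
    return img

# A 1-second CFS stream at 10 Hz is simply ten independent masks:
stream = [mondrian_mask(rng=seed) for seed in range(10)]
```

Actual experiments would additionally control contrast, color, and dichoptic presentation; this only illustrates the mask construction itself.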

    Inducing Task-Relevant Responses to Speech in the Sleeping Brain

    Falling asleep leads to a loss of sensory awareness and to an inability to interact with the environment [1]. While this was traditionally thought of as a consequence of the brain shutting itself off from external inputs, it is now acknowledged that incoming stimuli can still be processed, at least to some extent, during sleep [2]. For instance, sleeping participants can form novel sensory associations between tones and odors [3] or reactivate existing semantic associations, as evidenced by event-related potentials [4; 5; 6; 7]. Yet the extent to which the brain continues to process external stimuli remains largely unknown. In particular, it remains unclear whether sensory information can be processed in a flexible, task-dependent manner by the sleeping brain, all the way up to the preparation of relevant actions. Here, using semantic categorization and lexical decision tasks, we studied task-relevant responses triggered by spoken stimuli in the sleeping brain. Awake participants classified words as either animals or objects (experiment 1), or as either words or pseudowords (experiment 2), by pressing a button with their right or left hand while transitioning toward sleep. The lateralized readiness potential (LRP), an electrophysiological index of response preparation, revealed that task-specific preparatory responses are preserved during sleep. These findings demonstrate that, despite the absence of awareness and behavioral responsiveness, sleepers can still extract task-relevant information from external stimuli and covertly prepare appropriate motor responses.
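The LRP is conventionally computed with the double-subtraction method (Coles, 1989), which cancels lateralizations unrelated to the response hand. A minimal sketch, assuming trial-by-samples EEG arrays from electrodes C3 and C4 (the synthetic data and array layout are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def lrp(c3, c4, response_hand):
    """Lateralized readiness potential via double subtraction:
        LRP = 1/2 * [ (C3 - C4)_right-hand + (C4 - C3)_left-hand ]
    c3, c4:        (n_trials, n_samples) EEG at left/right motor sites.
    response_hand: per-trial labels, 'left' or 'right'.
    A negative deflection indicates preparation of the correct hand."""
    c3, c4 = np.asarray(c3, float), np.asarray(c4, float)
    hand = np.asarray(response_hand)
    # Contralateral-minus-ipsilateral difference, averaged per hand
    right = (c3[hand == 'right'] - c4[hand == 'right']).mean(axis=0)
    left = (c4[hand == 'left'] - c3[hand == 'left']).mean(axis=0)
    return 0.5 * (right + left)

# Toy data: the electrode contralateral to the response hand goes negative
c3 = [[-1] * 5, [0] * 5, [-1] * 5, [0] * 5]
c4 = [[0] * 5, [-1] * 5, [0] * 5, [-1] * 5]
hands = ['right', 'left', 'right', 'left']
print(lrp(c3, c4, hands))  # → [-1. -1. -1. -1. -1.]
```

Averaging the two hand-specific differences is what makes the measure task-specific: only activity tied to which hand is being prepared survives the subtraction.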

    Learning new vocabulary implicitly during sleep transfers with cross-modal generalization into wakefulness

    Editorially reviewed. New information can be learned during sleep, but the extent to which we can access this knowledge after awakening is far less understood. Using a novel Associative Transfer Learning paradigm, we show that, after hearing during sleep unknown Japanese words together with sounds referring to their meaning, participants could identify the images depicting the meaning of the newly acquired Japanese words after awakening (N = 22). Moreover, we demonstrate that this cross-modal generalization is implicit: participants remain unaware of this knowledge. Using electroencephalography, we further show that frontal slow-wave responses to auditory stimuli during sleep predicted memory performance after awakening. This neural signature of memory formation gradually emerged over the course of the sleep phase, highlighting the dynamics of associative learning during sleep. This study provides novel evidence that the formation of new associative memories can be traced back to the dynamics of slow-wave responses to stimuli during sleep, and that their implicit transfer into wakefulness generalizes across sensory modalities.

    Task-Guided Selection of the Dual Neural Pathways for Reading

    The visual perception of words is known to automatically activate the auditory representation of their spoken forms. We examined the neural mechanism of this phonological activation using transcranial magnetic stimulation (TMS) with a masked priming paradigm. The stimulation site (left superior temporal gyrus [L-STG] or left inferior parietal lobe [L-IPL]), the modality of the targets (visual or auditory), and the task (pronunciation or lexical decision) were manipulated independently. For both within- and cross-modal conditions, repetition priming during pronunciation was eliminated when TMS was applied to the L-IPL but not to the L-STG, whereas priming during lexical decision was eliminated when the L-STG, but not the L-IPL, was stimulated. The observed double dissociation suggests that the conscious task instruction modulates the stimulus-driven activation of the lateral temporal cortex for lexico-phonological activation and of the inferior parietal cortex for spoken word production, thereby engaging a different neural network for generating the appropriate behavioral response.