7 research outputs found

    Test phase: Difference waves (ungrammatical minus grammatical target tones) for the Isochronous group (dashed line) and Strongly Metrical group (solid line).

    Each site represents the mean of the five electrodes included in the ROI (LF, left frontal; LC, left central; LP, left parietal; RF, right frontal; RC, right central; RP, right parietal). For visualization purposes only, waveforms are presented with a 10 Hz low-pass filter. Grey rectangles indicate the time windows of the ascending (light grey) and descending (dark grey) parts of the N2 chosen for the analyses.

    Test phase: Grand-average ERPs for grammatical (solid line) and ungrammatical (dashed line) target tones in the Strongly Metrical (left side) group and Isochronous group (right side).

    Each site represents the mean of the five electrodes included in the ROI (LF, left frontal; LC, left central; LP, left parietal; RF, right frontal; RC, right central; RP, right parietal). Grey rectangles indicate the time windows of the ascending (light grey) and descending (dark grey) parts of the N2 chosen for the analyses.

    Metrical Presentation Boosts Implicit Learning of Artificial Grammar

    The present study investigated whether a temporal hierarchical structure favors implicit learning. An artificial pitch grammar implemented with a set of tones was presented in two different temporal contexts, either with a strongly metrical structure or with an isochronous structure. According to the Dynamic Attending Theory, external temporal regularities can entrain internal oscillators that guide attention over time, allowing for temporal expectations that influence perception of future events. Based on this framework, it was hypothesized that the metrical structure provides a benefit for artificial grammar learning in comparison to an isochronous presentation. Our study combined behavioral and event-related potential measurements. Behavioral results demonstrated similar learning in both participant groups. By contrast, analyses of event-related potentials showed a larger P300 component and an earlier N2 component for the strongly metrical group during the exposure phase and the test phase, respectively. These findings suggest that the temporal expectations in the strongly metrical condition helped listeners to better process the pitch dimension, leading to improved learning of the artificial grammar.

    Exposure phase: Grand-average ERPs for in-tune (solid line) and mistuned (dashed line) target tones in the Strongly Metrical group (left) and Isochronous group (right).

    Each site represents the mean of the five electrodes included in the ROI (LF, left frontal; LC, left central; LP, left parietal; RF, right frontal; RC, right central; RP, right parietal). Grey rectangles indicate the time windows of the N2 (230–330 ms), P3a (350–550 ms) and P3b (550–900 ms) chosen for the analyses.
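    The ROI and time-window definitions in this caption correspond to a standard mean-amplitude ERP analysis. The sketch below illustrates that step under stated assumptions: a NumPy epochs array of shape (n_epochs, n_channels, n_times), a 500 Hz sampling rate, a 100 ms pre-stimulus baseline, and example electrode labels for two ROIs are all hypothetical placeholders, not details taken from the paper.

    ```python
    import numpy as np

    # Hypothetical recording parameters (not taken from the paper).
    SFREQ = 500.0   # sampling rate in Hz
    TMIN = -0.1     # epoch start relative to target-tone onset, in s

    # Hypothetical ROI definitions: five electrodes per region of interest.
    ROIS = {
        "LF": ["F3", "F5", "F7", "FC3", "FC5"],
        "RF": ["F4", "F6", "F8", "FC4", "FC6"],
        # remaining ROIs (LC, RC, LP, RP) would be defined the same way
    }

    # Time windows from the figure caption (in seconds).
    WINDOWS = {"N2": (0.230, 0.330), "P3a": (0.350, 0.550), "P3b": (0.550, 0.900)}

    def mean_amplitude(epochs, ch_names, roi_chs, tmin, tmax):
        """Average voltage over epochs, ROI electrodes, and a time window.

        epochs   : ndarray, shape (n_epochs, n_channels, n_times)
        ch_names : list of channel labels matching the channel axis
        """
        ch_idx = [ch_names.index(ch) for ch in roi_chs]
        start = int(round((tmin - TMIN) * SFREQ))
        stop = int(round((tmax - TMIN) * SFREQ))
        return epochs[:, ch_idx, start:stop].mean()

    # Example (hypothetical arrays): mean N2-window amplitude over the
    # left-frontal ROI for mistuned vs. in-tune target tones.
    # n2_mistuned = mean_amplitude(mistuned_epochs, ch_names, ROIS["LF"], *WINDOWS["N2"])
    # n2_intune   = mean_amplitude(intune_epochs, ch_names, ROIS["LF"], *WINDOWS["N2"])
    ```

    Amplitudes computed this way per participant, ROI, and window would then typically enter a conventional repeated-measures comparison between conditions and groups.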

    Exposure phase: Difference waves (mistuned minus in-tune target tones) for the Isochronous group (dashed line) and Strongly Metrical group (solid line).

    Each site represents the mean of the five electrodes included in the ROI (LF, left frontal; LC, left central; LP, left parietal; RF, right frontal; RC, right central; RP, right parietal). For visualization purposes only, waveforms are presented with a 10 Hz low-pass filter. Grey rectangles indicate the time windows of the N2 (230–330 ms), P3a (350–550 ms) and P3b (550–900 ms) chosen for the analyses.

    Data_Sheet_1_Self-processing in coma, unresponsive wakefulness syndrome and minimally conscious state.docx

    Introduction: Behavioral and cerebral dissociation has now been clearly established in some patients with acquired disorders of consciousness (DoC). These studies have mainly focused on the preservation of high-level cognitive markers in prolonged DoC, but have not specifically investigated lower-level cognitive functions that are key to the emergence of consciousness, such as the ability to take a first-person perspective, notably at the acute stage of coma. We hypothesized that the preservation of self-recognition (i) is independent of the behavioral impairment of consciousness and (ii) can reflect the ability to recover consciousness.
    Methods: Using bedside electroencephalography (EEG) recordings in a large cohort of 129 severely brain-damaged patients, we acquired the brain response to passive listening to the subject's own name (SON) and to unfamiliar other first names (OFN). One hundred and twelve of them (mean age ± SD = 46 ± 18.3 years, sex ratio M/F: 71/41) could be analyzed for the detection of an individual, significant, discriminative P3 event-related brain response to the SON compared with the OFN (the 'SON effect', the primary endpoint, assessed by temporal cluster permutation tests).
    Results: Patients were in coma (n = 38), unresponsive wakefulness syndrome (UWS, n = 30), or a minimally conscious state (MCS, n = 44), according to the revised Coma Recovery Scale (CRS-R). Overall, 33 DoC patients (29%) showed a 'SON effect'. The prevalence of this electrophysiological index was similar across coma (29%), MCS (23%) and UWS (34%) patients (p = 0.61). Patients in MCS at the time of enrolment were more likely to have emerged from MCS (EMCS) at 6 months than coma and UWS patients (p = 0.013 for the between-group comparison). Among the 72 surviving patients with event-related responses recorded within 3 months after brain injury, 75% of the 16 patients with a SON effect had emerged from MCS at 6 months, whereas 59% of the 56 patients without a SON effect reached this favorable behavioral outcome.
    Discussion: About 30% of severely brain-damaged patients with DoC are able to process salient self-referential auditory stimuli, even in the absence of behavioral signs of self-conscious processing. We suggest that this covert brain ability for self-recognition could be an index of consciousness recovery and could thus help predict a good outcome.
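    The per-patient 'SON effect' is assessed with a temporal cluster-based permutation test on SON versus OFN event-related responses. The sketch below shows what such a test can look like using MNE-Python's permutation_cluster_test; the simulated epoch arrays, their sizes, and the significance threshold are hypothetical illustrations, not the authors' actual pipeline or data.

    ```python
    import numpy as np
    from mne.stats import permutation_cluster_test

    # Hypothetical single-channel (or channel-averaged) epochs for one patient:
    # arrays of shape (n_trials, n_times). Random data stand in for real EEG.
    rng = np.random.default_rng(0)
    n_times = 400                                  # e.g. 800 ms at 500 Hz
    son_epochs = rng.normal(size=(60, n_times))    # subject's own name trials
    ofn_epochs = rng.normal(size=(120, n_times))   # other-first-name trials

    # Cluster-based permutation test over time: find clusters of consecutive
    # samples where SON and OFN responses differ, and obtain cluster-level
    # p-values by permuting the condition labels.
    t_obs, clusters, cluster_pvals, _ = permutation_cluster_test(
        [son_epochs, ofn_epochs],
        n_permutations=1000,
        seed=0,
    )

    # A patient would count as showing a 'SON effect' if at least one cluster
    # (e.g. in the P3 latency range) survives the chosen significance level.
    significant = [c for c, p in zip(clusters, cluster_pvals) if p < 0.05]
    ```

    In practice the test would be run on baseline-corrected ERP epochs from the relevant electrodes, and the latency of any significant cluster would be checked against the expected P3 time range before classifying the patient as a responder.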