
    Testing the domain-general nature of monitoring in the spatial and verbal cognitive domains

    While it is well-established that monitoring the environment for the occurrence of relevant events represents a key executive function, it is still unclear whether such a function is mediated by domain-general or domain-specific mechanisms. We investigated this issue by combining event-related potentials (ERPs) with a behavioral paradigm in which monitoring processes (non-monitoring vs. monitoring) and cognitive domains (spatial vs. verbal) were orthogonally manipulated in the same group of participants. They had to categorize 3-dimensional visually presented words on the basis of either spatial or verbal rules. In monitoring blocks, they additionally had to check whether the word displayed a specific spatial configuration or whether it contained a certain consonant. The behavioral results showed slower responses for both spatial and verbal monitoring trials compared to non-monitoring trials. The ERP results revealed that monitoring did not interact with domain, thus suggesting the involvement of common underlying mechanisms. Specifically, monitoring acted on lower-level perceptual processes (as expressed by an enhanced visual N1 wave and a sustained posterior negativity for monitoring trials) and on higher-level cognitive processes (involving larger positive modulations by monitoring trials over frontal and parietal scalp regions). The source reconstruction analysis of the ERP data confirmed that monitoring was associated with increased activity in visual areas and in right prefrontal and parietal regions (i.e., superior and inferior frontal gyri and posterior parietal cortex), which previous studies have linked to spatial and temporal monitoring. Our findings extend this research by supporting the domain-general nature of monitoring in the spatial and verbal domains.
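
    As a worked illustration of how the orthogonal design supports the domain-generality claim, a 2 x 2 repeated-measures ANOVA on mean reaction times with factors monitoring (non-monitoring vs. monitoring) and domain (spatial vs. verbal) can test whether the monitoring cost interacts with domain. The sketch below is a hypothetical analysis, not the authors' code; the CSV file and column names are assumptions.

```python
# Hypothetical 2x2 repeated-measures ANOVA on reaction times.
# A main effect of monitoring with no monitoring x domain interaction
# would be consistent with a domain-general monitoring cost.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format file: one mean RT per subject x monitoring x domain cell.
rt = pd.read_csv("mean_rts_per_subject.csv")  # columns: subject, monitoring, domain, rt

res = AnovaRM(data=rt, depvar="rt", subject="subject",
              within=["monitoring", "domain"]).fit()
print(res)
```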

    The ZuCo Benchmark on Cross-Subject Reading Task Classification with EEG and Eye-Tracking Data

    We present a new machine learning benchmark for reading task classification with the goal of advancing EEG and eye-tracking research at the intersection between computational language processing and cognitive neuroscience. The benchmark task consists of a cross-subject classification to distinguish between two reading paradigms: normal reading and task-specific reading. The data for the benchmark is based on the Zurich Cognitive Language Processing Corpus (ZuCo 2.0), which provides simultaneous eye-tracking and EEG signals from natural reading. The training dataset is publicly available, and we present a newly recorded hidden test set. We provide multiple solid baseline methods for this task and discuss future improvements. We release our code and provide an easy-to-use interface to evaluate new approaches with an accompanying public leaderboard: www.zuco-benchmark.com.
    Highlights:
    • We present a new machine learning benchmark for reading task classification with the goal of advancing EEG and eye-tracking research.
    • We provide an interface to evaluate new approaches with an accompanying public leaderboard.
    • The benchmark task consists of a cross-subject classification to distinguish between two reading paradigms: normal reading and task-specific reading.
    • The data is based on the Zurich Cognitive Language Processing Corpus of simultaneous eye-tracking and EEG signals from natural reading.
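
    A minimal sketch of how such a cross-subject evaluation can be run: hold out all trials from one subject at a time and test a classifier trained on the remaining subjects. This is not the official baseline code; the feature extraction step and the load_features() helper are assumptions for illustration.

```python
# Illustrative leave-one-subject-out evaluation for normal vs. task-specific
# reading. load_features() is a hypothetical helper returning per-trial
# EEG/eye-tracking features X, labels y, and subject IDs (groups).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y, groups = load_features()  # hypothetical loader

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean held-out-subject accuracy: {scores.mean():.3f}")
```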

    The Berlin Brain-Computer Interface: Progress Beyond Communication and Control

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world.
    Funding: EC/FP7/611570/EU, Symbiotic Mind Computer Interaction for Information Seeking (MindSee); EC/FP7/625991/EU, Hyperscanning 2.0 Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications (HYPERSCANNING 2.0); DFG 103586207, GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systemen.
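
    As a concrete illustration of what "decoding mental states from ongoing brain signals" typically involves, the sketch below shows a generic two-class EEG decoding pipeline (common spatial patterns followed by a linear classifier). It is not the Berlin BCI implementation; the get_epochs() loader and all parameter choices are assumptions.

```python
# Generic sketch of two-class EEG decoding (e.g., motor imagery):
# CSP spatial filtering followed by linear discriminant analysis.
# get_epochs() is a hypothetical loader returning preprocessed mne.Epochs.
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

epochs = get_epochs()             # hypothetical: band-pass filtered, epoched EEG
X = epochs.get_data()             # shape: (n_trials, n_channels, n_times)
y = epochs.events[:, -1]          # class label per trial

clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```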

    Could neurolecturing address the limitations of live and recorded lectures?

    Lectures are a common teaching method in higher education. However, they have many serious limitations, including boredom, poor attendance, short attention spans, low knowledge transmission and the passivity of students. This paper suggests how a combination of electroencephalography (EEG) and eye-tracking technology could address some of these limitations – an approach that I have called neurolecturing. Neurolecturing could measure students’ attention, learning and cognitive load and provide real-time feedback to students and lecturers. It could also play a role in the flipped classroom and in artificial intelligence tutoring.
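
    To make the feedback loop concrete, here is a minimal sketch of the kind of real-time monitoring cycle neurolecturing would require. The engagement index (beta / (alpha + theta)), the thresholds, and the read_band_powers() / read_gaze_on_slides() helpers are illustrative assumptions, not part of the paper.

```python
# Hypothetical real-time loop: combine an EEG engagement index with an
# eye-tracking measure and alert the lecturer when attention seems to drop.
import random
import time

def read_band_powers():
    """Hypothetical EEG driver call: returns (theta, alpha, beta) band power."""
    return random.random(), random.random(), random.random()

def read_gaze_on_slides():
    """Hypothetical eye-tracker call: fraction of recent gaze on the slides."""
    return random.random()

for _ in range(3):                          # a few demo cycles
    theta, alpha, beta = read_band_powers()
    engagement = beta / (alpha + theta)     # a commonly used EEG engagement index
    on_task = read_gaze_on_slides()
    if engagement < 0.5 or on_task < 0.6:   # illustrative thresholds
        print("Attention appears to be dropping; consider a short activity.")
    time.sleep(1)                           # in practice, re-check every ~30 s
```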

    Focus Plus: Detect Learner's Distraction by Web Camera in Distance Teaching

    Distance teaching has become popular in recent years because of the COVID-19 pandemic. However, both students and teachers face several challenges in distance teaching, such as being easily distracted. We propose Focus+, a system designed to detect learners' status from their web camera with the latest AI technology to address such challenges. By doing so, teachers can know students' status, and students can regulate their learning experience. In this research, we discuss the expected design for training and evaluating the AI detection model of Focus+.
    Comment: 5 pages, 4 figures; 2021 National Chair Professorship Academic Series: Teaching and Learning in Pandemic Era
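
    A rough sketch (not the Focus+ model) of one way to flag possible distraction from a web camera: if no frontal face is detected for several consecutive frames, the learner is likely looking away. The cascade choice and thresholds are assumptions for illustration.

```python
# Rough sketch (not the Focus+ model): flag possible distraction when no
# frontal face is detected for several consecutive webcam frames.
# The cascade choice and MISS_LIMIT are illustrative assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                  # default webcam
missed, MISS_LIMIT = 0, 30                 # roughly 1 s without a face at 30 fps

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    missed = 0 if len(faces) else missed + 1
    if missed >= MISS_LIMIT:
        print("Possible distraction: no frontal face detected")
        missed = 0

cap.release()
```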

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
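
    A worked sketch of the spoke-shift manipulation described above: each rectangle is moved ±1 degree of visual angle radially along the imaginary spoke joining it to central fixation. The example coordinates and the random choice of shift direction are illustrative assumptions.

```python
# Worked sketch of the second-presentation manipulation: shift each rectangle
# by +/-1 degree of visual angle along the imaginary spoke from fixation.
import math
import random

def shift_along_spoke(x, y, magnitude=1.0):
    """Move (x, y) radially toward or away from fixation at the origin."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    r_new = r + random.choice([-magnitude, magnitude])
    return r_new * math.cos(theta), r_new * math.sin(theta)

# Eight items arranged around fixation (coordinates in degrees, assumed layout).
positions = [(5.0, 0.0), (3.5, 3.5), (0.0, 5.0), (-3.5, 3.5),
             (-5.0, 0.0), (-3.5, -3.5), (0.0, -5.0), (3.5, -3.5)]
shifted = [shift_along_spoke(x, y) for x, y in positions]
print(shifted)
```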

    Design of Cognitive Interfaces for Personal Informatics Feedback

    • …