
    EEG-based cognitive control behaviour assessment: an ecological study with professional air traffic controllers

    Several models defining different types of cognitive human behaviour are available. For this work, we have selected the Skill, Rule and Knowledge (SRK) model proposed by Rasmussen in 1983. This model is currently broadly used in safety-critical domains, such as aviation. At present, there is no tool able to assess at which level of cognitive control an operator is dealing with a given task, that is, whether he or she is performing the task as an automated routine (skill level), as a procedure-based activity (rule level), or as a problem-solving process (knowledge level). Several studies have tried to model the SRK behaviours from a Human Factors perspective. Despite such studies, there is no evidence of these behaviours being evaluated from a neurophysiological point of view, for example, by considering brain activity variations across the different SRK levels. Therefore, the proposed study aimed to investigate the use of neurophysiological signals to assess cognitive control behaviours according to the SRK taxonomy. The results of the study, performed on 37 professional Air Traffic Controllers, demonstrated that specific brain features can characterize and discriminate the different SRK levels, thereby enabling an objective assessment of the degree of cognitive control behaviour in realistic settings.
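    Studies of this kind typically characterize cognitive states through spectral features of the EEG, such as power in the theta and alpha bands. As a minimal sketch of that idea (the sampling rate, band limits, and simulated signal below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` within the frequency `band` (lo, hi) in Hz,
    estimated from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Hypothetical illustration: a pure 10 Hz oscillation (alpha range)
# sampled at 256 Hz for 2 seconds.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)

theta = band_power(eeg, fs, (4, 8))    # theta band power
alpha = band_power(eeg, fs, (8, 13))   # alpha band power
```

Feature vectors built from such band powers over several electrodes could then feed a classifier that discriminates the skill, rule, and knowledge levels.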

    Brain-Computer Interfaces for Non-clinical (Home, Sports, Art, Entertainment, Education, Well-being) Applications

    HCI researchers' interest in BCI is increasing because the technology industry is expanding into application areas where efficiency is not the main concern. Domestic or public-space use of information and communication technology raises awareness of the importance of affect, comfort, family, community, or playfulness, rather than efficiency. Therefore, in addition to non-clinical BCI applications that require efficiency and precision, this Research Topic also addresses the use of BCI for various types of domestic, entertainment, educational, sports, and well-being applications. These applications can relate to an individual user as well as to multiple cooperating or competing users. We also see a renewed interest of artists in using such devices to design interactive art installations that respond to the brain activity of an individual user or the collective brain activity of a group of users, for example, an audience. Hence, this Research Topic also addresses how BCI technology influences artistic creation and practice, and the use of BCI technology to manipulate and control sound, video, and virtual and augmented reality (VR/AR).

    A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions

    Unlike assistive technology for verbal communication, the brain–machine or brain–computer interface (BMI/BCI) has not been established as a nonverbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based nonverbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing, and then identified the cortical activities involved. The conditioned neutral faces triggered event-related potentials originating from the posterior temporal lobe that showed statistically significant changes during late face processing (600–700 ms after stimulus onset), rather than in early face-processing activities such as the P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus. This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals.
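    The core measurement here is the mean ERP amplitude in a late post-stimulus window (600–700 ms), compared between conditioned and neutral faces. A minimal sketch of that window analysis, using simulated epochs with an injected late component (the sampling rate, trial counts, and effect size are assumptions made for illustration):

```python
import numpy as np

def window_mean(epochs, fs, t0, t1):
    """Mean amplitude per trial in the window [t0, t1) seconds
    post-stimulus. `epochs` is trials x samples, time zero at onset."""
    a, b = int(t0 * fs), int(t1 * fs)
    return epochs[:, a:b].mean(axis=1)

# Hypothetical data: 20 trials per condition, 1 s epochs at 250 Hz,
# with a simulated late (600-700 ms) positivity for conditioned faces.
rng = np.random.default_rng(0)
fs, n = 250, 250
neutral = rng.normal(0, 1, (20, n))
conditioned = rng.normal(0, 1, (20, n))
conditioned[:, int(0.6 * fs):int(0.7 * fs)] += 2.0  # injected component

late_neutral = window_mean(neutral, fs, 0.6, 0.7)
late_conditioned = window_mean(conditioned, fs, 0.6, 0.7)
```

In a real analysis, the per-trial window means would feed a statistical test or a single-trial detector rather than a simple comparison of averages.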

    Excuse Me, Do I Know You From Somewhere? Unaware Facial Recognition Using Brain-Computer Interfaces

    While a great deal of research has been done on the human brain's reaction to seeing faces and to recognizing them, the unaware recognition of faces is an area where further research can still be contributed. We performed a preliminary experiment in which participants viewed images of individuals' faces while we recorded their EEG signals using a consumer-grade BCI headset. Pre-selection of the images used in each of the three phases of the experiment allowed us to tag each image with the state of recognition we expected it to elicit: No Recognition, a Possible Unaware Recognition, or a Possible Aware Recognition. After filtering, artifact removal, and analysis of the participants' EEG signals, we find that obvious differences between the three classes of recognition, and unaware recognitions in particular, can be easily identified.
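    The abstract mentions filtering and artifact removal as preprocessing steps before analysis. A common first pass, especially with consumer-grade headsets, is to reject epochs whose peak-to-peak amplitude exceeds a threshold (the 100 µV threshold and simulated data below are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def reject_artifacts(epochs, threshold_uv=100.0):
    """Drop epochs whose peak-to-peak amplitude exceeds the threshold,
    a simple first-pass rejection rule for blinks and movement."""
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    keep = ptp <= threshold_uv
    return epochs[keep], keep

# Hypothetical epochs (trials x samples, microvolts):
# one clean trial and one with a blink-like deflection.
clean = np.random.default_rng(1).normal(0, 10, (1, 100))
blink = clean.copy()
blink[0, 50] = 300.0  # large deflection typical of an eye blink
epochs = np.vstack([clean, blink])

kept, mask = reject_artifacts(epochs)
```

The surviving epochs would then be band-pass filtered and averaged or classified per recognition condition.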

    Real-Time Measurement of Face Recognition in Rapid Serial Visual Presentation

    Event-related potentials (ERPs) have been used extensively to study the processes involved in recognition memory. In particular, the early familiarity component of recognition has been linked to the FN400 (a mid-frontal negative deflection between 300 and 500 ms), whereas the recollection component has been linked to a later positive deflection over the parietal cortex (500–800 ms). In this study, we measured the ERPs elicited by faces with varying degrees of familiarity. Participants viewed a continuous sequence of faces with either low (novel faces), medium (celebrity faces), or high (faces of friends and family) familiarity while performing a separate face-identification task. We found that the level of familiarity was significantly correlated with the magnitude of both the early and late recognition components. Additionally, by using a single-trial classification technique applied to the entire evoked response, we were able to distinguish between familiar and unfamiliar faces with a high degree of accuracy. The classification of high versus low familiarity resulted in areas under the curve of up to 0.99 for some participants. Interestingly, our classifier model (a linear discriminant function) was developed using a completely separate object-categorization task on a different population of participants.
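    The abstract names its classifier: a linear discriminant function evaluated by area under the ROC curve. As a minimal sketch of that pairing, here is a Fisher linear discriminant and a pairwise AUC in plain NumPy (the two-feature simulated data stand in for ERP amplitudes and are an assumption, not the study's features):

```python
import numpy as np

def fit_lda(X0, X1):
    """Fisher linear discriminant: weight vector from class means and
    the pooled covariance (with a small ridge for stability)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    cov = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
    return np.linalg.solve(cov + 1e-6 * np.eye(cov.shape[0]), m1 - m0)

def auc(scores0, scores1):
    """Area under the ROC curve via pairwise score comparison."""
    s0, s1 = np.asarray(scores0), np.asarray(scores1)
    return (s1[:, None] > s0[None, :]).mean()

# Hypothetical features: two ERP measures (e.g. an early and a late
# component amplitude) for unfamiliar vs familiar faces.
rng = np.random.default_rng(0)
unfam = rng.normal(0.0, 1.0, (100, 2))
fam = rng.normal(1.5, 1.0, (100, 2))

w = fit_lda(unfam, fam)
score = auc(unfam @ w, fam @ w)
```

Fitting `w` on one task and scoring trials from another, as the study did, is what makes the reported cross-task transfer notable.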

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003 Vision Research 43: 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
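    The reported statistic, F(1,4), is a repeated-measures comparison of two conditions, which for two conditions equals the squared paired t statistic with n−1 error degrees of freedom. A small worked sketch (the five participants' accuracy values below are invented for illustration and are not the study's data):

```python
import math

def paired_f(cond_a, cond_b):
    """Repeated-measures F for two conditions, computed as the squared
    paired t statistic. Returns (F, df1, df2)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return t * t, 1, n - 1

# Hypothetical per-participant accuracies in the standard and
# position-shifted tasks (5 participants, matching df2 = 4).
standard = [0.78, 0.74, 0.81, 0.69, 0.77]
shifted = [0.75, 0.73, 0.78, 0.70, 0.74]
F, df1, df2 = paired_f(standard, shifted)
```

With five participants the error term has 4 degrees of freedom, matching the F(1,4) reported in the abstract.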