
    Neuroadaptive modelling for generating images matching perceptual categories

    Brain-computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant's brain signals as feedback to adapt a boundless generative model and generate new information matching the participant's intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to focus specifically on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user's intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up in which the participant evaluated the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator's perceptual categories.
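
    The closed loop described above can be summarised in a short sketch. The code below is illustrative only and assumes a pretrained GAN generator, one latent vector per presented face, and a relevance classifier trained on the participant's EEG epochs; classify_relevance and generator are hypothetical callables, not the authors' implementation.

        import numpy as np

        def neuroadaptive_update(latents, eeg_epochs, classify_relevance, generator,
                                 n_new=8, noise_scale=0.5):
            """One iteration of a neuroadaptive generative loop (sketch, not the published code).

            latents            : (n_images, d) latent vectors of the faces just shown
            eeg_epochs         : (n_images, n_channels, n_samples) EEG responses to them
            classify_relevance : hypothetical classifier returning P(relevant) per epoch
            generator          : pretrained GAN generator mapping a latent vector to an image
            """
            # Score each presented face by the brain response it evoked.
            p_relevant = np.array([classify_relevance(epoch) for epoch in eeg_epochs])

            # Update the intention estimate as a relevance-weighted mean in latent space.
            weights = p_relevant / (p_relevant.sum() + 1e-12)
            intention = (weights[:, None] * latents).sum(axis=0)

            # Sample new candidate latents around the estimate and render new faces.
            new_latents = intention + noise_scale * np.random.randn(n_new, latents.shape[1])
            new_images = [generator(z) for z in new_latents]
            return intention, new_latents, new_images

    The relevance-weighted mean is only the simplest way to aggregate feedback in latent space; any estimator of the intention distribution could take its place.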

    Two sides of the same coin: adaptation of BCIs to internal states with user-centered design and electrophysiological features

    The ideal brain–computer interface (BCI) adapts to the user's state to enable optimal BCI performance. Two methods of BCI adaptation are commonly applied: user-centered design (UCD), which responds to individual user needs and requirements, and passive BCIs, which adapt via online analysis of electrophysiological signals. Despite similar goals, these methods are rarely discussed in combination. Hence, we organized a workshop for the 8th International BCI Meeting 2021 to discuss the combined application of both methods. Here we expand upon the workshop by discussing UCD in more detail regarding its utility for end-users as well as for non-end-user-based early-stage BCI development. Furthermore, we explore electrophysiology-based online user state adaptation concerning consciousness and pain detection. The integration of the numerous BCI user state adaptation methods into a unified process remains challenging. Yet, further systematic accumulation of specific knowledge about the assessment and integration of internal user states bears great potential for BCI optimization.
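
    As an illustration of the electrophysiology-based adaptation mentioned above, a passive BCI can run a simple online loop: estimate the user state from each incoming feature vector, smooth the estimates, and adjust the interface accordingly. The sketch below is a generic pattern, not a method proposed by the authors; detect_state and adapt_interface are hypothetical callables.

        from collections import deque
        import numpy as np

        def online_state_adaptation(feature_stream, detect_state, adapt_interface, window=10):
            """Generic passive-BCI adaptation loop (illustrative sketch).

            feature_stream  : iterable yielding electrophysiological feature vectors
            detect_state    : hypothetical model mapping features to a user-state estimate
                              (e.g. workload, pain, or level of consciousness)
            adapt_interface : hypothetical callback that adjusts the BCI or application
            """
            history = deque(maxlen=window)
            for features in feature_stream:
                history.append(detect_state(features))
                # Smooth over recent estimates so single noisy epochs
                # do not trigger abrupt interface changes.
                adapt_interface(float(np.mean(history)))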

    EEG-based classification of visual and auditory monitoring tasks

    Using EEG signals for mental workload detection has received particular attention in passive BCI research aimed at increasing safety and performance in high-risk and safety-critical occupations, such as piloting and air traffic control. Along with detecting the level of mental workload, it has been suggested that being able to automatically detect the type of mental workload (e.g., auditory, visual, motor, cognitive) would also be useful. In this work, a novel experimental protocol was developed in which subjects performed a task involving one of two different types of mental workload (specifically, auditory and visual), each under two different levels of task demand (easy and difficult). The tasks were designed to be nearly identical in terms of visual and auditory stimuli, and differed only in the type of stimuli the subject was monitoring/attending to. EEG power spectral features were extracted and used to train linear and non-linear classifiers. Preliminary results on six subjects suggested that the auditory and visual tasks could be distinguished from one another, and individually from a baseline condition (which also contained nearly identical stimuli that the subject did not need to attend to at all), with accuracy significantly exceeding chance. This was true both when classification was done within a workload level and when data from the two workload levels were combined. Preliminary results also showed that easy and difficult trials could be distinguished from one another, within each sensory domain (auditory and visual) as well as with both domains combined. Though further investigation is required, these preliminary results are promising and suggest the feasibility of a passive BCI for detecting both the type and the level of mental workload.
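
    A minimal sketch of the analysis pipeline described above, assuming epoched EEG data and binary task labels: log band-power features are computed with Welch's method and passed to a linear (LDA) and a non-linear (RBF-SVM) classifier. The band definitions, sampling rate, and variable names are assumptions for illustration, not the study's exact configuration.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

        def band_power_features(epochs, sfreq):
            """epochs: (n_epochs, n_channels, n_samples) EEG -> (n_epochs, n_features)."""
            freqs, psd = welch(epochs, fs=sfreq, nperseg=int(sfreq), axis=-1)
            feats = []
            for lo, hi in BANDS.values():
                mask = (freqs >= lo) & (freqs < hi)
                # Log mean power per band and channel.
                feats.append(np.log(psd[..., mask].mean(axis=-1)))
            return np.concatenate(feats, axis=-1)

        # Hypothetical usage; y codes auditory vs. visual monitoring (or task vs. baseline).
        # X = band_power_features(epochs, sfreq=250)
        # for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
        #     print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())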

    Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018

    The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality that employs mobile brain imaging methods such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams, to investigate brain activity while participants actively move in and interact with their environment. MoBI makes it possible to investigate the brain dynamics accompanying more natural cognitive and affective processes, as participants can interact with the environment without restrictions on physical movement. By overcoming the movement restrictions of established imaging modalities such as functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and into the impact of behavior on brain dynamics and vice versa; it can also be used for the development of more robust human-machine interfaces as well as for state assessment in mobile humans.
    DFG, GR2627/10-1, 3rd International MoBI Conference 2018
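
    The synchronization of EEG with motion capture mentioned above often reduces to resampling the streams onto a common time base. Below is a minimal sketch, assuming both streams carry timestamps from a shared clock (e.g. as provided by Lab Streaming Layer); the array shapes and names are illustrative assumptions.

        import numpy as np

        def align_streams(eeg, eeg_t, mocap, mocap_t):
            """Resample motion-capture data onto the EEG timestamps (sketch).

            eeg     : (n_eeg_samples, n_channels)   EEG samples
            eeg_t   : (n_eeg_samples,)              shared-clock timestamps in seconds
            mocap   : (n_mocap_samples, n_markers)  motion-capture samples
            mocap_t : (n_mocap_samples,)            shared-clock timestamps in seconds
            """
            # Linearly interpolate each motion-capture channel at the EEG time points.
            mocap_on_eeg = np.column_stack(
                [np.interp(eeg_t, mocap_t, mocap[:, k]) for k in range(mocap.shape[1])]
            )
            # Concatenate so each EEG sample is paired with body-position data.
            return np.hstack([eeg, mocap_on_eeg])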