Unified Framework for Identity and Imagined Action Recognition from EEG patterns
We present a unified deep learning framework for the recognition of user
identity and the recognition of imagined actions, based on
electroencephalography (EEG) signals, for application as a brain-computer
interface. Our solution exploits a novel shifted subsampling preprocessing step
as a form of data augmentation, and a matrix representation to encode the
inherent local spatial relationships of multi-electrode EEG signals. The
resulting image-like data is then fed to a convolutional neural network to
process the local spatial dependencies, and eventually analyzed through a
bidirectional long short-term memory module to focus on temporal relationships.
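As a rough illustration of the two preprocessing ideas above, the shifted-subsampling augmentation and the grid ("image-like") encoding might look like the following NumPy sketch; the 3x3 montage, channel names, and subsampling factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 3x3 montage; the paper's actual electrode layout is not given here.
LAYOUT = [["F3", "Fz", "F4"],
          ["C3", "Cz", "C4"],
          ["P3", "Pz", "P4"]]
CHANNELS = ["F3", "Fz", "F4", "C3", "Cz", "C4", "P3", "Pz", "P4"]

def shifted_subsample(eeg, factor):
    """Data augmentation: split one (channels x samples) recording into
    `factor` lower-rate copies, one per subsampling offset."""
    n = (eeg.shape[1] // factor) * factor   # drop trailing samples
    return [eeg[:, shift:n:factor] for shift in range(factor)]

def to_matrix_frames(eeg):
    """Encode each time sample as a small 2D grid, so that spatially
    neighbouring electrodes become neighbouring 'pixels' for the CNN."""
    idx = np.array([[CHANNELS.index(name) for name in row] for row in LAYOUT])
    return eeg[idx].transpose(2, 0, 1)      # (samples, rows, cols)
```

Each grid frame can then be stacked along time and fed to the CNN, with the bidirectional LSTM consuming the per-frame features in temporal order.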
Our solution is compared against several state-of-the-art methods, showing
comparable or superior performance on different tasks. Specifically, we
achieve accuracy levels above 90% both for action and user classification
tasks. In terms of user identification, we reach 0.39% equal error rate in the
case of known users and gestures, and 6.16% in the more challenging case of
unknown users and gestures. Preliminary experiments are also conducted to direct
future work towards everyday applications relying on a reduced set of EEG
electrodes.
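For readers unfamiliar with the metric, the equal error rate (EER) reported above is the operating point where false-accept and false-reject rates coincide. A minimal sketch, using made-up similarity scores rather than the paper's data:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: rate at the threshold where false-accept rate (impostor
    scores accepted) equals false-reject rate (genuine scores rejected).
    Scores are similarities: higher means 'more likely the same user'."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))        # closest crossing point
    return (far[i] + frr[i]) / 2
```

With perfectly separated score distributions the EER is 0; the 0.39% and 6.16% figures above correspond to small but non-zero overlap between genuine and impostor scores.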
Co-adaptive control strategies in assistive Brain-Machine Interfaces
A large number of people with severe motor disabilities cannot access any of the
available control inputs of current assistive products, which typically rely on residual
motor functions. These patients are therefore unable to fully benefit from existing
assistive technologies, including communication interfaces and assistive robotics. In
this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a
potential non-invasive solution to exploit a non-muscular channel for communication
and control of assistive robotic devices, such as a wheelchair, a telepresence
robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations,
such as a lack of precision, robustness, and comfort, which prevent their practical
adoption in assistive technologies.
The goal of this PhD research is to produce scientific and technical developments
to advance the state of the art of assistive interfaces and service robotics based on
BMI paradigms. Two main research paths to the design of effective control strategies
were considered in this project. The first is the design of hybrid systems, combining
the BMI with gaze control, a long-lasting motor function in many paralyzed patients.
Such an approach increases the degrees of freedom available for control. The second
approach consists of including adaptive techniques in the BMI design, which transforms
robotic tools and devices into active assistants able to co-evolve with the user and
learn new rules of behavior to solve tasks, rather than passively executing external
commands.
Following these strategies, the contributions of this work can be categorized
based on the typology of mental signal exploited for the control. These include:
1) the use of active signals for the development and implementation of hybrid eye-tracking
and BMI control policies, for both communication and control of robotic
systems; 2) the exploitation of passive mental processes to increase the adaptability
of an autonomous controller to the user's intention and psychophysiological state,
in a reinforcement learning framework; 3) the integration of brain active and passive
control signals, to achieve adaptation within the BMI architecture at the level of
feature extraction and classification.
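A minimal sketch of contribution 2, assuming the passive mental process is an error-related potential decoded into a scalar reward; the decoder, state/action encoding, and learning-rate values here are all illustrative, not the thesis's actual design:

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step. In the co-adaptive setting the reward
    is not hand-crafted: the assumption is that it comes from a passive
    brain signal, e.g. -1 when a decoded error-related potential flags
    the robot's last action as contrary to the user's intention."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Toy run: the 'user' vetoes action 1 in state 0 via a decoded error signal.
Q = np.zeros((2, 2))
q_update(Q, state=0, action=1, reward=-1.0, next_state=1)
```

Over repeated interactions the controller thus learns which behaviors the user implicitly approves of, co-evolving with them rather than executing fixed commands.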