
    Brain-computer interfaces for augmentative communication: asynchronous and adaptive algorithms and validation with end users

    This thesis addresses some of the issues that, at the state of the art, prevent P300-based brain-computer interface (BCI) systems from moving out of research laboratories and into end users' homes. An innovative asynchronous classifier has been defined and validated. It relies on a set of thresholds introduced into the classifier; these thresholds are estimated from the distributions of score values associated with target stimuli, non-target stimuli, and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and can automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and show inherent variability, it is important, in order to make long-term use of BCI possible, to track changes in ongoing EEG activity and to adapt the BCI model parameters accordingly. To this aim, the asynchronous classifier was subsequently improved by introducing a self-calibration algorithm for the continuous and unsupervised recalibration of the subject-specific control parameters. Finally, an index for the online monitoring of EEG quality was defined and validated in order to detect potential problems and system failures. The thesis ends with the description of a translational work involving end users (people with amyotrophic lateral sclerosis, ALS). Following a user-centered design approach, the phases relating to the design, development, and validation of an innovative assistive device are described. The proposed assistive technology (AT) has been specifically designed to meet the needs of people with ALS during the different phases of the disease (i.e., the degree of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, headtracker), up to a P300-based BCI.
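A minimal sketch of a threshold-based asynchronous decision rule in the spirit described above is given below, assuming per-stimulus classifier scores (e.g. from an LDA step) are already available; the function name, the two thresholds, and the margin test are illustrative assumptions, not the thesis implementation.

    # Threshold-based asynchronous P300 decision rule (illustrative sketch).
    import numpy as np

    def asynchronous_decision(scores, t_control, t_margin):
        """scores: (n_repetitions, n_items) classifier scores for one selection trial.

        t_control: threshold separating no-control epochs from intentional control,
                   estimated offline from the empirical score distributions.
        t_margin:  required margin of the best item over the runner-up.
        Returns the selected item index, or None to withhold/suspend the selection.
        """
        accumulated = scores.mean(axis=0)           # accumulated evidence per item
        best = int(np.argmax(accumulated))

        # Evidence resembling the no-control distribution: assume the user is not
        # attending the stimulation and suspend the output.
        if accumulated[best] < t_control:
            return None

        # Adaptive speed: commit only when the winner clearly beats the runner-up,
        # otherwise keep collecting stimulation repetitions.
        runner_up = np.partition(accumulated, -2)[-2]
        if accumulated[best] - runner_up < t_margin:
            return None
        return best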

    Use of Task-Relevant Spoken Word Stimuli in an Auditory Brain-Computer Interface

    Auditory brain-computer interfaces (aBCI) may be an effective solution for communication in cases of severely locked-in, late-stage ALS (Lou Gehrig's disease) and upper spinal cord injury patients who are otherwise not candidates for implanted electrodes. Feasibility of auditory BCI has been shown for both healthy participants (Hill et al., 2004) and impaired populations (Sellers and Donchin, 2006). Hill et al. (2014) found similar BCI performance in healthy participants and those with locked-in syndrome in a paradigm comparing words to pure tone stimuli. Additional BCI research has explored variations to augment P300 signals for use in speller paradigms, including more meaningful auditory stimuli (Klobassa et al., 2009; Furdea et al., 2009; Simon et al., 2014). These studies have recognized that end users would much prefer natural sounds over a repeated tone stimulus. All of these systems required an association of sound with target stimuli, typically enforced by a visual support matrix, and would therefore not be usable by the target end users of an auditory BCI. Attempts to remove the need for visual referencing by investigating BCI systems with serial presentation of spoken letter streams (Hoehne and Tangermann, 2014) or spoken words (Ferracuti et al., 2013) as stimuli have met with limited success but point toward potential high-speed communication solutions. The present study highlights a method of using task-relevant spoken word stimuli to eliminate visually presented references. By utilizing spoken word stimuli, a BCI system could draw on a range of stimuli as large as the user's vocabulary and provide faster communication output than spelling systems. As a control, spoken word stimuli that have no task-specific relevance are also tested. Audio-spatial stimulus cues have been shown to significantly improve aBCI performance (Käthner et al., 2013; Schreuder et al., 2011). The present study specifically evaluates the potential improvements to BCI performance from semantic and audio-spatial relevance by eliciting auditory oddball P300 responses to task-relevant directional stimuli (the spoken words 'front', 'back', 'left', 'right'). Participants completed several trials of a motivational game with directionally relevant targets over two experimental sessions. Offline analysis of training data was performed to evaluate the impact of stimulus characteristics on BCI performance. Questionnaire results on workload, motivation, and system usability accurately reflected participants' BCI performance. A behavioral button-press study was used to further investigate the influence of the spatial cues used in the paradigm, and also highlighted differences in the semantic relevance of the stimuli. Behavioral results correlated with BCI performance. The results of this study indicate that task-relevant stimuli are a viable option for eliminating artificial and visual stimulus references. The results also highlight several considerations for future auditory BCI studies, including classifier selection, the importance of hearing thresholds, the value of behavioral correlates of BCI performance, and the use of spatially separated spoken word stimuli.
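As a rough illustration of the offline analysis step mentioned above, the sketch below epochs the EEG around each spoken-word onset and compares target and non-target averages in a P300 window; the sampling rate, window limits, and array shapes are assumptions for illustration only, not the study's actual pipeline.

    # Target vs. non-target P300 comparison on epoched EEG (illustrative sketch).
    import numpy as np

    FS = 256                                          # assumed sampling rate (Hz)
    P300_WIN = (int(0.25 * FS), int(0.50 * FS))       # 250-500 ms after stimulus onset

    def epoch(eeg, onsets, length):
        """eeg: (n_samples, n_channels); onsets: stimulus-onset sample indices."""
        return np.stack([eeg[o:o + length] for o in onsets])

    def p300_amplitude(epochs):
        """Mean amplitude in the P300 window, averaged over epochs and channels."""
        return epochs[:, P300_WIN[0]:P300_WIN[1], :].mean()

    # Usage with hypothetical onset lists for spoken-word stimuli:
    # target_amp = p300_amplitude(epoch(eeg, target_onsets, FS))
    # nontarget_amp = p300_amplitude(epoch(eeg, nontarget_onsets, FS))
    # A larger target amplitude indicates the expected auditory oddball response.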

    Advanced Biometrics with Deep Learning

    Biometrics such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition have become commonplace as a means of identity management in a wide range of applications. Biometric systems typically follow a pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.
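As an illustration of the end-to-end paradigm mentioned above, the sketch below maps a raw one-dimensional biometric signal (a hypothetical ECG window) directly to identity logits with a single small network, with no hand-crafted preprocessing or features; the architecture and sizes are illustrative assumptions, not taken from any paper in the Special Issue.

    # End-to-end biometric recognition from a raw 1-D signal (illustrative sketch).
    import torch
    import torch.nn as nn

    class EndToEndBiometricNet(nn.Module):
        def __init__(self, n_subjects, in_channels=1):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),      # global pooling: length-independent
            )
            self.classifier = nn.Linear(32, n_subjects)

        def forward(self, x):                 # x: (batch, channels, samples)
            return self.classifier(self.features(x).squeeze(-1))

    # model = EndToEndBiometricNet(n_subjects=50)
    # logits = model(torch.randn(8, 1, 1024))   # eight raw signal windows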

    ON THE INTERPLAY BETWEEN BRAIN-COMPUTER INTERFACES AND MACHINE LEARNING ALGORITHMS: A SYSTEMS PERSPECTIVE

    Today, computer algorithms use traditional human-computer interfaces (e.g., keyboard, mouse, gestures, etc.) to interact with and extend human capabilities across all knowledge domains, allowing humans to make complex decisions underpinned by massive datasets and machine learning. Machine learning has seen remarkable success in the past decade in obtaining deep insights and recognizing unknown patterns in complex data sets, in part by emulating how the brain performs certain computations. As we increase our understanding of the human brain, brain-computer interfaces can benefit from the power of machine learning, both as an underlying model of how the brain performs computations and as a tool for processing high-dimensional brain recordings. The technology (machine learning) has come full circle and is being applied back to understanding the brain and the electrical residues of brain activity over the scalp (EEG). Similarly, domains such as natural language processing, machine translation, and scene understanding remain beyond the scope of true machine learning algorithms and require human participation to be solved. In this work, we investigate the interplay between brain-computer interfaces and machine learning through the lens of end-user usability. Specifically, we propose systems and algorithms to enable synergistic and user-friendly integration between computers (machine learning) and the human brain (brain-computer interfaces). In this context, we provide our research contributions in two interrelated aspects: (i) applying machine learning to solve challenges with EEG-based BCIs, and (ii) enabling human-assisted machine learning with EEG-based human input and implicit feedback.

    Interpretable Convolutional Neural Networks for Decoding and Analyzing Neural Time Series Data

    Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) have outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mostly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on more complex, slower-to-train, and less interpretable networks than CNNs, and the use of CNNs with these signals had not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to better highlight and enhance relevant neural features related to P300 and motor states than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
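A minimal sketch of a light, compact EEG decoding CNN in the spirit described above is shown below: a temporal convolution followed by a spatial (across-channel) convolution, whose learned kernels can later be inspected in the temporal/frequency and spatial domains; layer sizes and shapes are illustrative assumptions, not the networks developed in the thesis.

    # Compact temporal + spatial convolution EEG decoder (illustrative sketch).
    import torch
    import torch.nn as nn

    class CompactEEGNet(nn.Module):
        def __init__(self, n_channels=8, n_classes=2, temporal_kernel=33):
            super().__init__()
            # Input: (batch, 1, n_channels, n_samples)
            self.temporal = nn.Conv2d(1, 8, (1, temporal_kernel),
                                      padding=(0, temporal_kernel // 2))
            self.spatial = nn.Conv2d(8, 16, (n_channels, 1))  # collapses the channel axis
            self.pool = nn.AvgPool2d((1, 8))
            self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

        def forward(self, x):
            x = torch.relu(self.temporal(x))   # temporal kernels ~ frequency filters
            x = torch.relu(self.spatial(x))    # spatial kernels ~ scalp patterns
            return self.head(self.pool(x))

    # model = CompactEEGNet()
    # out = model(torch.randn(4, 1, 8, 512))   # four 2-second epochs at 256 Hz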

    Human-Computer Interaction: Security Aspects

    Along with the rapid development of the intelligent information age, users are interacting more and more with smart devices. Such smart devices are interconnected in the Internet of Things (IoT). The sensors of IoT devices collect information about users' behaviors from the interaction between users and devices. Since users interact with IoT smart devices for daily communication and social network activities, such interaction generates a huge amount of network traffic. Hence, users' behaviors play an important role in the security of IoT smart devices, and the security aspects of Human-Computer Interaction (HCI) are becoming significant. In this dissertation, we provide a threefold contribution: (1) we review security challenges of HCI-based authentication and design a tool to detect deceitful users via keystroke dynamics; (2) we present the impact of users' behaviors on network traffic and propose a framework to manage such network traffic; (3) we illustrate a proposal for energy-constrained IoT smart devices to be resilient against energy attacks and efficient in network communication. In more detail, in the first part of this thesis we investigate how users' behaviors affect the way they interact with a device. We then review work related to the security challenges of HCI-based authentication on smartphones and Brain-Computer Interfaces (BCI). Moreover, we design a tool to assess the truthfulness of the information that users input using a computer keyboard. This tool is based on keystroke dynamics and relies on machine learning techniques to achieve this goal. To the best of our knowledge, this is the first work that associates users' typing behaviors with the production of deceptive personal information. We reached an overall accuracy of 76% in the classification of a single answer as truthful or deceptive. In the second part of this thesis, we review the analysis of network traffic, especially traffic related to the interaction between mobile devices and users. Since this interaction generates a huge amount of network traffic, we propose an innovative framework, GolfEngine, to manage and control the impact of users' behavior on the network, relying on Software Defined Networking (SDN) techniques. GolfEngine gives users a tool to build their security applications and offers a Graphical User Interface (GUI) for managing and monitoring the network. In particular, GolfEngine checks for policy conflicts when users design security applications and checks for data storage redundancy. GolfEngine not only prevents malicious policies from being submitted but also enforces security in the management of network traffic. The results of our simulation underline that GolfEngine provides efficient, secure, and robust performance for managing network traffic via SDN. In the third and last part of this dissertation, we analyze the security aspects of battery-equipped IoT devices from the energy consumption perspective. Although most of the energy consumption of IoT devices is due to user interaction, a significant amount of energy is still consumed by point-to-point communication and IoT network management. In this scenario, an adversary may hijack an IoT device and conduct a Denial of Service (DoS) attack that aims to drain the batteries of other devices. Therefore, we propose EnergIoT, a novel method based on energy policies that prevents such attacks and, at the same time, optimizes the communication between users and IoT devices and extends the lifetime of the network. EnergIoT relies on a hierarchical clustering approach, based on different duty cycle ratios, to maximize the network lifetime of energy-constrained smart devices. The results show that EnergIoT enhances security and improves network lifetime by 32%, compared to the previously used approach, without sacrificing network performance (i.e., end-to-end delay).
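A minimal sketch of keystroke-dynamics features of the kind used by the deception-detection tool described above: per-answer timing statistics (dwell and flight times) fed to a standard classifier. The feature set and the classifier choice are illustrative assumptions, not the dissertation's implementation.

    # Keystroke-dynamics features for truthful vs. deceptive answers (illustrative sketch).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def keystroke_features(key_down, key_up):
        """key_down/key_up: numpy arrays of press/release timestamps (s) for one typed answer."""
        dwell = key_up - key_down                 # how long each key is held
        flight = key_down[1:] - key_up[:-1]       # gap between releasing one key and pressing the next
        return np.array([dwell.mean(), dwell.std(),
                         flight.mean(), flight.std(),
                         len(key_down) / (key_up[-1] - key_down[0])])  # typing rate

    # Usage with a hypothetical list of (key_down, key_up) pairs and labels:
    # X = np.vstack([keystroke_features(d, u) for d, u in answers])
    # y = labels                                  # 1 = deceptive, 0 = truthful
    # clf = RandomForestClassifier(n_estimators=200).fit(X, y)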

    Analysis of sensorimotor rhythms based on lower-limbs motor imagery for brain-computer interface

    Over recent years, significant advancements in the field of assistive technologies have been observed. Beyond congenital or diagnosed chronic disorders, one of the paramount needs that has urged researchers to contribute to the field is the rising number of people worldwide affected by accidents, natural calamities (due to climate change), or warfare, resulting in spinal cord injuries (SCI), neural disorders, or limb amputation, which impede a normal life. In addition, more than ten million people in the world live with some form of handicap due to central nervous system (CNS) disorders, which is precarious. Biomedical devices for rehabilitation have been a center of research focus for many years. For people with lost motor control or amputation but intact sensory function, deriving control signals from the source, i.e. electrophysiological signals, is vital for seamless control of assistive biomedical devices. Control signals, i.e. motion intentions, arise in the sensorimotor cortex of the brain and can be detected using invasive or non-invasive modalities. With the non-invasive modality, electroencephalography (EEG) is used to record these motion intentions encoded in the electrical activity of the cortex, which are then deciphered to recognize the user's intent for locomotion. They are further transferred to the actuator, or end effector, of the assistive device for control purposes. This can be executed via brain-computer interface (BCI) technology. BCI is an emerging research field that establishes a real-time bidirectional connection between the human brain and a computer/output device. Amongst its diverse applications, neurorehabilitation to deliver sensory feedback and brain-controlled biomedical devices for rehabilitation are the most popular. While there is substantial literature on upper-limb assistive technologies controlled via BCI, less is known about lower-limb (LL) control of biomedical devices for navigation or gait assistance via BCI. The types of EEG signals compatible with an independent BCI are the oscillatory/sensorimotor rhythms (SMR) and event-related potentials (ERP). These signals have successfully been used in BCIs for navigation control of assistive devices. However, the ERP paradigm requires a voluminous setup for stimulus presentation to the user during operation of a BCI assistive device. In contrast, SMR does not require a large setup to activate cortical activity; it instead depends on motor imagery (MI) produced synchronously or asynchronously by the user. MI is a covert cognitive process, also termed kinaesthetic motor imagery (KMI), that elicits clearly after rigorous training trials, in the form of event-related desynchronization (ERD) or synchronization (ERS), depending on whether the user is imagining movement or resting. It usually comprises limb movement tasks, but is not limited to them in a BCI paradigm. In order to produce detectable features that correlate with the user's intent, the selection of the cognitive task is an important aspect of improving the performance of a BCI. MI used in BCI predominantly remains associated with the upper limbs, particularly the hands, due to the somatotopic organization of the motor cortex. The hand representation area is substantially large, in contrast to the anatomical location of the LL representation areas in the human sensorimotor cortex: the LL area is located within the interhemispheric fissure, i.e. between the mesial walls of the two hemispheres of the cortex. This makes it arduous to detect EEG features prompted by imagination of the LL. Detailed investigation of the ERD/ERS in the mu and beta oscillatory rhythms during left and right LL KMI tasks is therefore required, as the user's intent to walk is of paramount importance for everyday activity. This is an important area of research, alongside the improvement of existing rehabilitation systems that serve LL affectees. Though challenging, solving these issues is also imperative for the development of robust controllers that follow asynchronous BCI paradigms to operate LL assistive devices seamlessly.
This thesis focuses on the investigation of cortical lateralization of ERD/ERS in the SMR, based on foot dorsiflexion KMI and knee extension KMI separately. The research assesses the possibility of deploying these features in a real-time BCI by finding the maximum possible classification accuracy from machine learning (ML) models. The EEG signal is non-stationary, characterized by individual-to-individual and trial-to-trial variability and a low signal-to-noise ratio (SNR), which is challenging. The data are high-dimensional, with a relatively low number of samples available for fitting ML models. These factors have made ML methods the tool of choice for analysing single-trial EEG data. Hence, selecting an appropriate ML model that detects the true class label without overfitting is crucial. The feature extraction part of the thesis consisted of testing the band-power (BP) and common spatial pattern (CSP) methods individually. The study focused on the synchronous BCI paradigm, to ensure the exhibition of SMR and the possibility of a practically viable control system in a BCI. For the left vs. right foot KMI, the objective was to distinguish the bilateral tasks in order to use them as unilateral commands in a 2-class BCI for controlling/navigating a robotic/prosthetic LL for rehabilitation. The approach was similar for left vs. right knee KMI. The research was based on four main experimental studies. In addition to these four studies, the research also includes a comparison of intra-cognitive tasks within the same limb, i.e. left foot vs. left knee and right foot vs. right knee tasks, respectively (Chapter 4). This adds a further novel contribution based on the comparison of different tasks within the same LL. It provides a basis for increasing the dimensionality of control signals within one BCI paradigm, such as a BCI-controlled LL assistive device with multiple degrees of freedom (DOF) for restoration of locomotion function. This study was based on the analysis of the statistically significant mu ERD feature using the BP feature extraction method.
The first stage of this research comprised the left vs. right foot KMI tasks, wherein the ERD/ERS elicited in the mu and beta rhythms were analysed using the BP feature extraction method (Chapter 5). Three individual features, i.e. mu ERD, beta ERD, and beta ERS, were investigated on EEG topography and time-frequency (TF) maps and on the average time course of power percentage, using the common average reference and bipolar reference methods. A comparison of the two references was drawn to infer the optimal method. This was followed by ML, i.e. classification of the three feature vectors (mu ERD, beta ERD, and beta ERS) using linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbour (KNN) algorithms, separately. Finally, multiple-comparison statistical tests were performed in order to determine the maximum possible classification accuracy amongst all paradigms for the most significant feature. All classifier models were supported by k-fold cross-validation and evaluation of the area under the receiver operating characteristic curve (AUC-ROC) for prediction of the true class label. The highest classification accuracy of 83.4% ± 6.72 was obtained with the KNN model for the beta ERS feature. The next study aimed at enhancing the classification accuracy obtained in the previous study, using the same cognitive tasks as in Chapter 5 but a different feature extraction and classification methodology. In this second study, ERD/ERS in the mu and beta rhythms were extracted using CSP and filter bank common spatial pattern (FBCSP) algorithms, to optimize the individual spatial patterns (Chapter 6). This was followed by the ML process, for which supervised logistic regression (Logreg) and LDA were deployed separately. The maximum classification accuracy was 77.5% ± 4.23 with the FBCSP feature vector and the LDA model, with a maximum kappa coefficient of 0.55, which is in the moderate range of agreement between the two classes. The left vs. right foot discrimination results were nearly the same; however, the BP feature vector performed better than CSP. The third stage was based on the deployment of the novel cognitive task of left vs. right knee extension KMI. Analysis of the ERD/ERS in the mu and beta rhythms was done to verify cortical lateralization via the BP feature vector (Chapter 7). As in Chapter 5, the analysis of ERD/ERS features was done on EEG topography and TF maps, followed by the determination of the average time course and peak latency of feature occurrence. However, for this study only the mu ERD and beta ERS features were taken into consideration, and the EEG recording method only comprised the common average reference. This was due to the results established in the earlier foot study in Chapter 5, where beta ERD features showed lower average amplitude. The LDA and KNN classification algorithms were employed. Unexpectedly, the left vs. right knee KMI yielded the highest accuracy of 81.04% ± 7.5 and an AUC-ROC of 0.84, strong enough to be used in a real-time BCI as two independent control features. This was obtained with the KNN model for the beta ERS feature. The final study of this research followed the same paradigm as Chapter 6, but for the left vs. right knee KMI cognitive task (Chapter 8). Primarily, this study aimed at enhancing the accuracy obtained in Chapter 7, using the CSP and FBCSP methods with the Logreg and LDA models, respectively. The results were in accordance with those of the already established foot KMI study, i.e. the BP feature vector performed better than CSP. The highest classification accuracy of 70.00% ± 2.85, with a kappa score of 0.40, was obtained with Logreg using the FBCSP feature vector. The results support the use of ERD/ERS in the mu and beta bands as independent control features for discrimination of bilateral foot or the novel bilateral knee KMI tasks. The resulting classification accuracies indicate that a 2-class BCI employing unilateral foot or knee KMI is suitable for real-time implementation.
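A minimal sketch of the band-power (BP) feature pipeline described above: mu-band power in a task window relative to a pre-cue baseline (ERD/ERS), followed by LDA with k-fold cross-validation; the filter settings, window boundaries, and sampling rate are illustrative assumptions, not the thesis parameters.

    # Band-power ERD/ERS features with LDA and k-fold cross-validation (illustrative sketch).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    FS = 250                                   # assumed sampling rate (Hz)

    def band_power(trials, low, high, start, stop):
        """trials: (n_trials, n_channels, n_samples). Log band power per channel."""
        b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=-1)
        return np.log(np.mean(filtered[..., start:stop] ** 2, axis=-1))

    def erd_features(trials, band=(8, 13),
                     task=(int(1.0 * FS), int(3.0 * FS)), base=(0, int(1.0 * FS))):
        """ERD/ERS as task-window log power relative to the baseline window."""
        return band_power(trials, *band, *task) - band_power(trials, *band, *base)

    # Usage with hypothetical left/right KMI trial arrays:
    # X = np.vstack([erd_features(left_trials), erd_features(right_trials)])
    # y = np.r_[np.zeros(len(left_trials)), np.ones(len(right_trials))]
    # acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10).mean()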
In conclusion, this thesis demonstrates, from the conducted studies, possible EEG pre-processing, feature extraction, and classification methods for building a real-time BCI. Following this, the critical aspects of latency in information transfer rate, SNR, and the tradeoff between dimensionality and overfitting need to be addressed when designing a real-time BCI controller. It also highlights the need for consensus on the development of standardized cognitive tasks for MI-based BCI. Finally, the application of wireless EEG for portable assistance is essential, as it will help lay the foundations for the development of an independent asynchronous BCI based on SMR.