
    Collaborative Brain-Computer Interfaces in Rapid Image Presentation and Motion Pictures

    The last few years have seen an increase in brain-computer interface (BCI) research for the able-bodied population. One of these new branches involves collaborative BCIs (cBCIs), in which information from several users is combined to improve the performance of a BCI system. This thesis focuses on cBCIs with the aim of increasing understanding of how they can be used to improve the performance of single-user BCIs based on event-related potentials (ERPs). The objectives are: (1) to study and compare different methods of creating groups using exclusively electroencephalography (EEG) signals, (2) to develop a theoretical model establishing where the highest gains may be expected from creating groups, and (3) to analyse the information that can be extracted by merging signals from multiple users. For this, two scenarios involving real-world stimuli (images presented at high rates, and movies) were studied. The first scenario consisted of a visual search task in which images were presented at high frequencies. Three modes of combining EEG recordings from different users were tested to improve the detection of different ERPs, namely the P300 (associated with the presence of events of interest) and the N2pc (associated with shifts of attention). We showed that the detection and localisation of targets can improve significantly when information from multiple viewers is combined. In the second scenario, feature movies were used to study, through cBCI techniques, variations in ERPs in response to cuts. A distinct, previously unreported ERP appears in relation to such cuts; its amplitude is not modulated by visual effects such as the low-level properties of the frames surrounding the discontinuity, but significant variations that depended on the movie were found. We hypothesise that these techniques can be used to build on the attentional theory of cinematic continuity by providing an extra source of information: the brain.
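
    The score-fusion idea behind such cBCIs can be illustrated with a minimal sketch: train one ERP classifier per viewer and average their decision scores for each stimulus. The simulated data, feature shapes, and LDA pipeline below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_users, n_train, n_test, n_feat = 4, 200, 50, 64

# All users view the same stimulus stream, so labels are shared.
y = rng.integers(0, 2, size=n_train + n_test)

def simulate_user_eeg(y):
    """Noisy per-user epochs; target epochs carry a weak ERP-like offset."""
    X = rng.normal(size=(y.size, n_feat))
    X[y == 1] += 0.3
    return X

# Train one LDA per user, then fuse decision scores by averaging.
fused = np.zeros(n_test)
for _ in range(n_users):
    X = simulate_user_eeg(y)
    clf = LinearDiscriminantAnalysis().fit(X[:n_train], y[:n_train])
    fused += clf.decision_function(X[n_train:])
fused /= n_users

group_pred = (fused > 0).astype(int)
print("group accuracy:", np.mean(group_pred == y[n_train:]))
```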

    Influencing brain waves by evoked potentials as biometric approach: taking stock of the last six years of research

    The scientific advances of recent years have made affordable hardware devices available to anyone, capable of something unthinkable until a few years ago: reading brain waves. Through small wearable devices it is now possible to perform electroencephalography (EEG), albeit with less capability than high-cost professional devices. Such devices enable researchers to run a huge number of experiments that were once impossible in many areas because of the high cost of the necessary hardware. Many studies in the literature explore the use of EEG data as a biometric approach for people identification but, unfortunately, this approach presents problems mainly related to the difficulty of extracting unique and stable patterns from users, despite the adoption of sophisticated techniques. One approach to this problem is based on evoked potentials (EPs), external stimuli applied during the EEG recording; this noninvasive technique has been used for many years in clinical routine, in combination with other diagnostic tests, to evaluate the electrical activity of certain areas of the brain and spinal cord and to diagnose neurological disorders. In view of the growing number of works in the literature that combine the EEG and EP approaches for biometric purposes, this work aims to evaluate the practical feasibility of such approaches as reliable biometric instruments for user identification by surveying the state of the art of the last six years, also providing an overview of the elements and concepts related to this research area.
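
    As a rough illustration of the identification setting these works study, the sketch below treats each recording session's averaged evoked-potential features as a biometric sample and frames identification as multi-class classification. All data and feature choices here are simulated assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, n_sessions, n_feat = 10, 20, 32

# Hypothetical feature vectors: one averaged evoked-potential pattern per
# recording session, with a stable subject-specific component plus noise.
subject_templates = rng.normal(size=(n_subjects, n_feat))
X = np.vstack([t + 0.5 * rng.normal(size=(n_sessions, n_feat))
               for t in subject_templates])
y = np.repeat(np.arange(n_subjects), n_sessions)

# Identification = multi-class classification of the subject label.
clf = SVC(kernel="linear")
print("CV identification accuracy:",
      cross_val_score(clf, X, y, cv=5).mean())
```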

    Towards smarter Brain Computer Interface (BCI): study of electroencephalographic signal processing and classification techniques toward the use of intelligent and adaptive BCI

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 28-07-202

    Neural and visual correlates of perceptual decision making in adult dyslexia

    Humans have to make decisions based on visual information numerous times every day: for example, judging whether it is a friend or simply a nice stranger who is waving at us from the other side of the street, or whether the content of a contract we are about to sign is correct. In particular, perceptual decisions that depend on good reading comprehension may disadvantage people affected by the specific learning disorder dyslexia, which is characterised by impairments in reading and writing. In recent years, neuroscience has begun to uncover the neural basis of these impairments in children and adults. However, it remains unknown what neural differences might underlie impaired processing of the physical properties of written words, such as font type and style. The current thesis sought to characterise the neural and oculomotor temporal correlates of font-modulated reading comprehension, while also probing a more fundamental deficit in non-linguistic sensory perceptual decision making in adult dyslexia, using a combination of electrophysiological and eye-tracking methods.
    The first of our three studies (Chapter 2) investigated the impact of italics, a commonly used font style for highlighting important content, on reading comprehension in a sentence-reading lexical decision task. Overall, the performance of dyslexics was worse than that of non-dyslexics. Cluster-based event-related potential (ERP) analysis revealed that brain responses within the first 300 ms following the target (decision) word differed in amplitude and spatial distribution between dyslexics and non-dyslexics when processing italicised text. The two ERP components we observed within this period showed a dissociation in peak time, spatial profile, and their ability to predict behavioural performance. These findings emphasise the importance of choosing font style carefully to optimise word processing and reading comprehension for dyslexics.
    Based on these differences, our second study (Chapter 3) asked whether a specific dyslexia font can alleviate difficulties with reading comprehension in adult dyslexia, and what effects such a font has on cognitive and oculomotor mechanisms. Using standardised texts coupled with validated comprehension questions, we demonstrated that reading comprehension across all participants was better on trials presented in the dyslexia font OpenDyslexic than on those presented in the traditional Times New Roman font. These benefits were larger among dyslexics. Conversely, participants' reading speed was unaffected by OpenDyslexic. Our eye-tracking data showed increases in visual search intensity and ease of visual processing on OpenDyslexic trials, in the form of decreases in median fixation duration and fixation-to-saccade ratio, as well as a smaller number of falsely programmed forward saccades among dyslexics. These findings provide empirical evidence for the efficacy of OpenDyslexic in longer texts and its ability to improve the visual reading strategy.
    Finally, recent evidence has shown that adults with dyslexia exhibit fundamental deficits spanning multiple sensory systems when performing simple perceptual decision tasks, such as integrating beeps and flashes. These deficits extend beyond the well-established linguistic difficulties. In particular, dyslexics' reading impairments are believed to be a consequence of deficient integration of congruent audio-visual information. However, it remains unclear whether dyslexic adults exhibit similar impairments when integrating audio-visual evidence in a non-linguistic perceptual decision task with noisy real-world objects. To address this question, and informed by our previous work in non-dyslexics, we used a linear multivariate discriminant analysis to investigate the extent to which audio-visual integration affects early sensory evidence encoding ('early') or later decision-related stages ('late') in dyslexia. We found increased decision accuracy and slower response times during audio-visual trials for both groups. However, overall, dyslexics showed worse performance than non-dyslexics. When comparing audio-visual to visual trials, we observed that dyslexics exhibited an increase in the magnitude of an EEG component situated between the early and late processing stages. Conversely, non-dyslexics exhibited increased component amplitudes for a later post-sensory EEG component, consistent with a post-sensory influence of audio-visual integration. Our results suggest that adult dyslexics benefit from congruent audio-visual evidence of noisy perceptual stimuli to a similar extent as non-dyslexics, but rely on a different neural process to achieve these improvements.
    In conclusion, our results provide novel insights into the neural dynamics and the visual and cognitive mechanisms underlying adult dyslexics' perceptual decision making. They further offer empirical evidence and practical suggestions for easily implementable applications that can improve text comprehension for everyone.
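
    The windowed discriminant approach mentioned above can be sketched as follows: a linear discriminator is trained on each successive time window of the EEG epochs, and the time course of its cross-validated accuracy separates early from late discriminative stages. Array sizes, the injected effect, and the windowing scheme are illustrative assumptions, not the thesis's exact analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_chan, n_times = 300, 32, 100   # illustrative sizes
X = rng.normal(size=(n_trials, n_chan, n_times))
y = rng.integers(0, 2, size=n_trials)      # audio-visual vs visual-only
X[y == 1, :, 60:70] += 0.2                 # injected 'late' group difference

# Train one spatial discriminator per time window; the cross-validated
# accuracy over time localises when the conditions become separable.
win = 10
acc_over_time = []
for t0 in range(0, n_times - win + 1, win):
    Xw = X[:, :, t0:t0 + win].mean(axis=2)   # average within the window
    score = cross_val_score(LinearDiscriminantAnalysis(), Xw, y, cv=5).mean()
    acc_over_time.append((t0, score))
print(max(acc_over_time, key=lambda s: s[1]))  # most discriminative window
```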

    Brain-computer interfaces for augmentative communication: asynchronous and adaptive algorithms and validation with end users

    This thesis addresses some of the issues that, at the current state of the art, prevent P300-based brain-computer interface (BCI) systems from moving out of research laboratories and into end users' homes. An innovative asynchronous classifier was defined and validated. It relies on a set of thresholds introduced into the classifier, assessed from the distributions of score values for target stimuli, non-target stimuli, and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and inherently variable, long-term BCI use requires tracking changes in ongoing EEG activity and adapting BCI model parameters accordingly. To this end, the asynchronous classifier was subsequently improved with a self-calibration algorithm for the continuous, unsupervised recalibration of the subject-specific control parameters. Finally, an index for online monitoring of EEG quality was defined and validated to detect potential problems and system failures. The thesis ends with a translational study involving end users (people with amyotrophic lateral sclerosis, ALS). Following a user-centred design approach, it describes the design, development, and validation of an innovative assistive device. The proposed assistive technology (AT) was specifically designed to meet the needs of people with ALS throughout the different phases of the disease (i.e. varying degrees of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, head tracker), up to a P300-based BCI.
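
    A minimal sketch of a thresholded asynchronous decision rule of the kind described above: classifier scores for one stimulation round are accepted only if the best score clears a calibrated threshold and beats the runner-up by a margin; otherwise control is suspended. The specific rule and the threshold values are illustrative assumptions, not the thesis's exact classifier.

```python
import numpy as np

def asynchronous_decision(scores, score_thresh, margin_thresh):
    """Threshold-gated P300 selection (illustrative sketch).

    scores: one classifier score per selectable item for the current
    stimulation round. Returns the winning item index, or None to
    suspend control (interpreted as a voluntary no-control state).
    Both thresholds would be calibrated from the empirical score
    distributions of target, non-target and no-control epochs.
    """
    order = np.argsort(scores)[::-1]
    best, runner_up = scores[order[0]], scores[order[1]]
    if best < score_thresh:               # no item looks like a target
        return None
    if best - runner_up < margin_thresh:  # ambiguous: wait for more evidence
        return None
    return int(order[0])

print(asynchronous_decision(np.array([0.1, 0.2, 1.4]), 1.0, 0.5))  # -> 2
print(asynchronous_decision(np.array([0.1, 0.2, 0.3]), 1.0, 0.5))  # -> None
```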

    Error-related potentials for adaptive decoding and volitional control

    Locked-in syndrome (LIS) is a condition characterized by total or near-total paralysis with preserved cognitive and somatosensory function. For the locked-in, brain-machine interfaces (BMIs) provide a level of restored communication and interaction with the world, though this technology has not reached its fullest potential. Several streams of research explore improving BMI performance, but very little attention has been given to the paradigms implemented and the resulting constraints imposed on the users. Learning new mental tasks, constant use of external stimuli, and high attentional and cognitive processing loads are common demands imposed by BMIs. These paradigm constraints negatively affect BMI performance by locked-in patients. In an effort to develop simpler and more reliable BMIs for those suffering from LIS, this dissertation explores using error-related potentials, the neural correlates of error awareness, as an access pathway for adaptive decoding and direct volitional control. In the first part of this thesis we characterize error-related local field potentials (eLFP) and implement a real-time decoder error detection (DED) system using eLFP while non-human primates controlled a saccade BMI. Our results show specific traits in the eLFP that bridge current knowledge of non-BMI evoked error-related potentials with error potentials evoked during BMI control. Moreover, we successfully perform real-time DED via, to our knowledge, the first real-time LFP-based DED system integrated into an invasive BMI, demonstrating that error-based adaptive decoding can become a standard feature in BMI design. In the second part of this thesis, we focus on employing electroencephalography error-related potentials (ErrP) for direct volitional control. These signals were employed as an indicator of the user's intentions in a closed-loop binary-choice robot reaching task. Although this approach is technically challenging, our results demonstrate that ErrP can be used for direct control via binary selection and that, given appropriate levels of task engagement and agency, single-trial closed-loop ErrP decoding is possible. Taken together, this work contributes to a deeper understanding of error-related potentials evoked during BMI control and opens new avenues of research for employing ErrP as a direct control signal for BMIs. For the locked-in community, these advancements could foster the development of real-time intuitive brain-machine control.
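
    The use of a decoded error-related potential as a veto signal in a binary-choice task can be sketched as below; the simulated epochs, flat feature vectors, and LDA decoder are illustrative assumptions rather than the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Hypothetical single-trial ErrP decoder: epochs time-locked to the
# robot's action, labelled 1 when the action contradicted the user's
# intention. The flat feature vector stands in for real preprocessing.
n_trials, n_feat = 400, 64
X = rng.normal(size=(n_trials, n_feat))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.3                  # error trials carry an ErrP-like shift
errp = LinearDiscriminantAnalysis().fit(X[:300], y[:300])

def binary_choice(proposed, epoch):
    """Use the decoded ErrP as a veto: a detected error flips the choice."""
    error_detected = errp.predict(epoch.reshape(1, -1))[0] == 1
    return 1 - proposed if error_detected else proposed

print(binary_choice(0, X[301]))
```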

    Interpretable Convolutional Neural Networks for Decoding and Analyzing Neural Time Series Data

    Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times, promoting the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
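
    A compact temporal-plus-spatial convolutional architecture of the kind described can be sketched in a few lines of PyTorch; the layer sizes below are illustrative assumptions loosely in the spirit of light SOA EEG CNNs, not the thesis's exact models.

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """A light temporal+spatial ConvNet for EEG epochs (illustrative).

    Input shape: (batch, 1, n_channels, n_times).
    """
    def __init__(self, n_channels=32, n_times=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),          # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),                                   # temporal smoothing
            nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * (n_times // 8), n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

net = CompactEEGNet()
print(net(torch.randn(4, 1, 32, 256)).shape)   # -> torch.Size([4, 2])
```

    Because the first layer learns purely temporal filters and the second purely spatial ones, their kernels can be inspected directly in the frequency and scalp-topography domains, which is one route to the interpretability the abstract emphasises.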

    The role of phonology in visual word recognition: evidence from Chinese

    Posters - Letter/Word Processing V: abstract no. 5024
    The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.
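
    The regression design described above can be sketched with simulated item-level data: the two consistency measures and their interactions with phonological frequency and homophone count enter the model alongside the control covariates. Variable names and data below are illustrative assumptions, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500  # characters, as in the mega-analysis

# Hypothetical item-level predictors; names mirror the abstract,
# values are simulated for illustration only.
df = pd.DataFrame({
    "rt": rng.normal(650, 80, n),          # mean lexical-decision RT (ms)
    "orth_freq": rng.normal(size=n),
    "strokes": rng.normal(size=n),
    "radical_freq": rng.normal(size=n),
    "ff_consistency": rng.normal(size=n),  # feedforward consistency
    "fb_consistency": rng.normal(size=n),  # feedback consistency
    "phon_freq": rng.normal(size=n),
    "n_homophones": rng.normal(size=n),
})

# Main effects of consistency plus their interactions with phonological
# frequency and homophone count, controlling for the three covariates.
model = smf.ols(
    "rt ~ orth_freq + strokes + radical_freq"
    " + ff_consistency * phon_freq + fb_consistency * phon_freq"
    " + ff_consistency * n_homophones + fb_consistency * n_homophones",
    data=df).fit()
print(model.params.round(3))
```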

    Interactive effects of orthography and semantics in Chinese picture naming

    Posters - Language Production/Writing: abstract no. 4035
    Picture-naming performance in English and Dutch is enhanced by presentation of a word that is similar in form to the picture name. However, it is unclear whether facilitation has an orthographic or a phonological locus. We investigated the loci of the facilitation effect in Cantonese Chinese speakers by manipulating, at three SOAs (−100, 0, and +100 msec), semantic, orthographic, and phonological similarity. We identified an effect of orthographic facilitation that was independent of and larger than phonological facilitation across all SOAs. Semantic interference was also found at SOAs of −100 and 0 msec. Critically, an interaction of semantics and orthography was observed at an SOA of +100 msec. This interaction suggests that independent effects of orthographic facilitation on picture naming are located either at the level of semantic processing or at the lemma level, and are not due to the activation of picture-name segments at the level of phonological retrieval.