
    BNCI Horizon 2020 - Towards a Roadmap for Brain/Neural Computer Interaction

    In this paper, we present BNCI Horizon 2020, an EU Coordination and Support Action (CSA) that will provide a roadmap for brain-computer interaction research for the coming years, starting in 2013 and addressing research efforts until 2020 and beyond. The project is a successor of the earlier EU-funded Future BNCI CSA, which started in 2010 and produced a roadmap for a shorter time period. We present how we, a consortium of the main European BCI research groups as well as companies and end-user representatives, expect to tackle the problem of designing a roadmap for BCI research. In this paper, we define the field and its recent developments, in particular by considering publications and EU-funded research projects, and we discuss how we plan to involve research groups, companies, and user groups in our effort to pave the way for useful and fruitful EU-funded BCI research for the next ten years.

    A user-centred approach to unlock the potential of non-invasive BCIs: an unprecedented international translational effort

    Non-invasive Mental Task-based Brain-Computer Interfaces (MT-BCIs) enable their users to interact with the environment through their brain activity alone (measured using electroencephalography, for example), by performing mental tasks such as mental calculation or motor imagery. Current developments in technology hint at a wide range of possible applications, both in the clinical and non-clinical domains. MT-BCIs can be used to control (neuro)prostheses or interact with video games, among many other applications. They can also be used to restore cognitive and motor abilities for stroke rehabilitation, or even to improve athletic performance.
    Nonetheless, the expected transfer of MT-BCIs from the lab to the marketplace will be greatly impeded if all resources are allocated to technological aspects alone. We cannot neglect the human end-user who sits at the centre of the loop. Indeed, self-regulating one's brain activity through mental tasks in order to interact is an acquired skill that requires appropriate training. Yet several studies have shown that current training procedures do not enable MT-BCI users to reach adequate levels of performance. Therefore, one significant challenge for the community is that of improving end-user training.
    To do so, another fundamental challenge must be taken into account: we need to understand the processes that underlie MT-BCI performance and user learning. It is currently estimated that 10 to 30% of people cannot control an MT-BCI. These people are often referred to as "BCI inefficient". But the concept of "BCI inefficiency" is debated. Does it really exist? Or is low performance due to insufficient training, training procedures that are unsuited to these users, or BCI data processing that is not sensitive enough?
    The currently available literature does not allow for a definitive answer to these questions, as most published studies include a limited number of participants (i.e., 10 to 20) and/or training sessions (i.e., 1 or 2). We still have very little insight into what the MT-BCI learning curve looks like, and into which factors (both user-related and machine-related) influence it. Finding answers will require a large number of experiments, involving many participants taking part in multiple training sessions. It is not feasible for one research lab, or even a small consortium, to undertake such experiments alone. Therefore, an unprecedented coordinated effort from the research community is necessary.
    We are convinced that combining forces will allow us to characterise MT-BCI user learning in detail, a mandatory step toward transferring BCIs "out of the lab". This is why we gathered an international, interdisciplinary consortium of BCI researchers from more than 20 labs across Europe and Japan, including pioneers in the field. This collaboration will enable us to collect considerable amounts of data (at least 100 participants, with 20 training sessions each) and establish a large open database. Based on this precious resource, we can then conduct sound analyses to answer the questions above. Using these data, our consortium can offer solutions for improving MT-BCI training procedures using innovative approaches (e.g., personalisation using intelligent tutoring systems) and technologies (e.g., virtual reality). The CHIST-ERA programme represents a unique opportunity to conduct this ambitious project, which will foster innovation in our field and strengthen our community.

    Feature extraction and classification for Brain-Computer Interfaces


    A Classification Model for Sensing Human Trust in Machines Using EEG and GSR

    Today, intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. A first step towards building intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust levels in real time. In this paper, two approaches for developing classifier-based empirical trust sensor models are presented that specifically use electroencephalography (EEG) and galvanic skin response (GSR) measurements. Human subject data collected from 45 participants are used for feature extraction, feature selection, classifier training, and model validation. The first approach considers a general set of psychophysiological features across all participants as the input variables and trains a classifier-based model for each participant, resulting in a trust sensor model based on the general feature set (i.e., a "general trust sensor model"). The second approach considers a customized feature set for each individual and trains a classifier-based model using that feature set, resulting in improved mean accuracy but at the expense of increased training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor. Implications of the work, in the context of trust management algorithm design for intelligent machines, are also discussed.
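The contrast between the two approaches can be sketched in a few lines. The abstract does not specify the classifier or the feature-selection criterion, so the snippet below uses a nearest-centroid classifier, a Fisher-score ranking, and synthetic data purely as stand-ins; only the general-vs-customized feature-set comparison mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_centroids(X, y):
    """Nearest-centroid model: a simple stand-in for the paper's (unspecified) classifier."""
    return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def fisher_score(X, y):
    """Per-feature class separability: between-class over within-class variance."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    within = np.stack([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    return means.var(axis=0) / (within + 1e-12)

# Synthetic stand-in for one participant: 200 trials x 12 EEG/GSR features,
# binary trust/distrust labels; only the first 3 features are informative.
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 12))
X[:, :3] += y[:, None] * 1.5

# Approach 1: "general trust sensor model" trained on the full feature set.
general = train_centroids(X, y)
acc_general = (predict(general, X) == y).mean()   # resubstitution accuracy, illustration only

# Approach 2: customized model using this participant's top-ranked features.
top = np.argsort(fisher_score(X, y))[::-1][:3]
custom = train_centroids(X[:, top], y)
acc_custom = (predict(custom, X[:, top]) == y).mean()
```

In this toy setting the customized model benefits because the ranking discards the nine uninformative features that dilute the centroid distances, which is the intuition behind the paper's per-individual feature sets.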

    Ubiquitous Emotion Recognition with Multimodal Mobile Interfaces

    In 1997, Rosalind Picard introduced the fundamental concepts of affect recognition. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, and physiological wearables, along with multimodal facial and physiological data, have been used to study human emotion. Much of the work in this field focuses on a single modality to recognize emotion. However, there is a wealth of information available for recognizing emotions when incorporating multimodal data. Considering this, the aim of this workshop is to look at current and future research activities and trends for ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.

    An Approach of One-vs-Rest Filter Bank Common Spatial Pattern and Spiking Neural Networks for Multiple Motor Imagery Decoding

    Motor imagery (MI) is a typical BCI paradigm and has been widely applied in many areas (e.g. brain-driven wheelchairs and motor function rehabilitation training). Although significant progress has been made, decoding multiple motor imagery classes remains unsatisfactory. To deal with this challenging issue, firstly, a segment of the electroencephalogram was extracted and preprocessed. Secondly, we applied filter bank common spatial patterns (FBCSP) with a one-vs-rest (OVR) strategy to extract the spatio-temporal-frequency features of multiple MI classes. Thirdly, the F-score was employed to select the most discriminative of these features. Finally, the selected features were fed to a spiking neural network (SNN) for classification. Evaluation was conducted on two public multiple-MI datasets (Dataset IIIa of BCI Competition III and Dataset IIa of BCI Competition IV). Experimental results showed that the average accuracy of the proposed framework reached 90.09% (kappa: 0.868) and 81.33% (kappa: 0.751) on the two datasets, respectively. The achieved performance (accuracy and kappa) was comparable to the best of the compared methods. This study demonstrates that the proposed method can serve as an alternative approach for multiple MI decoding and provides a potential solution for online multiple MI detection.
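The OVR-FBCSP feature-extraction step can be illustrated with a minimal numpy sketch: for each class, CSP filters are learned against the pooled remaining classes, and log-variance features from all classes are concatenated. The band-pass filter bank, the F-score selection, and the SNN classifier are omitted here; the synthetic trials and all array shapes are illustrative assumptions, not the competition datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_cov(trials):
    """Trace-normalized average spatial covariance over a set of trials."""
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

def csp_filters(cov_target, cov_rest, n_pairs=2):
    """Common spatial patterns via whitening + eigendecomposition."""
    evals, evecs = np.linalg.eigh(cov_target + cov_rest)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    _, rot = np.linalg.eigh(whiten @ cov_target @ whiten.T)
    w = rot.T @ whiten                                # rows = spatial filters
    return np.vstack([w[:n_pairs], w[-n_pairs:]])     # extreme-eigenvalue pairs

def log_var_features(trials, w):
    """Log of normalized variance of spatially filtered trials (the classic CSP feature)."""
    filtered = np.einsum('fc,tcs->tfs', w, trials)
    var = filtered.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

def ovr_csp_features(trials, labels, n_pairs=2):
    """One-vs-rest CSP: one filter set per class, features concatenated.
    In full FBCSP this step is repeated for every frequency sub-band."""
    feats = []
    for c in np.unique(labels):
        w = csp_filters(normalized_cov(trials[labels == c]),
                        normalized_cov(trials[labels != c]), n_pairs)
        feats.append(log_var_features(trials, w))
    return np.concatenate(feats, axis=1)

# Synthetic stand-in: 90 trials, 8 channels, 128 samples, 3 MI classes,
# each class marked by extra variance on one channel.
labels = np.repeat([0, 1, 2], 30)
trials = rng.normal(size=(90, 8, 128))
for c in range(3):
    trials[labels == c, c] *= 3.0

X = ovr_csp_features(trials, labels)     # shape (90, 12): 3 classes x 2*n_pairs filters
```

The resulting feature matrix is what would be ranked by F-score and passed to the SNN in the paper's pipeline.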

    Co-Design with Myself: A Brain-Computer Interface Design Tool that Predicts Live Emotion to Enhance Metacognitive Monitoring of Designers

    Intuition, metacognition, and subjective uncertainty interact in complex ways to shape the creative design process. Design intuition, a designer's innate ability to generate creative ideas and solutions based on implicit knowledge and experience, is often evaluated and refined through metacognitive monitoring. This self-awareness and management of cognitive processes can be triggered by subjective uncertainty, reflecting the designer's self-assessed confidence in their decisions. Despite their significance, few creativity support tools have targeted the enhancement of these intertwined components using biofeedback, particularly the affect associated with these processes. In this study, we introduce "Multi-Self", a BCI-VR design tool designed to amplify metacognitive monitoring in architectural design. Multi-Self evaluates designers' affect (valence and arousal) in response to their work, providing real-time visual biofeedback. A proof-of-concept pilot study with 24 participants assessed its feasibility. While responses on feedback accuracy were mixed, most participants found the tool useful, reporting that it sparked metacognitive monitoring, encouraged exploration of the design space, and helped modulate subjective uncertainty.

    Diverse Feature Blend Based on Filter-Bank Common Spatial Pattern and Brain Functional Connectivity for Multiple Motor Imagery Detection

    Motor imagery (MI) based brain-computer interfaces (BCIs) are a research hotspot and have attracted considerable attention. Within this research topic, classifying multiple MI tasks is a challenge due to time-varying spatial features across different individuals. To deal with this challenge, we fused brain functional connectivity (BFC) with one-versus-the-rest filter-bank common spatial patterns (OVR-FBCSP) to improve the robustness of classification. The BFC features were extracted via the phase locking value (PLV), representing the brain inter-regional interactions relevant to MI, while OVR-FBCSP extracts the spatial-frequency features related to MI. These diverse features were then fed into a multi-kernel relevance vector machine (MK-RVM). A dataset with three motor imagery tasks (left hand MI, right hand MI, and feet MI) was used to assess the proposed method. Experimental results not only showed that the cascade of diverse feature fusion and MK-RVM achieved satisfactory classification performance (average accuracy: 83.81%, average kappa: 0.76), but also demonstrated that BFC plays a supplementary role in MI classification. Moreover, the proposed method has the potential to be integrated into online multiple MI detection owing to the strong time efficiency of the RVM.
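The PLV features at the heart of the BFC pipeline reduce to a short computation: extract instantaneous phases via the analytic signal, then take the magnitude of the time-averaged phase-difference phasor for each channel pair. The sketch below is numpy-only (the FFT construction is a stand-in for scipy.signal.hilbert) and assumes band-pass filtering has already been applied; the synthetic signals are illustrative, not the paper's data.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: numpy-only stand-in for scipy.signal.hilbert."""
    n = x.shape[-1]
    spec = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h, axis=-1)

def plv_matrix(trials):
    """Phase locking value for every channel pair, averaged over trials.
    trials: (n_trials, n_channels, n_samples), assumed band-pass filtered."""
    phase = np.angle(analytic_signal(trials))
    diff = phase[:, :, None, :] - phase[:, None, :, :]
    # |time-average of the unit phase-difference phasor|, then trial average
    return np.abs(np.exp(1j * diff).mean(axis=-1)).mean(axis=0)

# Synthetic check: channel 1 is a phase-shifted copy of channel 0 (PLV ~ 1),
# channel 2 is independent noise (PLV near 0).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
trials = np.stack([
    np.stack([np.sin(2 * np.pi * 10 * t + p),
              np.sin(2 * np.pi * 10 * t + p + 0.7),
              rng.normal(size=256)])
    for p in rng.uniform(0, 2 * np.pi, 20)
])
plv = plv_matrix(trials)                 # 3 x 3 symmetric matrix
```

A constant phase lag still yields PLV near 1, which is why PLV captures phase coupling rather than amplitude correlation; the upper-triangle entries of such matrices are what would be concatenated with the OVR-FBCSP features before the MK-RVM.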

    Brain informed transfer learning for categorizing construction hazards

    A transfer learning paradigm is proposed for "knowledge" transfer between the human brain and a convolutional neural network (CNN) for a construction hazard categorization task. Participants' brain activity is recorded using electroencephalogram (EEG) measurements while they view the same images (target dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned on the construction scene images. The results reveal that the EEG-pretrained CNN achieves 9% higher accuracy on a three-class classification task compared with a network with the same architecture but randomly initialized parameters. Brain activity from the left frontal cortex exhibits the highest performance gains, indicating high-level cognitive processing during hazard recognition. This work is a step toward improving machine learning algorithms by learning from human brain signals recorded via a commercially available brain-computer interface. More generalized visual recognition systems can be effectively developed based on this approach of "keeping the human in the loop".

    Insomnia: the affordance of hybrid media in visualising a sleep disorder

    The integration of visual and numerical abstraction in contemporary audio-visual communication has become increasingly prevalent. This increase reflects the evolution of computational machines beyond simple data processors. Computation and interface have augmented our senses and converged algorithmic logic with cultural techniques to form hybrid channels of communication. These channels are fluid and mutable, allowing creatives to explore and disseminate knowledge through iterative media practice. Insomnia is an auto-ethnographic case study that examines the affordance of merging Brain-Computer Interfaces (BCIs) and node-based programming software (TouchDesigner) as a hybrid media system (McMullan, 2020). As a system, Insomnia compiles my archived brain activity data and processes it through a custom-designed generative visualisation interface. Documenting and 'processing' a sleep disorder is filtered through key concepts of media archaeology, cultural techniques, and practice-led research, allowing Insomnia to inform discussion of the affordance of hybrid media. Insomnia is presented as a virtual exhibition with a supporting exegesis. The methodology and outcomes of the project form a framework that bridges science communication and creative practice and points to continued development for interactive installation design.