
    Student Teaching and Research Laboratory Focusing on Brain-computer Interface Paradigms - A Creative Environment for Computer Science Students -

    This paper presents an applied concept of a brain-computer interface (BCI) student research laboratory (BCI-LAB) at the Life Science Center of TARA, University of Tsukuba, Japan. Several successful student projects are reviewed, including the BCI Research Award 2014 winner case. The BCI-LAB design and its project-based teaching philosophy are also explained. The review concludes with future teaching and research directions.
    Comment: 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright

    User-centered design in brain–computer interfaces — a case study

    The array of available brain–computer interface (BCI) paradigms has continued to grow, and so has the corresponding set of machine learning methods at the core of BCI systems. The latter have evolved to provide more robust data-analysis solutions, and as a consequence the proportion of healthy users who can operate a BCI successfully is growing. With this development, the chances have increased that the needs and abilities of a specific patient, the end-user, can be met by an existing BCI approach. However, most end-users who have experienced the use of a BCI system at all have encountered only a single paradigm, typically the one being tested in the study in which they happen to be enrolled, along with other end-users. Though this is the preferred arrangement for basic research, it does not ensure that the end-user experiences a working BCI. In this study, a different approach was taken: user-centered design, the prevailing process in traditional assistive technology. Given an individual user with a particular clinical profile, several available BCI approaches are tested and, if necessary, adapted to him/her until a suitable BCI system is found.

    Novel Virtual Moving Sound-based Spatial Auditory Brain-Computer Interface Paradigm

    This paper reports on a study in which a novel virtual moving sound-based spatial auditory brain-computer interface (BCI) paradigm is developed. Classic auditory BCIs rely on spatially static stimuli, which are often boring and difficult to perceive for subjects with non-uniform spatial hearing characteristics. The moving-sound concept proposed and tested in the paper allows a P300 oddball paradigm of the necessary target and non-target auditory stimuli to be created that is more engaging and easier to discriminate. We report on a study with seven healthy subjects that demonstrates the usability of moving-sound stimuli for a novel BCI. Online BCI classification results in the static and moving-sound paradigms yield similar accuracies, and the subjects' preference reports suggest that the proposed moving-sound protocol is more comfortable and easier to discriminate with the online BCI.
    Comment: 4 pages (in conference proceedings original version); 6 figures, accepted at the 6th International IEEE EMBS Conference on Neural Engineering, November 6-8, 2013, Sheraton San Diego Hotel & Marina, San Diego, CA; paper ID 465; to be available at IEEE Xplore; IEEE Copyright 2013
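    The oddball structure underlying such a P300 speller can be illustrated with a short sketch. This is a schematic illustration, not the authors' stimulation code; the function name and parameters are hypothetical. Each of the spatial sound directions is presented equally often in randomized order, so the single attended direction appears as a rare target (1-in-n probability per stimulus) among non-targets:

    ```python
    import random

    def oddball_sequence(n_classes=6, n_repetitions=10, seed=0):
        """Randomized stimulus order for a multi-class oddball paradigm:
        each class (sound direction) is presented n_repetitions times,
        shuffled so the attended class is a rare event among the rest."""
        rng = random.Random(seed)
        seq = [c for c in range(n_classes) for _ in range(n_repetitions)]
        rng.shuffle(seq)
        return seq

    seq = oddball_sequence()
    # a 60-stimulus run in which each of the 6 classes appears exactly 10 times
    ```

    Averaging the EEG epochs locked to the attended class then reveals the P300 response that the online classifier exploits.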

    True zero-training brain-computer interfacing: an online study

    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a brain-computer interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data are collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time, and it is unproductive with respect to the final BCI application, e.g. text entry. The calibration period should therefore be reduced to a minimum, which is especially important for patients with a limited ability to concentrate. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not yet have seen enough data to build a reliable model. By constantly re-analyzing the previously spelled symbols, these initially misspelled symbols can be corrected post hoc once the classifier has learned to decode the signals. We compare the spelling performance of the unsupervised approach, with and without posthoc correction, to the standard supervised calibration-based approach for n = 10 healthy users. To assess the learning behavior of our approach, it is trained unsupervised from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
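    The calibration-free idea can be caricatured in a few lines. The sketch below is a simplified stand-in, not the paper's actual probabilistic method: a randomly initialized linear scorer re-fits itself from the pseudo-labels implied by its own symbol guesses after every selection, and a posthoc pass re-decodes earlier selections with the latest weights. The class name and data layout are assumptions made for illustration.

    ```python
    import numpy as np

    class UnsupervisedSpeller:
        """Toy calibration-free ERP speller: random init, self-refit,
        posthoc re-decoding (illustrative only)."""

        def __init__(self, n_features, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.standard_normal(n_features)  # no calibration data
            self.history = []  # per selection: dict symbol -> epoch array

        def decode(self, candidates):
            # candidates: dict symbol -> (n_epochs, n_features) ERP features;
            # pick the symbol whose epochs score highest under w
            return max(candidates, key=lambda s: float(np.mean(candidates[s] @ self.w)))

        def select(self, candidates):
            """Decode one symbol, then refit w via least squares on the
            pseudo-labels the current guesses imply (+1 target, -1 rest)."""
            self.history.append(candidates)
            X, y = [], []
            for cand in self.history:
                guess = self.decode(cand)
                for sym, epochs in cand.items():
                    X.append(epochs.mean(axis=0))
                    y.append(1.0 if sym == guess else -1.0)
            self.w, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
            return self.decode(candidates)

        def posthoc(self):
            """Re-decode every earlier selection with the latest weights."""
            return [self.decode(cand) for cand in self.history]
    ```

    The posthoc pass is what lets early misspellings be corrected once later data has shaped the classifier, mirroring the behavior described in the abstract.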

    Psychophysical Responses Comparison in Spatial Visual, Audiovisual, and Auditory BCI-Spelling Paradigms

    The paper presents a pilot study of spatial visual, audiovisual, and auditory brain-computer interface (BCI) speller paradigms. Psychophysical experiments are conducted with healthy subjects in order to evaluate task difficulty and the possible variability of response accuracy. We also present preliminary EEG results in offline BCI mode. The obtained results support the thesis that the spatial auditory-only paradigm performs as well as the traditional visual and audiovisual speller BCI tasks.
    Comment: The 6th International Conference on Soft Computing and Intelligent Systems and The 13th International Symposium on Advanced Intelligent Systems, 201

    EEG Signal Processing and Classification for the Novel Tactile-Force Brain-Computer Interface Paradigm

    The presented study explores the extent to which a tactile-force stimulus delivered to a hand holding a joystick can serve as a platform for a brain-computer interface (BCI). Four pressure directions are used to evoke tactile brain potential responses, thus defining a tactile-force brain-computer interface (tfBCI). We present the brain signal processing and classification procedures leading to successful interfacing results. Experimental results with seven subjects performing online BCI experiments validate the hand-location tfBCI paradigm, and the feasibility of the concept is demonstrated by the achieved information-transfer rates.
    Comment: 6 pages (in conference proceedings original version); 6 figures, submitted to The 9th International Conference on Signal Image Technology & Internet Based Systems, December 2-5, 2013, Kyoto, Japan; to be available at IEEE Xplore; IEEE Copyright 2013
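    Information-transfer rates in BCI studies are conventionally computed with the Wolpaw formula. The abstract does not state how the rates were obtained, so the helper below is a general-purpose sketch; the example class count, accuracy, and trial duration are illustrative assumptions, not the paper's figures.

    ```python
    import math

    def wolpaw_itr(n_classes, accuracy, trial_s):
        """Wolpaw information-transfer rate in bits/min for an n_classes
        BCI at the given classification accuracy and seconds per trial."""
        p, n = accuracy, n_classes
        if p <= 1.0 / n:
            bits = 0.0                 # at or below chance: zero information
        elif p == 1.0:
            bits = math.log2(n)        # perfect accuracy: log2(N) bits/trial
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * 60.0 / trial_s

    # e.g. a hypothetical 4-class paradigm at 90 % accuracy, 4 s per trial:
    # wolpaw_itr(4, 0.90, 4.0) ≈ 20.6 bits/min
    ```

    The formula rewards both accuracy and speed, which is why short trials with only moderately high accuracy can still yield notable transfer rates.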