
    Distance dependent extensions of the Chinese restaurant process

    In this paper we consider the clustering of text documents using the Chinese Restaurant Process (CRP) and extensions that take time-correlations into account. To this purpose, we implement and test the Distance Dependent Chinese Restaurant Process (DDCRP) for mixture models on both generated and real-world data. We also propose and implement a novel clustering algorithm, the Averaged Distance Dependent Chinese Restaurant Process (ADDCRP), to model time-correlations; it is faster per iteration and attains performance similar to that of the fully distance dependent CRP.
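    As a minimal sketch (not the authors' implementation), the standard ddCRP prior can be sampled as follows: each document links to another document with probability proportional to a decay function of their time distance (an exponential decay and the constants below are assumptions), or to itself with probability proportional to a concentration parameter alpha, and clusters are the connected components of the resulting link graph. The ADDCRP proposed in the abstract is not reproduced here.

```python
import numpy as np


def sample_ddcrp_links(times, alpha=1.0, decay=1.0, rng=None):
    """Draw one customer-link assignment from the ddCRP prior."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(times)
    links = np.empty(n, dtype=int)
    for i in range(n):
        # Unnormalized probability of linking to every other customer.
        weights = np.exp(-decay * np.abs(times[i] - times))
        weights[i] = alpha  # self-link starts a new table
        links[i] = rng.choice(n, p=weights / weights.sum())
    return links


def links_to_clusters(links):
    """Clusters are the connected components of the link graph."""
    n = len(links)
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # Follow links until we hit an already-labeled customer or a cycle.
        path, j = [], i
        while labels[j] == -1 and j not in path:
            path.append(j)
            j = links[j]
        label = labels[j] if labels[j] != -1 else current
        if label == current:
            current += 1
        for k in path:
            labels[k] = label
    return labels


times = np.arange(20, dtype=float)          # e.g. document timestamps
links = sample_ddcrp_links(times, alpha=0.5, decay=0.3)
print(links_to_clusters(links))
```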

    An uncued brain-computer interface using reservoir computing

    Brain-Computer Interfaces are an important and promising avenue for possible next-generation assistive devices. In this article, we show how Reservoir Computing – a computationally efficient way of training recurrent neural networks – combined with a novel feature selection algorithm based on Common Spatial Patterns can be used to drastically improve performance in an uncued motor imagery based Brain-Computer Interface (BCI). The objective of this BCI is to label each sample of EEG data as either motor imagery class 1 (e.g. left hand), motor imagery class 2 (e.g. right hand) or a rest state (i.e., no motor imagery). When comparing the results of the proposed method with the results from the BCI Competition IV (where this dataset was introduced), it turns out that the proposed method outperforms the winner of the competition.
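    As a rough illustration of the reservoir computing part only (the CSP-based feature selection is not reproduced, and all network sizes and constants below are assumptions rather than values from the paper), a leaky echo state network with a ridge-regression readout can be sketched like this:

```python
import numpy as np


def run_reservoir(inputs, n_reservoir=200, spectral_radius=0.9,
                  input_scaling=0.5, leak=0.3, seed=0):
    """Drive a leaky-integrator reservoir with an input sequence.

    inputs: array of shape (n_steps, n_channels), e.g. EEG-derived features.
    Returns reservoir states of shape (n_steps, n_reservoir).
    """
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    w_in = rng.uniform(-input_scaling, input_scaling, (n_reservoir, n_in))
    w = rng.standard_normal((n_reservoir, n_reservoir))
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # rescale recurrence
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(w_in @ u + w @ x)
        states[t] = x
    return states


def train_readout(states, labels, ridge=1e-2):
    """Ridge-regression readout mapping reservoir states to class targets."""
    targets = np.eye(labels.max() + 1)[labels]          # one-hot labels
    a = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(a, states.T @ targets)       # readout weights


# Toy usage with random data standing in for real EEG features.
feats = np.random.randn(1000, 8)
labs = np.random.randint(0, 3, 1000)                    # rest / class 1 / class 2
states = run_reservoir(feats)
w_out = train_readout(states, labs)
pred = (states @ w_out).argmax(axis=1)                  # per-sample label
```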

    Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

    Objective. Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) a language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)–(d) is investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.
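    Two of the listed components, (c) the language model and (d) dynamic stopping, can be illustrated with a hedged sketch: per-symbol classifier evidence is accumulated as log-likelihoods, combined with a language-model prior over symbols, and stimulation stops early once the posterior of the best symbol exceeds a threshold. The threshold, the uniform-prior fallback and the scoring model below are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np


def spell_symbol(score_iters, lm_prior=None, threshold=0.95):
    """Accumulate per-symbol evidence until confident enough to stop.

    score_iters: list of arrays, one per stimulation iteration, giving
        per-symbol classifier log-likelihoods (higher = more target-like).
    lm_prior: language-model probabilities over symbols given the spelled
        context; a uniform prior is assumed when none is supplied.
    """
    n_symbols = len(score_iters[0])
    prior = np.full(n_symbols, 1.0 / n_symbols) if lm_prior is None else np.asarray(lm_prior)
    log_post = np.log(prior)
    for it, scores in enumerate(score_iters, start=1):
        log_post = log_post + scores            # accumulate ERP evidence
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:             # dynamic stopping criterion
            return int(post.argmax()), it
    return int(post.argmax()), len(score_iters)


# Toy usage: 36-symbol matrix, simulated scores favouring symbol 7.
rng = np.random.default_rng(0)
scores = [rng.normal(0, 1, 36) + np.eye(36)[7] * 2 for _ in range(10)]
symbol, iterations_used = spell_symbol(scores)
```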

    Reducing BCI calibration time with transfer learning: a shrinkage approach

    Introduction: A brain-computer interface system (BCI) allows subjects to make use of neural control signals to drive a computer application. Therefore, a BCI is generally equipped with a decoder to differentiate between types of responses recorded in the brain. For example, an application giving feedback to the user can benefit from recognizing the presence or absence of a so-called error potential (ErrP), elicited in the brain of the user when this feedback is perceived as being ‘wrong’, a mistake of the system. Due to the high inter- and intra-subject variability in these response signals, calibration data needs to be recorded to train the decoder. This calibration session is exhausting and demotivating for the subject. Transfer learning is a general name for techniques in which data from previous subjects is used as additional information to train a decoder for a new subject, thereby reducing the amount of subject-specific data that needs to be recorded during calibration. In this work we apply transfer learning to an ErrP detection task using single-target shrinkage of Linear Discriminant Analysis (LDA), a method originally proposed by Höhne et al. to improve accuracy by compensating for inter-stimulus differences in an ERP speller [1]. Material, Methods and Results: For our study we used the error potential dataset recorded by Perrin et al. in [2]. For each of 26 subjects, 340 ErrP/non-ErrP responses were recorded, with ErrP-to-non-ErrP ratios ranging from 0.41 to 0.94. Of these, 272 responses were available for training the decoder and the remaining 68 responses were held out for testing. For every subject separately we built three different decoders. First, a subject-specific LDA decoder was built using only the subject’s own training data. Second, we added the training data of the other 25 subjects to train a global LDA decoder, naively ignoring the differences between subjects. Finally, the single-target-shrinkage (STS) method [1] was used to regularize the parameters of the subject-specific decoder towards those of the global decoder. Using cross-validation, this method assigns optimal weights to the subject-specific data and the data from previous subjects used for training. Figure 1 shows the performance of the three decoders on the test data in terms of AUC as a function of the amount of subject-specific calibration data used. Discussion: The subject-specific decoder in Figure 1 shows how sensitive decoding performance is to the amount of calibration data provided. Using data from previously recorded subjects, the amount of calibration data, and thus the calibration time, can be reduced, as shown by the global decoder. Some decoding quality is, however, sacrificed. By making an optimal compromise between the subject-specific and global decoders, the single-target-shrinkage decoder allows the calibration time to be reduced by 20% without any change in decoder quality (confirmed by a paired-sample t-test giving p=0.72). Significance: This work serves as a first proof of concept for the use of shrinkage LDA as a transfer learning method. More specifically, an error potential decoder built with reduced calibration time boosts the opportunity for error-correcting methods in BCI.
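    The shrinkage idea can be sketched as follows: LDA statistics estimated from the new subject's few calibration trials are interpolated with statistics pooled over the previous subjects, with a weight chosen, for example, by cross-validation. This is only the general principle under assumed inputs; the exact single-target shrinkage formulation is that of Höhne et al. [1].

```python
import numpy as np


def lda_stats(X, y):
    """Class means and a shared covariance for features X with labels y in {0, 1}."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    cov = np.cov(X.T)                            # covariance over all epochs
    return mu0, mu1, cov


def shrunk_lda(X_subj, y_subj, X_glob, y_glob, lam=0.5):
    """LDA weights with subject statistics shrunk toward global statistics.

    lam = 0 -> purely subject-specific decoder, lam = 1 -> purely global.
    """
    s_mu0, s_mu1, s_cov = lda_stats(X_subj, y_subj)
    g_mu0, g_mu1, g_cov = lda_stats(X_glob, y_glob)
    mu0 = (1 - lam) * s_mu0 + lam * g_mu0
    mu1 = (1 - lam) * s_mu1 + lam * g_mu1
    cov = (1 - lam) * s_cov + lam * g_cov
    w = np.linalg.solve(cov, mu1 - mu0)          # LDA projection
    b = -w @ (mu0 + mu1) / 2                     # threshold at class midpoint
    return w, b


# Toy usage with random data standing in for ErrP / non-ErrP epochs.
rng = np.random.default_rng(1)
X_subj, y_subj = rng.normal(size=(60, 10)), rng.integers(0, 2, 60)
X_glob, y_glob = rng.normal(size=(2000, 10)), rng.integers(0, 2, 2000)
w, b = shrunk_lda(X_subj, y_subj, X_glob, y_glob, lam=0.3)
scores = X_subj @ w + b                          # > 0 -> predicted ErrP
```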

    Switching characters between stimuli improves P300 speller accuracy

    In this paper, an alternative stimulus presentation paradigm for the P300 speller is introduced. Similar to the checkerboard paradigm, it minimizes the occurrence of the two most common causes of spelling errors: adjacency distraction and double flashes. Moreover, in contrast to the checkerboard paradigm, this new stimulus sequence does not increase the time required per stimulus iteration. Our new paradigm is compared to the basic row-column paradigm, and the results indicate that, on average, the accuracy is improved.
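    The two error sources named above can be made concrete with a small, hedged sketch that only checks a candidate flash sequence and does not implement the proposed paradigm: given a sequence of flash groups over a 6x6 character matrix, it counts double flashes (a character intensified in two consecutive stimuli) and within-group adjacency (two characters of one group that are neighbours in the matrix), both of which a good stimulus sequence should keep low.

```python
import itertools


def count_error_sources(flash_groups, n_cols=6):
    """flash_groups: list of sets of character indices (0..35) flashed together."""
    double_flashes = sum(
        len(a & b) for a, b in zip(flash_groups, flash_groups[1:])
    )

    def adjacent(i, j):
        ri, ci, rj, cj = i // n_cols, i % n_cols, j // n_cols, j % n_cols
        return abs(ri - rj) + abs(ci - cj) == 1

    adjacency = sum(
        1
        for group in flash_groups
        for i, j in itertools.combinations(sorted(group), 2)
        if adjacent(i, j)
    )
    return double_flashes, adjacency


# Classic row-column flashes: each group contains only adjacent characters,
# so the adjacency count is high by construction.
rows = [set(range(r * 6, r * 6 + 6)) for r in range(6)]
cols = [set(range(c, 36, 6)) for c in range(6)]
print(count_error_sources(rows + cols))
```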

    True zero-training brain-computer interfacing: an online study

    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from the novel user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data is collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with a limited concentration ability. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it might not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach and of the unsupervised post-hoc approach to the standard supervised calibration-based dogma for n = 10 healthy users. To assess the learning behavior of our approach, it is trained from scratch, without supervision, three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
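    The unsupervised-adaptation loop can be illustrated with a hedged self-training sketch (this shows only the general idea, not the exact algorithm of the paper): a randomly initialized linear classifier decodes each trial from the stimulus structure (which subset of symbols was intensified per stimulus), the inferred labels are used to re-fit the classifier on all data seen so far, and earlier trials are re-decoded post hoc with the improved model.

```python
import numpy as np


def decode_trial(X, flashed, w):
    """Pick the symbol whose flashes best match high classifier scores.

    X: (n_stimuli, n_features) epochs of one trial.
    flashed: (n_stimuli, n_symbols) 0/1 matrix, 1 if the symbol was in the flash.
    """
    scores = X @ w                       # per-stimulus 'target-likeness'
    return int((flashed.T @ scores).argmax())


def refit(X_all, flashed_all, decoded):
    """Least-squares refit: stimuli containing the decoded symbol count as targets."""
    X = np.vstack(X_all)
    y = np.concatenate([f[:, d] for f, d in zip(flashed_all, decoded)])
    y = y * 2.0 - 1.0                    # targets +1, non-targets -1
    return np.linalg.lstsq(X, y, rcond=None)[0]


def unsupervised_speller(trials, n_features, seed=0):
    """trials: iterable of (X, flashed) pairs, one pair per spelled symbol."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_features)          # random initialization
    X_all, flashed_all, decoded = [], [], []
    for X, flashed in trials:
        decoded.append(decode_trial(X, flashed, w))
        X_all.append(X)
        flashed_all.append(flashed)
        w = refit(X_all, flashed_all, decoded)   # unsupervised adaptation
        # Post hoc: re-decode every earlier trial with the improved classifier.
        decoded = [decode_trial(x, f, w) for x, f in zip(X_all, flashed_all)]
    return decoded
```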