4 research outputs found

    A P300 BCI for the masses: prior information enables instant unsupervised spelling

    The usability of Brain Computer Interfaces (BCI) based on the P300 speller is severely hindered by the need for long training times and many repetitions of the same stimulus. In this contribution we introduce a set of unsupervised hierarchical probabilistic models that tackle both problems simultaneously by incorporating prior knowledge from two sources: information from other training subjects (through transfer learning) and information about the words being spelled (through language models). We show that, due to this prior knowledge, the performance of the unsupervised models parallels and in some cases even surpasses that of supervised models, while eliminating the tedious training session.
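    The cross-subject transfer idea above can be sketched minimally: a new subject's linear ERP classifier is initialised from the weights of previously trained subjects, then refined without labels. This is an illustrative simplification, not the paper's hierarchical model; the function name and the averaging scheme are assumptions.

```python
import numpy as np

def transfer_initialized_weights(subject_weight_list):
    """Hypothetical transfer-learning sketch: start a new subject's linear
    classifier from the mean of other subjects' trained weight vectors.
    In the paper this prior would then be adapted unsupervised on the new
    subject's own EEG data."""
    weights = np.asarray(subject_weight_list, dtype=float)
    return weights.mean(axis=0)  # cross-subject average as the prior
```

    Averaging is only the simplest possible prior; the actual models place a probabilistic prior over classifier parameters rather than a point estimate.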

    A unified probabilistic approach to improve spelling in an event-related potential based brain-computer interface

    In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back, as we wanted to improve performance without building an overly complex model that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves spelling speed and accuracy drastically. Additionally, we would like to point out that, as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
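    Incorporating language information into a probabilistic speller amounts to a Bayes-rule fusion: the posterior over candidate letters is proportional to the EEG likelihood times a language-model prior. The sketch below assumes both are given as plain probability vectors; the function name is hypothetical and the details of the paper's model are abstracted away.

```python
import numpy as np

def combine_with_language_model(eeg_likelihood, lm_prior):
    """Bayes-rule fusion sketch: posterior ∝ EEG likelihood × language-model
    prior over candidate letters, renormalised to sum to one."""
    posterior = np.asarray(eeg_likelihood, dtype=float) * np.asarray(lm_prior, dtype=float)
    return posterior / posterior.sum()
```

    With a uniform EEG likelihood the posterior reduces to the language-model prior; as EEG evidence sharpens, it dominates the prediction.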

    Dynamic stopping improves the speed and accuracy of a P300 speller

    Brain Computer Interface spellers based on the P300 paradigm traditionally use a fixed number of epochs (stimulus presentations) to predict a letter. In this contribution, we introduce a dynamic adjustment of the number of epochs based on a threshold on the confidence of a probabilistic classifier. This allows the average required number of epochs to be lowered drastically. As such, using a conceptually simple modification with no impact on computational requirements, we obtain a P300 speller that is not only faster but also more accurate, which in turn increases the usability of the system substantially.
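    The dynamic stopping rule above can be sketched as follows: per-epoch classifier scores are accumulated for each candidate letter, the running total is turned into a posterior, and stimulation stops as soon as the top letter clears a confidence threshold. This is a minimal illustration under assumed inputs (a matrix of per-epoch, per-letter scores); the function name and softmax normalisation are not from the paper.

```python
import numpy as np

def dynamic_stopping(epoch_scores, threshold=0.95, max_epochs=15):
    """Sketch of confidence-based dynamic stopping.

    epoch_scores: array of shape (n_epochs, n_letters) with per-epoch
    classifier scores (higher = more evidence for that letter).
    Returns (predicted letter index, number of epochs actually used)."""
    n_epochs, n_letters = epoch_scores.shape
    log_evidence = np.zeros(n_letters)
    posterior = np.full(n_letters, 1.0 / n_letters)  # uniform before any epoch
    used = min(n_epochs, max_epochs)
    for epoch in range(used):
        log_evidence += epoch_scores[epoch]           # accumulate evidence
        posterior = np.exp(log_evidence - log_evidence.max())
        posterior /= posterior.sum()                  # normalised posterior
        if posterior.max() >= threshold:              # confident enough: stop
            return int(posterior.argmax()), epoch + 1
    return int(posterior.argmax()), used              # fell back to the cap
```

    With a strong signal the loop exits after one epoch; with a weaker one it keeps collecting epochs until the threshold (or the cap) is reached, which is exactly the speed/accuracy trade-off the abstract describes.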