
    Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle the small and unbalanced datasets often encountered in BCI-type experiments. Approach. The method combines a linear mixed-effects statistical model, the wavelet transform and spatial filtering, and aims at characterizing localized discriminant features in multi-sensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is shown to be effective. This paper improves on earlier results on similar problems, and all three main ingredients play an important role.
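    As a rough illustration of the kind of pipeline this abstract describes (wavelet features, selection of a discriminant subspace, robust class covariances and a Bayes plug-in rule), a minimal sketch could look like the following. It assumes pywt and scikit-learn, uses a simple standardized-mean-difference score in place of the paper's spatial filtering and mixed-model estimation, and is not the authors' method.

```python
# Rough sketch, not the authors' implementation: discrete wavelet features,
# a simple discriminability-based subspace selection, shrinkage-regularized
# class covariances, and a Gaussian plug-in Bayes rule.
# Assumed libraries: numpy, pywt, scikit-learn; array shapes are assumptions.
import numpy as np
import pywt
from sklearn.covariance import LedoitWolf

def wavelet_features(trials, wavelet="db4", level=3):
    """trials: (n_trials, n_channels, n_samples) -> flattened DWT coefficients."""
    feats = []
    for trial in trials:
        coeffs = [np.concatenate(pywt.wavedec(ch, wavelet, level=level)) for ch in trial]
        feats.append(np.concatenate(coeffs))
    return np.asarray(feats)

def select_discriminant(X, y, k=30):
    """Keep the k coefficients with the largest standardized class-mean difference."""
    c0, c1 = X[y == 0], X[y == 1]
    score = np.abs(c0.mean(0) - c1.mean(0)) / (X.std(0) + 1e-12)
    return np.argsort(score)[-k:]

def fit_plugin_bayes(X, y):
    """Per-class Gaussian with Ledoit-Wolf shrinkage covariance and class prior."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        lw = LedoitWolf().fit(Xc)                       # robust covariance from few trials
        logdet = np.linalg.slogdet(lw.covariance_)[1]
        params[c] = (Xc.mean(0), lw.precision_, logdet, np.log(len(Xc) / len(X)))
    return params

def predict(params, X):
    classes, scores = [], []
    for c, (mu, prec, logdet, logprior) in params.items():
        d = X - mu
        # Gaussian log-density up to a constant, plus the class log-prior
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, prec, d) + logdet) + logprior)
        classes.append(c)
    return np.array(classes)[np.argmax(np.stack(scores, 1), 1)]
```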

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed through two central studies: first, a concerted vision of the functional breakdown of a generic multimedia search engine, and second, representative use-case descriptions with the related discussion of requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large, through EC concertation meetings (the multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as coordinators of national initiatives. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and "enablers", which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Towards a Lightweight Approach for Modding Serious Educational Games: Assisting Novice Designers

    Serious educational games (SEGs) are a growing segment of the education community's pedagogical toolbox. Effectively creating such games remains challenging: teachers and industry trainers are content experts, but typically they are not game designers with the theoretical knowledge and practical experience needed to create a quality SEG. Here, a lightweight approach to interactively exploring and modifying existing SEGs is introduced, a tool that can be broadly adopted by educators to produce pedagogically sound SEGs. Novice game designers can rapidly explore the educational and traditional elements of a game, with an emphasis on tracking the SEG's learning objectives, while also being able to review and alter a variety of graphic and audio game elements.

    Speech Processing in Computer Vision Applications

    Deep learning has recently proven to be a viable asset for determining features in the field of speech analysis. Deep learning methods such as convolutional neural networks facilitate the extraction of specific feature information from waveforms, allowing networks to create more feature-dense representations of the data. Our work addresses the problems of re-creating a face from a speaker's voice and of speaker identification using deep learning methods. We first review the fundamental background in speech processing and its related applications. We then introduce novel deep learning-based methods for speech feature analysis. Finally, we present our deep learning approaches to speaker identification and speech-to-face synthesis. The presented method can convert a speaker's audio sample into an image of their predicted face. The framework is composed of several chained networks, each performing an essential step in the conversion process: audio embedding, encoding, and face generation, respectively. Our experiments show that certain voice features map to the face, that DNNs can generate a face from a speaker's voice, and that a GUI can be used in conjunction to display the output of a speaker recognition network.
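    A minimal sketch of a chained audio-embedding, encoding and face-generation pipeline of the kind described here is given below, assuming PyTorch; the layer sizes, module names and the mel-spectrogram input are illustrative assumptions, not the architecture from this work.

```python
# Illustrative sketch only (assumed PyTorch, toy layer sizes), not the networks
# described in the work: audio embedding -> encoder -> face generator.
import torch
import torch.nn as nn

class AudioEmbedder(nn.Module):
    """Maps a (batch, 1, n_mels, n_frames) spectrogram to a fixed-size embedding."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, spec):
        return self.fc(self.conv(spec).flatten(1))

class VoiceEncoder(nn.Module):
    """Projects the audio embedding into a latent code for the face generator."""
    def __init__(self, emb_dim=256, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, latent_dim), nn.ReLU(),
                                 nn.Linear(latent_dim, latent_dim))

    def forward(self, emb):
        return self.net(emb)

class FaceGenerator(nn.Module):
    """Decodes a latent code into a small RGB face image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 128, 8, 8))

# Chained use: dummy mel-spectrogram batch -> predicted 32x32 face images
spec = torch.randn(4, 1, 80, 100)
face = FaceGenerator()(VoiceEncoder()(AudioEmbedder()(spec)))
print(face.shape)  # torch.Size([4, 3, 32, 32])
```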

    On the stimulus duty cycle in steady state visual evoked potential

    Brain-computer interfaces (BCIs) are useful devices that allow direct control of external devices using thoughts, i.e. the brain's electrical activity. There are several BCI paradigms, of which the steady state visual evoked potential (SSVEP) is the most commonly used due to its quick response and accuracy. SSVEP stimuli are typically generated by varying the luminance of a target for a set number of frames or display events. Conventionally, SSVEP-based BCI paradigms use magnitude (amplitude) information from the frequency domain, but recently SSVEP-based BCI paradigms have begun to utilize phase information to discriminate between targets of similar frequency. This paper demonstrates that using a single frame to modulate a stimulus may lead to a bi-modal distribution of SSVEP responses, as a consequence of the user attending to both transition edges. This incoherence, while of lesser importance in traditional magnitude-domain SSVEP BCIs, becomes critical when phase is taken into account. An alternative modulation technique incorporating a 50% duty cycle is also a popular method for generating SSVEP stimuli, and it yields a unimodal distribution because the user's attention is forced to a single transition edge. This paper demonstrates that the second method results in a significantly higher information transfer rate in a phase-discrimination SSVEP-based BCI.
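    As a rough illustration of the two modulation schemes and of the frequency-domain magnitude and phase such a paradigm relies on, a minimal numpy sketch follows; the 60 Hz refresh rate, 10 Hz target and synthetic luminance sequences are assumptions, and the sketch covers stimulus generation and Fourier extraction only, not the EEG analysis of the paper.

```python
# Illustrative sketch, assumed parameters: frame-wise flicker sequences with a
# single-frame vs. 50% duty cycle, and magnitude/phase at the target frequency.
import numpy as np

def flicker(frames, period, on_frames):
    """Frame-wise luminance: 'on' for on_frames out of every period frames."""
    return np.array([1.0 if (i % period) < on_frames else 0.0 for i in range(frames)])

refresh = 60.0                                   # assumed display refresh rate (Hz)
period = 6                                       # 6 frames per cycle -> 10 Hz stimulation
single_frame = flicker(600, period, 1)           # one bright frame per cycle
half_duty    = flicker(600, period, period // 2) # 50% duty cycle

def magnitude_phase(signal, fs, f_target):
    """Magnitude and phase of the Fourier component closest to f_target."""
    spectrum = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_target))
    return np.abs(spectrum[k]), np.angle(spectrum[k])

mag, phase = magnitude_phase(half_duty, refresh, 10.0)
print(f"10 Hz component: magnitude={mag:.1f}, phase={phase:.2f} rad")
```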