9 research outputs found

    A Simple Iterative Algorithm for Parsimonious Binary Kernel Fisher Discrimination

    By applying recent results in optimization theory, variously known as optimization transfer or majorize/minimize (MM) algorithms, an algorithm for binary kernel Fisher discriminant analysis is introduced that uses a non-smooth penalty on the coefficients to provide a parsimonious solution. The problem is converted into a smooth optimization that can be solved iteratively with no greater overhead than iteratively re-weighted least squares. The result is simple, easily programmed, and is shown to perform, in terms of both accuracy and parsimony, as well as or better than a number of leading machine learning algorithms on two well-studied and substantial benchmarks.
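The abstract describes majorizing a non-smooth sparsity penalty so that each iteration reduces to a re-weighted least-squares solve. The details of the authors' algorithm are not given here, so the following is only a minimal sketch of that general MM/IRLS idea on an L1-penalised kernel least-squares problem; the function names, kernel choice, and penalty weight are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sparse_kernel_ls(K, y, lam=2.0, n_iter=50, eps=1e-8):
    """L1-penalised kernel least squares via majorize/minimize (MM).

    At each step |a_i| is majorized by a quadratic centred at the current
    iterate, so the update is a re-weighted ridge solve -- the same cost
    per iteration as iteratively re-weighted least squares.
    """
    n = K.shape[1]
    a = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)  # ridge start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(a), eps)       # MM weights 1 / |a_i|
        a = np.linalg.solve(K.T @ K + lam * np.diag(w), K.T @ y)
    a[np.abs(a) < 1e-6] = 0.0                      # prune tiny coefficients
    return a

# toy example: RBF kernel on 1-D data with +/-1 labels
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=40)
y = np.sign(X)                                     # labels from the sign of X
K = np.exp(-(X[:, None] - X[None, :]) ** 2)        # RBF Gram matrix
a = sparse_kernel_ls(K, y)
print("nonzero coefficients:", np.count_nonzero(a), "of", a.size)
```

The re-weighting drives coefficients that the penalty does not support toward zero, which is where the parsimony comes from; only the solve with a diagonal-modified Gram matrix is repeated.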

    Optimal truss and frame design from projected homogenization-based topology optimization

    In this article, we propose a novel method to obtain a near-optimal frame structure, based on the solution of a homogenization-based topology optimization model. The presented approach exploits the equivalence between Michell’s problem of least-weight trusses and a compliance minimization problem using optimal rank-2 laminates in the low volume fraction limit. In a fully automated procedure, a discrete structure is extracted from the homogenization-based continuum model. This near-optimal structure is post-optimized as a frame, where the bending stiffness is continuously decreased to allow for a final design that resembles a truss structure. Numerical experiments show excellent behavior of the method, where the final designs are close to analytical optima and are obtained in less than 10 minutes, for various levels of detail, on a standard PC.

    A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces: A 10-year Update

    Objective: Most current Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types used in this field, as described in our 2007 review paper. Now, approximately 10 years after that review was published, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach: We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs and what the outcomes were, and to identify their pros and cons. Main results: We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning, and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training-sample settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. 
    Significance: This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods, and gives guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
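The review singles out shrinkage linear discriminant analysis as useful when training trials are scarce relative to the number of channels. As a hedged illustration of that idea (not an implementation from the review), the sketch below blends the pooled sample covariance with a scaled identity before inverting it; the fixed shrinkage strength `gamma`, the synthetic data, and all function names are assumptions for the example.

```python
import numpy as np

def shrinkage_lda_fit(X0, X1, gamma=0.5):
    """Binary LDA with a shrunken covariance estimate.

    The pooled sample covariance S is replaced by
    (1 - gamma) * S + gamma * nu * I, where nu is the average
    eigenvalue of S.  This keeps the estimate well-conditioned even
    when there are fewer trials than channels.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    S = Xc.T @ Xc / (len(Xc) - 2)                  # pooled covariance
    nu = np.trace(S) / S.shape[0]                  # average eigenvalue
    Sigma = (1 - gamma) * S + gamma * nu * np.eye(S.shape[0])
    w = np.linalg.solve(Sigma, mu1 - mu0)          # discriminant direction
    b = -w @ (mu0 + mu1) / 2                       # midpoint threshold
    return w, b

def shrinkage_lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)             # 1 -> class 1

# toy small-sample setting: 10 trials per class, 30 "channels"
rng = np.random.default_rng(1)
d = 30
X0 = rng.normal(0.0, 1.0, size=(10, d))
X1 = rng.normal(0.5, 1.0, size=(10, d))
w, b = shrinkage_lda_fit(X0, X1)
test = np.vstack([rng.normal(0.0, 1.0, (50, d)), rng.normal(0.5, 1.0, (50, d))])
labels = np.r_[np.zeros(50, int), np.ones(50, int)]
acc = np.mean(shrinkage_lda_predict(test, w, b) == labels)
print(f"held-out accuracy: {acc:.2f}")
```

With only 20 trials and 30 dimensions the unshrunk pooled covariance is singular, so plain LDA cannot even be computed here; the shrinkage term is what makes the solve possible.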

    Emotional processing in Parkinson's disease and anxiety: an EEG study of visual affective word processing

    A general problem in the design of an EEG-BCI system is the poor quality and low robustness of the extracted features, which affects overall performance. However, BCI systems that are applicable in real-time and outside clinical settings require high performance. Therefore, we have to improve the current methods for feature extraction. In this work, we investigated EEG source reconstruction techniques to enhance the extracted features based on a linearly constrained minimum variance (LCMV) beamformer. Beamformers allow for easy incorporation of anatomical data and are applicable in real-time. A 32-channel EEG-BCI system was designed for a two-class motor imagery (MI) paradigm. We optimized a synchronous system for two untrained subjects and investigated two aspects. First, we investigated the effect of using beamformers calculated on the basis of three different head models: a template 3-layered boundary element method (BEM) head model, a 3-layered personalized BEM head model, and a personalized 5-layered finite difference method (FDM) head model including white and gray matter, CSF, scalp, and skull tissue. Second, we investigated how the construction of the regions of interest, the areas of expected MI activity, influenced performance. On the one hand, they were chosen around electrodes C3 and C4, where hand MI activity is theoretically expected. On the other hand, they were constructed from the actual activated regions identified by an fMRI scan. Subsequently, an asynchronous system was derived for one of the subjects and an optimal balance between speed and accuracy was found. Lastly, a real-time application was made. These systems were evaluated by their accuracy, defined as the percentage of correct left and right classifications. From the real-time application, the information transfer rate (ITR) was also determined. An accuracy of 86.60 ± 4.40% was achieved for subject 1 and 78.71 ± 0.73% for subject 2, giving an average accuracy of 82.66 ± 2.57%. 
    We found that the use of a personalized FDM model improved the accuracy of the system, on average by 24.22% with respect to the template BEM model and by 5.15% with respect to the personalized BEM model. Including fMRI spatial priors did not improve accuracy. Personal fine-tuning largely resolved the robustness problems arising from differences in head geometry and neurophysiology between subjects. A real-time average accuracy of 64.26% was reached and the maximum ITR was 6.71 bits/min. We conclude that beamformers calculated with a personalized FDM model have great potential to improve feature extraction and, as a consequence, the performance of real-time BCI systems.
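The abstract reports an information transfer rate in bits/min without stating how it was computed. The standard Wolpaw definition, shown below, assumes N equiprobable classes, accuracy p, and errors spread evenly over the remaining N - 1 classes; the decision rate used in the example call is an assumption for illustration, not a figure from the study.

```python
import math

def wolpaw_itr(p, n_classes, trials_per_min):
    """Information transfer rate (bits/min) under the Wolpaw model:
    N equiprobable classes, accuracy p, errors spread evenly over
    the remaining N - 1 classes."""
    if p >= 1.0:
        bits = math.log2(n_classes)                # perfect accuracy
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * trials_per_min

# e.g. a two-class system at 64.26 % accuracy; the 20 decisions/min
# rate here is purely an assumed example value
print(f"{wolpaw_itr(0.6426, 2, 20):.2f} bits/min")
```

Note how steeply the rate falls near chance level: at p = 0.5 a two-class system transfers zero bits regardless of how fast decisions are made, which is why modest accuracy gains matter so much for ITR.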