
    Data-Driven Transducer Design and Identification for Internally-Paced Motor Brain Computer Interfaces: A Review

    Brain-Computer Interfaces (BCIs) are systems that establish a direct communication pathway between a user's brain activity and external effectors. They offer the potential to improve the quality of life of motor-impaired patients. Motor BCIs aim to allow severely motor-impaired users to regain limb mobility by controlling orthoses or prostheses. In particular, motor BCI systems benefit patients when the decoded actions reflect the users' intentions with an accuracy that enables them to interact efficiently with their environment. One of the main challenges of BCI systems is adapting the BCI's signal translation blocks to the user in order to reach a high decoding accuracy. This paper reviews the literature on data-driven and user-specific transducer design and identification approaches, focusing on internally-paced motor BCIs. In particular, continuous kinematic biomimetic and mental-task decoders are reviewed. Furthermore, static and dynamic decoding approaches, linear and non-linear decoding, and offline and real-time identification algorithms are considered. The current progress and challenges related to the design of clinically compatible motor BCI transducers are also discussed.
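    The distinction the review draws between static/dynamic and linear/non-linear decoders can be made concrete with a toy example. The sketch below is not taken from the paper; the variable names, data shapes, and the choice of ridge regression are assumptions for illustration. It shows the simplest case: a static, linear kinematic decoder identified offline from calibration data.

        # Illustrative only: a minimal linear "transducer" mapping neural
        # features to continuous hand velocity, identified by ridge regression.
        # Names, shapes, and parameters are assumptions, not from the paper.
        import numpy as np

        def identify_linear_decoder(X, Y, lam=1.0):
            """X: (n_samples, n_features) neural features, e.g. band power per channel.
               Y: (n_samples, n_outputs) kinematics, e.g. 2-D hand velocity.
               Returns weights W such that Y is approximately X @ W."""
            n_features = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

        def decode(X_new, W):
            # Static, linear decoding: one kinematic estimate per feature vector.
            return X_new @ W

        # Example with synthetic calibration data.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 32))           # 32 neural features
        true_W = rng.standard_normal((32, 2))
        Y = X @ true_W + 0.1 * rng.standard_normal((500, 2))
        W = identify_linear_decoder(X, Y)
        velocity_estimate = decode(X[:5], W)

    A dynamic decoder would additionally model the temporal evolution of the kinematic state (for example with a Kalman filter), and a real-time identification scheme would update W as new calibration samples arrive.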

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim to assist or to be used by patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenges of accurate signal decoding, especially in making a system work in real-time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems, whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNN) provide the possibility for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNN to generate realistic synthetic data is shown to be a potential solution in reducing the requirement for costly data collection. Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems. This includes the unyielding traditional experimentation procedure, the mandatory extended calibration stage and sustaining accurate signal decoding in real-time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real-world
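    As a rough illustration of DNN-based EEG decoding (not the architecture described in the thesis; the layer sizes, channel counts, and class count below are assumptions), a compact 1-D convolutional classifier might look like the following PyTorch sketch.

        # Illustrative sketch of a small CNN for EEG trial classification.
        # Architecture and hyperparameters are assumptions for illustration.
        import torch
        import torch.nn as nn

        class EEGConvNet(nn.Module):
            def __init__(self, n_channels=22, n_samples=256, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                    nn.BatchNorm1d(32),
                    nn.ELU(),
                    nn.AvgPool1d(4),
                    nn.Conv1d(32, 64, kernel_size=7, padding=3),
                    nn.BatchNorm1d(64),
                    nn.ELU(),
                    nn.AvgPool1d(4),
                )
                # Two pooling stages shrink the time axis by a factor of 16.
                self.classifier = nn.Linear(64 * (n_samples // 16), n_classes)

            def forward(self, x):
                # x: (batch, n_channels, n_samples) band-pass filtered EEG
                h = self.features(x)
                return self.classifier(h.flatten(start_dim=1))

        model = EEGConvNet()
        logits = model(torch.randn(8, 22, 256))   # 8 dummy EEG trials

    Such a model learns its feature representation directly from the (possibly noisy) signal, which is the property the thesis relies on to relax the rigid acquisition protocol.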

    Action prevents error : predictive processing without active inference

    According to predictive processing, minds relentlessly aim at a single goal: prediction error minimization. Prediction error minimization is said to explain everything the mind does, from perception to cognition to action. Here I focus on action. ‘Active inference’ is the standard approach to action in predictive processing. According to active inference, as developed by Friston and collaborators, action ensues when proprioceptive predictions generate prediction error at the motor periphery, and classical reflex arcs engage to quash the error. I raise a series of problems for active inference. I then offer an alternative approach on which action prevents error (APE) rather than quashing it. I argue that the action-prevents-error approach solves all the problems raised for active inference. In addition, I show how the alternative approach can be independently motivated by further commitments of predictive processing, and that it is compatible with other prominent approaches to sensorimotor psychology, such as optimal feedback control.
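    For context, the ‘active inference’ route to action that this paper criticizes is usually written as a gradient descent of action on precision-weighted proprioceptive prediction error; the sketch below is the standard textbook form (as presented in Friston and colleagues' free-energy work), not anything specific to this paper:

        \dot{a} \;=\; -\kappa \, \frac{\partial F}{\partial a}
                \;\approx\; -\kappa \, \frac{\partial s(a)}{\partial a} \,
                            \frac{s(a) - g(\mu)}{\sigma^{2}}

    Here s(a) is proprioceptive input as a function of action, g(μ) is the descending proprioceptive prediction, and σ² its expected variance; the reflex arc supplies ∂s/∂a, moving the body until the error s − g(μ) is quashed. The APE proposal, by contrast, has action selected so that this error never arises in the first place.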

    CLASSIFICATION BASED ON SEMI-SUPERVISED LEARNING: A REVIEW

    Semi-supervised learning is the class of machine learning that combines supervised and unsupervised learning to implement the learning process: conceptually, it sits between learning from labelled and from unlabelled data. In many cases, it enables large quantities of unlabelled data to be exploited alongside the usually limited collections of labelled data. In standard classification methods in machine learning, only a labelled collection is used to train the classifier. Labelled instances, however, are difficult to acquire, since they require the effort of human annotators. Unlabelled data, by contrast, is relatively easy to collect, but on its own offers few ways to train a classifier. By utilizing a large number of unlabelled inputs along with the labelled inputs, semi-supervised learning addresses this issue and produces a better training sample. Since semi-supervised learning requires less human effort and can yield greater accuracy, both in theory and in practice, it is of great interest.
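    A minimal, concrete instance of the idea is scikit-learn's self-training wrapper, which follows the convention of marking unlabelled samples with -1; the dataset, label fraction, and threshold below are assumptions chosen purely for illustration.

        # Illustrative semi-supervised classification with scikit-learn.
        # Unlabelled samples are marked with -1; the self-training wrapper
        # pseudo-labels them from the base classifier's confident predictions.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.semi_supervised import SelfTrainingClassifier

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

        # Pretend only about 5% of the labels were ever annotated.
        rng = np.random.default_rng(0)
        y_partial = y.copy()
        unlabelled = rng.random(len(y)) > 0.05
        y_partial[unlabelled] = -1

        base = LogisticRegression(max_iter=1000)
        model = SelfTrainingClassifier(base, threshold=0.9)
        model.fit(X, y_partial)          # uses labelled + pseudo-labelled data
        print(model.score(X, y))         # accuracy against the true labels

    Graph-based alternatives such as LabelSpreading in the same sklearn.semi_supervised module follow the identical -1 convention.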