
    A motor imagery based brain-computer interface system via swarm-optimized fuzzy integral and its application

    © 2016 IEEE. A brain-computer interface (BCI) system provides a convenient means of communication between the human brain and a computer, applicable not only to healthy people but also to people who suffer from motor neuron diseases (MNDs). Motor imagery (MI) is one well-known basis for designing electroencephalography (EEG)-based real-life BCI systems. However, EEG signals are often contaminated with severe noise and carry various uncertain, imprecise, and incomplete information streams. Therefore, this study proposes a spectrum ensemble based on a swarm-optimized fuzzy integral for integrating decisions from sub-band classifiers that are established by a sub-band common spatial pattern (SBCSP) method. First, SBCSP effectively extracts features from EEG signals, and multiple linear discriminant analysis (MLDA) is then employed for the MI classification task. Subsequently, particle swarm optimization (PSO) is used to regulate the subject-specific parameters that assign optimal confidence levels to the classifiers used in the fuzzy integral during the fuzzy fusion stage of the proposed system. Moreover, BCI systems usually tend to have complex architectures, be bulky in size, and require time-consuming processing. To overcome this drawback, a wireless and wearable EEG measurement system is investigated in this study. Finally, the experimental results show that the proposed system produces a significant improvement in terms of the receiver operating characteristic (ROC) curve. Furthermore, we demonstrate that a robotic arm can be reliably controlled using the proposed BCI system. This paper presents novel insights regarding the possibility of using the proposed MI-based BCI system in real-life applications.
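    The abstract above describes using PSO to tune subject-specific confidence parameters for the fuzzy fusion stage. As a rough illustration of the optimizer itself (not the authors' implementation), the following minimal global-best PSO sketch in Python minimizes a stand-in objective; the inertia and acceleration coefficients, swarm size, and iteration count are all illustrative assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=80, seed=0):
    """Minimal global-best particle swarm optimization sketch.

    f: objective mapping a list of floats to a scalar to minimize.
    Returns (best_position, best_value).
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # illustrative inertia / acceleration weights
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Pull each coordinate toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    In a system like the one described, `f` would instead be the classification error of the fused sub-band classifiers as a function of the fuzzy-measure confidence parameters.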

    Use of the Choquet Integral for Combination of Classifiers in P300 Based Brain-Computer Interface

    One of the key issues in the development of brain-computer interfaces (BCIs) is the improvement of their current information transfer rate. To achieve that objective, at least two aspects of BCI design should be considered: classification accuracy and protocol specification. In this paper we show how combination of classifiers using fuzzy measures and the Choquet integral can be applied in the context of EEG-based BCIs, and we study whether its use, together with an appropriate application protocol, can lead to an increase in the information transfer rate.
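    For readers unfamiliar with the operator, here is a minimal Python sketch of the discrete Choquet integral over classifier confidences. It is a generic illustration with made-up fuzzy-measure values, not the paper's implementation; the measure must be monotone with the full set mapped to 1.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of classifier scores w.r.t. fuzzy measure mu.

    scores: dict classifier -> confidence in [0, 1].
    mu: dict frozenset-of-classifiers -> measure in [0, 1];
        assumed monotone with mu(all classifiers) == 1.
    """
    # Sort classifiers by descending score and accumulate nested coalitions.
    order = sorted(scores, key=scores.get, reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for c in order:
        subset = subset | {c}
        # Weight each score by the measure increment of its coalition.
        total += scores[c] * (mu[subset] - prev)
        prev = mu[subset]
    return total
```

    With two classifiers scoring 0.8 and 0.5 and measure values 0.6, 0.5, and 1.0 for the singletons and the pair, the integral evaluates to 0.8·0.6 + 0.5·(1.0 − 0.6) = 0.68, rewarding agreement between strong sources more than an additive weighted average would.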

    Fuzzy decision-making fuser (FDMF) for integrating human-machine autonomous (HMA) systems with adaptive evidence sources

    © 2017 Liu, Pal, Marathe, Wang and Lin. A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. 
The experimental results in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions derived from human brain activity and computer vision techniques to improve overall performance on the RSVP recognition task. This demonstrates the potential benefits of integrating autonomous systems with BCI systems.
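    Fuzzy-integral fusers such as the FDMF typically construct the underlying fuzzy measure from per-source densities g_i via a Sugeno λ-measure, whose λ is the nonzero root of ∏(1 + λ·g_i) = 1 + λ. The Python sketch below (an illustration, not the authors' code) finds that root by bisection for two or more densities.

```python
def sugeno_lambda(densities, tol=1e-10):
    """Solve for lambda of a Sugeno lambda-fuzzy-measure.

    densities: per-source confidence densities g_i in (0, 1), two or more.
    Finds the nonzero root of prod(1 + lam * g_i) = 1 + lam:
    lam < 0 when sum(g) > 1, lam > 0 when sum(g) < 1, lam = 0 at sum 1.
    """
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0

    def f(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)

    if s > 1.0:
        lo, hi = -1.0 + 1e-9, -1e-9   # root lies in (-1, 0)
    else:
        lo, hi = 1e-9, 1.0            # root lies in (0, inf)
        while f(hi) < 0.0:            # expand until the sign flips
            hi *= 2.0
    while hi - lo > tol:              # plain bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    Once λ is known, the measure of any coalition of sources can be built recursively from the densities, which is what lets a fuser weight joint human-machine evidence rather than each source in isolation.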

    A Self-Adaptive Online Brain Machine Interface of a Humanoid Robot through a General Type-2 Fuzzy Inference System

    This paper presents a self-adaptive general type-2 fuzzy inference system (GT2 FIS) for online motor imagery (MI) decoding to build a brain-machine interface (BMI) and navigate a bipedal humanoid robot in a real experiment, using EEG brain recordings only. GT2 FISs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of electroencephalography (EEG) channels is limited and fixed, 2) there is no possibility of performing repeated user training sessions, and 3) the use of unsupervised and low-complexity feature extraction methods is desirable. The novel learning method presented in this paper consists of a self-adaptive GT2 FIS that can both incrementally update its parameters and evolve (a.k.a. self-adapt) its structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath-Geva algorithm in which every MI decoding class can be represented by multiple fuzzy rules (models). The effectiveness of the proposed method is demonstrated in a detailed BMI experiment in which 15 untrained users were able to accurately interface with a humanoid robot, in a single thirty-minute experiment, using signals from six EEG electrodes only.
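    The general type-2 machinery in this paper is involved; as a simpler taste of the type-2 idea, the sketch below computes the footprint-of-uncertainty bounds of an *interval* type-2 Gaussian membership function with an uncertain mean. This is a deliberate simplification for illustration, not the paper's GT2 Gath-Geva algorithm, and the parameter values in the usage note are made up.

```python
import math

def it2_gaussian(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with uncertain mean in [m1, m2].

    Returns (lower, upper) membership degrees bounding the footprint
    of uncertainty that type-2 fuzzy sets reason over.
    """
    def g(x, m):
        return math.exp(-0.5 * ((x - m) / sigma) ** 2)

    # Upper MF: 1 inside the uncertain-mean interval, nearest-mean
    # Gaussian outside it.
    if x < m1:
        upper = g(x, m1)
    elif x > m2:
        upper = g(x, m2)
    else:
        upper = 1.0
    # Lower MF: Gaussian w.r.t. the farther of the two candidate means.
    lower = min(g(x, m1), g(x, m2))
    return lower, upper
```

    For example, with mean uncertain in [0, 1] and σ = 1, any x inside [0, 1] has upper membership 1.0 while its lower membership reflects the worst-case mean; the gap between the two bounds is what lets a type-2 rule base absorb session-to-session EEG variability that would break a type-1 system.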

    A Dual-Modality Emotion Recognition System of EEG and Facial Images and its Application in Educational Scene

    With the development of computer science, people's interactions with computers, or with each other through computers, have become more frequent. Human-computer and human-to-human interactions are often seen in daily life: online chat, online banking services, facial recognition functions, and so on. Through text messaging alone, however, the effectiveness of information transfer can drop to around 30% of that of face-to-face communication. Communication becomes truly efficient only when we can see each other's reactions and feel each other's emotions. This issue is especially noticeable in the educational field. Offline teaching is the classic teaching style, in which teachers can determine a student's present emotional state based on their expressions and alter teaching methods accordingly. With the advancement of computers and the impact of Covid-19, an increasing number of schools and educational institutions are exploring online or video-based instruction. In such circumstances, it is difficult for teachers to get feedback from students. Therefore, this thesis proposes an emotion recognition method for educational scenarios, which can help teachers quantify the emotional state of students in class and guide them in exploring or adjusting teaching methods. Text, physiological signals, gestures, facial images, and other data types are commonly used for emotion recognition. Among these, data collection for facial-image emotion recognition is particularly convenient and fast, although people may subjectively conceal their true emotions, leading to inaccurate recognition results. Emotion recognition based on EEG signals can compensate for this drawback. Taking the aforementioned issues into account, this thesis first employs SVM-PCA to classify emotions in EEG data and then employs a deep CNN to classify the emotions in the subject's facial images. 
Finally, D-S evidence theory is used to fuse and analyze the two classification results, obtaining a final emotion recognition accuracy of 92%. The specific research content of this thesis is as follows: 1) The background of emotion recognition systems used in teaching scenarios is discussed, as well as the use of various single-modality systems for emotion recognition. 2) Detailed analysis of EEG emotion recognition based on SVM. The theory of EEG signal generation, frequency-band characteristics, and emotional dimensions is introduced. The EEG signal is first filtered and processed for artifact removal. The processed EEG signal is then used for feature extraction using wavelet transforms. It is finally fed into the proposed SVM-PCA for emotion recognition, reaching an accuracy of 64%. 3) The proposed deep CNN is used to recognize emotions in facial images. First, the Adaboost algorithm is used to detect and crop the face area in the image, and gray-level balancing is performed on the cropped image. The preprocessed images are then trained and tested using the deep CNN, with an average accuracy of 88%. 4) A fusion method based on the decision-making layer. Data fusion at the decision level is carried out on the results of EEG emotion recognition and facial-expression emotion recognition. The final dual-modality emotion recognition results, with a system accuracy of 92%, are obtained using D-S evidence theory. 5) The dual-modality emotion recognition system's data collection procedure is designed. Based on this procedure, actual data in the educational scene are collected and analyzed, and the final accuracy of the dual-modality system is 82%. Teachers can use the emotion recognition results as a guide and reference to improve their teaching efficacy.
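    The decision-level fusion step described above rests on Dempster's rule of combination. The following minimal Python sketch illustrates the rule on a two-emotion frame of discernment; the mass values are made up for illustration and this is not the thesis's implementation.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dict frozenset (focal element) -> mass; each sums to 1.
    Returns the combined, conflict-normalized mass assignment.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                # Mass products of compatible hypotheses reinforce
                # their intersection.
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb   # disjoint focal elements conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

    For instance, if the EEG classifier assigns mass 0.6 to "happy" (0.4 uncommitted) and the facial classifier assigns 0.7 to "happy" (0.3 uncommitted), the fused belief in "happy" rises to 0.88, which mirrors how two moderately confident modalities can jointly reach the higher accuracy reported in the thesis.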

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary field between technologies of effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information-processing architectures, while the understanding of the human-brain cognition process broadens the ways in which the computer can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Real-time EMG based pattern recognition control for hand prostheses : a review on existing methods, challenges and future implementation

    Upper limb amputation is a condition that significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, using signals from residual stump muscles, aims to restore the function of such lost limbs seamlessly. Unfortunately, the acquisition and use of such myosignals are cumbersome and complicated, and once acquired, they usually require heavy computational power to be turned into a user control signal. The transition to a practical prosthesis solution is still challenged by various factors, particularly those related to the fact that each amputee has different mobility, muscle contraction forces, limb positional variations, and electrode placements. Thus, a solution that can adapt or otherwise tailor itself to each individual is required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (movement of users and contraction of the muscle) affecting traditional electromyography (EMG) pattern recognition methods. Although recent intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their efficiency has seldom been assessed and demonstrated in real-world (amputee) applications. This review paper examines the suitability of upper limb prosthesis (ULP) inventions in the healthcare sector from a technical control perspective, with particular focus on real-world applications and the use of pattern recognition control with amputees. We first review the overall structure of pattern recognition schemes for myo-control prosthetic systems and then discuss their real-time use on amputee upper limbs. Finally, we conclude the paper with a discussion of existing challenges and future research recommendations.
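    The EMG pattern recognition pipelines surveyed in reviews like this one typically slide a window over the raw signal and extract the classic Hudgins time-domain features before classification. A minimal Python sketch of those four features follows; the zero-crossing threshold value is an illustrative assumption, and real systems tune it to the noise floor of the electrodes.

```python
def td_features(window, zc_thresh=0.01):
    """Hudgins time-domain features from one EMG window (list of floats):
    mean absolute value (MAV), waveform length (WL),
    zero crossings (ZC), and slope-sign changes (SSC)."""
    n = len(window)
    # MAV: average rectified amplitude, a proxy for contraction strength.
    mav = sum(abs(x) for x in window) / n
    # WL: cumulative length of the waveform, capturing signal complexity.
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))
    # ZC: sign changes exceeding a noise threshold (frequency content).
    zc = sum(
        1
        for i in range(n - 1)
        if window[i] * window[i + 1] < 0
        and abs(window[i] - window[i + 1]) >= zc_thresh
    )
    # SSC: local extrema count, another cheap frequency-domain proxy.
    ssc = sum(
        1
        for i in range(1, n - 1)
        if (window[i] - window[i - 1]) * (window[i] - window[i + 1]) > 0
    )
    return {"MAV": mav, "WL": wl, "ZC": zc, "SSC": ssc}
```

    In a real-time controller these features would be computed per channel over overlapping windows (commonly 150-250 ms) and concatenated into the feature vector fed to the classifier.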