
    Smart Bagged Tree-based Classifier optimized by Random Forests (SBT-RF) to Classify Brain-Machine Interface Data

    Brain-Computer Interface (BCI) is a new technology that uses electrodes and sensors to connect machines and computers with the human brain to improve a person's mental performance. Human intentions and thoughts are analyzed and recognized using BCI and translated into Electroencephalogram (EEG) signals. However, certain brain signals may contain redundant information, which makes classification ineffective; relevant features are therefore essential for improving classification performance. Thus, feature selection is employed to eliminate redundant data before classification, reducing computation time. BCI Competition III Dataset IVa was used to investigate the efficacy of the proposed system. A Smart Bagged Tree-based Classifier optimized by Random Forests (SBT-RF) is presented to rank feature importance for selecting and classifying the data. As a result, SBT-RF improves the mean accuracy on the dataset, decreases computation cost and training time, and increases prediction speed. Furthermore, fewer features mean fewer electrodes, lowering the risk of damage to the brain. The proposed algorithm achieves the highest average accuracy, ~98%, compared to other relevant algorithms in the literature. SBT-RF is compared to state-of-the-art algorithms on the following performance metrics: Confusion Matrix, ROC-AUC, F1-Score, Training Time, Prediction Speed, and Accuracy.
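The two-stage idea described in the abstract — Random Forest feature-importance ranking followed by a bagged decision-tree classifier on the reduced feature set — can be sketched as follows. This is a minimal illustration on synthetic data standing in for Dataset IVa features; the feature count, top-k cutoff, and tree counts are assumptions, not the paper's settings.

```python
# Sketch: rank features with Random Forest importances, keep the top-k,
# then train a bagged decision-tree classifier on the reduced feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))           # 400 trials x 50 synthetic EEG features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels driven by 2 informative features

# Stage 1: a Random Forest scores feature importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:10]  # keep the 10 best features

# Stage 2: bagged decision trees are trained on the selected features only.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top_k], y, random_state=0)
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(X_tr, y_tr)
acc = bag.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Selecting features before bagging is what yields the reported reductions in training time and electrode count: fewer input columns means fewer channels to record.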

    Computer Based Behavioral Biometric Authentication via Multi-Modal Fusion

    Biometric computer authentication has an advantage over password and access card authentication in that it is based on something you are, which is not easily copied or stolen. One way of performing biometric computer authentication is to use behavioral tendencies associated with how a user interacts with the computer. However, behavioral biometric authentication error rates are much higher than those of more traditional authentication methods. This thesis presents a behavioral biometric system that fuses user data from keyboard, mouse, and Graphical User Interface (GUI) interactions. Combining the modalities results in a more accurate authentication decision based on a broader view of the user's computer activity, while requiring less user interaction to train the system than previous work. Testing over 30 users shows that fusion techniques significantly improve behavioral biometric authentication accuracy over single modalities on their own. Two fusion techniques are presented: feature fusion and decision-level fusion. Using an ensemble-based classification method, the decision-level fusion technique improves the FAR by 0.86% and the FRR by 2.98% over the best individual modality.
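Decision-level fusion, one of the two techniques the thesis compares, can be sketched as one classifier per modality whose per-sample decisions are combined by majority vote. The modality names match the abstract, but the feature dimensions, classifiers, and synthetic data below are illustrative assumptions, not the thesis's actual pipeline.

```python
# Sketch of decision-level fusion: one classifier per modality (keyboard,
# mouse, GUI) votes on whether a sample belongs to the claimed user.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 300
modalities = {
    "keyboard": rng.normal(size=(n, 8)),   # e.g. keystroke timing features
    "mouse":    rng.normal(size=(n, 6)),   # e.g. movement/click features
    "gui":      rng.normal(size=(n, 4)),   # e.g. window/menu usage features
}
y = rng.integers(0, 2, size=n)             # 1 = genuine user, 0 = impostor

# Train one classifier per modality on its own feature set.
clfs = {m: RandomForestClassifier(n_estimators=25, random_state=1).fit(X, y)
        for m, X in modalities.items()}

# Fuse at the decision level: accept when at least 2 of 3 modalities agree.
votes = np.stack([clfs[m].predict(modalities[m]) for m in modalities])
fused = (votes.sum(axis=0) >= 2).astype(int)
```

Feature fusion, by contrast, would concatenate the three feature matrices into one wide input before training a single classifier.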

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.

    Signal Processing of Electroencephalogram for the Detection of Attentiveness towards Short Training Videos

    This research has developed a novel method which uses an easy-to-deploy, single dry electrode wireless electroencephalogram (EEG) collection device as input to an automated system that measures indicators of a participant's attentiveness while they watch a short training video. The results are promising, including 85% or better accuracy in identifying whether a participant is watching a segment of video from a boring scene or lecture, versus a segment from an attentiveness-inducing active lesson or memory quiz. In addition, the final system produces an ensemble average of attentiveness across many participants, pinpointing areas in the training videos that induce peak attentiveness. Qualitative analysis of the results is also very promising: the system produces attentiveness graphs for individual participants, and these triangulate well with the thoughts and feelings those participants had during different parts of the videos, as described in their own words. As distance learning and computer-based training become more popular, it is of great interest to measure whether students are attentive to recorded lessons and short training videos. This research was motivated by that interest, as well as by recent advances in electronic and computer engineering's use of biometric signal analysis for the detection of affective (emotional) response. Signal processing of EEG has proven useful in measuring alertness and emotional state, and even in very specific applications such as whether participants will recall television commercials days after they have seen them. This research extended these advances by creating an automated system that measures attentiveness towards short training videos. The bulk of the research focused on electrical and computer engineering, specifically the optimization of signal processing algorithms for this particular application.
    A review of existing EEG signal processing and feature extraction methods shows a common subdivision of the steps used in different EEG applications: hardware sensing, filtering and digitizing; noise removal; chopping the continuous EEG data into windows for processing; normalization; transformation to extract frequency or scale information; treatment of phase or shift information; and additional post-transformation noise reduction techniques. A large degree of variation exists in most of these steps within the currently documented state of the art. This research connected these varied methods into a single holistic model that allows for comparison and selection of optimal algorithms for this application, providing a structured and orderly comparison of individual signal analysis and feature extraction methods. The study created a concise algorithmic approach to examining all the aforementioned steps, establishing the framework for a systematic approach that followed rigorous participant cross-validation so that options could be tested, compared, and optimized. Novel signal analysis methods were also developed, using new techniques to choose parameters, which greatly improved performance. The research also utilized machine learning to automatically categorize extracted features into measures of attentiveness, improving on existing machine learning with novel methods, including the use of per-participant baselines with kNN. This provided an optimal solution that extends current EEG signal analysis methods used in other applications and refines them for measuring attentiveness towards short training videos. These algorithms were shown to be best via selection of optimal signal analysis and machine learning steps identified through both n-fold and participant cross-validation.
    This new system, which uses signal processing of EEG to detect attentiveness towards short training videos, represents a significant advance in the field of attentiveness measurement for short training videos.
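The pipeline steps enumerated above (filtering, windowing, frequency transformation, normalization, then machine-learning categorization with per-participant baselines) can be sketched end to end. All parameters below — 250 Hz sampling rate, 2 s windows, alpha/beta band powers, five baseline windows, k=3 — are illustrative assumptions on synthetic single-channel data, not the dissertation's optimized values.

```python
# Illustrative EEG pipeline: band-pass filter, fixed windows, FFT band power
# features, per-participant baseline normalization, then kNN categorization.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neighbors import KNeighborsClassifier

FS = 250                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(2)
eeg = rng.normal(size=FS * 60)        # one minute of synthetic single-channel EEG

# 1) Noise removal: 4th-order band-pass, 1-40 Hz.
b, a = butter(4, [1 / (FS / 2), 40 / (FS / 2)], btype="band")
filtered = filtfilt(b, a, eeg)

# 2) Chop the continuous signal into non-overlapping 2 s windows.
win = FS * 2
windows = filtered[: len(filtered) // win * win].reshape(-1, win)

# 3) Transform: mean spectral power in alpha (8-12 Hz) and beta (13-30 Hz).
freqs = np.fft.rfftfreq(win, 1 / FS)
power = np.abs(np.fft.rfft(windows, axis=1)) ** 2
feats = np.column_stack([
    power[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1),
    power[:, (freqs >= 13) & (freqs <= 30)].mean(axis=1),
])

# 4) Per-participant baseline: normalize by this participant's first windows.
baseline = feats[:5].mean(axis=0)
feats = feats / baseline

# 5) kNN categorizes windows as attentive / not attentive (labels synthetic).
labels = rng.integers(0, 2, size=len(feats))
knn = KNeighborsClassifier(n_neighbors=3).fit(feats, labels)
preds = knn.predict(feats)
```

Step 4 is the per-participant baseline idea from the abstract: dividing by each participant's own resting features puts all participants on a comparable scale before the shared kNN model is applied.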

    Mental state estimation for brain-computer interfaces

    Mental state estimation is potentially useful for the development of asynchronous brain-computer interfaces. In this study, four mental states were identified and decoded from the electrocorticograms (ECoGs) of six epileptic patients engaged in a memory reach task. A novel signal analysis technique was applied to high-dimensional, statistically sparse ECoGs recorded by a large number of electrodes. The strength of the proposed technique lies in its ability to jointly extract the spatial and temporal patterns responsible for encoding mental state differences. As such, the technique offers a systematic way of analyzing the spatiotemporal aspects of brain information processing and may be applicable to a wide range of spatiotemporal neurophysiological signals.

    A fragmentising interface to a large corpus of digitized text: (Post)humanism and non-consumptive reading via features

    While the idea of distant reading does not rule out the possibility of close reading of the individual components of the corpus of digitized text that is being distant-read, this ceases to be the case when parts of the corpus are, for reasons relating to intellectual property, not accessible for consumption through downloading followed by close reading. Copyright restrictions on material in collections of digitized text such as the HathiTrust Digital Library (HTDL) necessitate providing facilities for non-consumptive reading, one of the approaches to which consists of providing users with features from the text in the form of small fragments of text, instead of the text itself. We argue that, contrary to expectation, the fragmentary quality of the features generated by the reading interface does not necessarily imply that the mode of reading enabled and mediated by these features points in an anti-humanist direction. We pose the fragmentariness of the features as paradigmatic of the fragmentation with which digital techniques tend, more generally, to trouble the humanities. We then generalize our argument to put our work on feature-based non-consumptive reading in dialogue with contemporary debates that are currently taking place in philosophy and in cultural theory and criticism about posthumanism and agency. While the locus of agency in such a non-consumptive practice of reading does not coincide with the customary figure of the singular human subject as reader, it is possible to accommodate this fragmentising practice within the terms of an ampler notion of agency imagined as dispersed across an entire technosocial ensemble. When grasped in this way, such a practice of reading may be considered posthumanist but not necessarily antihumanist.

    P300 detection and characterization for brain computer interface

    Advances in cognitive neuroscience and brain imaging technologies have enabled the brain to interface directly with the computer. This technique is called a Brain-Computer Interface (BCI). The ability is made possible through the use of sensors that can monitor some of the physical processes that occur inside the brain. Researchers have used these kinds of technologies to build brain-computer interfaces: computers or communication devices can be controlled using the signals produced in the brain. This can be a real boon for all those who are unable to communicate with the outside world directly, as they can express their emotions or feelings using this technology. In BCI, oddball paradigms are used to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. The basic principle of a P300 speller is the detection of P300 waves, which allows the user to write characters. Two classification problems are encountered in the P300 speller. The first is to detect the presence of a P300 in the electroencephalogram (EEG). The second is to combine different P300 signals to determine the right character to spell. In this thesis, both parts, classification as well as characterization, are presented in a simple and lucid way. First, data were obtained using data set 2 of the third BCI competition. The raw data were processed in MATLAB and the corresponding feature matrices were obtained. Techniques such as normalization, feature extraction, and feature reduction of the data are explained through the contents of this thesis. An ANN algorithm is then used to classify the data into P300 and non-P300 waves. Finally, character recognition is carried out through the use of multiclass classifiers that enable the user to determine the right character to spell.
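The two classification problems named above can be sketched as a pipeline: an ANN that detects P300 vs. non-P300 epochs, then character selection by scoring the row and column flashes of the speller grid. The data, 6x6 grid layout, and network size below are synthetic illustrations, not the thesis's actual configuration.

```python
# Sketch of the two P300 speller stages: (1) ANN detects P300 epochs,
# (2) row/column scores are combined to pick the spelled character.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n_epochs, n_feats = 600, 64
X = rng.normal(size=(n_epochs, n_feats))   # synthetic EEG epoch features
y = rng.integers(0, 2, size=n_epochs)      # 1 = epoch contains a P300

# Stage 1: an ANN learns to detect the presence of a P300 in an epoch.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=3).fit(X, y)

# Stage 2: for one character, score the epochs recorded during the flash of
# each of the 6 rows and 6 columns; the best row/column pair names the letter.
row_scores = ann.predict_proba(rng.normal(size=(6, n_feats)))[:, 1]
col_scores = ann.predict_proba(rng.normal(size=(6, n_feats)))[:, 1]
grid = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])
char = grid[row_scores.argmax(), col_scores.argmax()]
```

In practice, stage 2 averages scores over many repeated flashes per row and column before taking the argmax, which is what makes character selection robust to single-epoch detection errors.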

    The evolution of AI approaches for motor imagery EEG-based BCIs

    Motor Imagery (MI) electroencephalography (EEG) based Brain-Computer Interfaces (BCIs) allow direct communication between humans and machines by exploiting the neural pathways engaged by motor imagination. These systems therefore open the possibility of developing applications spanning from the medical field to the entertainment industry. In this context, Artificial Intelligence (AI) approaches become of fundamental importance, especially for providing correct and coherent feedback to BCI users. Moreover, publicly available datasets in the field of MI EEG-based BCIs have been widely exploited to test new techniques from the AI domain. In this work, AI approaches applied to datasets collected in different years and with different devices, but with coherent experimental paradigms, are investigated with the aim of providing a concise yet sufficiently comprehensive survey of the evolution and influence of AI techniques on MI EEG-based BCI data.
    Comment: Submitted to the Italian Workshop on Artificial Intelligence for Human Machine Interaction (AIxHMI 2022), December 02, 2022, Udine, Italy