960 research outputs found
Classification of Event-Related Potentials Associated with Response Errors in Actors and Observers Based on Autoregressive Modeling
Event-Related Potentials (ERPs) provide non-invasive measurements of the electrical activity on the scalp related to the processing of stimuli and preparation of responses by the brain. In this paper, an ERP-signal classification method is proposed for discriminating between ERPs of correct and incorrect responses of actors and of observers seeing an actor making such responses. The classification method targeted signals containing error-related negativity (ERN) and error positivity (Pe) components, which are typically associated with error processing in the human brain. Feature extraction consisted of Multivariate Autoregressive modeling combined with the Simulated Annealing technique. The resulting information was subsequently classified by means of an Artificial Neural Network (ANN) using the back-propagation algorithm under the “leave-one-out cross-validation” scenario, and by the Fuzzy C-Means (FCM) algorithm. The ANN consisted of a multi-layer perceptron (MLP). The approach yielded classification rates of up to 85%, both for the actors’ correct and incorrect responses and for the corresponding ERPs of the observers. The electrodes needed for such classifications were situated mainly at central and frontal areas. Results provide indications that the classification of the ERN is achievable. Furthermore, the availability of the Pe signals, in addition to the ERN, improves the classification, and this is more pronounced for observers’ signals. The proposed ERP-signal classification method provides a promising tool to study error detection and observational-learning mechanisms in performance monitoring and joint-action research, in both healthy and patient populations.
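The autoregressive feature extraction described above can be sketched as follows. This is a simplified, univariate illustration (the paper uses multivariate AR modeling with Simulated Annealing for model selection); the function name and the least-squares fitting approach are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def ar_features(signal, order):
    """Fit an autoregressive model of the given order by least squares
    and return the coefficient vector, usable as a feature set.
    Model: x[t] = a1*x[t-1] + ... + a_p*x[t-p] + noise."""
    n = len(signal)
    # Lagged design matrix: row for time t holds [x[t-1], ..., x[t-order]]
    X = np.column_stack(
        [signal[order - k - 1 : n - k - 1] for k in range(order)]
    )
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```

In a full pipeline, such coefficients would be computed per channel and epoch, then fed to the MLP or FCM classifier.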
A hybrid brain-computer interface based on motor intention and visual working memory
Non-invasive electroencephalography (EEG) based brain-computer interfaces (BCIs) can provide people with disabilities an alternative means to communicate with and control external assistive devices. A hybrid BCI was designed and developed to support two types of system: control and monitoring.
Our first goal is to create a signal decoding strategy that allows people with limited motor control to have more command over potential prosthetic devices. Eight healthy subjects were recruited to perform visual-cue-directed reaching tasks. Eye and motion artifacts were identified and removed to ensure that the subjects' visual fixation on the target locations would have little or no impact on the final result. We applied Fisher Linear Discriminant (FLD) analysis for single-trial classification of the EEG to decode the intended arm movement in the left, right, and forward directions (before the onsets of actual movements). The mean EEG signal amplitude near the PPC region 271-310 ms after visual stimulation was found to be the dominant feature for the best classification results. A signal scaling factor was developed and found to improve the classification accuracy from 60.11% to 93.91% in the two-class (left versus right) scenario. This result demonstrates great promise for BCI neuroprosthetic applications, as motor intention decoding can serve as a prelude to the classification of imagined motor movement to assist in motor rehabilitation, such as prosthetic-limb or wheelchair control.
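A minimal Fisher Linear Discriminant for the two-class (left versus right) setting can be sketched as below. This is a generic FLD on arbitrary feature vectors, shown only to illustrate the classifier family the abstract names; it is not the authors' exact pipeline, and the function names are hypothetical:

```python
import numpy as np

def fld_fit(X0, X1):
    """Fit Fisher's linear discriminant for two classes of feature rows.
    Returns the projection direction w and a midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def fld_predict(X, w, threshold):
    """Project onto w and assign class 1 above the threshold, else 0."""
    return (X @ w > threshold).astype(int)
```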
The second goal is to develop adaptive training to improve cognitive abilities, both for patients with low visual working memory (VWM) capacity and for healthy individuals who seek to enhance their intellectual performance. VWM plays a critical role in preserving and processing information. It is associated with attention, perception, and reasoning, and its capacity can be used as a predictor of cognitive abilities. Recent evidence has suggested that, with training, one can enhance VWM capacity and attention over time. Not only can these studies reveal the characteristics of VWM load and the influences of training, they may also provide effective rehabilitative means for patients with low VWM capacity. However, few studies have investigated VWM over a long period of time, beyond 5 weeks.
In this study, a combined behavioral and EEG approach was used to investigate VWM load, gain, and transfer. The results reveal that VWM capacity correlates with reaction time and contralateral delay amplitude (CDA). The approximate “magic number 4” was observed in the event-related potential (ERP) waveforms, with an average capacity of 2.8 items across 15 participants. In addition, the findings indicate that VWM capacity can be improved through adaptive training. Furthermore, after training exercises, participants from the training group improved their performance accuracy dramatically compared to the control group. Adaptive training gains on non-trained tasks could also be observed at 12 weeks after training.
Therefore, we conclude that all participants can benefit from training gains, and that augmented VWM capacity can be sustained over a long period of time. Our results suggest that this form of training can significantly improve cognitive function and may be useful for enhancing user performance on neuroprosthetic devices.
Neural network security and optimization for single-person authentication using electroencephalogram data
Includes bibliographical references. 2022 Fall. Security is an important focus for devices that use biometric data, and as such, security around authentication needs to be considered. This is true for brain-computer interfaces (BCIs), which often use electroencephalogram (EEG) data as inputs and neural network classification to determine their function. EEG data can also serve as a form of biometric authentication, which would contribute to the security of these devices. Neural networks have also used a method known as ablation to improve their efficiency. In light of this, the goal of this research is to determine whether neural network ablation can also be used as a method to improve security, by reducing a network's learning capabilities to authenticating only a given target and preventing adversaries from training new data to be authenticated. Data on the change in entropy of the networks' weight values after training was also collected, for the purpose of determining patterns in weight distribution. A set of ablated networks was compared to a set of baseline (non-ablated) networks for five targets chosen randomly from a data set of 12 people. The results showed that ablated networks maintained accuracy through the ablation process, but that they did not perform as well as the baseline networks. The change in performance between single-target authentication and target-plus-invader authentication was also examined, but no significant results were found. Furthermore, the change in entropy differed between baseline networks and ablated networks, as well as between single-target authentication and target-plus-invader authentication, for all networks. Ablation was determined to have potential for security applications that warrants further exploration, and weight distribution was found to have some correlation with the complexity of an input to a network.
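The weight-entropy measurement mentioned above can be illustrated with a histogram-based Shannon entropy over a network's weight values. The binning scheme below is an assumption for the sketch, not the thesis's exact procedure:

```python
import numpy as np

def weight_entropy(weights, bins=32):
    """Shannon entropy (in bits) of a weight distribution, estimated by
    binning the weight values into a histogram. Higher values indicate
    weights spread across many magnitudes; 0 means all weights identical."""
    hist, _ = np.histogram(weights, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```

Comparing this quantity before and after training (or before and after ablation) gives one way to track the change in weight distribution the abstract describes.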
S-Rocket: Selective Random Convolution Kernels for Time Series Classification
Random convolution kernel transform (Rocket) is a fast, efficient, and novel approach for time series feature extraction using a large number of independent, randomly initialized 1-D convolution kernels of different configurations. The output of the convolution operation on each time series is represented by its proportion of positive values (PPV). A concatenation of the PPVs from all kernels forms the input feature vector to a Ridge regression classifier. Unlike typical deep learning models, the kernels are not trained and there is no weighted/trainable connection between kernels, or between the concatenated features and the classifier. Since these kernels are generated randomly, a portion of them may not contribute positively to the performance of the model. Hence, selecting the most important kernels and pruning the redundant and less important ones is necessary to reduce computational complexity and accelerate inference of Rocket for applications on edge devices. Selection of these kernels is a combinatorial optimization problem. In this paper, we propose a scheme for selecting these kernels while maintaining the classification performance. First, the original model is pre-trained at full capacity. Then, a population of binary candidate state vectors is initialized, where each element of a vector represents the active/inactive status of a kernel. A population-based optimization algorithm evolves the population in order to find the best state vector, which minimizes the number of active kernels while maximizing the accuracy of the classifier. The objective function is a linear combination of the total number of active kernels and the classification accuracy of the pre-trained classifier with the active kernels. Finally, the kernels marked active in the best state vector are used to train the Ridge regression classifier.
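The Rocket-style PPV features and the binary state vectors can be sketched as follows. The kernel lengths, the lack of dilation/bias, and the masking helper are simplifying assumptions for illustration, not the paper's exact kernel configuration:

```python
import numpy as np

def random_kernels(num_kernels, rng=None):
    """Randomly initialized 1-D convolution kernels of varying lengths."""
    rng = rng or np.random.default_rng(0)
    return [rng.standard_normal(rng.choice([5, 7, 9]))
            for _ in range(num_kernels)]

def ppv_features(series, kernels):
    """One feature per kernel: the proportion of positive values (PPV)
    in that kernel's convolution output over the series."""
    return np.array([(np.convolve(series, k, mode='valid') > 0).mean()
                     for k in kernels])

def apply_state(features, state):
    """Apply a binary state vector: keep only active kernels' features."""
    return features[np.asarray(state, dtype=bool)]
```

A population-based optimizer would then score candidate state vectors by classifier accuracy minus a penalty on the number of active kernels, as the abstract describes.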
Smart Bagged Tree-based Classifier optimized by Random Forests (SBT-RF) to Classify Brain- Machine Interface Data
Brain-Computer Interface (BCI) is a new technology that uses electrodes and sensors to connect machines and computers with the human brain to improve a person's mental performance. BCIs also analyze and recognize human intentions and thoughts, which are captured as Electroencephalogram (EEG) signals. However, certain brain signals may contain redundant information, making classification ineffective; relevant features are therefore essential for enhancing classification performance. Thus, feature selection has been employed to eliminate redundant data before classification and reduce computation time. BCI Competition III Dataset IVa was used to investigate the efficacy of the proposed system. A Smart Bagged Tree-based Classifier (SBT-RF) technique is presented to determine the importance of the features for selecting and classifying the data. As a result, SBT-RF improves the mean accuracy on the dataset. It also decreases computation cost and training time and increases prediction speed. Furthermore, fewer features mean fewer electrodes, thus lowering the risk of damage to the brain. The proposed algorithm achieves the highest average accuracy of ~98% compared to other relevant algorithms in the literature. SBT-RF is compared to state-of-the-art algorithms based on the following performance metrics: Confusion Matrix, ROC-AUC, F1-Score, Training Time, Prediction Speed, and Accuracy.
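Feature importance of the kind used for selection here can be approximated generically by permutation importance, sketched below. This is a model-agnostic stand-in for illustration, not the SBT-RF implementation; the function name and interface are assumptions:

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Importance of each feature = drop in accuracy when that feature's
    column is shuffled across samples. A feature whose shuffling barely
    changes accuracy is a candidate for removal before classification."""
    rng = rng or np.random.default_rng(0)
    base = (predict(X) == y).mean()
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(base - (predict(Xp) == y).mean())
    return np.array(scores)
```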
Intelligent data mining using artificial neural networks and genetic algorithms: techniques and applications
Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing and application of a novel ISs based DM technique. The unique contribution of this thesis is in the implementation of a hybrid IS DM technique (Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustrations of several applications solved by this novel technique.
GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine in performing classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before the commencement of training algorithms. The Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training. In the third step, mathematical programming based rule extraction is not only used to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on data samples associated with each rule. Therefore, the methodology can provide regression rules and features not only in the polyhedrons with data instances, but also in the polyhedrons without data instances.
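The first step, GA-based input variable selection, can be sketched as a simple genetic algorithm over binary masks. This is a generic illustration of the idea (elitism, single-point crossover, point mutation); it omits GNMM's adaptive mutation rate and appearance percentage, and all names and parameters are assumptions:

```python
import random

def ga_select(fitness, n_features, pop_size=20, generations=30, rng=None):
    """Evolve binary masks over input variables; fitness scores a mask
    (e.g. cross-validated model accuracy minus a size penalty)."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # keep the elite group
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)     # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In GNMM the fitness of a mask would come from an MLP trained on the selected inputs; here any scoring function can be plugged in.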
A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/Electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all six datasets to explore its effectiveness, but the emphasis differed between datasets. For example, the emphasis of Data I and II was to give a detailed illustration of how GNMM works; Data III and IV aimed to show how to deal with difficult classification problems; the aim of Data V was to illustrate the averaging effect of GNMM; and finally Data VI was concerned with GA parameter selection and benchmarking GNMM against other IS DM techniques such as Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II & III) or public domains (i.e. Data VI), where previous results were present in the literature, were also used to benchmark GNMM’s effectiveness.
As a closely integrated system, GNMM has the merit that it needs little human interaction. With some predefined parameters, such as the GA’s crossover probability and the shape of the ANNs’ activation functions, GNMM is able to process raw data until human-interpretable rules are extracted. This is an important feature in practice, as quite often users of a DM system have little or no need to fully understand the internal components of such a system. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of filtering out irrelevant and noisy variables, improving the accuracy of the model, making the ANN structure less complex and easier to understand, and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
Incorporating Structural Plasticity Approaches in Spiking Neural Networks for EEG Modelling
Structural Plasticity (SP) in the brain is a process that allows neuronal structure to change in response to learning. Spiking Neural Networks (SNNs) are an emerging form of artificial neural network that use brain-inspired techniques to learn. However, the application of SP in SNNs, and its impact on overall learning and network behaviour, is rarely explored. In the present study, we use an SNN with a single hidden layer to apply SP in classifying Electroencephalography signals from two publicly available datasets. We took classification accuracy as the measure of learning capability and applied metaheuristics to derive the optimised number of neurons for the hidden layer, along with other hyperparameters of the network. The optimised structure was then compared with overgrown and undergrown structures in terms of accuracy, stability, and network properties. Networks with SP yielded ~94% and ~92% accuracies in classifying wrist positions and mental states (stressed vs. relaxed), respectively. The same SNN developed for mental-state classification produced ~77% and ~73% accuracies in classifying arousal and valence. Moreover, the networks with SP demonstrated superior performance stability during iterative random initiations. Interestingly, these networks had a smaller number of inactive neurons and a preference for lowered neuron firing thresholds. This research highlights the importance of systematically selecting the hidden-layer neurons over arbitrary settings, particularly for SNNs using Spike Time Dependent Plasticity learning, and provides findings that may lead to the development of SP learning algorithms for SNNs.
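The Spike Time Dependent Plasticity rule underlying the learning described here can be sketched as a pair-based weight update. The time constants and amplitudes below are common textbook values, not the study's settings:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (dt = t_post - t_pre > 0, in ms); depress
    otherwise. The synaptic weight is clipped to [0, w_max]."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(w_max, w))
```

Lowering a neuron's firing threshold, as the optimised networks preferred, increases how often such updates fire and thus how actively a synapse participates in learning.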