1,865 research outputs found

    Nonlinear denoising of transient signals with application to event related potentials

    Full text link
    We present a new wavelet-based method for the denoising of event-related potentials (ERPs), employing techniques recently developed for the paradigm of deterministic chaotic systems. The denoising scheme has been constructed to be appropriate for short and transient time sequences, using a circular state-space embedding. Its effectiveness was successfully tested on simulated signals as well as on ERPs recorded from within a human brain. The method enables the study of individual ERPs against strong ongoing brain electrical activity. Comment: 16 pages, Postscript, 6 figures; Physica D, in press
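The abstract's core ingredient, wavelet-domain denoising of a short transient buried in noise, can be sketched with plain soft thresholding. This is a simplified stand-in, not the authors' method: their circular state-space embedding and chaos-theoretic projection are omitted, and an orthonormal Haar transform is used so the example stays self-contained.

```python
# Wavelet soft-thresholding denoiser (Haar basis, universal threshold).
# Simplified illustration only; the paper's scheme adds circular
# state-space embedding, which is not reproduced here.
import numpy as np

def haar_dec(x, level):
    # Orthonormal Haar analysis: returns [approx, d_level, ..., d_1].
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(level):
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
        a = s
    return [a] + coeffs[::-1]

def haar_rec(coeffs):
    # Inverse transform: interleave sums and differences back together.
    a = coeffs[0]
    for d in coeffs[1:]:
        out = np.empty(2 * len(d))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def denoise(x, level=4):
    c = haar_dec(x, level)
    # Noise scale from the finest details; universal threshold.
    sigma = np.median(np.abs(c[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    c[1:] = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in c[1:]]
    return haar_rec(c)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-((t - 0.5) ** 2) / 0.005)       # transient, ERP-like bump
noisy = clean + 0.2 * rng.standard_normal(512)  # strong ongoing "background"
denoised = denoise(noisy)
```

Thresholding is applied to the detail coefficients only, so the coarse shape of the transient survives while broadband background activity is suppressed.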

    Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals

    Get PDF
    Dictionary learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches from different locations as one single set, which means a loss of information when features are well aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could exploit this alignment and reveal higher-level features. In this case, however, small misalignments or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework that overcomes these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. Comment: 9 pages, 5 figures, minor corrections
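The key idea, letting an atom shift ("jitter") per trial instead of occupying one fixed position, can be illustrated with a single atom and an alternating align/re-estimate loop. This toy scheme and all names in it are my simplification, not the authors' full dictionary learning algorithm.

```python
# Toy jitter-adaptive atom estimation: each trial contains the same latent
# feature at a slightly different latency; a naive average smears it, while
# aligning each trial to the current atom estimate recovers it.
import numpy as np

def best_shift(signal, atom, max_shift):
    # Circular shift of the atom that best correlates with this trial.
    shifts = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(atom, s), signal) for s in shifts]
    return int(shifts[int(np.argmax(scores))])

rng = np.random.default_rng(1)
n_trials, T = 20, 128
t = np.arange(T)
true_atom = np.exp(-((t - 64) ** 2) / 20.0)        # latent feature
trials = np.stack([
    np.roll(true_atom, int(rng.integers(-5, 6))) + 0.1 * rng.standard_normal(T)
    for _ in range(n_trials)
])

naive = trials.mean(axis=0)        # jitter smears the plain average
atom = naive.copy()
for _ in range(3):                 # alternate: align trials, re-estimate atom
    shifts = [best_shift(x, atom, 8) for x in trials]
    atom = np.mean([np.roll(x, -s) for x, s in zip(trials, shifts)], axis=0)
```

After alignment, the peaks of the individual trials coincide, so the re-estimated atom is sharper and taller than the jitter-smeared naive average.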

    Analysis of worker performances using statistical process control in fish paste otak-otak food industries

    Get PDF
    This research focuses on the improvement of Small and Medium Enterprises through the use of Statistical Process Control (SPC). An industry that produces fish paste (known as “otak-otak”) was taken as the case study, and the problems analysed are based on real industrial experience. Data for the control charts were recorded over two weeks, covering the working time of each operator. The data were collected in 16 subgroups with a sample size of 5. Product weights were recorded at random across the whole production line, while the working time of each operator was sampled every 30 minutes of the working hour. Several problems were detected in the process and categorised into six elements: people, method, measurement, machine, environment and materials. These included lack of motivation, lack of skill, lack of supervision, manual operation, lack of standard operating procedures, waiting time in process, weight-based operation, lack of quality checks, no use of a weighing scale, a conveyor that sometimes got stuck, spoons used as tools, no automation, poor layout arrangement, talking while working, small working space, lack of hygiene, waiting time for materials, and easily spoiled materials. The findings can be used as a guideline for future production improvement, with the industry focusing on eliminating or reducing these problems through innovative solutions
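For the chart setup described above (16 subgroups of size 5), the standard Shewhart X-bar/R control limits can be computed as follows. The constants A2, D3 and D4 are the usual tabulated values for subgroup size 5; the weight data here are simulated stand-ins, not the study's measurements.

```python
# X-bar / R control-chart limits for 16 subgroups of size 5.
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114   # Shewhart constants for subgroup size n=5

def xbar_r_limits(subgroups):
    subgroups = np.asarray(subgroups, dtype=float)     # shape (k, 5)
    xbar = subgroups.mean(axis=1)                      # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges
    xbarbar, rbar = float(xbar.mean()), float(r.mean())
    return {
        "xbar_center": xbarbar,
        "xbar_ucl": xbarbar + A2 * rbar,
        "xbar_lcl": xbarbar - A2 * rbar,
        "r_center": rbar,
        "r_ucl": D4 * rbar,
        "r_lcl": D3 * rbar,
    }

rng = np.random.default_rng(2)
weights = rng.normal(50.0, 2.0, size=(16, 5))  # hypothetical weights, grams
limits = xbar_r_limits(weights)
# Indices of subgroups whose mean falls outside the X-bar limits:
out = np.flatnonzero((weights.mean(axis=1) > limits["xbar_ucl"]) |
                     (weights.mean(axis=1) < limits["xbar_lcl"]))
```

Points flagged in `out` would prompt the kind of root-cause investigation (people, method, measurement, machine, environment, materials) described in the abstract.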

    Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Full text link
    Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle the small and unbalanced datasets often encountered in BCI-type experiments. Approach. The method combines a linear mixed-effects statistical model, the wavelet transform and spatial filtering, and aims at the characterization of localized discriminant features in multi-sensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves on earlier results on similar problems, and the three main ingredients all play an important role.
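The mixed-model and wavelet machinery is beyond a short sketch, but the final ingredient, a Bayes plug-in classifier built from regularized covariance estimates with a prior term for unbalanced classes, can be illustrated. This is a pooled-covariance simplification with shrinkage toward the identity; the paper's own estimator comes from its Gaussian linear mixed model.

```python
# Bayes plug-in linear classifier for small, unbalanced data: shrinkage on
# the pooled covariance, log-prior-ratio in the threshold. Simplified sketch,
# not the paper's mixed-model estimator.
import numpy as np

def fit_plugin_lda(X0, X1, shrink=0.1):
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - m0, X1 - m1])
    S = Xc.T @ Xc / (len(Xc) - 2)                    # pooled covariance
    p = S.shape[0]
    S = (1 - shrink) * S + shrink * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(S, m1 - m0)
    # Threshold includes the log prior ratio, handling class imbalance.
    b = -0.5 * w @ (m0 + m1) + np.log(len(X1) / len(X0))
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

rng = np.random.default_rng(3)
X0 = rng.normal(0.0, 1.0, size=(200, 8))         # frequent class
X1 = rng.normal(0.0, 1.0, size=(20, 8)) + 1.5    # rare class (e.g. errors)
w, b = fit_plugin_lda(X0, X1)
acc = float(np.mean(np.r_[predict(w, b, X0) == 0, predict(w, b, X1) == 1]))
```

The shrinkage term keeps the covariance invertible and stable when, as here, one class has far fewer trials than feature dimensions would comfortably allow.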

    Noise Reduction in EEG Signals using Convolutional Autoencoding Techniques

    Get PDF
    The presence of noise in electroencephalography (EEG) signals can significantly reduce the accuracy of signal analysis. This study assesses to what extent stacked autoencoders designed using one-dimensional convolutional neural network layers can reduce noise in EEG signals. The EEG signals, obtained from 81 people who performed 3 independent button-pressing tasks, were processed by a two-layer one-dimensional convolutional autoencoder (CAE). The signal-to-noise ratios (SNRs) of the signals before and after processing were calculated and their distributions compared. The performance of the model was compared to the noise reduction performance of Principal Component Analysis (PCA), with 95% explained variance, by comparing the Harrell-Davis decile differences between the SNR distributions of both methods and the raw-signal SNR distribution for each task. It was found that the CAE outperformed PCA for the full dataset across all three tasks; however, the CAE did not outperform PCA for the person-specific datasets in any of the three tasks. The results indicate that CAEs can perform better than PCA for noise reduction in EEG signals, but the model's performance may depend on training-set size

    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Get PDF
    Understanding the cognitive and functional behaviour of the brain through its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records the electrical activity of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, the diagnosis of various neurological and psychiatric disorders, and other applications. Since EEG signals are sensitive to activities other than those of the brain, such as eye blinking, eye movement and head movement, it is not possible to record EEG signals without any noise. Thus, it is very important to use an efficient noise reduction technique to obtain more accurate recordings. Numerous traditional techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations and machine learning techniques, have been proposed for reducing the noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To achieve this, Harrell-Davis decile values for the reconstructed signals' signal-to-noise ratio distributions were compared, and it was found that the GRU-AE outperformed PCA for noise reduction of EEG signals
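The evaluation statistic used here and in the previous abstract, Harrell-Davis decile values of SNR distributions, is easy to compute: each quantile is a Beta-weighted average of the order statistics. The SNR samples below are synthetic numbers chosen for illustration, not results from either paper.

```python
# Harrell-Davis quantile estimator and decile-by-decile comparison of two
# SNR distributions (synthetic data for illustration).
import numpy as np
from scipy.stats import beta

def harrell_davis(x, q):
    # HD_q = sum_i w_i * x_(i), with Beta((n+1)q, (n+1)(1-q)) CDF weights.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    i = np.arange(1, n + 1)
    w = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)
    return float(np.dot(w, x))

deciles = np.arange(0.1, 1.0, 0.1)
rng = np.random.default_rng(5)
snr_a = rng.normal(5.0, 1.0, 300)   # e.g. SNRs after one denoiser
snr_b = rng.normal(4.0, 1.0, 300)   # e.g. SNRs after another
diffs = [harrell_davis(snr_a, q) - harrell_davis(snr_b, q) for q in deciles]
```

Comparing all nine decile differences, rather than a single mean, shows whether one method wins across the whole SNR distribution or only in its tails.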

    A Survey on the Project “Unsupervised Adaptive P300 BCI in the Framework of Chaotic Theory and Stochastic Theory”

    Full text link
    In this paper we present a survey of work done in the project “Unsupervised Adaptive P300 BCI in the framework of chaotic theory and stochastic theory”. We summarise the following papers: (Mohammed J Alhaddad & 2011), (Mohammed J. Alhaddad & Kamel M, 2012), (Mohammed J Alhaddad, Kamel, & Al-Otaibi, 2013), (Mohammed J Alhaddad, Kamel, & Bakheet, 2013), (Mohammed J Alhaddad, Kamel, & Al-Otaibi, 2014), (Mohammed J Alhaddad, Kamel, & Bakheet, 2014), (Mohammed J Alhaddad, Kamel, & Kadah, 2014), (Mohammed J Alhaddad, Kamel, Makary, Hargas, & Kadah, 2014), (Mohammed J Alhaddad, Mohammed, Kamel, & Hagras, 2015). We developed a new pre-processing method for denoising P300-based brain-computer interface data that allows better performance with a lower number of channels and blocks. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing pre-processing and allowing low channel counts to be used. We also developed a novel approach for brain-computer interface data that requires no prior training. The proposed approach is based on an interval type-2 fuzzy logic based classifier which is able to handle the users' uncertainties to produce better prediction accuracies than other competing classifiers such as BLDA or RFLDA. In addition, the generated type-2 fuzzy classifier is learnt from data via genetic algorithms to produce a small number of rules with a rule length of only one antecedent, to maximize transparency and interpretability for the clinician. We also employ a feature selection system based on ensemble neural network recursive feature selection, which is able to find the effective time instances within the effective sensors in relation to a given P300 event.
The basic principle of this new class of techniques is that the trial with the true activation signal within each block has to be different from the rest of the trials within that block. Hence, a measure that is sensitive to this dissimilarity can be used to make a decision based on a single block without any prior training. The new methods were verified in various experiments performed on standard data sets and on real data sets obtained from subject experiments in the BCI lab at King Abdulaziz University. The results were compared to the classification results of the same data using previous methods. Enhanced performance in the different experiments was confirmed, as quantitatively assessed using classification block accuracy as well as bit-rate estimates. It will be shown that the produced type-2 fuzzy logic based classifier learns simple rules which are easy to understand, explaining the events in question. In addition, the produced type-2 fuzzy logic classifier is able to give better accuracies when compared to BLDA or RFLDA on various human subjects on the standard and real-world data sets
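The pre-processing step the survey describes, spectral subtraction applied to each temporal channel independently, can be sketched in its classic form: subtract an estimated noise magnitude spectrum and keep the original phase. This is the textbook variant on one channel of synthetic data, not the authors' modified version.

```python
# Classic magnitude spectral subtraction on a single channel.
# The survey's method is a *modified* variant; this is the textbook form.
import numpy as np

def spectral_subtract(x, noise_mag, floor=0.01):
    X = np.fft.rfft(x)
    mag, phase = np.abs(X), np.angle(X)
    # Subtract the noise magnitude estimate; floor prevents negative bins.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(x))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2.0 * np.pi * 7.0 * t)            # narrowband "signal"
noisy = clean + 0.3 * rng.standard_normal(512)
# Noise spectrum estimated from a signal-free reference segment (assumed
# available here; in practice it comes from pre-stimulus baseline data).
noise_ref = 0.3 * rng.standard_normal(512)
est = spectral_subtract(noisy, np.abs(np.fft.rfft(noise_ref)))
```

Because each channel is processed on its own, the step slots in front of any existing pipeline regardless of how many channels are recorded, which is the integration property the survey emphasizes.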