108 research outputs found

    AFFECTIVE COMPUTING AND AUGMENTED REALITY FOR CAR DRIVING SIMULATORS

    Car simulators are essential for training drivers and for analyzing driver behavior, responses, and performance. Augmented Reality (AR) is the technology that overlays virtual images on views of the real world. Affective Computing (AC) is the technology that enables computer systems to read emotions by analyzing body gestures, facial expressions, speech, and physiological signals. The key aspect of the research is investigating novel interfaces that help build situational awareness and emotional awareness, to enable affect-driven remote collaboration in AR for car driving simulators. The problem addressed is how to build situational awareness (using AR technology) and emotional awareness (using AC technology), and how to integrate these two distinct technologies [4] into a single affective framework for training in a car driving simulator.

    Embedded Artificial Intelligence for Tactile Sensing

    Electronic tactile sensing has become an active research field, whether for prosthetic applications, robotics, virtual reality, or post-stroke patient rehabilitation. To achieve such sensing, an array of sensors, called electronic skin (E-skin), is used to retrieve human-skin-like information. Humans, through their skin, collect different types of information, e.g. pressure, temperature, and texture, which are passed to the nervous system and finally to the brain in order to extract high-level information from these sensory data. To make E-skin capable of such a task, the data acquired from it must be filtered, processed, and then conveyed to the user (or robot). Processing this sensory information should occur in real time, taking into consideration the power limitations of such applications, especially prosthetics. Power consumption itself depends on several factors: one is the complexity of the algorithm, e.g. the number of FLOPs, and another is memory consumption. In this thesis, I focus on the processing of real tactile information by 1) exploring different algorithms and methods for tactile data classification, 2) organizing and preprocessing such tactile data, and 3) hardware implementation. More precisely, the focus is on deep learning algorithms for tactile data processing, mainly CNNs and RNNs, with energy-efficient embedded implementations. The proposed solution requires less memory, fewer FLOPs, and lower latency than the state of the art (including tensorial SVM) when applied to data from real tactile sensors. Keywords: E-skin, tactile data processing, deep learning, CNN, RNN, LSTM, GRU, embedded, energy-efficient algorithms, edge computing, artificial intelligence
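The abstract above ties power consumption to FLOPs and memory. As a minimal sketch of how such a layer-level cost budget can be estimated, the following counts parameters and multiply-accumulates for a 2-D convolution; the taxel-array and filter sizes are illustrative assumptions, not figures from the thesis.

```python
# Rough cost model for one 2-D convolution layer, of the kind compared when
# sizing embedded CNNs for tactile (E-skin) data. Shapes below are invented
# for illustration only.

def conv2d_cost(in_ch, out_ch, kernel, out_h, out_w):
    """Return (parameter count, multiply-accumulate count) for one conv layer."""
    params = out_ch * (in_ch * kernel * kernel + 1)        # weights + biases
    macs = out_ch * out_h * out_w * in_ch * kernel * kernel  # per forward pass
    return params, macs

# Hypothetical 4x4 tactile taxel array, 1 input channel, 8 filters,
# 3x3 kernel with 'same' padding so the output stays 4x4.
params, macs = conv2d_cost(in_ch=1, out_ch=8, kernel=3, out_h=4, out_w=4)
print(params, macs)  # 80 1152
```

Comparing these counts across candidate architectures is one simple way to rank them before measuring energy on the target hardware.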

    Online multiclass EEG feature extraction and recognition using modified convolutional neural network method

    Many techniques have been introduced to improve both brain-computer interface (BCI) steps: feature extraction and classification. One emerging trend in this field is the implementation of deep learning algorithms. Only a limited number of studies have investigated the application of deep learning techniques to electroencephalography (EEG) feature extraction and classification. This work applies deep learning to both stages: feature extraction and classification. This paper proposes a modified convolutional neural network (CNN) feature extractor-classifier algorithm to recognize four different EEG motor imagery (MI) classes. In addition, a four-class linear discriminant analysis (LDA) classifier model was built and compared to the proposed CNN model. The paper reports very good results, with 92.8% accuracy for one four-class EEG MI set and 85.7% for another. The results show that the proposed CNN model outperforms multi-class linear discriminant analysis, with accuracy increases of 28.6% and 17.9% for the two MI sets, respectively. Moreover, majority voting over five repetitions introduced an accuracy advantage of 15% and 17.2% for the two EEG sets, compared with single trials. This confirms that increasing the number of trials for the same MI gesture improves the recognition accuracy.
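The majority-voting step described above can be sketched in a few lines: each repetition of the same gesture yields one per-trial prediction, and the most frequent label wins. The class labels and trial outcomes below are invented for illustration.

```python
# Majority voting across repeated trials of one motor-imagery gesture.
# Labels and the 5-trial example are illustrative, not from the paper.

from collections import Counter

def majority_vote(predictions):
    """Return the most frequent class label among per-trial predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Five repetitions of one MI gesture: three trials classified correctly,
# two misclassified -- voting still recovers the correct class.
trials = ["left", "left", "right", "left", "tongue"]
print(majority_vote(trials))  # left
```

This is why repeating trials helps: as long as the per-trial classifier is right more often than any single wrong label occurs, the vote converges to the correct class.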

    Lightweight Machine Learning with Brain Signals

    Electroencephalography (EEG) signals are gaining popularity in brain-computer interface (BCI) systems and neural engineering applications thanks to their portability and availability. Inevitably, sensory electrodes across the entire scalp collect signals irrelevant to the particular BCI task, increasing the risk of overfitting in machine-learning-based predictions. While this issue is being addressed by scaling up EEG datasets and handcrafting complex predictive models, this also increases computation costs. Moreover, a model trained on one set of subjects cannot easily be adapted to other sets due to inter-subject variability, which creates an even higher overfitting risk. Meanwhile, although previous studies have used either convolutional neural networks (CNNs) or graph neural networks (GNNs) to determine spatial correlations between brain regions, they fail to capture brain functional connectivity beyond physical proximity. To this end, we propose 1) removing task-irrelevant noise instead of merely complicating models; 2) extracting subject-invariant discriminative EEG encodings by taking functional connectivity into account; 3) navigating and training a deep learning model with the most critical EEG channels; and 4) detecting the EEG segments most similar to the target subject, to reduce both computation cost and inter-subject variability. Specifically, we construct a task-adaptive graph representation of the brain network based on topological functional connectivity rather than distance-based connections. Further, non-contributory EEG channels are excluded by selecting only the functional regions relevant to the corresponding intention. Lastly, contributory EEG segments are detected with several similarity estimation metrics; we then evaluate and train our proposed framework on the detected EEG segments to compare the performance of the different metrics on EEG BCI tasks.
We empirically show that our proposed approach, SIFT-EEG, outperforms the state of the art, with around 4% and 7% improvements over CNN-based and GNN-based models on motor imagery prediction. The task-adaptive channel selection achieves similar predictive performance with only 20% of the raw EEG data. Moreover, the best-performing metric can reach a high level of accuracy with less than 9% of the training data, suggesting a possible shift in direction for future work beyond simply scaling up the model.
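The segment-selection idea above can be sketched with one of the similarity metrics mentioned, cosine similarity: rank candidate segments by similarity to a target-subject segment and keep the top-k. The 4-sample "segments" below are toy data invented for illustration; real EEG segments would be much longer feature vectors.

```python
# Sketch of similarity-based EEG segment selection using cosine similarity.
# The target and candidate "segments" are made-up toy vectors.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_segments(target, candidates, k):
    """Return indices of the k candidate segments most similar to target."""
    ranked = sorted(range(len(candidates)),
                    key=lambda i: cosine(target, candidates[i]),
                    reverse=True)
    return ranked[:k]

target = [1.0, 0.5, -0.5, 0.2]
candidates = [[1.1, 0.4, -0.6, 0.3],   # close to target
              [-1.0, 2.0, 0.5, -0.2],  # dissimilar
              [0.9, 0.6, -0.4, 0.1]]   # close to target
print(top_k_segments(target, candidates, k=2))  # [0, 2]
```

Training only on the selected segments is what lets the reported framework reach comparable accuracy with a small fraction of the data.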

    Wavelets and Morphological Operators Based Classification of Epilepsy Risk Levels

    The objective of this paper is to compare the performance of Singular Value Decomposition (SVD), Expectation Maximization (EM), and Modified Expectation Maximization (MEM) as post-classifiers for classifying epilepsy risk levels from features extracted from EEG signals through wavelet transforms and morphological filters. A code converter acts as the level-one classifier. Seven features (energy, variance, positive and negative peaks, spike and sharp waves, events, average duration, and covariance) are extracted from the EEG signals; four of them (positive and negative peaks, spike and sharp waves, events, and average duration) are extracted using Haar, dB2, dB4, and Sym8 wavelet transforms with hard and soft thresholding. These four features are also extracted through morphological filters. The performance of the code converter and the classifiers is compared in terms of the Performance Index (PI) and Quality Value (QV). The code converter attains a low PI of 33.26% and QV of 12.74. The highest PI of 98.03% and QV of 23.82 are attained with the dB2 wavelet and hard thresholding for the SVD classifier. All the post-classifiers settle at a PI above 90% and a QV of 20.
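As a minimal sketch of the wavelet front end described above, the following implements one level of the Haar transform plus the hard and soft thresholding rules; the input signal and threshold are invented for illustration and are not from the paper.

```python
# One level of the Haar DWT with hard and soft thresholding, the kind of
# preprocessing step the abstract describes. Signal values are illustrative.

import math

def haar_level(signal):
    """Single-level Haar DWT: (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def hard_threshold(coeffs, t):
    """Zero out coefficients whose magnitude does not exceed t."""
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Shrink coefficient magnitudes by t, clipping at zero."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

approx, detail = haar_level([4.0, 2.0, 5.0, 5.0])
print(approx)                       # ~[4.2426, 7.0711]
print(hard_threshold(detail, 1.0))  # ~[1.4142, 0.0]
print(soft_threshold(detail, 1.0))  # ~[0.4142, 0.0]
```

Hard thresholding keeps surviving coefficients intact (preserving spike amplitudes), while soft thresholding shrinks them, trading some amplitude for smoother denoising; this is why the paper evaluates both.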

    Speech Based Machine Learning Models for Emotional State Recognition and PTSD Detection

    Recognition of emotional state and diagnosis of trauma-related illnesses such as post-traumatic stress disorder (PTSD) from speech signals have been active research topics over the past decade. A typical emotion recognition system consists of three components: speech segmentation, feature extraction, and emotion identification. Various speech features have been developed for emotional state recognition; they can be divided into three categories, namely excitation, vocal tract, and prosodic. However, the capabilities of the different feature categories and of advanced machine learning techniques have not been fully explored for emotion recognition and PTSD diagnosis. For PTSD assessment, clinical diagnosis through structured interviews is a widely accepted means of diagnosis, but patients are often embarrassed to be diagnosed at clinics. A speech-signal-based system is a recently developed alternative. Unfortunately, PTSD speech corpora are limited in size, which makes training complex diagnostic models difficult. This dissertation proposes sparse coding methods and deep belief network models for emotional state identification and PTSD diagnosis, together with a transfer learning strategy for PTSD diagnosis. Deep belief networks are complex models that cannot work with small datasets like the PTSD speech database, so a transfer learning strategy was adopted to mitigate the small-data problem. Transfer learning aims to extract knowledge from one or more source tasks and apply it to a target task with the intention of improving learning; it has proved useful when the target task has limited high-quality training data. We evaluated the proposed methods on the Speech Under Simulated and Actual Stress (SUSAS) database for emotional state recognition and on two PTSD speech databases for PTSD diagnosis.
Experimental results and statistical tests showed that the proposed models outperform most state-of-the-art methods in the literature and are potentially efficient models for emotional state recognition and PTSD diagnosis.
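The transfer-learning idea above, knowledge from a data-rich source task adapted to a data-poor target task, can be illustrated with a deliberately tiny model: a nearest-centroid classifier whose class centroids are learned on source data and then blended toward a handful of target examples. Everything here (labels, 1-D "features", the blending rule) is a hypothetical sketch, not the dissertation's deep-belief-network method.

```python
# Toy illustration of transfer learning: fit class centroids on a large
# source task, then nudge them toward scarce target-task examples.
# All data and names are synthetic, for illustration only.

def centroids(samples):
    """samples: {label: [feature values]} -> {label: mean feature}."""
    return {lbl: sum(v) / len(v) for lbl, v in samples.items()}

def fine_tune(source_centroids, target_samples, alpha=0.5):
    """Blend source centroids with the (scarce) target-task class means."""
    tuned = dict(source_centroids)
    for lbl, mean in centroids(target_samples).items():
        tuned[lbl] = (1 - alpha) * tuned[lbl] + alpha * mean
    return tuned

def classify(x, cents):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda lbl: abs(x - cents[lbl]))

source = {"neutral": [0.1, 0.2, 0.0], "stressed": [0.9, 1.0, 1.1]}
target = {"neutral": [0.3], "stressed": [1.4]}  # tiny target dataset
tuned = fine_tune(centroids(source), target)
print(classify(1.2, tuned))  # stressed
```

The point of the sketch is the division of labor: the source data fixes most of the model, so only a small correction has to be estimated from the limited target corpus.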