Fuzzy Layered Convolutional Neural Network for Feature-Level Fusion Based on Multimodal Sentiment Classification
Multimodal sentiment analysis (MSA) is one of the core research topics of natural language processing (NLP). MSA has become a challenge for scholars and is equally complicated for a machine to comprehend. The task involves learning opinions, emotions, and attitudes from audio-visual material; in other words, diverse modalities must be exploited jointly to obtain opinions and identify emotions. Such utilization can be achieved via modality data fusion, such as feature-level fusion. A typical machine learning approach for handling the fusion of such diverse modalities while maintaining high performance is deep learning (DL), in particular the Convolutional Neural Network (CNN), which has the capacity to handle tasks of great intricacy and difficulty. In this paper, we present a CNN architecture with an integrated layer built via fuzzy methodologies for MSA, an approach not yet explored for improving the accuracy of CNNs on diverse inputs. Experiments conducted on a benchmark multimodal dataset, MOSI, obtained 37.5% and 81% accuracy on seven-class and binary classification respectively, an improvement over the typical CNN, which achieved 28.9% and 78%, respectively.
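The abstract does not spell out the fuzzy layer's equations. A minimal sketch of one common fuzzification choice — Gaussian membership functions applied to per-modality CNN features before feature-level fusion — is shown below; the three linguistic terms, the membership centers, and the random stand-in feature vectors are all illustrative assumptions, not the authors' actual design:

```python
import numpy as np

def gaussian_membership(x, centers, sigma=1.0):
    """Map each crisp feature to fuzzy membership degrees in (0, 1],
    one per linguistic term (e.g. low / medium / high)."""
    # x: (n_features,), centers: (n_terms,) -> (n_features, n_terms)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

def fuzzy_fusion_layer(text_feat, audio_feat, centers):
    """Fuse per-modality CNN features at the feature level by
    concatenating their fuzzy membership vectors."""
    mu_text = gaussian_membership(text_feat, centers).ravel()
    mu_audio = gaussian_membership(audio_feat, centers).ravel()
    return np.concatenate([mu_text, mu_audio])

rng = np.random.default_rng(0)
text_feat = rng.standard_normal(4)    # stand-in for a text-CNN feature vector
audio_feat = rng.standard_normal(4)   # stand-in for an audio-CNN feature vector
centers = np.array([-1.0, 0.0, 1.0])  # three hypothetical linguistic terms
fused = fuzzy_fusion_layer(text_feat, audio_feat, centers)
print(fused.shape)  # (24,) = 2 modalities x 4 features x 3 terms
```

The fused membership vector would then feed the downstream classification layers; in the paper's actual architecture the layer sits inside the CNN and its parameters are presumably learned rather than fixed.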
Using Deep Convolutional Neural Network for Emotion Detection on a Physiological Signals Dataset (AMIGOS)
Recommender systems have been based on context and content, and now the technological challenge arises of making personalized recommendations based on the user's emotional state, inferred from physiological signals obtained from devices or sensors. This paper applies a deep learning approach, using a deep convolutional neural network, to a dataset of physiological signals (electrocardiogram and galvanic skin response), in this case the AMIGOS dataset. Emotions are detected by correlating these physiological signals with the arousal and valence data of this dataset to classify a person's affective state. In addition, an application for emotion recognition based on classic machine learning algorithms is proposed, extracting features of the physiological signals in the time, frequency, and non-linear domains. The proposed application uses a convolutional neural network for automatic feature extraction from the physiological signals, and the emotion prediction is made through fully connected network layers. The experimental results on the AMIGOS dataset show that the method proposed in this paper achieves better classification precision for emotional states compared with the results originally obtained by the authors of the dataset. This research project is financed by the Government of Colombia, Colciencias, and the Governorate of Boyacá.
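The abstract describes the CNN's role as automatic feature extraction from the raw signals, followed by fully connected layers for prediction. A minimal NumPy sketch of that conv → ReLU → global-max-pool step on a single 1-D signal window (the kernel values, filter count, and window length are arbitrary stand-ins, not the paper's architecture):

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid 1-D convolution of one signal with a bank of kernels."""
    k = kernels.shape[1]
    n_out = (len(signal) - k) // stride + 1
    windows = np.stack([signal[i * stride : i * stride + k] for i in range(n_out)])
    return windows @ kernels.T          # (n_out, n_kernels)

def extract_features(signal, kernels):
    """Conv -> ReLU -> global max pooling: the automatic feature
    extraction role a CNN plays before the fully connected layers."""
    fmap = np.maximum(conv1d(signal, kernels), 0.0)   # ReLU
    return fmap.max(axis=0)                            # one value per kernel

rng = np.random.default_rng(1)
ecg = rng.standard_normal(256)         # stand-in for one ECG/GSR window
kernels = rng.standard_normal((8, 5))  # 8 filters of width 5 (untrained stand-ins)
feats = extract_features(ecg, kernels)
print(feats.shape)  # (8,)
```

In a trained network the kernel weights are learned end to end, and the resulting feature vector is passed to fully connected layers that output the arousal/valence prediction.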
Toward enhancement of deep learning techniques using fuzzy logic: a survey
Deep learning has emerged recently as a type of artificial intelligence (AI) and machine learning (ML); it typically imitates the way humans gain a particular kind of knowledge. Deep learning is considered an essential element of data science, which comprises predictive modeling and statistics, and it makes collecting, interpreting, and analyzing big data easier and faster. Deep neural networks are a kind of ML model in which non-linear processing units are layered for the purpose of extracting particular features from the inputs. However, the training process of such networks is very expensive and depends on the optimization method used, so optimal results may not be achieved. Deep learning techniques are also vulnerable to data noise. For these reasons, fuzzy systems are used to improve the performance of deep learning algorithms, especially in combination with neural networks, and to improve the representation accuracy of deep learning models. This survey paper reviews deep learning based fuzzy logic models and techniques presented and proposed in previous studies, in which fuzzy logic is used to improve deep learning performance. The approaches are divided into two categories based on how the two paradigms are combined. Furthermore, the models' practicality in the real world is discussed.
Brainwave-Based Human Emotion Estimation using Deep Neural Network Models for Biofeedback
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Emotion is a state that comprehensively represents human feeling, thought, and behavior, and thus plays an important role in interpersonal human communication. Emotion estimation aims to automatically discriminate different emotional states using physiological and non-physiological signals acquired from humans, to achieve effective communication and interaction between humans and machines. Brainwave-based emotion estimation is one of the most commonly used and efficient methods in emotion estimation research. The technology plays a great role in treating human emotional disorders, in brain-computer interfaces for disabilities, in entertainment, and in many other research areas. In this thesis, various methods, schemes, and frameworks are presented for Electroencephalogram (EEG) based human emotion estimation. Firstly, a hybrid dimension feature reduction scheme is presented using a total of 14 different features extracted from EEG recordings. The scheme combines these distinct features in the feature space using both supervised and unsupervised feature selection processes. Maximum Relevance Minimum Redundancy (mRMR) is applied to re-order the combined features for maximum relevance to the emotion labels and minimum redundancy among features. The generated features are further reduced with Principal Component Analysis (PCA) to extract the principal components. Experimental results show that the proposed work outperforms state-of-the-art methods under the same settings on the publicly available Database for Emotion Analysis using Physiological Signals (DEAP). Secondly, a disentangled adaptive-noise-learning β-Variational Autoencoder (β-VAE) combined with a long short-term memory (LSTM) model is proposed for emotion recognition based on EEG recordings. This experiment is also based on the public DEAP EEG emotion dataset.
At first, the EEG time-series data are transformed into video-like EEG image data: an Azimuthal Equidistant Projection (AEP) is applied to the original 3-D EEG-sensor coordinates to obtain 2-D projected electrode locations. The Clough-Tocher scheme is then applied to interpolate the scattered power measurements over the scalp and to estimate the values between the electrodes over a 32x32 mesh. After that, the β-VAE-LSTM algorithm is used to estimate the accuracy of the quadrant (arousal-valence) classification. A comparison between the β-VAE-LSTM model and other classic methods, conducted under the same experimental setting, shows that the proposed model is effective. Finally, a novel real-time emotion detection system based on EEG signals from a portable headband is presented and integrated into the interactive film 'RIOT'. First, the requirements of the interactive film were collected and the protocol for data collection using a portable EEG sensor (Emotiv Epoc) was designed. Then, a portable EEG emotion database (PEED) was built from 10 participants, with emotion labels obtained using both self-reporting and video annotation tools. After that, various feature extraction, feature selection, validation, and classification methods were explored to build a practical system for real-time detection. In the end, the emotion detection system was trained, integrated into the interactive film for real-time implementation, and fully evaluated. The experimental results demonstrate that the system achieves satisfactory emotion detection accuracy and real-time performance.
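The AEP-plus-interpolation preprocessing described above has a direct counterpart in SciPy's `CloughTocher2DInterpolator`. Below is a sketch under assumed inputs: the electrode coordinates and band-power values are random stand-ins, not a real EEG montage, and the simple arccos-based projection is one common way to realize AEP, not necessarily the thesis's exact implementation:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

def azim_equidist_proj(xyz):
    """Project 3-D electrode positions (on a unit sphere) to 2-D with the
    azimuthal equidistant projection: radius = angle from the top of the head."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.arccos(np.clip(z / np.linalg.norm(xyz, axis=1), -1.0, 1.0))
    az = np.arctan2(y, x)
    return np.column_stack([r * np.cos(az), r * np.sin(az)])

# Hypothetical 3-D coordinates for 32 electrodes (not a real montage):
rng = np.random.default_rng(2)
xyz = rng.standard_normal((32, 3))
xyz[:, 2] = np.abs(xyz[:, 2])          # keep electrodes on the upper hemisphere
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
pos2d = azim_equidist_proj(xyz)

power = rng.random(32)                 # one band-power value per electrode
interp = CloughTocher2DInterpolator(pos2d, power, fill_value=0.0)

# Estimate values over a 32x32 mesh covering the projected scalp.
gx, gy = np.meshgrid(np.linspace(pos2d[:, 0].min(), pos2d[:, 0].max(), 32),
                     np.linspace(pos2d[:, 1].min(), pos2d[:, 1].max(), 32))
image = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(32, 32)
print(image.shape)  # (32, 32)
```

Stacking one such 32x32 image per frequency band and per time window yields the "video-like" tensor that the β-VAE-LSTM model then consumes.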
Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques
Recently, electroencephalogram-based emotion recognition has become crucial in making Human-Computer Interaction (HCI) systems more intelligent. Owing to its outstanding applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, affect detection, and feeling detection, emotion recognition has succeeded in attracting much of the recent surge in AI-empowered research. Numerous studies have therefore been conducted using a range of approaches, which demands a systematic review of the methodologies used for this task, together with their feature sets and techniques; such a review can guide beginners in composing an effective emotion recognition system. In this article, we conduct a rigorous review of state-of-the-art emotion recognition systems published in recent literature and summarize some of the common emotion recognition steps with relevant definitions, theories, and analyses to provide the key knowledge needed to develop a proper framework. Moreover, the studies included here are dichotomized into two categories: i) deep learning-based, and ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared based on methods, classifiers, the number of classified emotions, accuracy, and the dataset used. An informative comparison, recent research trends, and some recommendations are also provided for future research directions.
Text-based Sentiment Analysis and Music Emotion Recognition
Nowadays, with the expansion of social media, large amounts of user-generated
texts like tweets, blog posts or product reviews are shared online. Sentiment polarity
analysis of such texts has become highly attractive and is utilized in recommender
systems, market predictions, business intelligence and more. We also witness deep
learning techniques becoming top performers on those types of tasks. There are
however several problems that need to be solved for efficient use of deep neural
networks on text mining and text polarity analysis.
First of all, deep neural networks are data hungry. They need to be fed with
datasets that are big in size, cleaned and preprocessed as well as properly labeled.
Second, the modern natural language processing concept of word embeddings as a
dense and distributed text feature representation solves sparsity and dimensionality
problems of the traditional bag-of-words model. Still, there are various uncertainties
regarding the use of word vectors: should they be generated from the same dataset
that is used to train the model, or is it better to source them from big and popular
collections that work as generic text feature representations? Third, it is not easy for
practitioners to find a simple and highly effective deep learning setup for various
document lengths and types. Recurrent neural networks are weak with longer texts
and optimal convolution-pooling combinations are not easily conceived. It is thus
convenient to have generic neural network architectures that are effective and can
adapt to various texts, encapsulating much of design complexity.
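The second problem above — dense word embeddings versus the sparse bag-of-words representation — can be illustrated with a toy example. The 4-dimensional vectors below are invented for illustration; real pretrained collections such as GloVe or word2vec provide 50-300 dimensions learned from very large corpora:

```python
import numpy as np

# Toy "pretrained" embeddings (hypothetical values, for illustration only).
embeddings = {
    "good":  np.array([0.9, 0.1, 0.0, 0.2]),
    "great": np.array([0.8, 0.2, 0.1, 0.3]),
    "bad":   np.array([-0.7, 0.1, 0.9, 0.0]),
    "movie": np.array([0.0, 0.8, 0.1, 0.5]),
}

def doc_vector(tokens, emb):
    """Dense document feature: the mean of its word vectors. Unlike
    bag-of-words, the dimensionality is fixed (here 4), not the
    vocabulary size, which avoids the sparsity problem."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

v1 = doc_vector(["good", "movie"], embeddings)
v2 = doc_vector(["great", "movie"], embeddings)
v3 = doc_vector(["bad", "movie"], embeddings)

cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cos(v1, v2) > cos(v1, v3))  # True: similar sentiments sit closer together
```

In a bag-of-words model "good movie" and "great movie" would share only one dimension out of a vocabulary-sized vector; with distributed embeddings their representations are close because the word vectors themselves encode similarity.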
This thesis addresses the above problems to provide methodological and practical
insights for utilizing neural networks on sentiment analysis of texts and achieving
state-of-the-art results. Regarding the first problem, the effectiveness of various
crowdsourcing alternatives is explored and two medium-sized and emotion-labeled
song datasets are created utilizing social tags. One of the research interests of Telecom
Italia was the exploration of relations between music emotional stimulation and
driving style. Consequently, a context-aware music recommender system that aims
to enhance driving comfort and safety was also designed. To address the second
problem, a series of experiments with large text collections of various contents and
domains were conducted. Word embeddings of different parameters were exercised
and results revealed that their quality is influenced (mostly but not only) by the
size of texts they were created from. When working with small text datasets, it is
thus important to source word features from popular and generic word embedding
collections. Regarding the third problem, a series of experiments involving convolutional
and max-pooling neural layers were conducted. Various patterns relating
text properties and network parameters with optimal classification accuracy were
observed. Combining convolutions of words, bigrams, and trigrams with regional
max-pooling layers in a couple of stacks produced the best results. The derived
architecture achieves competitive performance on sentiment polarity analysis of
movie, business and product reviews.
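The winning architecture described above — convolutions over words, bigrams, and trigrams combined with regional max-pooling — can be sketched in NumPy. Random weights stand in for trained ones, and pooling over two regions is a simplification of the couple-of-stacks design:

```python
import numpy as np

def conv_block(x, width, n_filters, rng):
    """Convolve over windows of `width` tokens (1 = word, 2 = bigram,
    3 = trigram), then max-pool regionally rather than globally."""
    n, d = x.shape
    w = rng.standard_normal((n_filters, width * d)) * 0.1   # untrained stand-in
    windows = np.stack([x[i:i + width].ravel() for i in range(n - width + 1)])
    fmap = np.maximum(windows @ w.T, 0.0)                   # conv + ReLU
    # Regional max pooling: split the feature map into two halves and
    # keep the max of each, preserving coarse positional information.
    half = len(fmap) // 2
    return np.concatenate([fmap[:half].max(axis=0), fmap[half:].max(axis=0)])

rng = np.random.default_rng(3)
sentence = rng.standard_normal((20, 50))   # 20 tokens x 50-d word embeddings
features = np.concatenate([conv_block(sentence, k, 16, rng) for k in (1, 2, 3)])
print(features.shape)  # (96,) = 3 window widths x 16 filters x 2 regions
```

The concatenated feature vector would feed a small classifier head; the design choice is that word, bigram, and trigram filters capture sentiment cues at different granularities, while regional (rather than global) pooling retains some word-order information.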
Given that labeled data are becoming the bottleneck of the current deep learning
systems, a future research direction could be the exploration of various data programming
possibilities for constructing even bigger labeled datasets. Investigation
of feature-level or decision-level ensemble techniques in the context of deep neural
networks could also be fruitful. Different feature types usually represent complementary
characteristics of the data. Combining word embedding and traditional text
features or utilizing recurrent networks on document splits and then aggregating the
predictions could further increase the prediction accuracy of such models.
Emotional Design: An Overview
Emotional design has been well recognized in the domain of human factors and ergonomics. In this chapter, we review related models and methods of emotional design, and we encourage emotional designers to take multiple perspectives when examining them. We then propose a systematic process for emotional design, comprising affective-cognitive needs elicitation, affective-cognitive needs analysis, and affective-cognitive needs fulfillment. Within each step, we provide an updated review of representative methods to support and offer further guidance on emotional design. We hope researchers and industrial practitioners can take a systematic approach and consider each step in the framework with care. Finally, our speculations on the challenges and future directions can help researchers across different fields to further advance emotional design. http://deepblue.lib.umich.edu/bitstream/2027.42/163319/1/Emotional_Design_Manuscript_Final.pdf