
    Sentiment analysis of Turkish Twitter data using polarity lexicon and artificial intelligence

    Social media now plays an important role in shaping people's sentiment, and it helps us analyze how people, particularly consumers, feel about a specific topic, product, or idea. Twitter is one of the social media platforms people currently use to express their thoughts. In this thesis, 13,000 Turkish tweets were collected from Twitter using the Twitter API, and their sentiment was analyzed with the help of a polarity lexicon and machine learning classifiers. Two machine learning methods were adopted as classifiers: random forests and support vector machines. The collected tweets were labeled as positive, negative, or neutral according to their content. Sentiment analysis was then performed in three separate phases: on the tweets in raw form, on the data obtained after tokenization and stop-word removal, and on the data obtained after stemming. Finally, the different methods were tested on the collected data. For the problem at hand, support vector machines had the shortest execution time, random forests performed best on the raw data, and the performance of the polarity-lexicon method, unlike the other methods, improved continuously as the data moved from its raw form to its stemmed form.
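    The abstract includes no code, so the following is a minimal Python sketch, using scikit-learn, of the three-phase evaluation it describes: raw text, tokenized text with stop-words removed, and stemmed text, each classified with an SVM and a random forest. The tokenizer, the tiny stop-word list, and the `stem` helper are illustrative placeholders, not the thesis implementation; a real system would use a proper Turkish morphological analyzer.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

TURKISH_STOPWORDS = {"ve", "bir", "bu", "da", "de"}  # tiny placeholder list

def remove_stopwords(text):
    return " ".join(t for t in text.lower().split() if t not in TURKISH_STOPWORDS)

def stem(text):
    # Placeholder: a real system would use a Turkish morphological analyzer;
    # here each token is simply truncated to approximate its root.
    return " ".join(t[:5] for t in text.split())

def evaluate(tweets, labels):
    """Score SVM and random forest on each of the three preprocessing phases."""
    phases = {
        "raw": tweets,
        "tokens_no_stopwords": [remove_stopwords(t) for t in tweets],
        "stemmed": [stem(remove_stopwords(t)) for t in tweets],
    }
    models = {"svm": SVC(kernel="linear"), "rf": RandomForestClassifier()}
    for phase, data in phases.items():
        for name, model in models.items():
            pipe = make_pipeline(TfidfVectorizer(), model)
            print(phase, name, cross_val_score(pipe, data, labels, cv=5).mean())
```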

    Emotion Detection and its Effect on Decision-making

    Emotion categorization has become an increasingly important area of research due to the rising number of intelligent systems. Artificial classifiers have demonstrated limited competency in classifying different emotions and have widely been used in recent years to facilitate the task of emotion categorization. Conversely, human classifiers require time and sometimes find it hard to agree with each other on facial expression categorization tasks. Hence, this thesis considers how the combination of human and artificial classifiers can lead to improvements in emotion classification. Further, as emotions are not only communicative tools reflected on the face, this thesis also investigates how emotions are reflected in the body and how that can affect the decision-making process.

    Existing methods of emotion categorization from visual data using deep learning algorithms analyze emotion by representing knowledge in a homogeneous way. As a result, a small change to the input image, whether made by an adversary or caused by occlusion of part of the face, can produce a large decrease in the accuracy of these algorithms. The proposed thesis is that an artificial system designed with inspiration from neuro-scientific theory, or from the natural way humans categorize emotions (e.g. by considering the mouth, eyes, jaw, etc.), can obtain better accuracy in emotion categorization than existing state-of-the-art techniques, which rely on homogeneous knowledge representation, i.e. treat the color of every pixel in an image as equally important. The comprehensive goal is to create different emotion categorization methods, inspired by neuro-scientific processes and the way humans categorize emotion, and to test these methods on different emotional facial expression datasets. In addition, this thesis investigates how emotions are reflected in the body, i.e. in skin conductance and heart rate, and analyzes what people want to feel and how that can affect their decision-making. Understanding what people want to feel and how that affects their decisions can lead to the production of artificially intelligent systems for real-world situations.

    The first academic contribution consists of novel methods that link and transfer knowledge learned from the strategies of the brain and the way humans categorize emotions. The second consists of novel approaches able to predict, and to identify the patterns that contribute to predicting, heart rate and skin conductance in the context of emotion. The last is the finding that emotionally-aware systems make better and more relevant decisions in shared workspaces than non-emotionally-aware ones.

    This thesis proposes a landmark-based method for categorizing emotion from images. The novel landmark-based method extracts facial features that are resistant to attacks by identifying salient features related to emotion regardless of the size, shape, and proportions of the face exhibiting it. Experimental results showed that the landmark-based method achieves better performance in terms of accuracy (> 97%) and computational cost (119 hrs, respectively) for both in-distribution and out-of-distribution analysis. Furthermore, this thesis created a lateralized system by adapting the lateralized framework for computer vision, considering both constituent (e.g. mouth, eyes, jaw) and holistic (whole-face) features to categorize emotion from images. The system successfully exhibited robustness against adversarial attacks by applying lateralization. The ability to simultaneously consider parts of the face (constituent level) and the whole face (holistic level) empowers the lateralized system to correctly classify emotions and to show stronger resistance to changes (10.86–47.72% decrease) than the state-of-the-art methods (25.15–83.43% decrease). This thesis also proposes a novel lateralized landmark-based method, mirroring human lateralized systems, to categorize emotion from images. It exhibited robustness against attacks by using emotion-relevant features from the face exhibiting the emotion (instead of pixel colors) and by simultaneously considering both constituent- and holistic-level predictions. The hybrid method was shown to achieve significantly higher accuracy (67% decrease) when tested on datasets with sufficient data to cover the common situation.

    Both the landmark-based and lateralized systems categorize emotion from full facial images. However, partial face coverings such as sunglasses and face masks, which are now very common, unintentionally obscure facial expressions, causing a loss of accuracy when humans and computer systems attempt to categorize emotion. With the rise of soft computing techniques interacting with humans, it is important to know not just their accuracy, but also the confusion errors being made: do humans make fewer random or damaging errors than soft computing? Therefore, this thesis compared the accuracy of humans and computer systems in categorizing emotion from faces partially covered with sunglasses and face masks. The results suggest that although the accuracy of both humans and computer systems decreases when the face is partially covered, the performance of machine learning classifiers is much more strongly impacted (> 74%) than that of humans. This thesis further proposes the first attention-based method to improve the classification accuracy of benchmark machine learning classifiers when categorizing emotion from images of people partially covered with sunglasses and face masks, by paying more attention to the uncovered regions of the face. The ability to detect occluded regions of the face based on the covering type, and to pay more attention to the uncovered regions, empowers the attention-based method to correctly classify emotions from partially covered faces. Experimental results showed that the attention-based method performs better than the benchmark approaches by a significant amount (up to a 50.26% increase).

    Moving to broader human expression, this thesis determines how emotions are reflected in the body by analyzing moment-by-moment brain activity to predict emotional arousal-related autonomic nervous responses of participants as they watched emotion-provoking videos. The results suggest that predicting continuous autonomic responses such as heart rate and galvanic skin response requires an approach capable of learning dependence, or of sequential feature selection, to improve prediction performance. The results also suggest that specific brain regions and peripheral measures support differential processing of heart rate and galvanic skin response, as the prediction error was significantly reduced using only a small subset of features rather than all features in the dataset.

    Lastly, this thesis investigates what people want to feel and determines the impact of emotion on social-economic decision-making. The results suggest that participants predominantly preferred experiencing happiness over anger, even when they expected anger to be beneficial for goal pursuit; however, they preferred happiness to a lesser extent when told that anger would be beneficial. The results demonstrate that experiencing a specific emotion (either anger or happiness) does not promote successful confrontation performance. In spite of that, further findings from the study suggest that emotion-awareness is an important factor in determining participants' performance, as participants higher in emotion-awareness performed significantly better than participants lower in emotion-awareness.

    In conclusion, this thesis develops a range of heterogeneous and attention-based methods mimicking human cognitive processing, analyzes how emotion is reflected in the body, and analyzes what people want to feel and how that can affect their decision-making. The overall results showed that heterogeneous and attention-based methods can achieve better accuracy on emotion categorization tasks than existing methods that utilize pixels. The results also showed that specific moment-by-moment brain activity captured in the context of emotion supports differential processing of heart rate and galvanic skin response. Further, the results suggest that emotionally-aware artificially intelligent systems, if produced, can make better and more relevant decisions in shared workspaces. This will lead to the development of artificial affective decision-making techniques especially suited to dynamic and uncertain domains that elicit emotions in humans.
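    As an illustration of the lateralized idea above (simultaneously using constituent-level and holistic predictions), here is a minimal Python sketch. The probability-averaging combination rule and the `alpha` weight are assumptions for illustration only; the thesis's actual lateralized framework is more elaborate.

```python
import numpy as np

def lateralized_predict(constituent_probs, holistic_probs, alpha=0.5):
    """Blend averaged constituent-level predictions with a holistic one.

    constituent_probs: dict mapping face part -> class-probability vector
    holistic_probs:    class-probability vector for the whole face
    alpha:             assumed weight on the holistic prediction
    """
    parts = np.mean(list(constituent_probs.values()), axis=0)
    blended = alpha * np.asarray(holistic_probs) + (1 - alpha) * parts
    return int(np.argmax(blended))

# The constituents favor class 0, but the holistic view pushes class 1:
probs = {"mouth": [0.7, 0.2, 0.1], "eyes": [0.6, 0.3, 0.1], "jaw": [0.5, 0.4, 0.1]}
print(lateralized_predict(probs, [0.2, 0.7, 0.1]))  # -> 1
```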

    An Adversarial Attacks Resistance-based Approach to Emotion Recognition from Images using Facial Landmarks

    Emotion recognition has become an increasingly important area of research due to the growing number of CCTV cameras in the past few years. Deep network-based methods have made impressive progress on emotion recognition tasks, achieving high performance on many datasets and their related competitions such as the ImageNet challenge. However, deep networks are vulnerable to adversarial attacks: because they represent knowledge homogeneously across all images, a small change to the input image made by an adversary can cause a large decrease in the accuracy of the algorithm. We hypothesize that by detecting heterogeneous facial landmarks using the machine learning library Dlib, we can build robustness to adversarial attacks. The residual neural network (ResNet) model is used as an example of a deep learning model. While the accuracy achieved by ResNet decreased by up to 22%, our proposed approach showed strong resistance to the attack, with only a small (< 0.3%) or no decrease when the attack was launched on the data. Furthermore, the proposed approach required considerably less execution time than the ResNet model.
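    A minimal sketch of the landmark-extraction step the abstract describes, using Dlib's standard frontal-face detector and 68-point shape predictor. The translation/scale normalization and the idea of feeding the resulting vector to any downstream classifier are illustrative assumptions, not the paper's exact pipeline.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-point model distributed with Dlib (path is a placeholder).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_features(gray_image):
    """Return a normalized 136-dim landmark vector, or None if no face found."""
    faces = detector(gray_image)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)],
                   dtype=float)
    pts -= pts.mean(axis=0)             # remove translation
    pts /= np.linalg.norm(pts) + 1e-8   # remove scale
    return pts.ravel()                  # feed to any downstream classifier
```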

    Emotion Categorization from Video-Frame Images Using a Novel Sequential Voting Technique

    Emotion categorization is the process of identifying different emotions in humans based on their facial expressions. It requires time, and it is sometimes hard for human classifiers to agree with each other on the emotion category of a facial expression. Machine learning classifiers, however, have done well in classifying different emotions and have widely been used in recent years to facilitate the task of emotion categorization. Much research on emotion video databases uses a few frames from when the emotion is expressed at its peak to classify emotion, which may not give good classification accuracy when predicting frames where the emotion is less intense. In this paper, using the CK+ emotion dataset as an example, we use more frames to analyze emotion from mid and peak frame images and compare our results to a method using fewer peak frames. Furthermore, we propose an approach based on sequential voting and apply it to more frames of the CK+ database. Our approach achieved up to 85.9% accuracy for the mid frames and an overall accuracy of 96.5% for the CK+ database, compared with accuracies of 73.4% and 93.8% for existing techniques.
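    The abstract does not spell out the voting rule, so the following is a plausible Python sketch of voting over per-frame predictions: each frame votes for an emotion label, and ties are broken in favor of later frames on the assumption that expressions intensify toward the peak. The actual sequential voting technique in the paper may differ.

```python
from collections import Counter

def vote(frame_predictions):
    """Majority vote over per-frame labels, later frames breaking ties."""
    counts = Counter(frame_predictions)
    best = max(counts.values())
    tied = [label for label, c in counts.items() if c == best]
    if len(tied) == 1:
        return tied[0]
    # Assumption: emotion intensity grows toward the peak frame, so prefer
    # the tied label that was predicted latest in the sequence.
    for label in reversed(frame_predictions):
        if label in tied:
            return label

print(vote(["neutral", "happy", "happy", "surprise"]))  # -> happy
print(vote(["neutral", "happy"]))                       # -> happy (later frame)
```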

    Particle Swarm Optimization for Feature Selection in Emotion Categorization

    Emotion categorization plays an important role in the understanding of human emotions by artificial intelligence systems such as robots. It is a difficult task because humans express many features, which vary over time, when showing an emotion. Existing classification techniques are thus overwhelmed, and a subset of appropriate features must be created. Feature selection can improve the performance of an emotion categorization task by selecting a subset of features and removing irrelevant ones. Particle swarm optimization (PSO) is a meta-heuristic algorithm that has demonstrated excellent performance in feature selection tasks. However, traditional PSO algorithms often get trapped in local optima because they use their personal best and global best to determine the search direction, which may lead to premature convergence. In this paper, we present a time-based PSO variant that introduces a time constant into the velocity update function of the PSO algorithm to avoid premature convergence, particularly on an emotion video-frame dataset. The method has been incorporated into binary and continuous PSO and compared with the two standard versions on an emotion video-frame dataset (CK+), as well as on static emotion datasets (the JAFFE and NIMH-ChEFS) to ensure that no bias has been introduced into the algorithm. While the time-based PSO variants (both binary and continuous) achieved non-significantly higher performance than the standard PSO algorithms on the JAFFE (77.15% vs 75.61%) and NIMH-ChEFS (71.57% vs 70.53%) datasets, the performance is significantly higher on the CK+ (96.19% vs 94.06%) dataset.
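    The exact velocity update is not given in the abstract; below is a Python sketch of a standard PSO velocity update with a hypothetical exponential time decay applied to the inertia term, offered as one plausible reading of "introducing a time constant into the velocity update function", together with the usual sigmoid transfer used in binary PSO.

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, t, w=0.729, c1=1.49, c2=1.49, tau=50.0):
    """Standard PSO velocity update with a hypothetical time-decayed inertia.

    The exponential decay exp(-t / tau) of the inertia weight is an assumption
    standing in for the paper's time constant; the cognitive (pbest) and
    social (gbest) terms are the usual PSO components.
    """
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    inertia = w * np.exp(-t / tau)
    return inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def binary_position_update(v):
    """Binary PSO step: sigmoid of velocity gives per-bit 'on' probability."""
    prob = 1.0 / (1.0 + np.exp(-v))
    return (np.random.rand(*v.shape) < prob).astype(int)
```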

    EMAP Open Dataset

    EMAP is a dataset of 145 individuals' reactions to emotion-provoking film clips. It includes electroencephalographic (EEG) and peripheral physiological data, as well as moment-by-moment ratings of emotional arousal in addition to overall and categorical ratings. The dataset includes "raw" and "cleaned" EEG data versions, as well as skin conductance, heart rate, and respiration data and subjective ratings.
    Folder and file labels and contents:
    - Both clean and raw data are available in CSV and EEGLAB .set file formats. The data labeled 'P1-50', 'P51-100', and 'P101-153' correspond to participants 1-50, 51-100, and 101-153, respectively.
    - The repository also includes already-extracted EEG features, named 'Features', for researchers interested in utilizing the features from our previous study.
    - Note that data for some participants are missing, which is why the dataset contains information for 145 participants instead of 153.
    - The accompanying "session" .csv files contain demographics and metadata, as well as detailed information on missing participants, trials, sensors, and data.
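    A short sketch of how the two data versions might be loaded in Python: EEGLAB .set files via MNE-Python's reader and the CSV files via pandas. The file and folder names below are illustrative placeholders; the dataset's "session" CSV files document the actual layout.

```python
import mne       # MNE-Python can read EEGLAB .set files
import pandas as pd

# Illustrative paths only; see the dataset's "session" CSVs for the
# actual folder layout and for which participants/trials are missing.
raw = mne.io.read_raw_eeglab("clean/P1-50/participant_01.set", preload=True)
print(raw.info)  # channel names, sampling rate, etc.

peripheral = pd.read_csv("clean/P1-50/participant_01.csv")  # e.g. SC, HR, respiration
sessions = pd.read_csv("sessions.csv")                      # demographics, metadata
```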

    Performance boosting of image matching-based iris recognition systems using deformable circular hollow kernels and uniform histogram fusion images

    Identification of people using biometric data is becoming increasingly important in the network society. Biometrics include the voice, ears, palms, fingerprints, face, iris, retina, and hand shape. Among these features, iris recognition receives particular attention because each iris is unique and does not change throughout life. In this study, an iris recognition framework is proposed using deformable circular hollow kernels and uniform histogram fusion images (UHFIs). The system introduces two approaches to iris recognition: image matching without machine learning, and machine learning. In the first approach, image matching through Gabor features is employed: after obtaining the circularly cropped UHFI, Gabor features are extracted and used in the image matching-based recognition system, with a normalized cross-correlation coefficient as the similarity metric for comparing the Gabor feature vectors. In the second approach, Gabor feature images are extracted and used in deep learning (DL). According to the experimental results, the proposed system reaches around 89% accuracy on MMU1, 86% on MMU2, and about 50% on the CASIAV3 and CASIAV4 datasets when no machine learning is employed. Extensive benchmarking also indicates that the proposed system boosts the performance of conventional systems by at least around 40% in terms of accuracy in the absence of machine learning. In the presence of machine learning, experimental results show that the proposed method with DL achieves an accuracy of up to 100% on the MMU1, MMU2, CASIAV3, and CASIAV4 datasets.
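    A minimal Python sketch of the image-matching branch: Gabor features extracted from a circularly cropped UHFI with an OpenCV Gabor filter bank, compared by a normalized cross-correlation coefficient. The kernel parameters are illustrative, not the paper's tuned values.

```python
import cv2
import numpy as np

# Gabor filter bank over 8 orientations; parameters are illustrative.
KERNELS = [cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                              lambd=10.0, gamma=0.5)
           for theta in np.linspace(0, np.pi, 8, endpoint=False)]

def gabor_features(iris_gray):
    """Concatenate filter responses of a circularly cropped iris image."""
    responses = [cv2.filter2D(iris_gray, cv2.CV_32F, k) for k in KERNELS]
    return np.concatenate([r.ravel() for r in responses])

def ncc(a, b):
    """Normalized cross-correlation coefficient between two feature vectors."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

# Two irises match if ncc(gabor_features(img1), gabor_features(img2))
# exceeds a decision threshold chosen on a validation set.
```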

    A comparison of humans and machine learning classifiers categorizing emotion from faces with different coverings

    Partial face coverings such as sunglasses and face masks unintentionally obscure facial expressions, causing a loss of accuracy when humans and computer systems attempt to categorize emotion. With the rise of soft computing techniques interacting with humans, it is important to know not just their accuracy, but also the confusion errors being made: do humans make fewer random or damaging errors than soft computing? We analyzed the impact of sunglasses and different face masks on the ability of humans and computer systems to categorize emotional facial expressions. Computer systems, represented by the VGG19, ResNet50, and InceptionV3 deep learning algorithms, and humans assessed images of people with varying emotional facial expressions and four types of covering: unmasked, a mask covering the lower face, a partial mask with a transparent mouth window, and sunglasses. The first contribution of this work is that computer systems were found to be better classifiers (98.48%) than humans (82.72%) for faces without covering (a > 15% difference), a difference due to the significantly lower accuracy of humans in categorizing anger, disgust, and fear expressions (all p < .001). The most novel aspect of the work, however, is identifying how soft computing systems make different mistakes from humans on the same data. Humans mainly confuse unclear expressions with the neutral emotion, which minimizes affective effects. Conversely, soft computing techniques often confuse unclear expressions with other emotion categories, which could lead to opposing decisions being made, e.g. a robot categorizing a fearful user as happy. Importantly, the variation in misclassification can be adjusted by varying the balance of categories in the training set.
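    The error-pattern comparison described above can be made concrete with row-normalized confusion matrices, built separately for human and machine predictions on the same covered-face images. The Python sketch below, with an assumed emotion label set, measures how much of each rater's error mass lands in the "neutral" column.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Assumed label set for illustration; the study's exact categories may differ.
EMOTIONS = ["anger", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def error_rates(y_true, y_pred):
    """Row-normalized confusion matrix (rows: true label, cols: prediction)."""
    cm = confusion_matrix(y_true, y_pred, labels=EMOTIONS).astype(float)
    return cm / np.maximum(cm.sum(axis=1, keepdims=True), 1.0)

def share_of_errors_to_neutral(cm):
    """Fraction of all misclassifications that land in the 'neutral' column."""
    j = EMOTIONS.index("neutral")
    off_diagonal = cm.sum() - np.trace(cm)
    to_neutral = cm[:, j].sum() - cm[j, j]
    return to_neutral / max(off_diagonal, 1e-8)

# Compare the two rater types on the same images:
# human_cm = error_rates(true_labels, human_predictions)
# model_cm = error_rates(true_labels, vgg19_predictions)
# print(share_of_errors_to_neutral(human_cm), share_of_errors_to_neutral(model_cm))
```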

    An Out-of-Distribution Attack Resistance Approach to Emotion Categorization

    Deep neural networks are a powerful model for feature extraction. They produce features that enable state-of-the-art performance on many tasks, including emotion categorization. However, their homogeneous representation of knowledge has made them prone to attacks, i.e., small modifications to the training or test data that mislead the models. Emotion categorization can be performed either in-distribution (training and testing on the same dataset) or out-of-distribution (training on one or more datasets and testing on a different one). We investigate whether our previously developed landmark-based technique, which is robust against attacks for in-distribution emotion categorization, translates to out-of-distribution classification problems. This is important because different databases can vary, for example in color or in the level of expressiveness of emotion. We compared the landmark-based method with four state-of-the-art deep models (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as with emotion categorization tools (the Python Facial Expression Analysis Toolbox and the Microsoft Azure Face application programming interface), by performing a cross-database experiment across six commonly used databases: the extended Cohn-Kanade, Japanese female facial expression, Karolinska directed emotional faces, National Institute of Mental Health Child Emotional Faces Picture Set, real-world affective faces, and psychological image collection at Stirling databases. The landmark-based method achieved a significantly higher accuracy, averaging 47.44%, compared with most of the deep networks.
    Impact Statement: Recognizing emotions from people's faces has real-world applications for computer-based perception, as it is often vital for interpersonal communication. Emotion recognition tasks are nowadays addressed using deep learning models that model color distribution, and so classify images rather than emotion. This homogeneous knowledge representation contrasts with emotion categorization, which is hypothesized to be more heterogeneous and landmark-based. This is investigated through out-of-distribution emotion categorization problems, where the test samples are drawn from a different dataset than the training images. Our landmark-based method achieves significantly higher classification performance (on average) than four state-of-the-art deep networks (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as other emotion categorization tools such as Py-Feat and the Azure Face API. We conclude that this improved generalization is relevant for future developments of emotion categorization tools.
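    A minimal Python sketch of the cross-database protocol the abstract describes: train on each database in turn and test on every other one. `load_database` and `make_model` are placeholder hooks for whichever feature pipeline (landmarks or a deep network) is being evaluated, and the database identifiers are assumed short names for the six databases listed above.

```python
from itertools import permutations

# Assumed database identifiers; loaders and model factory are placeholders.
DATABASES = ["CK+", "JAFFE", "KDEF", "NIMH-ChEFS", "RAF", "PICS"]

def cross_database_eval(databases, make_model, load_database):
    """Train on one database, test on each of the others, return accuracies."""
    scores = {}
    for train_name, test_name in permutations(databases, 2):
        X_train, y_train = load_database(train_name)
        X_test, y_test = load_database(test_name)
        model = make_model()          # e.g. an SVM over landmark features
        model.fit(X_train, y_train)
        scores[(train_name, test_name)] = model.score(X_test, y_test)
    return scores
```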