
    Real-Time Facial Emotion Recognition Using Fast R-CNN

    In computer vision and image processing, object detection algorithms are used to detect semantic objects of certain classes in images and videos. Object detectors use deep learning networks to classify detected regions. Unprecedented advances in Convolutional Neural Networks (CNNs) have opened new possibilities and implementations for object detectors. An object detector based on deep learning detects objects through proposed regions and then classifies each region with a CNN. Object detectors are computationally efficient, unlike a typical CNN, which is computationally complex and expensive. Object detectors are widely used for face detection, face recognition, and object tracking. In this thesis, deep-learning-based object detection algorithms are implemented to classify facially expressed emotions in real time, captured through a webcam. A typical CNN classifies images without specifying regions within an image, which limits insight into how network performance depends on different training options. It is also harder to verify whether such a network has converged and is able to generalize, i.e., to classify unseen data that was not part of the training set. Fast Region-based Convolutional Neural Network (Fast R-CNN), an object detection algorithm, is used to detect facially expressed emotions in real time by classifying proposed regions. The Fast R-CNN is trained on a high-quality video database of 24 actors facially expressing eight different emotions, using images processed from 60 videos per actor. An object detector's performance is measured with various metrics; however, scoring well on metrics such as average precision or miss rate does not necessarily mean that the network is correctly classifying regions. This may result from the network model having been over-trained. In our work we show that an object detection algorithm such as Fast R-CNN performs surprisingly well in classifying facially expressed emotions in real time, outperforming a plain CNN.
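The inference loop the abstract describes (propose regions, score each with the CNN head, keep the best-scoring box) can be sketched as below. This is a minimal illustration, not the thesis's implementation: the function names `propose_regions` and `classify_regions` are assumed, the scoring head is stubbed out, and the eight-class label set is illustrative.

```python
# Minimal sketch of a Fast R-CNN-style inference loop over proposed regions.
# All names here are illustrative stand-ins, not the thesis's actual code.

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]  # 8 illustrative classes

def propose_regions(frame):
    """Stand-in for the external region-proposal step Fast R-CNN relies on
    (e.g. selective search). Here it trivially proposes the whole frame."""
    h, w = len(frame), len(frame[0])
    return [(0, 0, w, h)]  # (x, y, width, height)

def classify_regions(frame, scores_fn):
    """Score every proposed region with the CNN head and keep the best box."""
    best = None
    for box in propose_regions(frame):
        scores = scores_fn(frame, box)        # per-class scores for this region
        label = max(scores, key=scores.get)   # arg-max emotion for this box
        if best is None or scores[label] > best[2]:
            best = (box, label, scores[label])
    return best

# Dummy scoring head standing in for the trained network.
frame = [[0] * 64 for _ in range(64)]
fake_scores = lambda f, b: {e: (0.9 if e == "happy" else 0.01) for e in EMOTIONS}
box, label, confidence = classify_regions(frame, fake_scores)
```

The key point mirrored here is that classification happens per proposed region, so the detector reports where in the frame the emotion was found, which a whole-image CNN cannot do.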

    Understanding COVID-19 halal vaccination discourse on Facebook and Twitter using aspect-based sentiment analysis and text emotion analysis

    The COVID-19 pandemic introduced unprecedented challenges for people and governments. Vaccines are an available solution to this pandemic. Recipients of the vaccines are of different ages, genders, and religions. Muslims follow specific Islamic guidelines that prohibit them from taking a vaccine with certain ingredients. This study aims to analyze Facebook and Twitter data to understand the discourse related to halal vaccines using aspect-based sentiment analysis and text emotion analysis. We searched for the term “halal vaccine”, limited the timeline to the period between 1 January 2020 and 30 April 2021, and collected 6037 tweets and 3918 Facebook posts. We performed data preprocessing on the tweets and Facebook posts and built a Latent Dirichlet Allocation (LDA) model to identify topics, then calculated sentiment for each topic. Finally, this study further investigates emotions in the data using the National Research Council of Canada Emotion Lexicon. Our analysis identified four topics in each of the Twitter and Facebook datasets. Two topics, “COVID-19 vaccine” and “halal vaccine”, are shared between the two datasets. The other two topics in the tweets are “halal certificate” and “must halal”, while “sinovac vaccine” and “ulema council” are the other two topics in the Facebook dataset. The sentiment analysis shows that sentiment toward the halal vaccine is mostly neutral in the Twitter data, whereas it is positive in the Facebook data. The emotion analysis indicates that trust is the most present of the top three emotions in both datasets, followed by anticipation and fear.

    Finding readers in the blogosphere: A project building an audience for a blog at the intersection of mothering and beekeeping http://www.bumblehive.com

    Professional project report submitted in partial fulfillment of the requirements for the degree of Master of Arts in Journalism from the School of Journalism, University of Missouri--Columbia. Women publish nearly four million blogs that fall under the genre known as Mommy Blogs: written by women, for women, primarily on the subjects of family life and parenting. Many Mommy Bloggers count readers in the thousands. For my research, I sought to answer the questions: How do Mom Bloggers attract readers? Do bloggers with strong followings actively pursue their readers? How do readers find bloggers? During the spring of 2013, I researched audience building within the genre and interviewed 10 Mommy Bloggers about their experiences. The theoretical framework I used is the Uses and Gratifications theory, which is conducive to research on blog audiences because bloggers are blog readers themselves, and bloggers know a lot about the identity and motivations of their readers because of the interactive nature of the platform. The method I used is interviewing, with open-ended questions, in order to give respondents an opportunity to relay information unprompted. The research reveals six primary tactics used to attract readers: 1. Reading and commenting on other blogs, with the expectation of reciprocity. 2. Promoting the blog on Facebook and Twitter. 3. Posting at least once a week. 4. Writing with a consistent voice. 5. Specializing in a niche topic. 6. Guest posting and publishing widely. Includes bibliographic references.

    Application of the Lovheim model for emotion detection in English tweets

    Emotions are central to a wide range of everyday human experiences, and understanding emotions is a key problem both in the business world and in the fields of physiology and neuroscience. The most well-known theory of emotions proposes a categorical system of emotion classification, where emotions are classified as discrete entities, while psychologists observe that people rarely express a single basic emotion in isolation. Following this observation, alternative models have been developed that define multiple dimensions corresponding to various parameters and specify emotions along those dimensions. Recently, one of the most widely used models in affective computing is Lovheim’s cube of emotions, a theoretical model that focuses on the interactions between monoamine neurotransmitters and emotions. This work presents a comparison between a single automatic classifier able to recognize the basic emotions proposed in Lovheim’s cube and a set of independent binary classifiers, each able to recognize a single dimension of the cube. The application of this model produced a notable improvement in results: in the best case, accuracy increased by 11.8%. The set of classifiers has been modeled and deployed on the distributed ActoDeS application architecture. This implementation improves computational performance, eases system reconfiguration, and improves the ability to recognize particular situations consisting of particular combinations of basic emotions.
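The "set of independent binary classifiers" scheme described above can be sketched as follows: one binary classifier per cube axis (serotonin, dopamine, noradrenaline), whose combined outputs select one of the eight corner emotions. The per-axis classifiers below are trivial stubs, not the paper's trained models, and the corner labels follow the commonly cited rendering of Lövheim's table, which should be checked against the original paper.

```python
# Corner -> emotion lookup for the Lovheim cube, indexed by the tuple
# (serotonin, dopamine, noradrenaline), where 1 = high and 0 = low.
# Corner labels are the commonly cited assignment, given here as an assumption.
CUBE = {
    (0, 0, 0): "shame/humiliation",
    (0, 0, 1): "distress/anguish",
    (0, 1, 0): "fear/terror",
    (0, 1, 1): "anger/rage",
    (1, 0, 0): "contempt/disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "enjoyment/joy",
    (1, 1, 1): "interest/excitement",
}

def classify(tweet, axis_classifiers):
    """Run the three per-axis binary classifiers and map to a cube corner."""
    corner = tuple(clf(tweet) for clf in axis_classifiers)
    return CUBE[corner]

# Trivial keyword stubs standing in for the trained per-axis classifiers.
high_serotonin     = lambda t: 1 if "great" in t else 0
high_dopamine      = lambda t: 1 if ("great" in t or "new" in t) else 0
high_noradrenaline = lambda t: 1 if "!" in t else 0

axes = (high_serotonin, high_dopamine, high_noradrenaline)
emotion = classify("what a great new day!", axes)
```

Decomposing the 8-way problem into three binary decisions is what allows each axis classifier to be trained, tuned, and redeployed independently, which is the reconfigurability advantage the abstract attributes to the ActoDeS deployment.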