5 research outputs found

    Beauty Pageants, Controverted Decisions and Emotional Outpour: An Analysis of Social Media Posts on GHANA'S MOST BEAUTIFUL

    Since its emergence, beauty pageantry as a cultural event and entertainment programme has been predominantly characterized as a controversial and unwelcome social venture. Scholars like Cohen, Wilk and Stoeltje (1996) reiterate this by arguing that beauty pageants have become sites of controversy and resistance. In Ghana, most of these controversies take the form of audience uproar and outbursts on social media, especially at the end of the contest when expectations are not met. Given the large amount of data social media generates about the contest, it has become important for researchers to focus their attention on interrogating the posts shared by audiences as well as the emotions embedded in them. The study therefore examines audiences' social media posts on an indigenous beauty pageant in Ghana, Ghana's Most Beautiful (GMB). The paper adopts Reader-Response theory, Philosophical Hermeneutics and Ekman and Friesen's six basic emotions to interrogate Facebook users' posts on the coronation of the 2017 and 2018 beauty queens, Zeinab and Abena. Using qualitative content analysis, cyber ethnography and thematic analysis of purposively sampled Facebook posts, the study revealed that, through an emotional outpour of anger, disgust, sadness, surprise, and happiness, audiences raised contentious social issues such as corruption, aspersions, signification/ethnocentrism, and objectification. They also highlighted issues such as glorification and divination. The study further found that, through users' posts, the beauty queens were represented as commodities and as assertive, purposeful, heroic and intelligent. The study concludes that the organizers of GMB should consider a total restructuring of the pageant into one that embraces cultural differences and is devoid of commodification and oppression of women.
    Keywords: Ghana's Most Beautiful, Beauty pageants, audience emotions, emotional display, social media
    DOI: 10.7176/NMMC/97-05
    Publication date: August 31st 202

    A Survey on Emotion Recognition for Human Robot Interaction

    With the recent developments in technology and the advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and display emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans so that it can interact more naturally with its human counterpart in different environments. This article presents a survey on emotion recognition for HRI systems. The survey aims to achieve two objectives. Firstly, it discusses the main challenges that researchers face when building emotional HRI systems. Secondly, it identifies the sensing channels that can be used to detect emotions and provides a literature review of recent research published on each channel, along with the methodologies used and the results achieved. Finally, some open issues in emotion recognition and recommendations for future work are outlined.
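    As a minimal sketch of the multi-channel idea surveyed above (not an implementation from the article), the example below concatenates feature vectors from two hypothetical sensing channels, facial landmarks and speech descriptors, and feeds them to a standard classifier. The feature dimensions, the synthetic data and the scikit-learn pipeline are assumptions made purely for illustration.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-ins for two sensing channels (assumed shapes, not from the article)
    rng = np.random.default_rng(0)
    n_samples = 200
    face_feats = rng.normal(size=(n_samples, 68 * 2))   # e.g. flattened facial landmarks
    speech_feats = rng.normal(size=(n_samples, 40))     # e.g. MFCC-style speech descriptors
    labels = rng.integers(0, 6, size=n_samples)         # six basic emotion classes

    # Early (feature-level) fusion: concatenate the channels and train one classifier
    X = np.hstack([face_feats, speech_feats])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:150], labels[:150])
    print("held-out accuracy:", clf.score(X[150:], labels[150:]))

    Feature-level fusion of this kind is only one of the possible designs; the survey's point is that each sensing channel comes with its own acquisition and recognition challenges.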

    Dual-level segmentation method for feature extraction enhancement strategy in speech emotion recognition

    The speech segmentation approach can be one of the significant factors contributing to the overall performance of a Speech Emotion Recognition (SER) system. An utterance may contain more than one perceived emotion, and the boundaries between emotional changes within an utterance are challenging to determine. Speech segmented through a conventional fixed window does not correspond to changes in the signal: because the segment points are arbitrary, the resulting segment boundary may fall within a sentence or between emotional changes. This study introduces an improvement over fixed-window Relative Time Interval (RTI) segmentation by using a Signal Change (SC) segmentation approach to locate segment boundaries with respect to signal transitions. A segment-based feature extraction enhancement strategy using a dual-level segmentation method, RTI-SC, is proposed, building on the conventional approach. Instead of segmenting the whole utterance at a relative time interval, the study applies peak analysis to obtain segment boundaries defined by the maximum peak value within each temporary RTI segment. In peak selection, over-segmentation can occur depending on the input signal, affecting the boundary selection decision, so two approaches to finding the maximum peaks were implemented: peak selection by distance allocation, and peak selection by a maximum function. Replacing the temporary RTI segment with a segment aligned to signal change is intended to better capture high-level statistical features around the signal transition. The prosodic, spectral, and wavelet properties of the signal were integrated to structure a fine feature set based on the proposed method: 36 low-level descriptors, 12 statistical features and their derivatives were extracted for each segment, resulting in a fixed vector dimension. Correlation-based Feature Subset Selection (CFS) with the Best First search method was applied for dimensionality reduction before classification with a Support Vector Machine (SVM) trained by Sequential Minimal Optimization (SMO). The performance of the feature fusion constructed with the proposed method was evaluated through speaker-dependent and speaker-independent tests on the EMO-DB and RAVDESS databases. The results indicate that the prosodic and spectral features derived from the dual-level segmentation method offered a higher recognition rate for most speaker-independent tasks, with a significant improvement in overall accuracy to 82.2% (150 features), the highest among the segmentation approaches used in this study. The proposed method outperformed the baseline approach in single-emotion assessment with both the full feature set and the optimized set, and it contributed the highest accuracy for most individual emotions. On the EMO-DB database, accuracy was enhanced for happiness (67.6%), anger (89%), fear (85.5%) and disgust (79.3%), while the neutral and sadness emotions obtained accuracies similar to the baseline method (91% and 93.5%, respectively). A 100% accuracy for the boredom emotion (female speaker) was observed in the speaker-dependent test, the highest single-emotion result reported in this study.
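    As a rough illustration of the dual-level RTI-SC idea described in the abstract (not the authors' implementation), the sketch below splits a signal into fixed Relative Time Interval windows and then moves each boundary to the maximum peak found inside the temporary segment, using SciPy's find_peaks with a minimum peak distance as a stand-in for the two peak-selection steps. The window length, the minimum peak distance and the fallback behaviour are assumed values chosen only for the example.

    import numpy as np
    from scipy.signal import find_peaks

    def rti_sc_segments(y, sr, rti_seconds=0.5, min_peak_distance=0.05):
        """Fixed RTI windows refined by signal peaks (illustrative, assumed parameters)."""
        rti_len = int(rti_seconds * sr)
        boundaries = [0]
        for start in range(0, len(y) - rti_len, rti_len):
            window = np.abs(y[start:start + rti_len])
            # Peak selection by distance allocation: enforce a minimum gap between candidate peaks
            peaks, _ = find_peaks(window, distance=max(1, int(min_peak_distance * sr)))
            if len(peaks) > 0:
                # Peak selection by maximum: keep the largest peak inside the temporary RTI segment
                boundaries.append(start + int(peaks[np.argmax(window[peaks])]))
            else:
                boundaries.append(start + rti_len)  # no peak found: fall back to the fixed boundary
        boundaries.append(len(y))
        # Return the refined, signal-change-aware segments for segment-level feature extraction
        return [y[a:b] for a, b in zip(boundaries[:-1], boundaries[1:]) if b > a]

    # Example: segment one second of synthetic audio sampled at 16 kHz
    sr = 16000
    y = np.random.randn(sr).astype(np.float32)
    print([len(s) for s in rti_sc_segments(y, sr)])

    In the study itself, each resulting segment would then feed the prosodic, spectral and wavelet feature extraction stage before CFS selection and SVM/SMO classification.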