
    Learning sentiment from students’ feedback for real-time interventions in classrooms

    Knowledge about users' sentiments can be used for a variety of adaptation purposes. In the case of teaching, knowledge about students' sentiments can be used to address problems like confusion and boredom, which affect students' engagement. For this purpose, we looked at several methods that could be used for learning sentiment from students' feedback. Thus, Naive Bayes, Complement Naive Bayes (CNB), Maximum Entropy and Support Vector Machine (SVM) classifiers were trained using real students' feedback. Two classifiers stand out as better at learning sentiment, with SVM resulting in the highest accuracy at 94%, followed by CNB at 84%. We also experimented with the use of the neutral class, and the results indicated that, generally, classifiers perform better when the neutral class is excluded.
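
    A minimal sketch of how such a classifier comparison could be set up, assuming a tiny hypothetical feedback dataset in place of the real students' feedback used in the paper; Maximum Entropy is approximated here by logistic regression, as is common. The paper's accuracy figures are not expected to be reproduced.

```python
# Sketch: comparing Naive Bayes, Complement Naive Bayes, Maximum Entropy
# (logistic regression) and an SVM on sentiment-labelled feedback.
# The inline dataset is hypothetical; the study used real students' feedback.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB, ComplementNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

feedback = ["great explanation, very clear", "I am completely lost",
            "the examples helped a lot", "this lecture was boring",
            "really enjoyed the session", "too fast, could not follow"]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

classifiers = {
    "Naive Bayes": MultinomialNB(),
    "Complement NB": ComplementNB(),
    "Maximum Entropy": LogisticRegression(max_iter=1000),
    "SVM": LinearSVC(),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(CountVectorizer(), clf)
    scores = cross_val_score(pipe, feedback, labels, cv=3)  # accuracy by default
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```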

    Sentiment analysis: towards a tool for analysing real-time students' feedback

    Students' real-time feedback has numerous advantages in education; however, analysing feedback while teaching is both stressful and time-consuming. To address this problem, we propose to analyse feedback automatically using sentiment analysis. Sentiment analysis is domain dependent and, although it has been applied to the educational domain before, it has not previously been used for real-time feedback. To find the best model for automatic analysis, we look at four aspects: preprocessing, features, machine learning techniques and the use of the neutral class. We found that the best result across the four aspects is obtained with Support Vector Machines (SVM) combined with the highest level of preprocessing, unigram features and no neutral class, which gave 95% accuracy.
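
    The four aspects above can be pictured as a single pipeline. The following is a rough sketch over assumed, hypothetical data: simple preprocessing, unigram features, an SVM, and the neutral class filtered out before training; the paper's exact preprocessing levels are not reproduced.

```python
# Sketch of the pipeline described above: preprocessing, unigram features,
# an SVM classifier, and the neutral class excluded. The data is hypothetical.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

raw = [("Brilliant demo today!!", "positive"),
       ("It was ok I guess", "neutral"),
       ("Couldn't hear the lecturer :(", "negative"),
       ("Loved the group exercise", "positive")]

def preprocess(text):
    # Lower-case and strip punctuation; the highest preprocessing level in the
    # paper likely involves further steps (e.g. stop-word removal, stemming).
    return re.sub(r"[^a-z\s]", " ", text.lower())

# Exclude neutral feedback, which the study found gave better results.
data = [(preprocess(t), y) for t, y in raw if y != "neutral"]
texts, labels = zip(*data)

model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LinearSVC())
model.fit(texts, labels)
print(model.predict([preprocess("The slides were very helpful")]))
```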

    Automatic Classification of Autistic Child Vocalisations: A Novel Database and Results

    Humanoid robots have in recent years shown great promise for supporting the educational needs of children on the autism spectrum. To further improve the efficacy of such interactions, user-adaptation strategies based on the individual needs of a child are required. In this regard, the proposed study assesses the suitability of a range of speech-based classification approaches for automatic detection of autism severity according to the commonly used Social Responsiveness Scale™, second edition (SRS-2). Autism is characterised by socialisation limitations, including child language and communication ability; when compared to neurotypical children of the same age, these can be a strong indication of severity. This study introduces a novel dataset of 803 utterances recorded from 14 autistic children aged between 4 and 10 years during Wizard-of-Oz interactions with a humanoid robot. Our results demonstrate the suitability of support vector machines (SVMs) which use acoustic feature sets from multiple Interspeech ComParE challenges. We also evaluate deep spectrum features, extracted via an image classification convolutional neural network (CNN) from the spectrograms of autistic speech instances. At best, by using SVMs on the acoustic feature sets, we achieved a UAR of 73.7% for the proposed 3-class task.
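
    A hedged sketch of the SVM setup and the UAR metric mentioned above, assuming pre-extracted per-utterance acoustic feature vectors (for example, functionals exported from a toolkit such as openSMILE); the random data stands in for the actual utterance features and severity labels, which are not reproduced here.

```python
# Sketch: SVM classification of per-utterance acoustic feature vectors and the
# unweighted average recall (UAR) metric, i.e. recall averaged over classes.
# The feature matrix and 3-class labels below are random stand-ins.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 88))      # hypothetical acoustic feature vectors
y = rng.integers(0, 3, size=300)    # hypothetical severity classes 0, 1, 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01))
clf.fit(X_tr, y_tr)

uar = recall_score(y_te, clf.predict(X_te), average="macro")  # UAR = macro recall
print(f"UAR: {uar:.3f}")
```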

    Face mask recognition from audio: the MASC database and an overview on the mask challenge

    The sudden outbreak of COVID-19 has resulted in tough challenges for the field of biometrics, due to its spread via physical contact and the regulations on wearing face masks. Given these constraints, voice biometrics can offer a suitable contact-less biometric solution; they can benefit from models that classify whether a speaker is wearing a mask or not. This article reviews the Mask Sub-Challenge (MSC) of the INTERSPEECH 2020 COMputational PARalinguistics challengE (ComParE), which focused on the following classification task: given an audio chunk of a speaker, classify whether the speaker is wearing a mask or not. First, we report the collection of the Mask Augsburg Speech Corpus (MASC) and the baseline approaches used to solve the problem, achieving a performance of [Formula: see text] Unweighted Average Recall (UAR). We then summarise the methodologies explored in the submitted and accepted papers, which mainly used two common patterns: (i) phonetic-based audio features, or (ii) spectrogram representations of audio combined with Convolutional Neural Networks (CNNs) typically used in image processing. Most approaches enhance their models by adopting ensembles of different models and attempting to increase the size of the training data using various techniques. We review and discuss the results of the participants of this sub-challenge, where the winner scored a UAR of [Formula: see text]. Moreover, we present the results of fusing the approaches, leading to a UAR of [Formula: see text]. Finally, we present a smartphone app that can be used as a proof-of-concept demonstration to detect in real time whether users are wearing a face mask; we also benchmark the run-time of the best models.
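
    As an illustration of the second pattern (spectrograms fed to a CNN), the sketch below builds a small binary mask/no-mask classifier; the input shape and layer sizes are assumptions for illustration, not those of the baseline or of any submitted system.

```python
# Sketch: a small CNN that classifies mask vs. no mask from spectrogram
# "images". Shapes and layer sizes are illustrative only.
import tensorflow as tf

def build_mask_cnn(input_shape=(128, 128, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),        # spectrogram patch
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # mask vs. no mask
    ])

model = build_mask_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```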

    A novel face recognition system in unconstrained environments using a convolutional neural network

    The performance of most face recognition systems (FRS) in unconstrained environments is widely noted to be sub-optimal. One reason for this poor performance may be the lack of highly effective image pre-processing approaches, which are typically required before the feature extraction and classification stages. Furthermore, only minimal face recognition issues are typically considered in most FRS, thus limiting the wide applicability of most FRS in real-life scenarios. It is therefore envisaged that developing more effective pre-processing techniques, in addition to selecting the correct features for classification, will significantly improve the performance of FRS. The thesis investigates different research works on FRS, its techniques and its challenges in unconstrained environments. The thesis proposes a novel image enhancement technique as a pre-processing approach for FRS. The proposed enhancement technique improves the overall FRS model, resulting in increased recognition performance. Also, a selection of novel hybrid features extracted from the enhanced facial images within the dataset is presented to improve recognition performance. The thesis proposes a novel evaluation function as a component within the image enhancement technique to improve face recognition in unconstrained environments. A defined scale mechanism was designed within the evaluation function to evaluate the enhanced images, such that extreme values indicate images that are too dark or too bright. The proposed algorithm enables the system to automatically select the most appropriate enhanced face image without human intervention. Evaluation of the proposed algorithm was done using standard parameters, where it is demonstrated to outperform existing image enhancement techniques both quantitatively and qualitatively. The thesis confirms the effectiveness of the proposed image enhancement technique for face recognition in unconstrained environments using a convolutional neural network. Furthermore, the thesis presents a selection of hybrid features from the enhanced image that results in effective image classification. Different face datasets were selected, and each face image was enhanced using the proposed and existing image enhancement techniques prior to feature selection and classification. Experiments on the different face datasets showed increased and better performance using the proposed approach. The thesis shows that using an effective image enhancement technique as a pre-processing approach can improve the performance of FRS compared to using unenhanced face images. The right features to extract from the enhanced face dataset have also been shown to be an important factor in the improvement of FRS. The thesis made use of standard face datasets to confirm the effectiveness of the proposed method. On the LFW face dataset, an improved recognition rate was obtained when considering all the facial conditions within the dataset. Thesis (PhD), University of Pretoria, 2018; Electrical, Electronic and Computer Engineering; CSIR-DST Inter-programme bursary.
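
    The thesis's specific enhancement technique and evaluation function are not detailed in this abstract, so the sketch below only illustrates the general enhance-then-recognise idea, using CLAHE and a mean-intensity score purely as stand-ins; the file path is a placeholder.

```python
# Sketch of the enhance-before-recognition idea. CLAHE and the mean-intensity
# score are generic stand-ins, NOT the technique proposed in the thesis.
import cv2
import numpy as np

def enhance_face(gray_face: np.ndarray) -> np.ndarray:
    # Contrast-limited adaptive histogram equalisation as a generic example
    # of a pre-processing enhancement step.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_face)

def brightness_score(gray_face: np.ndarray) -> float:
    # Toy stand-in for an evaluation function: values near 0 or 255 would
    # flag images that are too dark or too bright.
    return float(gray_face.mean())

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
if img is not None:
    enhanced = enhance_face(img)
    print("mean intensity before/after:",
          brightness_score(img), brightness_score(enhanced))
    # `enhanced` would then go to feature extraction and a CNN classifier.
```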

    A scattering and repulsive swarm intelligence algorithm for solving global optimization problems

    The firefly algorithm (FA), as a metaheuristic search method, is useful for solving diverse optimization problems. However, it is challenging to use FA in tackling high-dimensional optimization problems, and the random movement of FA has a high likelihood of becoming trapped in local optima. In this research, we propose three improved algorithms, i.e., the Repulsive Firefly Algorithm (RFA), the Scattering Repulsive Firefly Algorithm (SRFA), and Enhanced SRFA (ESRFA), to mitigate the premature convergence problem of the original FA model. RFA adopts a repulsive force strategy to accelerate fireflies (i.e., solutions) to move away from unpromising search regions, in order to reach global optimality in fewer iterations. SRFA employs a scattering mechanism along with the repulsive force strategy to divert weak neighbouring solutions to new search regions, in order to increase global exploration. Motivated by the survival tactics of hawk-moths, ESRFA incorporates a hovering-driven attractiveness operation, an exploration-driven evading mechanism, and a learning scheme based on the historical best experience in the neighbourhood to further enhance SRFA. Standard and CEC2014 benchmark optimization functions are used for evaluation of the proposed FA-based models. The empirical results indicate that ESRFA, SRFA and RFA significantly outperform the original FA model, a number of state-of-the-art FA variants, and other swarm-based algorithms, including Simulated Annealing, Cuckoo Search, Particle Swarm, Bat Swarm, Dragonfly, and Ant-Lion Optimization, on diverse challenging benchmark functions.
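
    To make the repulsive-force idea concrete, the following is a simplified sketch of a firefly-style update with an extra step that pushes each solution away from the current worst firefly; it is an interpretation for illustration only, not the exact RFA, SRFA or ESRFA update rules.

```python
# Simplified firefly-style search on a toy objective, with a repulsion step
# that drifts solutions away from the worst firefly (illustration only).
import numpy as np

def sphere(x):                          # toy objective to minimise
    return float(np.sum(x ** 2))

def firefly_with_repulsion(obj, dim=10, n=20, iters=200,
                           beta0=1.0, gamma=1.0, alpha=0.1, rho=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n, dim))
    for _ in range(iters):
        fit = np.array([obj(x) for x in pop])
        worst = pop[np.argmax(fit)].copy()
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:      # move firefly i towards brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=dim)
            pop[i] += rho * (pop[i] - worst)   # repulsion from the worst region
        np.clip(pop, -5, 5, out=pop)
    fit = np.array([obj(x) for x in pop])
    return pop[np.argmin(fit)], float(fit.min())

best, best_fit = firefly_with_repulsion(sphere)
print("best fitness found:", best_fit)
```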

    Emotional speech analysis in mediation and court environments

    When people communicate, their states of mind are coupled with the explicit content of the messages being transmitted. The implicit information conveyed by mental states is essential to correctly understand and frame communication messages. In mediation, professional mediators include empathy as a fundamental skill when dealing with the relational and emotional aspects of a case. In court environments, emotion analysis intends to point out stress or fear as indicators of the truthfulness of certain assertions. In commercial environments, such as call centers, automatic emotion analysis through speech focuses on detecting deception or frustration. Computational analysis of emotions focuses on gathering information from speech, facial expressions, body poses and movements to predict emotional states. Specifically, speech analysis has been reported as a valuable procedure for emotional state recognition. While some studies focus on the analysis of speech features to classify emotional states, others concentrate on determining the optimal classification performance. In this paper we analyze current approaches to the computational analysis of emotions through speech and consider the replication of their techniques and findings in the domains of mediation and legal multimedia.
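
    As a rough sketch of the kind of speech-based pipeline surveyed here, the code below extracts MFCC statistics per utterance and trains a generic classifier; the synthetic signals and emotion labels are stand-ins, and real recordings would be loaded from audio files instead.

```python
# Sketch: per-utterance acoustic features (MFCC statistics) feeding a
# classifier of emotional state. Signals and labels below are synthetic.
import numpy as np
import librosa
from sklearn.svm import SVC

def utterance_features(y, sr=16000):
    # Summarise frame-level MFCCs into one fixed-length vector per utterance.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
utterances = [rng.normal(size=16000).astype(np.float32) for _ in range(6)]
labels = ["calm", "stressed", "calm", "stressed", "calm", "stressed"]

X = np.stack([utterance_features(u) for u in utterances])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))  # real audio would come from librosa.load(path, sr=16000)
```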

    Feature selection using enhanced particle swarm optimisation for classification models.

    In this research, we propose two Particle Swarm Optimisation (PSO) variants to undertake feature selection tasks. The aim is to overcome two major shortcomings of the original PSO model, i.e., premature convergence and weak exploitation around near-optimal solutions. The first proposed PSO variant incorporates four key operations: a modified PSO operation with rectified personal and global best signals, spiral-search-based local exploitation, Gaussian-distribution-based swarm leader enhancement, and mirroring and mutation operations for worst solution improvement. The second proposed PSO model enhances the first one through four new strategies, i.e., an adaptive exemplar breeding mechanism incorporating multiple optimal signals, nonlinear-function-oriented search coefficients, exponential and scattering schemes for the swarm leader, and worst solution enhancement, respectively. In comparison with a set of 15 classical and advanced search methods, the proposed models illustrate statistical superiority for discriminative feature selection on a total of 13 data sets.
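
    For context, the sketch below shows a plain wrapper-based binary PSO for feature selection (not the proposed variants): each particle is a 0/1 mask over features and its fitness is the cross-validated accuracy of a classifier trained on that subset; the dataset and classifier are arbitrary choices for illustration.

```python
# Baseline binary PSO feature selection wrapper (illustration only, not the
# proposed variants): particles are 0/1 feature masks scored by a classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_particles, n_feat, iters = 10, X.shape[1], 20

def fitness(mask):
    # Cross-validated accuracy of a classifier on the selected feature subset.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

vel = np.zeros((n_particles, n_feat))
masks = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
pbest, pbest_fit = masks.copy(), np.array([fitness(m) for m in masks])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - masks) + 1.5 * r2 * (gbest - masks)
    prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer function
    masks = (rng.random(vel.shape) < prob).astype(float)
    fit = np.array([fitness(m) for m in masks])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = masks[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected features:", int(gbest.sum()), "| accuracy:", round(float(pbest_fit.max()), 3))
```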

    Sentiment Analysis of Persian Movie Reviews Using Deep Learning

    Sentiment analysis aims to automatically classify a subject's sentiment (e.g., positive, negative, or neutral) towards a particular aspect such as a topic, product, movie, or news item. Deep learning has recently emerged as a powerful machine learning technique to tackle the growing demand for accurate sentiment analysis. However, the majority of research efforts are devoted to the English language only, while information of great importance is also available in other languages. This paper presents a novel, context-aware, deep-learning-driven Persian sentiment analysis approach. Specifically, the proposed deep-learning-driven automated feature-engineering approach classifies Persian movie reviews as having positive or negative sentiments. Two deep learning algorithms, convolutional neural networks (CNN) and long short-term memory (LSTM), are applied and compared with our previously proposed manual-feature-engineering-driven, SVM-based approach. Simulation results demonstrate that LSTM obtained better performance compared to multilayer perceptron (MLP), autoencoder, support vector machine (SVM), logistic regression and CNN algorithms.
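
    A minimal sketch of the LSTM classification approach described above: integer-encoded review tokens are embedded, passed through an LSTM and classified as positive or negative. The vocabulary size, sequence length and layer sizes are assumptions for illustration, not those used in the paper.

```python
# Sketch: embedding + LSTM + sigmoid output for binary review sentiment.
# Hyperparameters are illustrative only.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,), dtype="int32"),  # token ids
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),              # learned embeddings
    tf.keras.layers.LSTM(64),                                 # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),           # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# In practice the Persian reviews would be tokenised, integer-encoded and
# padded to MAX_LEN before calling model.fit(...).
```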