2,930 research outputs found

    A fuzzy-based approach for classifying students' emotional states in online collaborative work

    Emotion awareness is becoming a key aspect of collaborative work in academia, enterprises and organizations that use collaborative group work in their activity. Owing to the pervasiveness of ICTs, much of this collaboration is carried out through communication channels such as discussion forums and social networks. The emotive state of users while they carry out activities such as collaborative learning at universities or project work at enterprises and organizations strongly influences their performance and can ultimately determine the final learning or project outcome. Therefore, monitoring users' emotive states and using that information to provide feedback and scaffolding is crucial. To this end, automated analysis of data collected from communication channels is a useful source. In this paper, we propose an approach to process such collected data in order to classify and assess the emotional states of the users involved and to provide them feedback according to those states. To achieve this, a fuzzy approach is used to build the emotion classification system, which is fed with data from the ANEW dictionary, whose words are bound to emotional weights that are, in turn, used to define the fuzzy sets of our proposal. The proposed fuzzy-based system has been evaluated using real data from collaborative learning courses in an academic context.
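
    A minimal sketch of the kind of mapping this abstract describes, assuming ANEW-style mean valence scores on a 1-9 scale and hypothetical triangular membership functions for negative/neutral/positive states; the paper's actual fuzzy sets and rule base are not reproduced here.

        # Sketch: classify the emotional tone of a message from ANEW-style valence
        # scores (1 = unpleasant, 9 = pleasant). Membership functions, thresholds and
        # the tiny dictionary below are illustrative assumptions, not the paper's system.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        # Hypothetical ANEW-like entries: word -> mean valence
        ANEW_VALENCE = {"happy": 8.2, "stress": 2.1, "fine": 6.6, "fail": 1.9}

        def classify_message(words):
            scores = [ANEW_VALENCE[w] for w in words if w in ANEW_VALENCE]
            if not scores:
                return "unknown"
            v = sum(scores) / len(scores)
            memberships = {
                "negative": tri(v, 0.0, 1.0, 5.0),
                "neutral":  tri(v, 3.0, 5.0, 7.0),
                "positive": tri(v, 5.0, 9.0, 10.0),
            }
            return max(memberships, key=memberships.get)

        print(classify_message(["happy", "fine"]))   # -> positive
        print(classify_message(["stress", "fail"]))  # -> negative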

    A model for providing emotion awareness and feedback using fuzzy logic in online learning

    Monitoring users' emotive states and using that information to provide feedback and scaffolding is crucial. In the learning context, emotions can be used to increase students' attention as well as to improve memory and reasoning. Tutors should therefore be prepared to create affective learning situations and encourage collaborative knowledge construction, as well as to identify those students' feelings that hinder the learning process. In this paper, we propose a novel approach, based on fuzzy logic, to label affective behavior in educational discourse, which enables a human or virtual tutor to capture students' emotions, make students aware of their own emotions, assess these emotions and provide appropriate affective feedback. To that end, we propose a fuzzy classifier that provides an a priori qualitative assessment and fuzzy qualifiers bound to amounts such as few, regular and many, assigned by an affective dictionary to every word. The advantage of the statistical approach is that it reduces the classical pollution problem of training and analyzing the scenario on the same dataset. Our approach has been tested in a real online learning environment and proved to have a very positive influence on students' learning performance.
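
    A small illustration of the "few / regular / many" idea: fuzzy qualifiers defined over how many affective-dictionary words appear in a student's post. The breakpoints and membership shapes below are invented for illustration and are not the paper's actual qualifier definitions.

        # Sketch: fuzzy qualifiers over the number of affective words a student uses
        # in a forum post. Breakpoints are illustrative assumptions only.

        def tri(x, a, b, c):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def qualify(affective_word_count):
            """Return fuzzy membership of the count in 'few', 'regular' and 'many'."""
            n = float(affective_word_count)
            return {
                "few":     max(0.0, 1.0 - n / 4.0),             # full membership at 0, none from 4 up
                "regular": tri(n, 2.0, 5.0, 8.0),
                "many":    min(1.0, max(0.0, (n - 6.0) / 4.0)),  # full membership from 10 up
            }

        print(qualify(3))   # partly 'few', partly 'regular'
        print(qualify(9))   # mostly 'many'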

    A virtual diary companion

    Chatbots and embodied conversational agents show turn-based conversation behaviour. In current research we almost always assume that each utterance of a human conversational partner should be followed by an intelligent and/or empathetic reaction of the chatbot or embodied agent. They are assumed to be alert, trying to please the user. There are other applications which have not yet received much attention and which require a more patient or relaxed attitude, waiting for the right moment to provide feedback to the human partner. Being able and willing to listen is one of the conditions for being successful. In this paper we present some observations on listening-behaviour research and introduce one of our applications, the virtual diary companion.

    Assessing the impact of affective feedback on end-user security awareness

    A lack of awareness regarding online security behaviour can leave users and their devices vulnerable to compromise. This paper highlights potential areas where users may fall victim to online attacks, and reviews existing tools developed to raise users' awareness of security behaviour. An ongoing research project is described which provides a combined monitoring solution and affective feedback system, designed to deliver affective feedback upon automatic detection of risky security behaviour within a web browser. Results from the research indicate that an affective feedback mechanism in a browser-based environment can promote general awareness of online security.

    Predicting Math Success in an Online Tutoring System Using Language Data and Click-Stream Variables: A Longitudinal Analysis

    Previous studies have demonstrated strong links between students' linguistic knowledge, their affective language patterns and their success in math. Other studies have shown that demographic and click-stream variables in online learning environments are important predictors of math success. This study builds on this research in two ways. First, it combines linguistic and click-stream variables along with demographic information to increase prediction rates for math success. Second, it examines how random variance, as found in repeated participant data, can explain math success beyond linguistic, demographic, and click-stream variables. The findings indicate that linguistic, demographic, and click-stream factors explained about 14% of the variance in math scores. These variables, together with random factors, explained about 44% of the variance.
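
    A sketch of the modelling strategy this abstract describes: fixed linguistic, demographic and click-stream predictors plus a random intercept per participant, fit as a linear mixed model. The file and column names here are hypothetical stand-ins, not the study's actual variables.

        # Sketch: linear mixed model with a per-student random intercept, in the spirit
        # of combining fixed predictors with random participant variance.
        # Column names ("math_score", "word_count", ...) are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("tutoring_log.csv")  # hypothetical longitudinal data file

        model = smf.mixedlm(
            "math_score ~ word_count + sentiment + prior_gpa + hint_requests",
            data=df,
            groups=df["student_id"],   # repeated measurements per student
        )
        result = model.fit()
        print(result.summary())        # fixed-effect estimates plus group variance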

    Crowdsourcing a Word-Emotion Association Lexicon

    Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality word-emotion and word-polarity association lexicon quickly and inexpensively. We enumerate the challenges of emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about the emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking whether a term is associated with an emotion leads to markedly higher inter-annotator agreement than asking whether a term evokes an emotion.
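
    A minimal sketch of the quality-control idea described here: pair each emotion question with a word choice question, discard annotations whose word choice answer is wrong, and aggregate the rest by majority vote. The annotations and the gold word-choice answer below are invented for illustration.

        # Sketch: filter crowdsourced emotion annotations by a word-choice check,
        # then aggregate by majority vote. All data below is invented.
        from collections import Counter

        # Each annotation: (target term, chosen near-synonym, emotion judged associated)
        annotations = [
            ("shout", "yell",    "anger"),
            ("shout", "yell",    "anger"),
            ("shout", "whisper", "joy"),    # failed word-choice check -> rejected
            ("shout", "yell",    "fear"),
        ]
        CORRECT_CHOICE = {"shout": "yell"}   # gold answer to the word-choice question

        def aggregate(term):
            votes = Counter(
                emotion
                for t, choice, emotion in annotations
                if t == term and choice == CORRECT_CHOICE[t]   # keep only attentive annotators
            )
            return votes.most_common(1)[0] if votes else None

        print(aggregate("shout"))   # ('anger', 2)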

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject- and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group- or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
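
    A toy illustration of the partial-pooling intuition behind subject-specific parameters drawn from a shared population prior, which underlies hierarchical personalization in general; it is not the thesis's hierarchical Bayesian neural network, and the data, variances and shrinkage rule are simplified assumptions.

        # Sketch: shrink each subject's estimate toward the population mean, the basic
        # idea behind hierarchical (partially pooled) personalization. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical per-subject expressivity scores, with unequal numbers of videos
        subjects = {s: rng.normal(loc=mu, scale=1.0, size=n)
                    for s, (mu, n) in {"s1": (2.0, 3), "s2": (4.0, 30), "s3": (3.0, 8)}.items()}

        pooled_mean = np.mean(np.concatenate(list(subjects.values())))
        tau2, sigma2 = 1.0, 1.0   # assumed between-subject and within-subject variances

        for s, x in subjects.items():
            n = len(x)
            weight = (n / sigma2) / (n / sigma2 + 1.0 / tau2)   # more data -> less shrinkage
            personalized = weight * x.mean() + (1 - weight) * pooled_mean
            print(s, round(personalized, 2))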

    Multimodal music information processing and retrieval: survey and future challenges

    Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
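
    A compact sketch of late (decision-level) fusion, one of the information fusion strategies such surveys cover: modality-specific classifiers are trained separately and their predicted class probabilities are averaged. The features, labels and classifier choice below are placeholders for illustration.

        # Sketch: decision-level fusion of two modalities (e.g. audio and lyrics features)
        # by averaging class probabilities. Feature arrays are random stand-ins.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, size=200)                     # e.g. a binary genre label
        X_audio = rng.normal(size=(200, 20)) + y[:, None]    # toy audio features
        X_lyrics = rng.normal(size=(200, 50)) + y[:, None]   # toy lyrics features

        audio_clf = LogisticRegression(max_iter=1000).fit(X_audio, y)
        lyrics_clf = LogisticRegression(max_iter=1000).fit(X_lyrics, y)

        # Late fusion: average the per-modality probabilities, then take the argmax
        proba = (audio_clf.predict_proba(X_audio) + lyrics_clf.predict_proba(X_lyrics)) / 2
        fused_pred = proba.argmax(axis=1)
        print("fused training accuracy:", (fused_pred == y).mean())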