
    Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data

    Emotion recognition faces three challenges. First, it is difficult to recognize human emotional states from a single modality alone. Second, manually annotating emotional data is expensive. Third, emotional data often suffers from missing modalities due to unforeseen sensor malfunctions or configuration issues. In this paper, we address all of these problems under a novel multi-view deep generative framework. Specifically, we propose to model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework can learn the joint deep representation from multiple modalities and evaluate the importance of each modality simultaneously. To solve the labeled-data-scarcity problem, we extend our multi-view model to the semi-supervised learning scenario by casting the semi-supervised classification problem as a specialized missing-data imputation task. To address the missing-modality problem, we further extend our semi-supervised multi-view model to deal with incomplete data, where a missing view is treated as a latent variable and integrated out during inference. In this way, the proposed overall framework can utilize all available data (both labeled and unlabeled, both complete and incomplete) to improve its generalization ability. Experiments conducted on two real multi-modal emotion datasets demonstrate the superiority of our framework. Comment: arXiv admin note: text overlap with arXiv:1704.07548; 2018 ACM Multimedia Conference (MM'18).
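
    To make the modelling idea concrete, the following is a minimal PyTorch sketch (not the authors' code) of modality-specific encoders and decoders sharing one latent space. The learnable mixture weights and moment-matched Gaussian stand in for the paper's Gaussian-mixture posterior and modality-importance weighting; all names, layer sizes, and feature dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn

    class MultiModalVAE(nn.Module):
        def __init__(self, input_dims, latent_dim=32, hidden=128):
            super().__init__()
            # One encoder/decoder per modality; each encoder emits (mu, logvar).
            self.encoders = nn.ModuleList(
                nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                              nn.Linear(hidden, 2 * latent_dim))
                for d in input_dims)
            self.decoders = nn.ModuleList(
                nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                              nn.Linear(hidden, d))
                for d in input_dims)
            # Learnable mixture weights act as per-modality importance scores.
            self.mix_logits = nn.Parameter(torch.zeros(len(input_dims)))

        def forward(self, xs):
            # xs: one tensor per modality; None marks a missing view.
            avail = [i for i, x in enumerate(xs) if x is not None]
            stats = [self.encoders[i](xs[i]).chunk(2, dim=-1) for i in avail]
            w = torch.softmax(self.mix_logits[avail], dim=0)
            # Moment-matched Gaussian of the mixture over available modalities.
            mu = sum(wi * m for wi, (m, _) in zip(w, stats))
            var = sum(wi * (lv.exp() + (m - mu) ** 2) for wi, (m, lv) in zip(w, stats))
            z = mu + var.sqrt() * torch.randn_like(mu)  # reparameterisation trick
            # Decoding every modality from z imputes the missing views as well.
            return [dec(z) for dec in self.decoders], mu, var

    model = MultiModalVAE(input_dims=[310, 33])           # hypothetical EEG/eye dims
    recons, mu, var = model([torch.randn(8, 310), None])  # second modality missing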

    Recognition of Human Emotion using Radial Basis Function Neural Networks with Inverse Fisher Transformed Physiological Signals

    Emotion is a complex state of the human mind, influenced by physiological changes in the body and interdependent external events, which makes automatic recognition of emotional states a challenging task. A number of recognition methods have been applied in recent years to recognize human emotion. The motivation for this study is therefore to discover a combination of emotion features and a recognition method that will produce the best result in building an efficient emotion recognizer for an affective system. We introduced a shifted tanh normalization scheme to realize the inverse Fisher transformation, applied it to the DEAP physiological dataset, and performed a series of experiments using Radial Basis Function Artificial Neural Networks (RBFANN). In our experiments, we compared the performance of digital-image-based feature extraction techniques, namely the Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and the Histogram of Images (HIM). These feature extraction techniques were utilized to extract discriminatory features from the multimodal DEAP dataset of physiological signals. The experimental results indicate that the best recognition accuracy was achieved with the EEG modality data, using the HIM feature extraction technique and classification along the dominance emotion dimension. The result compares remarkably well with existing results in the literature, including deep learning studies that have utilized the DEAP corpus, and is also applicable to diverse fields of engineering studies.
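
    The inverse Fisher transform is the hyperbolic tangent, so one plausible reading of the shifted tanh normalization is sketched below: z-score each feature, squash it with tanh, and shift the output into [0, 1]. This is an assumption for illustration; the exact scaling constant used in the paper is not given here.

    import numpy as np

    def shifted_tanh_normalize(x, scale=0.01):
        """Standardise x, squash with tanh (inverse Fisher), shift to [0, 1].

        x: (n_samples, n_features) array; `scale` is an assumed constant.
        """
        z = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)  # per-feature z-score
        return 0.5 * (np.tanh(scale * z) + 1.0)            # inverse Fisher + shift

    Each physiological channel would be normalized this way before HOG, LBP, or HIM features are extracted from it.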

    Embracing and exploiting annotator emotional subjectivity: an affective rater ensemble model

    Automated recognition of continuous emotions in audio-visual data is a growing area of study that aids in understanding human-machine interaction. Training such systems presupposes human annotation of the data. The annotation process, however, is laborious and expensive, given that several human ratings are required for every data sample to compensate for the subjectivity of emotion perception. As a consequence, labelled data for emotion recognition are rare, and the existing corpora are limited when compared to other state-of-the-art deep learning datasets. In this study, we explore different ways in which existing emotion annotations can be utilised more effectively, to exploit the available labelled information to the fullest. To reach this objective, we exploit individual raters' opinions by employing an ensemble of rater-specific models, one for each annotator, thereby reducing the loss of information that is a byproduct of annotation aggregation; we find that individual models can indeed infer subjective opinions. Furthermore, we explore the fusion of such ensemble predictions using different fusion techniques. Our ensemble model with only two annotators outperforms the regular arousal baseline on the test set of the MuSe-CaR corpus. While no considerable improvement on valence could be obtained, using all annotators increases the prediction performance for arousal by up to .07 absolute Concordance Correlation Coefficient on test, trained solely on rater-specific models and fused by an attention-enhanced Long Short-Term Memory Recurrent Neural Network.
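
    The gains above are reported in Concordance Correlation Coefficient (CCC), the standard agreement metric for continuous arousal and valence prediction. A minimal NumPy reference implementation (independent of the paper's code) follows.

    import numpy as np

    def ccc(y_true, y_pred):
        """Lin's Concordance Correlation Coefficient between two 1-D series."""
        mu_t, mu_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()         # population variances
        cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()  # population covariance
        return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

    Unlike Pearson correlation, CCC also penalises shifts in mean and scale, so a prediction must track the gold standard's absolute values, not just its shape.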

    A Review on MAS-Based Sentiment and Stress Analysis User-Guiding and Risk-Prevention Systems in Social Network Analysis

    Today we live immersed in online applications, Social Network Sites (SNSs) being among the most prominent, and different issues arise from this interaction. There is therefore a need for research that addresses the potential issues born from increasing user interaction online. In this survey we explore work on the prevention of risks that can arise from social interaction in online environments, focusing on works using Multi-Agent System (MAS) technologies. To assess which techniques are available for prevention, we review works on the detection of sentiment polarity and stress levels of users in SNSs. We pay special attention to works using MAS technologies for user recommendation and guidance. Through the analysis of previous approaches to detecting user state and preventing risk in SNSs, we outline potential future lines of work that might lead to applications where users can navigate and interact with each other more safely. This work was funded by project TIN2017-89156-R of the Spanish government. Aguado-Sarrió, G.; Julian Inglada, V. J.; García-Fornes, A.; Espinosa Minguet, A. R. (2020). A Review on MAS-Based Sentiment and Stress Analysis User-Guiding and Risk-Prevention Systems in Social Network Analysis. Applied Sciences, 10(19), 1-29. https://doi.org/10.3390/app10196746

    Advances in Emotion Recognition: Link to Depressive Disorder

    Emotion recognition enables real-time analysis, tagging, and inference of cognitive affective states from human facial expressions, speech and tone, body posture, and physiological signals, as well as text on social network platforms. Emotion patterns, captured through explicit and implicit features extracted by wearable and other devices, can be decoded through computational modeling. Meanwhile, emotion recognition and computation are critical to the detection and diagnosis of potential mood-disorder patients. The chapter aims to summarize the main findings in the area of affective recognition and its applications to major depressive disorder (MDD), an area that has made rapid progress in the last decade.

    Computer audition for emotional wellbeing

    This thesis is focused on the application of computer audition (i.e., machine listening) methodologies for monitoring states of emotional wellbeing. Computer audition is a growing field and has been successfully applied to an array of use cases in recent years. There are several advantages to audio-based computational analysis; for example, audio can be recorded non-invasively, stored economically, and can capture rich information on happenings in a given environment, e.g., human behaviour. With this in mind, maintaining emotional wellbeing is a challenge for humans, and emotion-altering conditions, including stress and anxiety, have become increasingly common in recent years. Such conditions manifest in the body, inherently changing how we express ourselves. Research shows these alterations are perceivable within vocalisation, suggesting that speech-based audio monitoring may be valuable for developing artificially intelligent systems that target improved wellbeing. Furthermore, computer audition applies machine learning and other computational techniques to audio understanding, and so by combining computer audition with applications in the domain of computational paralinguistics and emotional wellbeing, this research concerns the broader field of empathy for Artificial Intelligence (AI). To this end, speech-based audio modelling that incorporates and understands paralinguistic wellbeing-related states may be a vital cornerstone for improving the degree of empathy that an artificial intelligence has. To summarise, this thesis investigates the extent to which speech-based computer audition methodologies can be utilised to understand human emotional wellbeing. A fundamental background on the fields in question as they pertain to emotional wellbeing is first presented, followed by an outline of the applied audio-based methodologies. Next, detail is provided for several machine learning experiments focused on emotional wellbeing applications, including analysis and recognition of under-researched phenomena in speech, e.g., anxiety and markers of stress. Core contributions from this thesis include the collection of several related datasets, hybrid fusion strategies for an emotional gold standard, novel machine learning strategies for data interpretation, and an in-depth acoustic-based computational evaluation of several human states. All of these contributions focus on ascertaining the advantage of audio in the context of modelling emotional wellbeing. Given the sensitive nature of human wellbeing, the ethical implications involved in developing and applying such systems are discussed throughout.
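
    As background for the gold-standard fusion mentioned above: a common baseline in this field is the Evaluator Weighted Estimator (EWE), which weights each annotator's continuous trace by its agreement with the others before averaging. The sketch below illustrates EWE only, not the thesis's exact hybrid strategy.

    import numpy as np

    def ewe(ratings):
        """Evaluator Weighted Estimator for a (n_raters, n_frames) array."""
        weights = []
        for i, r in enumerate(ratings):
            others = np.delete(ratings, i, axis=0).mean(axis=0)
            # Weight each rater by correlation with the mean of the others.
            weights.append(max(np.corrcoef(r, others)[0, 1], 0.0))
        w = np.asarray(weights)
        w = w / w.sum() if w.sum() > 0 else np.full(len(ratings), 1 / len(ratings))
        return w @ ratings  # weighted-average gold-standard trace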

    Multimodal sentiment analysis in real-life videos

    This thesis extends the emerging field of multimodal sentiment analysis of real-life videos, taking two components into consideration: the emotion and the emotion's target. The emotion component of media is traditionally represented as a segment-based intensity model over emotion classes. This representation is replaced here by a value- and time-continuous view. Adjacent research fields, such as affective computing, have largely neglected the linguistic information available from automatic transcripts of audio-video material. As is demonstrated here, this text modality is well suited for time- and value-continuous prediction. Moreover, source-specific problems, such as trustworthiness, have been largely unexplored so far. This work examines the perceived trustworthiness of the source, and its quantification, in user-generated video data and presents a possible modelling path. Furthermore, the transfer between the continuous and discrete emotion representations is explored in order to summarise the emotional context at a segment level. The other component deals with the target of the emotion, for example, the topic the speaker is addressing. Emotion targets in a video dataset can, as is shown here, be coherently extracted from automatic transcripts without limiting a priori parameters, such as the expected number of targets. Furthermore, alternatives to purely linguistic investigation for predicting targets, such as knowledge bases and multimodal systems, are investigated. A new dataset is designed for this investigation, and, in conjunction with proposed novel deep neural networks, extensive experiments are conducted to explore the components described above. The developed systems show robust prediction results and demonstrate the strengths of the respective modalities, feature sets, and modelling techniques. Finally, foundations are laid for cross-modal information prediction systems, with applications to the correction of corrupted in-the-wild signals from real-life videos.
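
    As an illustration of the continuous-to-discrete transfer described above: a segment-level summary can be obtained by averaging the value-continuous valence and arousal traces over a segment and mapping the means to the four valence-arousal quadrants. The segment bounds, zero thresholds, and quadrant labels below are assumptions for illustration, not the thesis's scheme.

    import numpy as np

    # Quadrants of the valence-arousal plane, keyed by (valence>=0, arousal>=0).
    QUADRANTS = {(True, True): "happy/excited", (True, False): "calm/content",
                 (False, True): "angry/anxious", (False, False): "sad/bored"}

    def segment_label(valence, arousal, start, end):
        """Reduce continuous traces over frames [start, end) to one label."""
        v = float(np.mean(valence[start:end]))
        a = float(np.mean(arousal[start:end]))
        return QUADRANTS[(v >= 0, a >= 0)]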