
    Conversational Agent: Developing a Model for Intelligent Agents with Transient Emotional States

    The inclusion of human characteristics (i.e., emotions, personality) within an intelligent agent can often increase the effectiveness of information delivery and retrieval. Chatbots offer a plethora of benefits across an eclectic range of disciplines (e.g., education, medicine, and clinical and mental health), and hence an effective way to observe, assess, and evaluate human communication patterns. The current research aims to develop a computational model for conversational agents with an emotional component, to be applied to the army leadership training program, that will allow for the examination of interpersonal skills in future research. Overall, the current research explores the application of deep learning to the development of a generalized framework for modeling empathetic conversation between an intelligent conversational agent (chatbot) and a human user, in order to allow for higher-level observation of interpersonal communication skills. Preliminary results demonstrate the promising potential of the seq2seq technique (e.g., through the use of the Dialog Flow chatbot platform) when applied to emotion-oriented conversational tasks. Both the classification and generative conversational modeling tasks demonstrate the promising potential of the current research for representing human-to-agent dialogue. However, this implementation may be extended by utilizing a larger, higher-quality dataset.
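    As a hedged illustration only (not the authors' implementation), the sketch below shows the kind of minimal encoder-decoder (seq2seq) model referred to above, written in Keras; the vocabulary sizes, embedding and hidden dimensions, and training configuration are assumed placeholders.

# Minimal seq2seq (encoder-decoder) sketch for utterance-to-response generation.
# All sizes are assumed placeholders, not the configuration used in the paper.
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

VOCAB_IN, VOCAB_OUT, EMB, HIDDEN = 8000, 8000, 128, 256  # assumed sizes

# Encoder: embed the user utterance and keep only the final LSTM states.
enc_inputs = Input(shape=(None,))
enc_emb = Embedding(VOCAB_IN, EMB)(enc_inputs)
_, state_h, state_c = LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder: generate the agent's reply token by token, conditioned on the encoder states.
dec_inputs = Input(shape=(None,))
dec_emb = Embedding(VOCAB_OUT, EMB)(dec_inputs)
dec_seq, _, _ = LSTM(HIDDEN, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_probs = Dense(VOCAB_OUT, activation="softmax")(dec_seq)

model = Model([enc_inputs, dec_inputs], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()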

    Crowdsourcing a Word-Emotion Association Lexicon

    Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word-emotion and word-polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.
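    The resulting resource is essentially a term-to-emotion mapping. As a hedged usage sketch (assuming a tab-separated term / emotion / 0-or-1 layout, a common distribution format but not necessarily the exact one used here), the snippet below loads such a lexicon and looks up a word; the file name and example word are placeholders.

# Load a word-emotion association lexicon and query it.
# Assumes one "term<TAB>emotion<TAB>0-or-1" entry per line (an assumption,
# not a documented format for this particular resource).
from collections import defaultdict

def load_lexicon(path):
    lexicon = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 3:
                continue  # skip headers or malformed lines
            term, emotion, flag = parts
            if flag == "1":
                lexicon[term].add(emotion)
    return lexicon

lex = load_lexicon("word-emotion-lexicon.txt")  # placeholder path
print(lex.get("delight", set()))                # emotions associated with "delight", if any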

    A framework for emotion and sentiment predicting supported in ensembles

    Humans are prepared to comprehend each other’s emotions through subtle body movements or facial expressions, and they use those cues to change how they deliver messages when communicating. Machines, user interfaces, and robots need to be endowed with this ability, so as to shift the interaction from the traditional “human-computer interaction” to a “human-machine cooperation”, where the machine provides the “right” information and functionality, at the “right” time, and in the “right” way. This dissertation presents a framework for emotion classification based on facial, speech, and text emotion prediction sources, supported by an ensemble of open-source implementations of off-the-shelf methods. The main contribution is integrating the outputs from the different sources and methods into a single prediction consistent with the emotions presented by the system’s user. For each source, an initial aggregation of primary classifiers was implemented: for facial emotion classification, the aggregation achieved an accuracy above 73% on both the FER2013 and RAF-DB datasets; for speech emotion classification, four datasets were used, namely RAVDESS, TESS, CREMA-D, and SAVEE, and the aggregation of primary classifiers achieved an accuracy above 86% for a combination of three of these datasets; the text emotion aggregation of primary classifiers was tested on the EMOTIONLINES dataset and achieved an accuracy above 53%. Finally, the integration of all the methods in a single framework allowed us to develop an emotion multi-source aggregator (EMsA), which aggregates the results of the primary emotion classifications from the different sources (face, speech, text, etc.). We describe the EMsA and report results on the RAVDESS dataset, where it achieved 81.99% accuracy using a combination of faces and speech. Finally, we present an initial approach to sentiment classification.
    Humans are prepared to understand one another’s emotions through subtle body movements or facial expressions; i.e., the way those movements and expressions are produced changes how messages are delivered when humans communicate with each other. Machines, user interfaces, and robots need to build on this ability, so as to shift the interaction from the traditional “human-computer interaction” to a “human-machine cooperation”, where the machine provides the “right” information and functionality, at the “right” time, and in the “right” way. This dissertation presents a framework (an ensemble of models) for emotion classification based on multiple sources, namely facial, speech, and text emotion prediction. The base classifiers rely on open-source code associated with methods available in the literature (the primary classifiers). The main contribution is integrating the different sources and different methods (the primary classifiers) into a single prediction consistent with the emotions displayed by the system’s user. In this context, it should be noted that the state-of-the-art analysis of the different ways of classifying human emotions also covers body emotion recognition (not considering the face); however, no published open-source code for primary classifiers of this kind that could be used within the scope of this dissertation was found. For speech and text emotion recognition there were also some difficulties in finding primary classifiers that met the necessary requirements, especially for text, since many models exist but cover numerous emotions other than the 6 basic emotions considered (sadness, fear, surprise, disgust, anger, and joy). For text it was also observed that there are more models for sentiment prediction than for emotion prediction. For each source in isolation, i.e., for each analyzed component (face, speech, and text), a Python framework was developed that implements a primary aggregator with n primary classifiers (in this dissertation n was set to 3). To run the tests and obtain the results of each primary aggregator, a specific dataset is used and its contents are sent to the aggregator: in the case of the facial aggregator an image is sent, in the case of the speech aggregator an audio clip is sent, and in the case of text the sentence is sent to the corresponding framework. Each dataset used was split into training, validation, and test files. When the framework finishes processing the received input, the corresponding results are generated, namely: file name/input identification, results of the first primary classifier, results of the second primary classifier, results of the third primary classifier, and the dataset’s ground truth. The results of the primary classifiers are then sent to the final classifier of that primary aggregator, for which four classifiers were tested: (a) voting, which, for n equal to 3, consists of comparing the emotion predicted by each primary classifier, i.e., if 2 primary classifiers output the same emotion that emotion is the voting result, and if all classifiers output different results no result is chosen. Besides this “classifier”, (b) Random Forest, (c) AdaBoost, and (d) MLP (multilayer perceptron) were also used. Once the framework of each primary aggregator was completed, a super-aggregator was developed following the same principle as the primary aggregators, but now, instead of aggregating the results of only 3 primary classifiers, there are n × 3 primary-classifier results (n from the face, n from speech, and n from text). Regarding the results of the aggregators used for each of the sources (face, speech, and text): facial emotion classification achieved an accuracy above 73% on the FER2013 and RAF-DB datasets; for speech emotion classification four datasets were used, namely RAVDESS, TESS, CREMA-D, and SAVEE, and the best accuracy obtained was above 86% when a combination of 3 of the 4 datasets was used; for text emotion classification the EMOTIONLINES dataset was used, and the best result obtained was 53% (accuracy). Integrating all the primary classifiers into a single framework made it possible to develop the multi-source aggregator (emotion multi-source aggregator, EMsA), where, as already mentioned, the final emotion classification is extracted from the aggregation of the primary emotion classifiers from the different sources. For the EMsA, results are presented using the RAVDESS dataset, where an accuracy of 81.99% was achieved when the EMsA used a combination of faces and speech. It was not possible to test the EMsA on a dataset recognized in the literature that simultaneously contains text, face, and speech information. Finally, an initial approach to sentiment classification was presented.
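    As a rough, hedged sketch of the aggregation step described above (not the dissertation's code), the snippet below combines primary-classifier outputs with the majority-vote rule and with the final classifiers named in the abstract (Random Forest, AdaBoost, MLP) via scikit-learn; the stacked feature vectors are synthetic stand-ins for the real per-source probability outputs.

# Sketch of aggregating primary emotion classifiers with a final classifier.
# The data is synthetic; the real features, hyperparameters and splits in the
# dissertation may differ.
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["sadness", "fear", "surprise", "disgust", "anger", "joy"]

def majority_vote(predictions):
    """Return the emotion predicted by at least two primary classifiers, else None."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count >= 2 else None

# Synthetic stand-in for stacked primary-classifier outputs, e.g. concatenated
# probability vectors from the face, speech and text aggregators (3 x 6 values).
rng = np.random.default_rng(0)
X = rng.random((200, 3 * len(EMOTIONS)))
y = rng.integers(0, len(EMOTIONS), size=200)

for clf in (RandomForestClassifier(), AdaBoostClassifier(), MLPClassifier(max_iter=500)):
    clf.fit(X, y)
    print(type(clf).__name__, "training accuracy:", clf.score(X, y))

print(majority_vote(["joy", "joy", "anger"]))  # -> 'joy'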

    Individual mentalizing ability boosts flexibility toward a linguistic marker of social distance: An ERP investigation

    Sentence-final particles (SFPs) as bound morphemes in Japanese have no obvious effect on the truth conditions of a sentence. However, they encompass a diverse range of usages, from typical to atypical, according to the context and the interpersonal relationships in the specific situation. The most frequent particle, -ne, is typically used after addressee-oriented propositions for information sharing, while another frequent particle, -yo, is typically used after addresser-oriented propositions to elicit a sense of strength. This study sheds light on individual differences among native speakers in flexibly understanding such linguistic markers based on their mentalizing ability (i.e., the ability to infer the mental states of others). Two experiments employing electroencephalography (EEG) consistently showed enhanced early posterior negativities (EPN) for atypical SFP usage compared to typical usage, especially when understanding -ne compared to -yo, in both an SFP appropriateness judgment task and a content comprehension task. Importantly, the amplitude of the EPN for atypical usages of -ne was significantly higher in participants with lower mentalizing ability than in those with higher mentalizing ability. This effect plausibly reflects low-ability mentalizers' stronger sense of strangeness toward atypical -ne usage. While high-ability mentalizers may aptly perceive others' attitudes via their various usages of -ne, low-ability mentalizers seem to adopt a more stereotypical understanding. These results attest to the greater difficulty low-ability mentalizers have in establishing smooth regulation of interpersonal distance during social encounters.

    A Review on Human-Computer Interaction and Intelligent Robots

    In the field of artificial intelligence, human–computer interaction (HCI) technology and the related intelligent robot technologies are essential and interesting areas of research. From the perspective of software algorithms and hardware systems, these technologies seek to build a natural HCI environment. The purpose of this research is to provide an overview of HCI and intelligent robots. It highlights the existing technologies of listening, speaking, reading, writing, and other senses that are widely used in human interaction, and, based on these same technologies, introduces some intelligent robot systems and platforms. The paper also outlines some of the major challenges in HCI and intelligent robot research. The authors hope that this work will help researchers in the field to acquire the necessary information and technologies to further conduct more advanced research.

    チノウ エージェント オヨビ コウガクブ ナビゲーション システム ノ カイハツ (Development of an Intelligent Agent and a Faculty of Engineering Navigation System)

    In recent years, a huge amount of information has become available through the internet, and many information retrievers have been developed. However, these retrievers only show retrieved results, without any warm communication. In this paper, an intelligent agent is developed. It recognizes a user's utterance using a speech recognizer and retrieves information from the World Wide Web. Finally, the agent composes an appropriate answer from the retrieved results and gives it to the user. In order to communicate with a user warmheartedly, the agent also recognizes the user's emotion from voice and facial expression, and expresses its own emotion through voice and behaviour. We also develop an intelligent campus navigation robot using the proposed intelligent agent. The robot can give a user campus information, chat with the user, and communicate with the user warmheartedly.
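    The dialogue loop described above can be summarized by the purely hypothetical skeleton below; every function is a placeholder for a component the abstract mentions but does not specify, and the emotion-selection policy in the final step is an assumption.

# Hypothetical skeleton of the agent's turn-taking loop; all functions are
# placeholders, not code from the described system.
def recognize_speech(audio):                   # speech recognizer: audio -> text
    raise NotImplementedError

def retrieve_from_web(query):                  # web retrieval: text -> results
    raise NotImplementedError

def compose_answer(results):                   # answer generation from retrieved results
    raise NotImplementedError

def estimate_user_emotion(audio, face_image):  # emotion from voice + facial expression
    raise NotImplementedError

def respond(answer, agent_emotion):            # spoken answer plus expressive behaviour
    raise NotImplementedError

def dialogue_turn(audio, face_image):
    query = recognize_speech(audio)
    answer = compose_answer(retrieve_from_web(query))
    user_emotion = estimate_user_emotion(audio, face_image)
    respond(answer, agent_emotion=user_emotion)  # assumed policy: mirror the user's emotion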

    Toward an affect-sensitive multimodal human-computer interaction

    This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence -- the ability to recognize a user's affective states -- in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In a face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet the design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys past work on solving these problems by computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI -- an automatic personalized analyzer of a user's nonverbal affective feedback.

    Non-acted multi-view audio-visual dyadic interactions. Project non-verbal emotion recognition in dyadic scenarios and speaker segmentation

    Final projects of the Master in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Year: 2019. Tutors: Sergio Escalera Guerrero and Cristina Palmero. In particular, this Master's thesis is focused on the development of a baseline emotion recognition system in a dyadic environment using raw and handcrafted audio features and cropped faces from the videos. This system is analyzed at frame and utterance level without temporal information. In addition, a baseline speaker segmentation system has been developed to facilitate the annotation task. For this reason, an exhaustive study of the state of the art on emotion recognition and speaker segmentation techniques has been conducted, paying particular attention to deep learning techniques for emotion recognition and clustering for speaker segmentation. While studying the state of the art from the theoretical point of view, a dataset consisting of videos of sessions of dyadic interactions between individuals in different scenarios has been recorded. Different attributes were captured and labelled from these videos: body pose, hand pose, emotion, age, gender, etc. Once the architectures for emotion recognition have been trained with another dataset, a proof of concept is done with this new database in order to draw conclusions. In addition, this database can help future systems to achieve better results. A large number of experiments with audio and video are performed to create the emotion recognition system. The IEMOCAP database is used to perform the training and evaluation experiments of the emotion recognition system. Once the audio and video models are trained separately with two different architectures, a fusion of both methods is done. In this work, the importance of preprocessing the data (face detection, analysis window length, handcrafted features, etc.) and choosing the correct parameters for the architectures (network depth, fusion, etc.) has been demonstrated and studied. On the other hand, the experiments for the speaker segmentation system are performed with a piece of audio from the IEMOCAP database. In this work, the preprocessing steps, the problems of an unsupervised system such as clustering, and the feature representation are studied and discussed. Finally, the conclusions drawn throughout this work are presented, as well as possible lines of future work, including new systems for emotion recognition and experiments with the database recorded in this work.
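    As an illustrative, hedged sketch of the clustering-based speaker segmentation discussed above (not the thesis implementation), the snippet below averages frame-level MFCCs over fixed windows and groups them into two speakers with agglomerative clustering; the file path, window size, and number of speakers are assumptions.

# Clustering-based speaker segmentation sketch: MFCC windows -> 2 clusters.
# The audio file, window length and number of speakers are assumed placeholders.
import numpy as np
import librosa
from sklearn.cluster import AgglomerativeClustering

y, sr = librosa.load("dyadic_session.wav", sr=16000)   # placeholder recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T   # shape: (frames, 20)

WIN = 50  # frames per analysis window (assumption)
windows = [mfcc[i:i + WIN].mean(axis=0) for i in range(0, len(mfcc) - WIN, WIN)]
labels = AgglomerativeClustering(n_clusters=2).fit_predict(np.vstack(windows))

for i, speaker in enumerate(labels):
    print(f"window {i}: speaker {speaker}")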