761 research outputs found

    Research on Emotional Conversation Analysis Based on Deep Learning

    Get PDF
    Equipping a chatbot with the capability to express specific emotions during a conversation is a key part of artificial intelligence, with an intuitive and quantifiable impact on the chatbot's usability and user satisfaction. Enabling machines to recognize emotions in conversation is challenging, mainly because human dialogue conveys emotion through long-term experience, abundant knowledge, context, and intricate patterns among affective states. Recently, many studies on neural emotional conversational models have been conducted. However, enabling a chatbot to control what kind of emotion to respond with, in keeping with its own character, is still underexplored. At this stage, people are no longer satisfied with using a dialogue system only to solve specific tasks; they are eager for genuine communication. During a chat, if the robot can perceive the user's emotions and process them accurately, it can greatly enrich the dialogue and make the user empathize. In emotional dialogue, the ultimate goal is to make the machine understand human emotions and give matching responses. Based on these two points, this thesis explores in depth the emotion recognition in conversation task and the emotional dialogue generation task. Although considerable progress has been made in emotional research in dialogue over the past few years, difficulties and challenges remain due to the complex nature of human emotions. The key contributions of this thesis are summarized as follows: (1) Researchers have recently paid more attention to enhancing natural language models with knowledge graphs, since knowledge graphs encode a wealth of systematic knowledge. A large number of studies have shown that introducing external commonsense knowledge is very helpful for enriching feature information. 
We address the task of emotion recognition in conversations by using external knowledge to enhance semantics. In this work, we employ the external knowledge graph ATOMIC as the knowledge source. We propose the KES model, a new framework that incorporates different elements of external knowledge and conversational semantic role labeling, and builds upon them to learn interactions between the interlocutors in a conversation. A conversation is a sequence of coherent, ordered utterances, and capturing long-range context is a known weakness of recurrent neural networks. We therefore adopt the Transformer, a structure composed of self-attention and feed-forward layers, instead of a traditional RNN, to capture remote context information. We design a self-attention layer specialized for text features enhanced with external commonsense knowledge. Two LSTM-based networks then track the individual internal state and the external context state, respectively. The proposed model is evaluated on three emotion-detection-in-conversation datasets, and the experimental results show that it outperforms state-of-the-art approaches on most of them. (2) We propose an emotional dialogue model based on Seq2Seq, improved in three aspects: model input, encoder structure, and decoder structure, so that the model can generate responses that are emotionally rich, diverse, and contextually grounded. For the model input, emotional information and positional information are added to the word vectors. On the encoder side, the model first encodes the current input together with the sentence sentiment to produce a semantic vector, and separately encodes the context with the sentence sentiment to produce a context vector, adding contextual information while preserving the independence of the current input. 
On the decoder side, attention is used to compute weights over the two vectors separately before decoding, fully integrating the local and global emotional-semantic information. We used seven objective evaluation metrics to assess the generated responses in terms of context similarity, response diversity, and emotional content. Experimental results show that the model can generate diverse responses with rich sentiment and contextual associations.
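    The two-stream attention on the decoder side can be sketched as follows. This is a minimal illustration only: the abstract does not specify the attention variant, the vector dimensions, or how the two summaries are merged, so dot-product attention and simple concatenation are assumed here.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attend(query, vectors):
    """Dot-product attention: weight each vector by its match with the query."""
    weights = softmax([dot(query, v) for v in vectors])
    return [sum(w * v[i] for w, v in zip(weights, vectors))
            for i in range(len(vectors[0]))]

def decode_step(decoder_state, semantic_vectors, context_vectors):
    """One decoding step: attend separately over the local encoding
    (current input + sentiment) and the global encoding (context +
    sentiment), then merge the two summaries before predicting a token."""
    local = attend(decoder_state, semantic_vectors)
    globl = attend(decoder_state, context_vectors)
    # Assumed merge: concatenate the two attention summaries.
    return local + globl

# Toy usage with 2-dimensional states.
state = [1.0, 0.0]
semantic = [[0.9, 0.1], [0.2, 0.8]]
context = [[0.5, 0.5], [0.1, 0.9]]
fused = decode_step(state, semantic, context)
print(len(fused))  # 4: local summary (2 dims) + global summary (2 dims)
```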

    A novel Big Data analytics and intelligent technique to predict driver's intent

    Get PDF
    The modern age offers great potential for automatically predicting a driver's intent, thanks to the increasing miniaturization of computing technologies, rapid advances in communication technologies, and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of a modern car, dedicated computer systems must be able to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse, voluminous data raises many challenges in the design of the computational technique used for the task. In this paper, we investigate the various data sources available in the car and the surrounding environment that can serve as inputs for predicting driver intent and behavior. As part of this investigation, we conducted experiments on the e-calendars of a large number of employees and reviewed a number of available geo-referencing systems. Through a statistical analysis and location-recognition accuracy results, we explore in detail how calendar location data can be used to detect a driver's intentions. To exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on drivers and society in general, and discuss the ethical and legal issues arising from the deployment of intelligent self-learning cars.
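    The fuzzy-modelling idea can be illustrated with a toy Mamdani-style inference step. The paper's actual rule base, membership functions, and input variables are not given in the abstract; the two rules, the triangular memberships, and the "stop-intent" output below are purely hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_stop_intent(speed_kmh, distance_m):
    """Two toy rules:
      IF speed is low AND distance-to-junction is short THEN stop-intent is high
      IF speed is high THEN stop-intent is low
    Rule strength is the min of its antecedent memberships (fuzzy AND);
    the crisp output is a strength-weighted average of consequent values."""
    low_speed = tri(speed_kmh, -1, 0, 40)
    high_speed = tri(speed_kmh, 30, 90, 150)
    short_dist = tri(distance_m, -1, 0, 80)

    rules = [
        (min(low_speed, short_dist), 1.0),  # stop-intent high
        (high_speed, 0.0),                  # stop-intent low
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.5  # no rule fired: unknown
    return sum(strength * value for strength, value in rules) / total

print(infer_stop_intent(10, 20))   # slow and close to the junction -> 1.0
print(infer_stop_intent(100, 200)) # fast and far away -> 0.0
```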

    FAF: A novel multimodal emotion recognition approach integrating face, body and text

    Full text link
    Multimodal emotion analysis achieves better recognition performance because it draws on more comprehensive emotional cues and multimodal emotion datasets. In this paper, we developed a large multimodal emotion dataset, named the "HED" dataset, to facilitate the emotion recognition task, and accordingly propose a multimodal emotion recognition method. To improve recognition accuracy, a "Feature After Feature" framework is used to extract crucial emotional information from aligned face, body, and text samples. We employ various benchmarks to evaluate the "HED" dataset and compare their performance with that of our method. The results show that the five-class classification accuracy of the proposed multimodal fusion method is about 83.75%, an improvement of 1.83%, 9.38%, and 21.62% over the individual modalities, respectively. The complementarity between channels is thus effectively used to improve emotion recognition performance. We have also established a multimodal online emotion prediction platform that aims to provide free emotion prediction to more users.
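    The idea of combining complementary modalities can be sketched as a simple feature-level fusion. The abstract does not describe the internals of the "Feature After Feature" framework, so per-modality L2 normalization followed by sequential concatenation is assumed here as a stand-in.

```python
def l2_normalize(v):
    """Scale a feature vector to unit length so no modality dominates."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else v

def fuse_features(face, body, text):
    """Feature-level fusion sketch: normalize each modality's feature
    vector, then concatenate face -> body -> text so a downstream
    classifier sees one joint representation."""
    return l2_normalize(face) + l2_normalize(body) + l2_normalize(text)

# Toy feature vectors of different sizes per modality.
joint = fuse_features([3.0, 4.0], [1.0, 0.0, 0.0], [0.0, 2.0])
print(len(joint))  # 7 = 2 + 3 + 2
```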

    A Survey on Emotion Recognition for Human Robot Interaction

    Get PDF
    With recent developments in technology and advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and express emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, allowing it to interact more naturally with its human counterpart in different environments. In this article, a survey of emotion recognition for HRI systems is presented. The survey has two objectives. First, it discusses the main challenges researchers face when building emotional HRI systems. Second, it identifies the sensing channels that can be used to detect emotions and reviews recent research published within each channel, along with the methodologies used and the results achieved. Finally, some open issues in emotion recognition and recommendations for future work are outlined.

    Cognitive Computing for Multimodal Sentiment Sensing and Emotion Recognition Fusion Based on Machine Learning Techniques Implemented by Computer Interface System

    Get PDF
    A multiple-slot fractal antenna design has been evaluated for communication efficiency and its multi-function capabilities. High-speed, compact communication devices are required for future smart-chip applications, so researchers have pursued new and creative antenna designs. Antennas are a key part of communication systems and are used to improve parameters such as gain, efficiency, and bandwidth. Balancing high bandwidth and high gain in modern antenna design is very difficult, so an adaptive antenna-array chip design is required. In this work, a coaxially fed antenna with a fractal geometry has been implemented for Wi-Fi and radio-altimeter applications. The fractal geometry uses multiple slots in the radiating structure to serve varied applications. The coaxial feeding location was selected for good impedance matching (50 Ohms). The overall antenna dimensions are approximately 50 × 50 × 1.6 mm on an FR4 substrate, and a performance analysis under changes of substrate material is presented. The antenna resonates in dual bands, at 3.1 and 4.3 GHz on the FR4 substrate, and the resonant bands shift when the substrate is changed. The proposed antenna was prototyped and measured with an Anritsu VNA; the comparative analysis shows improvements of 12% in VSWR, 9.4% in reflection coefficient, 6.2% in 3D gain, and 9.3% in surface current.
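    The matching figures quoted above are related by standard formulas: VSWR and return loss both follow from the reflection-coefficient magnitude |Γ|. A quick sketch of those textbook relations (the value |Γ| = 0.2 below is an arbitrary example, not a measurement from the paper):

```python
import math

def vswr(gamma_mag):
    """Voltage standing-wave ratio from reflection-coefficient magnitude."""
    return (1 + gamma_mag) / (1 - gamma_mag)

def return_loss_db(gamma_mag):
    """Return loss in dB from reflection-coefficient magnitude."""
    return -20 * math.log10(gamma_mag)

# Example: a reflection coefficient of |Γ| = 0.2 corresponds to:
print(round(vswr(0.2), 2))            # 1.5
print(round(return_loss_db(0.2), 2))  # 13.98
```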

    Social-Context Middleware for At-Risk Veterans

    Get PDF
    Many veterans face challenges when reintegrating into civilian society, including readapting to their communities and families. During the reintegration process, veterans have difficulty finding employment, education, or resources that support veteran health. Research suggests that these challenges often lead to serious mental illness. Post-Traumatic Stress Disorder (PTSD) is a common mental illness among veterans, affecting between 15-20% of them. PTSD increases the likelihood that veterans will engage in high-risk behaviors, which may include impulsivity, substance abuse, and angry outbursts. These behaviors raise the risk of veterans becoming violent and lashing out at those around them. In more recent studies, the VA has begun to define PTSD by its association with specific high-risk behaviors rather than by a combination of psychiatric symptoms. Some researchers have suggested that among these high-risk behaviors, extreme anger (i.e., rage or angry outbursts) is particularly problematic in the context of military PTSD. Comparatively little research has linked sensor-based systems to the identification of these angry episodes in the daily lives of military veterans or others with similar issues. This thesis presents a middleware solution for systems that detect, and with additional work could possibly prevent, angry outbursts (also described in the psychological literature as "rage") using physiological sensor data and context-aware technology. The paper covers a range of topics, from methods for gathering system requirements from a subject group to the development of social-context-aware middleware. The goal is to present a system that can be constructed and used in a laboratory environment to further the research of building real-world systems that predict crisis events, setting the stage for early intervention methods based on this approach.
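    A detector of the kind such middleware would host can be sketched as a sliding-window anomaly check over a physiological signal. The thesis's actual detection logic is not described in the abstract; the heart-rate z-score test below is a hypothetical minimal stand-in.

```python
def zscore_alerts(samples, window=5, threshold=2.0):
    """Flag indices where a reading deviates from the trailing window's
    mean by more than `threshold` standard deviations -- a minimal
    stand-in for a physiological-arousal detector."""
    alerts = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = var ** 0.5
        if std and abs(samples[i] - mean) / std > threshold:
            alerts.append(i)
    return alerts

# Toy heart-rate stream (bpm) with a sudden spike at index 6.
hr = [72, 74, 73, 75, 74, 73, 118, 74]
print(zscore_alerts(hr))  # -> [6]
```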

    Improved Intention Discovery with Classified Emotions in A Modified POMDP

    Get PDF
    Emotions are one of the most provocative topics in psychology, a source of intense debate and disagreement from the earliest philosophers and other thinkers to the present day. Human emotion classification using different machine learning techniques has been an active area of research over the last decade. This investigation discusses a new approach for virtual agents to better understand and interact with the user. Our research focuses on deducing the belief state of a user who interacts with a single agent, using emotions recognized from text- or speech-based input. We built a customized decision tree that recognizes six primary emotional states from different sets of inputs. The belief state at each time slice is inferred by constructing a belief network over the different sets of emotions and computing the state of belief with a POMDP (Partially Observable Markov Decision Process) solver. The existing POMDP model is thus customized to incorporate emotions as observations for finding possible user intentions, which helps overcome the limitations of present methods in recognizing the belief state. The new approach also allows us to analyze human emotional behaviour in uncertain environments and helps generate effective interaction between the human and the computer.
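    The core operation of treating recognized emotions as POMDP observations is the standard Bayesian belief update, b'(s') ∝ O(o|s') · Σ_s T(s'|s,a) · b(s). A minimal sketch follows; the two user intents, the action, and all probabilities are invented for illustration and do not come from the paper.

```python
def update_belief(belief, action, observation, T, O):
    """Discrete POMDP belief update:
        b'(s') ∝ O[observation][s'] * Σ_s T[action][s][s'] * b(s)
    `belief` maps state -> probability; T and O are nested dicts of
    transition and observation probabilities."""
    states = list(belief)
    new_b = {}
    for s2 in states:
        predicted = sum(T[action][s][s2] * belief[s] for s in states)
        new_b[s2] = O[observation][s2] * predicted
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

# Toy example: two hidden user intents, an "angry" emotion observation.
belief = {"wants_help": 0.5, "wants_exit": 0.5}
T = {"ask": {"wants_help": {"wants_help": 0.9, "wants_exit": 0.1},
             "wants_exit": {"wants_help": 0.2, "wants_exit": 0.8}}}
O = {"angry": {"wants_help": 0.1, "wants_exit": 0.7}}
belief = update_belief(belief, "ask", "angry", T, O)
print(round(belief["wants_exit"], 3))  # 0.851: anger shifts mass to "wants_exit"
```

Observing anger, which is much more likely under the "wants_exit" intent, shifts the belief sharply toward that state.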

    A Review on Human-Computer Interaction and Intelligent Robots

    Get PDF
    In the field of artificial intelligence, human-computer interaction (HCI) technology and its related intelligent-robot technologies are essential and engaging areas of research. From the perspectives of software algorithms and hardware systems, these technologies aim to build a natural HCI environment. The purpose of this research is to provide an overview of HCI and intelligent robots. It highlights existing technologies for listening, speaking, reading, writing, and other senses that are widely used in human interaction, and, based on these technologies, introduces some intelligent robot systems and platforms. The paper also identifies some key challenges in researching HCI and intelligent robots. The authors hope that this work will help researchers in the field acquire the information and technologies needed to conduct more advanced research.

    Detection of Interaction-based Knowledge for Reclassification of Service Robots: Big Data Analytics Perspective

    Get PDF
    With the advancement of artificial intelligence technology, the human-robot interactive service robot industry has developed rapidly. The purpose of this paper is to uncover user acceptance of human-robot interactive service robots based on online reviews. Reviews of public service robots and domestic service robots were extracted from YouTube and analyzed using word2vec, sentiment classification, and LDA (Latent Dirichlet Allocation). The results show that, in terms of interactive technology, public service robots and domestic service robots handle user speech, gestures, understanding of emotional states, and navigating with and around people well. However, when collaborating with humans, users may be more fearful and worried. For public service robots, the dominant positive topic is experience value and the dominant negative topic is system quality; for domestic service robots, the positive topic is anthropomorphism and the negative topic is perceived intelligence.
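    The sentiment-classification stage of such a review pipeline can be sketched very simply. The paper's actual method uses word2vec embeddings and a trained classifier, which are not reproducible from the abstract; the lexicon-based majority vote below, including its word lists, is a hypothetical minimal substitute.

```python
# Tiny hand-made cue-word lexicons (illustrative only).
POSITIVE = {"helpful", "friendly", "smart", "love", "great"}
NEGATIVE = {"scary", "slow", "broken", "worried", "creepy"}

def classify_review(text):
    """Minimal lexicon-based sentiment classifier: count positive and
    negative cue words and return the majority label (ties -> neutral)."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_review("The robot was friendly and helpful"))  # positive
print(classify_review("It felt creepy and a bit scary"))      # negative
```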