22 research outputs found

    Cognitive Architecture to Generate Motivational Feelings: A Way to Improve Visual Learning in Robots

    Expressions and voice pitch play an indispensable role in many cognitive processes; these cues help humans learn a great deal about the things present in their environment. This paper proposes a way to motivate robot learning through the robot's environment and the humans around it. The mechanism is based on the robot recognizing other agents' facial expressions and analyzing their voice pitch. A motivational level can be calculated from these perceived feelings, and this level can impel the robot to improve on its past learning. The mechanism can thus help a robot apprehend its environment and interact with other agents effectively. Keywords: cognition; motivation; facial expression; voice pitch; perception; memory
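
    The abstract does not give a concrete formula for the motivational level, so the following Python sketch is only a guess at how such a score might be combined from a facial-expression valence estimate and a voice-pitch arousal estimate; the weights, value ranges, and function names are all hypothetical.

        # Hypothetical sketch: blending facial-expression valence and voice-pitch
        # arousal into a single "motivational level"; weights are assumptions.
        def motivational_level(expression_valence: float,
                               pitch_arousal: float,
                               w_face: float = 0.6,
                               w_voice: float = 0.4) -> float:
            """Blend two normalized cues (each in [-1, 1]) into a [0, 1] score.

            A higher score would impel the robot to revisit and improve
            what it has previously learned, as the abstract describes.
            """
            raw = w_face * expression_valence + w_voice * pitch_arousal  # [-1, 1]
            return (raw + 1.0) / 2.0  # rescale to [0, 1]

        # Example: a smiling interlocutor speaking with animated pitch.
        print(motivational_level(0.8, 0.5))  # -> 0.84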

    Comparison of emotion evaluation perception using human voice signals of robots and humans

    Emotion perception is the process of perceiving other people's emotions. It can be based on their facial expressions, movements, voice, and other biosignals they emit. The evaluation of an emotion is one of its characteristics. One research area in robotics is giving robots human-like behavior; many robots have been built, and some of them can even perceive emotions. In this paper, a custom-built emotion-aware robot that perceives emotion evaluation is used to investigate the similarities and differences between the robot's and humans' emotion perception. Voice signals from real humans were recorded, and the emotion evaluations were obtained both from our robot and from a set of human evaluators. This paper presents the results of these experiments. The results show the difficulty of the problem of emotion evaluation perception in general. The significance of human voice signals in emotion evaluation is also investigated.
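
    The abstract does not state which agreement measure was used to compare the robot with the human evaluators, so the Python sketch below simply assumes a Pearson correlation and a mean absolute difference between the two sets of valence ratings; all values are invented.

        # Hypothetical comparison of robot vs. human emotion-evaluation
        # (valence) ratings for the same recordings; the data are made up.
        from statistics import mean

        robot = [0.7, -0.2, 0.1, 0.9, -0.5]   # robot's rating per recording
        human = [0.6,  0.0, 0.3, 0.8, -0.4]   # mean human rating per recording

        def pearson(x, y):
            mx, my = mean(x), mean(y)
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            var_x = sum((a - mx) ** 2 for a in x)
            var_y = sum((b - my) ** 2 for b in y)
            return cov / (var_x * var_y) ** 0.5

        mad = mean(abs(a - b) for a, b in zip(robot, human))
        print(f"r = {pearson(robot, human):.3f}, mean abs. diff. = {mad:.2f}")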

    FILTWAM and Voice Emotion Recognition

    This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone data for a real-time and adequate interpretation of vocal expressions into emotional states, where the software is calibrated with end users. FILTWAM already incorporates a validated face emotion recognition module and is extended here with a voice emotion recognition module. This extension aims to provide relevant and timely feedback based upon the learner's vocal intonations; such feedback is expected to enhance the learner's awareness of his or her own behavior. Six test persons received the same computer-based tasks, in which they were requested to mimic specific vocal expressions. Each test person mimicked 82 emotions, which led to a dataset of 492 emotions. All sessions were recorded on video. The overall accuracy of our software, comparing the requested emotions against the recognized emotions, is 74.6% for the emotions happy and neutral; the lower accuracies for an extended set of emotions remain to be improved. In contrast with existing software, our solution makes it possible to continuously and unobtrusively monitor learners' intonations and convert them into emotional states. This paves the way for enhancing the quality and efficacy of game-based learning by including the learner's emotional states and linking them to pedagogical scaffolding.

    Funding: The Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
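
    As a worked illustration of how the reported accuracy can be computed, the Python sketch below scores per-emotion and overall accuracy from (requested, recognized) label pairs; the sample pairs are invented, not the study's data.

        # Hypothetical accuracy computation over (requested, recognized)
        # emotion label pairs; the pairs below are invented placeholders.
        from collections import defaultdict

        pairs = [("happy", "happy"), ("happy", "neutral"),
                 ("neutral", "neutral"), ("neutral", "neutral"),
                 ("sad", "neutral"), ("angry", "angry")]

        hits, totals = defaultdict(int), defaultdict(int)
        for requested, recognized in pairs:
            totals[requested] += 1
            hits[requested] += (requested == recognized)

        for emotion in totals:
            print(f"{emotion}: {hits[emotion] / totals[emotion]:.1%}")
        print(f"overall: {sum(hits.values()) / len(pairs):.1%}")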

    Development of an Emotion-Sensitive mHealth Approach for Mood-State Recognition in Bipolar Disorder

    Internet- and mobile-based approaches have become increasingly significant to psychological research in the field of bipolar disorders. While research suggests that emotional aspects of bipolar disorders are substantially related to the social and global functioning and the suicidality of patients, these aspects have so far not been sufficiently considered within mobile-based disease management approaches. As a multiprofessional research team, we have developed a new, emotion-sensitive assistance system adapted to the needs of patients with bipolar disorder. In addition to analyzing self-assessments, third-party assessments, and sensor data, the new assistance system analyzes audio and video data of these patients for their emotional content or the presence of emotional cues. In this viewpoint, we describe the theoretical and technological basis of our emotion-sensitive approach; we do not present empirical data or a proof of concept. To our knowledge, the new assistance system incorporates the first mobile-based approach to analyzing the emotional expressions of patients with bipolar disorder. As a next step, the validity and feasibility of our emotion-sensitive approach must be evaluated. In the future, it might benefit diagnostic, prognostic, or even therapeutic purposes and complement existing systems with new and intuitive interaction models.

    Emotion recognition from speech: An implementation in MATLAB

    Capstone Project submitted to the Department of Engineering, Ashesi University, in partial fulfillment of the requirements for the award of a Bachelor of Science degree in Electrical and Electronic Engineering, April 2019.

    Human-computer interaction now focuses more on being able to relate to human emotions, and recognizing human emotions from speech is an area of active research with the rise of robots and virtual reality. In this paper, emotion recognition from speech is implemented in MATLAB. Feature extraction is based on the pitch and 13 MFCCs of the audio files. Two classification methods are used and compared to determine the one with the highest accuracy for the data set.
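
    The paper implements this pipeline in MATLAB; as a rough illustration of the same idea in Python (using librosa and scikit-learn as stand-ins), the sketch below extracts pitch plus 13 MFCCs per utterance and compares two classifiers. The file paths, labels, and classifier choices are assumptions, not taken from the paper.

        # Illustrative Python analogue of the described MATLAB pipeline:
        # pitch + 13 MFCCs per utterance, then two classifiers compared.
        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        def extract_features(path):
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
            f0 = librosa.yin(y, fmin=60, fmax=500, sr=sr)       # pitch track
            return np.concatenate([mfcc.mean(axis=1), [f0.mean()]])  # 14-dim

        # placeholder paths/labels standing in for a labeled speech corpus
        X = np.array([extract_features(p) for p in ["a.wav", "b.wav"]])
        y = np.array(["happy", "sad"])

        for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=1)):
            clf.fit(X, y)  # in practice: cross-validate on a real corpus
            print(type(clf).__name__, clf.score(X, y))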

    Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS

    Many field programmable gate array (FPGA)-based security primitives have been developed, e.g., physical unclonable functions (PUFs) and true random number generators (TRNGs). To accurately evaluate the performance of a PUF or other security designs, data from a large number of devices are required. A slice is the smallest reconfigurable logic block in an FPGA, and the maximum or minimum entropy exploitable from each slice is an important factor in the design of a single-bit disorder-based security primitive. Previous research has shown that the locations of slices can impact the quality of delay-based PUF designs implemented on FPGAs. To investigate the effect of the placement of each single-bit PUF cell, free from the routing resource constraints between slices, single-bit ring oscillator (RO) and identity-based PUF design (PicoPUF) cells, each of which can be fully fitted into a single slice, are evaluated. 217 Xilinx Artix-7 FPGAs have been employed to provide a large-scale, comprehensive analysis of the two designs. This is the first time two different single-slice-based security entities have been investigated and compared on a 28 nm Xilinx FPGA. Experimental results, including uniqueness, uniformity, correlation, reliability, bit-aliasing, and min-entropy, based on 4 different floorplan locations, are presented. The experimental results demonstrate that the lower the correlation between devices, the higher the min-entropy and uniqueness for both designs on the FPGAs. While the implementation location of both designs on the FPGA affects their performance, the overall min-entropy, correlation, and uniqueness of the PicoPUF are slightly higher than those of the RO. All other metrics, including the uniformity, bit-aliasing, and reliability of the PicoPUF, are slightly lower than those of the RO. The raw data for the PicoPUF design are made publicly available to enable the research community to use them for benchmarking and/or validation.

    Funding: This work was partly supported by the Engineering and Physical Sciences Research Council (EPSRC) (EP/N508664/-CSIT2), the Singapore Ministry of Education AcRF Tier 1 Grant No. 2018-T1-001-131, and the National Natural Science Foundation of China (61771239).
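
    The quality metrics listed in the abstract have standard definitions in the PUF literature. As an illustration (not the paper's own evaluation code), this Python sketch computes uniformity, inter-device uniqueness, and a per-bit min-entropy estimate from a matrix of single-bit responses; the response data here are random stand-ins.

        # Standard PUF quality metrics over an n_devices x n_bits response
        # matrix; the responses below are random stand-ins for real data.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        R = rng.integers(0, 2, size=(8, 64))   # 8 devices, 64 response bits

        uniformity = R.mean(axis=1)            # fraction of 1s per device (ideal 0.5)

        # uniqueness: mean pairwise fractional Hamming distance (ideal 0.5)
        uniqueness = np.mean([np.mean(R[i] != R[j])
                              for i, j in combinations(range(len(R)), 2)])

        # per-bit min-entropy across devices: -log2(max(p, 1 - p))
        p = R.mean(axis=0)
        min_entropy = np.mean(-np.log2(np.maximum(p, 1 - p)))

        print(f"uniformity {uniformity.mean():.3f}, "
              f"uniqueness {uniqueness:.3f}, "
              f"min-entropy/bit {min_entropy:.3f}")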

    Gaze behavior during interaction with a virtual character in interactive storytelling

    Is there something specific about the archaeology of the northern countries of Europe? It is to a Danish archaeologist, curator of the Oldnordisk Museum in Copenhagen, Christian Jürgensen Thomsen, that we owe, since 1836, the "three-age system", that is, the distinction, foundational for European prehistory and beyond, between a Stone Age, a Bronze Age, and an Iron Age. As early as the seventeenth century, the Kingdom of Sweden had instituted a national archaeological service; it was not until the..

    Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning

    The original article is available on the Taylor & Francis Online website at the following link: http://www.tandfonline.com/doi/abs/10.1080/10447318.2016.1159799?journalCode=hihc20

    This paper describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition, as part of our FILTWAM framework. FILTWAM aims to deploy a real-time multimodal emotion recognition method that provides more adequate feedback to learners during online communication skills training. Such training requires timely feedback that reflects the learners' shown intended emotions and increases their awareness of their own behaviour. At the least, a reliable and valid software interpretation of performed face and voice emotions is needed to warrant such adequate feedback; this validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behaviour was recorded on video, and two raters independently scored the shown emotions, which were then contrasted with the software's recognition outcomes. A hybrid method for multimodal fusion in our software achieves an accuracy between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.

    Funding: The Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University of the Netherlands.
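
    The abstract names a hybrid multimodal fusion evaluated with WEKA classifiers but gives no further detail, so the Python sketch below shows just one common scheme: feature-level (early) fusion of face and voice feature vectors into a single classifier. The feature layout, classifier, and data are all invented, with scikit-learn standing in for WEKA.

        # Illustrative feature-level fusion of face and voice features;
        # scikit-learn stands in for WEKA, and all data are invented.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 60
        face = rng.normal(size=(n, 10))    # e.g., facial action-unit features
        voice = rng.normal(size=(n, 14))   # e.g., pitch + MFCC statistics
        labels = rng.choice(["happy", "neutral", "sad"], size=n)

        X = np.hstack([face, voice])       # concatenate the two modalities
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())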