3,425 research outputs found

    A Survey on Emotion Recognition for Human Robot Interaction

    With recent developments in technology and advances in artificial intelligence and machine learning, it has become possible for robots to acquire and express emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans and thus interact more naturally with its human counterpart in different environments. This article presents a survey on emotion recognition for HRI systems. The survey has two objectives. First, it discusses the main challenges researchers face when building emotional HRI systems. Second, it identifies the sensing channels that can be used to detect emotions and reviews recent research published for each channel, along with the methodologies used and the results achieved. Finally, open issues in emotion recognition and recommendations for future work are outlined.

    Affect Recognition in Human Emotional Speech using Probabilistic Support Vector Machines

    The problem of automatically inferring a human's emotional state from speech has become one of the central problems in Man-Machine Interaction (MMI). Although Support Vector Machines (SVMs) have been used in several works for emotion recognition from speech, the potential of probabilistic SVMs for this task has not been explored. The emphasis of the current work is on how to use probabilistic SVMs for efficient recognition of emotions from speech. Emotional speech corpora for two Dravidian languages, Telugu and Tamil, were constructed to assess the recognition accuracy of probabilistic SVMs. The recognition accuracy of the proposed model is analyzed on both the Telugu and Tamil emotional speech corpora and compared with three existing works. Experimental results indicate that the proposed model is significantly better than the existing methods.
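    The abstract gives no implementation details, but the core idea of a probabilistic SVM classifier can be sketched with scikit-learn, where SVC(probability=True) calibrates class probabilities via Platt scaling. The feature vectors, emotion labels, and dimensionality below are placeholders standing in for acoustic features (e.g., MFCC statistics) extracted from the speech corpora; this is a minimal illustration under those assumptions, not the authors' pipeline.

        # Minimal sketch of a probabilistic SVM emotion classifier (not the paper's pipeline).
        # Synthetic vectors stand in for per-utterance acoustic features such as MFCC statistics.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        emotions = ["anger", "happiness", "sadness", "neutral"]   # hypothetical label set
        X = rng.normal(size=(400, 39))                            # placeholder 39-dim feature vectors
        y = rng.integers(0, len(emotions), size=400)              # placeholder emotion labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        scaler = StandardScaler().fit(X_tr)

        clf = SVC(kernel="rbf", probability=True)                 # probability=True enables Platt scaling
        clf.fit(scaler.transform(X_tr), y_tr)

        # Posterior class probabilities for one test utterance
        probs = clf.predict_proba(scaler.transform(X_te[:1]))[0]
        for c, p in zip(clf.classes_, probs):
            print(f"{emotions[c]}: {p:.2f}")

    The probabilistic output allows downstream components to reason about uncertainty (for example, deferring a decision when no emotion class dominates) rather than committing to a hard label.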

    From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities

    The interdisciplinary concept of the dissipative soliton is unfolded in connection with ultrafast fibre lasers. The different mode-locking techniques as well as experimental realizations of dissipative-soliton fibre lasers are surveyed briefly, with an emphasis on their energy scalability. Basic topics of dissipative soliton theory are elucidated in connection with the concepts of energy scalability and stability. It is shown that the parametric space of the dissipative soliton has reduced dimension and a comparatively simple structure, which simplifies the analysis and optimization of ultrafast fibre lasers. The main destabilization scenarios are described, and the limits of energy scalability are connected with the impact of optical turbulence and stimulated Raman scattering. The fast and slow dynamics of vector dissipative solitons are exposed.

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and by engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people's social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, in which the interaction complexity can be adjusted to account for these individuals' needs.

    The first phase of this dissertation presents an artificial-intelligence-based tutoring system that uses an interactive computer character as a pedagogical agent (PA) simulating a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention, and post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding.

    The second phase describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot that serves the instructional role of a peer for the student; in this tutoring paradigm, the robot adopts a peer metaphor. With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of PA-delivered sight word instruction, based on the CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings show that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words instructed exclusively to the RP. The data show that not only did the participants learn non-targeted words by observing the instruction given to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction.

    The third and fourth phases focus on physiology-based modeling of the participants' affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and to share various tasks cooperatively, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions. EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the user's usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement.

    The fourth and final phase attempts to model the students' engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students' engagement levels during the interaction with the tutoring system are rated by two trained coders using video recordings of the instructional sessions. Supervised pattern recognition algorithms are then used to map the physiological indices to the engagement scores. The results indicate that the trained models classify participants' engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
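    As a rough illustration of the fourth phase's approach, and not the dissertation's actual feature set or model, the sketch below derives a few simple indices from synthetic skin-conductance, PPG-derived heart-rate, and skin-temperature windows and maps them to coder-assigned engagement labels with a generic supervised classifier; all signal values, window lengths, labels, and the choice of a random forest are assumptions for the example.

        # Hedged sketch: mapping per-window physiological indices to engagement labels.
        # All signals, window lengths, and labels below are synthetic placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)

        def indices(sc, hr, temp):
            """Simple indices per window: mean, standard deviation, and linear trend of each signal."""
            feats = []
            for sig in (sc, hr, temp):
                slope = np.polyfit(np.arange(sig.size), sig, 1)[0]
                feats.extend([sig.mean(), sig.std(), slope])
            return feats

        # 120 synthetic windows, each summarized by 9 indices
        X = np.array([indices(rng.normal(5.0, 0.5, 120),    # skin conductance (placeholder, microsiemens)
                              rng.normal(80.0, 5.0, 120),   # heart rate from PPG (placeholder, bpm)
                              rng.normal(33.0, 0.3, 120))   # skin temperature (placeholder, degrees C)
                      for _ in range(120)])
        y = rng.integers(0, 3, size=120)                    # coder-rated engagement: low / medium / high

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)           # cross-validated classification accuracy
        print(f"mean accuracy: {scores.mean():.2f}")

    With real recordings, the feature extraction step would operate on annotated session windows aligned to the coders' ratings, and cross-validation would typically be performed per participant to avoid mixing one student's data across folds.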

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances of the last few years in Artificial Intelligence (AI), together with several applications to neuroscience, neuroimaging, computer vision, and robotics, are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.
