85 research outputs found

    Affective Communication for Socially Assistive Robots (SARs) for Children with Autism Spectrum Disorder: A Systematic Review

    Research on affective communication for socially assistive robots has been conducted to enable physical robots to perceive, express, and respond to emotions. However, the use of affective computing in social robots has been limited, especially when social robots are designed for children, and in particular for children with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people following social behaviours and rules. However, interactions between a child and a robot may change or differ from those with an adult, or when the child has an emotional deficit. In this study, we systematically reviewed studies related to computational models of emotions for children with ASD. We used the Scopus, WoS, Springer, and IEEE-Xplore databases to answer research questions related to the definition, interaction, and design of computational models supported by theoretical psychology approaches from 1997 to 2021. Our review found 46 articles; not all of the studies considered children or those with ASD. This research was funded by VRIEA-PUCV, grant number 039.358/202

    Activities suggestion based on emotions in AAL environments

    The elderly population is increasing, and society's response has been to provide services directed at coping with their needs. One of the oldest solutions is the retirement home, which provides housing and permanent assistance for the elderly. Most retirement homes house multiple elderly people, creating a community of residents who are somewhat similar in age and medical issues. The ambient assisted living (AAL) field tries to address some of these issues by producing technological products, some of them dedicated to elderly homes. One identified problem is that elderly people are sometimes discontented with the activities, promoted by the retirement home's social workers, that consume most of their day. The work presented in this paper attempts to improve how these activities are scheduled by taking into account the residents' emotional responses to them. The aim is to maximise group happiness by promoting the activities the group likes, while accounting for boredom due to activity repetition. To this end, this paper presents an extension of the Cognitive Life Assistant platform incorporating a social emotional model. The proposed system is modelled as a free-time activity manager in charge of suggesting activities to the social workers. (C) 2018 Elsevier B.V. All rights reserved. Angelo Costa thanks the Fundacao para a Ciencia e a Tecnologia (FCT) for the post-doc scholarship with ref. SFRH/BPD/102696/2014. This work is also supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundacao para a Ciencia e a Tecnologia within the project scope UID/CEC/00319/2013, and partially supported by MINECO/FEDER TIN2015-65515-C4-1-R and the FPI grant AP2013-01276 awarded to Jaime-Andres Rincon
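    The scheduling idea above can be sketched as a simple scoring rule: average the group's emotional ratings for each activity, then subtract a penalty for recent repetition so the group is not bored by the same activity. This is an illustrative sketch only; the function name, the `repeat_penalty` parameter, and the [0, 1] rating scale are assumptions, not the actual Cognitive Life Assistant API.

```python
from collections import Counter

def suggest_activity(activities, group_ratings, recent, repeat_penalty=0.5):
    """Pick the activity with the best group score.

    group_ratings: {activity: [per-person emotional ratings in [0, 1]]}
    recent: activities recently scheduled; each repeat lowers the score.
    """
    counts = Counter(recent)
    best, best_score = None, float("-inf")
    for activity in activities:
        ratings = group_ratings.get(activity, [])
        # Group happiness is the mean rating; unknown activities score 0.
        happiness = sum(ratings) / len(ratings) if ratings else 0.0
        score = happiness - repeat_penalty * counts[activity]
        if score > best_score:
            best, best_score = activity, score
    return best
```

    A social worker's planning tool could call this once per time slot with the day's candidate activities and the latest group ratings, so a well-liked but over-repeated activity eventually gives way to a fresher one.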

    Social Emotions in Multiagent Systems

    Thesis by compendium. Over the past few years, multi-agent systems (MAS) have proven to be a powerful and versatile paradigm, with great potential for solving complex problems in dynamic and distributed environments. This potential is not primarily due to their individual characteristics (such as their autonomy and their capacity for perception, reaction, and reasoning), but also to their ability to communicate and cooperate in achieving a goal. In fact, it is their social capacity that draws the most attention; it is this social behaviour that gives multi-agent systems their potential. These characteristics have made MAS the artificial intelligence (AI) tool most used for the design of intelligent virtual environments (IVEs), which are complex agent-based simulation tools. IVEs incorporate physical constraints (such as gravity, forces, and friction), as well as a 3D representation of what is to be simulated. Moreover, these tools are not only used for simulations. With the emergence of new applications such as the Internet of Things (IoT), Ambient Intelligence (AmI), and assistant robots, among others, which are in direct contact with humans, this contact poses new challenges when it comes to interacting with these applications. A new form of interaction that has aroused special interest is related to the detection and/or simulation of emotional states. This has allowed these applications not only to detect our emotional states, but also to simulate and express their own emotions, thus improving the user experience. In order to improve the human-machine experience, the main objective of this thesis is the creation of social emotional models that can be used in MAS applications, allowing agents to interpret and/or emulate different emotional states and, in addition, to emulate phenomena of emotional contagion. These models will allow complex emotion-based simulations and more realistic applications in domains such as IoT, AmI, and SH. Rincón Arango, JA. (2018). Social Emotions in Multiagent Systems [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98090
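    The emotional contagion phenomenon the thesis aims to emulate can be illustrated with a minimal synchronous update in which each agent's emotion drifts toward the weighted mean of its neighbours' emotions. This is a generic contagion sketch, not the thesis's actual model; the single-scalar emotion, the weight matrix, and the `susceptibility` parameter are assumptions for illustration.

```python
def contagion_step(emotions, weights, susceptibility=0.3):
    """One synchronous contagion step over a group of agents.

    emotions: list of scalar emotional intensities, one per agent.
    weights:  weights[i][j] is how strongly agent j influences agent i.
    Each agent moves a fraction `susceptibility` toward the weighted
    mean of the other agents' emotions.
    """
    n = len(emotions)
    updated = []
    for i in range(n):
        total_w = sum(weights[i][j] for j in range(n) if j != i)
        if total_w == 0:
            # Isolated agent: no neighbours, emotion unchanged.
            updated.append(emotions[i])
            continue
        neigh = sum(weights[i][j] * emotions[j]
                    for j in range(n) if j != i) / total_w
        updated.append((1 - susceptibility) * emotions[i]
                       + susceptibility * neigh)
    return updated
```

    Iterating this step makes a group converge toward a shared emotional state, which is the qualitative behaviour a social emotion model in a MAS simulation would reproduce.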

    Emotions detection on an ambient intelligent system using wearable devices

    This paper presents the Emotional Smart Wristband and its integration with the iGenda. The aim is to detect the emotional states of a group of people through the wristband and send the social emotion value to the iGenda, so that it may change the home environment and notify the caregivers. This project is advantageous for communities of elderly people, such as retirement homes, where a harmonious environment is imperative and where the number of inhabitants keeps increasing. The iGenda provides the visual interface and the information centre, receiving the information from the Emotional Smart Wristband and trying to achieve a specific emotion (such as calm or excitement). Thus, the goal is to provide an affective system that directly interacts with humans by discreetly improving their lifestyle. This paper describes the wristband and the data models in depth, and provides an evaluation performed by real individuals, together with the validation of this evaluation. This work is supported by COMPETE, Portugal: POCI-01-0145-FEDER-007043 and FCT - Fundacao para a Ciencia e a Tecnologia, Portugal, within the project UID/CEC/00319/2013 and post-doc scholarship SFRH/BPD/102696/2014 (Angelo Costa). This work is partially supported by MINECO/FEDER, Spain, TIN2015-65515-C4-1-R and AP2013-01276 awarded to Jaime-Andres Rincon
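    A minimal sketch of the feedback idea, assuming the group emotion reading and the target are summarised as a single calm(0)-to-excited(1) value; that scalar axis, the deadband, and the action names are illustrative assumptions, not the paper's actual emotion representation or actuator set.

```python
def adjust_environment(social_emotion, target, deadband=0.1):
    """Decide an environment action from the group emotion reading.

    social_emotion, target: values on a calm(0)..excited(1) axis.
    Within the deadband the environment is left alone; otherwise the
    system nudges the group toward the target emotion.
    """
    error = target - social_emotion
    if abs(error) <= deadband:
        return "hold"
    return "stimulate" if error > 0 else "calm"
```

    Run in a loop against fresh wristband readings, such a rule would keep steering the home environment (lighting, music, notifications to caregivers) toward the desired group emotion.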

    A Study of Non-Linguistic Utterances for Social Human-Robot Interaction

    The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing, and realising these socially capable machines, equipping them with a rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language, and facial gestures, to more unique means such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner.
    Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues

    Multi-Platform Intelligent System for Multimodal Human-Computer Interaction

    We present a flexible human-robot interaction architecture that incorporates emotions and moods to provide a natural experience for humans. To determine the emotional state of the user, information representing eye gaze and facial expression is combined with other contextual information, such as whether the user is asking questions or has been quiet for some time. Subsequently, an appropriate robot behaviour is selected from a multi-path scenario. This architecture can be easily adapted to interactions with non-embodied robots such as avatars on a mobile device or a PC. We present the outcome of evaluating an implementation of our proposed architecture as a whole, and also of its modules for detecting emotions and questions. Results are promising and provide a basis for further development
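    The combination of cues and behaviour selection described above could be sketched as a small rule-based policy over the fused signals. The thresholds, signal names, and behaviour labels below are illustrative assumptions, not the paper's actual decision logic.

```python
def select_behaviour(gaze_engagement, facial_valence, asked_question, silent_for):
    """Choose a robot behaviour from fused multimodal cues.

    gaze_engagement: 0..1 estimate of visual attention on the robot.
    facial_valence:  -1..1 estimate from facial expression.
    asked_question:  whether the user just asked something.
    silent_for:      seconds since the user last spoke.
    """
    if asked_question:
        return "answer"                 # questions take priority
    if silent_for > 30 and gaze_engagement < 0.3:
        return "re-engage"              # user seems disengaged
    if facial_valence < -0.5:
        return "comfort"                # user appears upset
    return "continue"
```

    A real system would refine these rules per scenario path, but the structure shows how contextual cues (questions, silence) can override raw emotion estimates.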

    Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature

    Today, computer vision algorithms are very important for different fields and applications, such as closed-circuit television security, health status monitoring, recognition of a specific person or object, and robotics. On this topic, the present paper provides a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one most suitable for their robotic system applications. Conacyt doctoral scholarship, CVU No. 64683

    Toward Context-Aware, Affective, and Impactful Social Robots


    Emotion-driven human-cobot interaction based on EEG in industrial applications

    Currently, the world is facing its fourth industrial revolution, conceptualised under the term "Industry 4.0". Among all the features this idea includes, it becomes essential to keep humans in the loop of the processes. The human factor is one of the most difficult aspects of industry to manage optimally: unlike machines, human behaviour is complicated to parametrise and anticipate. Combined with the fact that industries are becoming more complex over time, it is necessary to develop adaptive systems. One of the duties of an adaptive system is to make machines adjust to situations that may change quickly or repeatedly, without decreasing the performance of the system. As a consequence, this concept is being applied to the robotic field too. As industries evolve, more robots are being introduced into them, sharing the workplace with humans. Consequently, Human-Robot Interaction (HRI) has become an important topic to improve. The level of success of this interaction is closely related to the level of trust in it. Human-robot trust can be increased in several ways, related to the robot, to the human, or to the environment surrounding the interaction. To increase trust in HRI, among other measures, collaborative robots (cobots) have been developed. Cobots are robots meant to work collaboratively with humans, changing the established patterns of interaction with robots. However, even if cobots are able to work hand-in-hand with humans, they still need to understand human needs better in order to become their colleagues. A defining aspect of human beings is their emotional state: humans' cognitive state characterises most of the decisions they make during a day. Therefore, understanding humans' emotional responses is becoming a key target for the industrial field.
    In seeking to understand human emotions, several ways to analyse the emotional state of a person have been developed, such as gesture recognition, speech recognition, and electroencephalography. Depending on the technique of analysis, devices have been developed that can detect those emotions. The aim of this thesis is to develop a system that acts as a bridge between humans and cobots, positively influencing their interaction by making a cobot adapt its behaviour to the emotional state of a human while performing a collaborative task. By achieving this, major issues related to trust in HRI can be addressed, with impacts in both industrial and social fields
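    A minimal sketch of the kind of adaptation described, assuming the EEG pipeline yields a valence-arousal pair in [-1, 1] and the cobot exposes a scalar speed setting. Both the mapping and the parameter names are assumptions for illustration; the thesis's actual signal processing and robot interface are not specified here.

```python
def cobot_speed(valence, arousal, base_speed=0.5):
    """Scale the cobot's motion speed from an EEG-derived emotional state.

    valence, arousal: in [-1, 1]. A stressed operator (negative valence
    combined with high arousal) slows the robot down to preserve trust.
    """
    # Stress is modelled as the product of displeasure and activation.
    stress = max(0.0, arousal) * max(0.0, -valence)
    factor = 1.0 - 0.8 * stress   # keep at least 20% of base speed
    return base_speed * factor
```

    Calm or positive states leave the speed untouched, while a maximally stressed reading reduces it to a fifth of the base value; a production system would smooth the EEG estimates over time before applying any such mapping.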