
    Emotional design and human-robot interaction

    Recent years have seen a growing importance of emotions in the design field, giving rise to Emotional Design. Emotional design aims to elicit certain emotions (e.g., pleasure) or prevent others (e.g., displeasure) during human-product interaction; that is, it regulates the emotional interaction between the individual and the product (e.g., a robot). Robot design has been a growing area in which robots interact directly with humans and emotions are essential to the interaction. Therefore, this paper explores, through a non-systematic literature review, the application of emotional design, particularly to Human-Robot Interaction. Robot design features (e.g., appearance, emotion expression, and spatial distance) that affect emotional design are introduced. The chapter ends with a discussion and a conclusion.

    Robotic Faces: Exploring Dynamical Patterns of Social Interaction between Humans and Robots

    Thesis (Ph.D.) - Indiana University, Informatics, 2015. The purpose of this dissertation is two-fold: 1) to develop an empirically-based design for an interactive robotic face, and 2) to understand how dynamical aspects of social interaction may be leveraged to design better interactive technologies and/or further our understanding of social cognition. Understanding the role that dynamics plays in social cognition is a challenging problem. This is particularly true in studying cognition via human-robot interaction, which entails both the natural social cognition of the human and the “artificial intelligence” of the robot. Clearly, humans who are interacting with other humans (or even other mammals such as dogs) are cognizant of the social nature of the interaction; their behavior in those cases differs from that when interacting with inanimate objects such as tools. Humans (and many other animals) have some awareness of the “social”, some sense of other agents; however, it is not clear how or why. Social interaction patterns vary across culture, context, and individual characteristics of the human interactor. These factors are subsumed into the larger interaction system, influencing the unfolding of the system over time (i.e., the dynamics). The overarching question is whether we can figure out how to utilize factors that influence the dynamics of social interaction in order to imbue our interactive technologies (robots, clinical AI, decision support systems, etc.) with some “awareness of the social”, and potentially create more natural interaction paradigms for those technologies. In this work, we explore the above questions across a range of studies, including lab-based experiments, field observations, and placing autonomous, interactive robotic faces in public spaces. We also discuss future work, how this research relates to making sense of what a robot “sees”, creating data-driven models of robot social behavior, and the development of robotic face personalities.

    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers the users or performers real-time control of multimedia events using their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes in order to provide interactive multimedia performances. From a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping of physical and non-physical changes onto multimedia control events, are discussed.
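The core idea of a trans-domain mapping can be sketched in a few lines: normalized readings from one domain (movement, colour) are translated into control values in another (audio events). This is an illustrative sketch only; the function names, parameter ranges, and the specific movement-to-audio mapping are assumptions, not the paper's actual framework.

```python
# Hypothetical sketch of a trans-domain mapping: normalized sensor
# readings from a movement/colour domain are mapped onto audio
# control events. All names and ranges here are illustrative.

def clamp(value, lo=0.0, hi=1.0):
    """Keep a normalized sensor reading inside [lo, hi]."""
    return max(lo, min(hi, value))

def map_motion_to_audio(motion_intensity, colour_hue):
    """Translate movement/colour readings into audio control values.

    motion_intensity: 0.0 (still) .. 1.0 (vigorous)
    colour_hue:       0.0 .. 1.0 (normalized hue)
    """
    intensity = clamp(motion_intensity)
    hue = clamp(colour_hue)
    return {
        "volume": intensity,               # louder with more movement
        "pitch_midi": int(48 + hue * 24),  # hue selects a note in C3..C5
        "trigger": intensity > 0.5,        # fire a musical event on big moves
    }

event = map_motion_to_audio(0.8, 0.25)
```

In a real-time system, a loop would feed each new sensor frame through such a mapping and dispatch the resulting events to the multimedia engine.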

    Definición de estados emocionales en sistemas dinámicos

    This work presents a way to achieve adaptive, autonomous systems, with the aim of emulating intelligence in machines so that they can be seen as entities that draw up their own plans and follow them. However, the focus of the article is not to show the construction of humanoids, but to present improvements in the control of each actuator in a machine, as a first step towards intelligent devices. The job of each control algorithm is to make decisions that fix the dynamics of the system it governs at a value called the reference. In this sense, the author associates recent findings in neuroscience and psychology about the importance of emotions in the decision-making process with the work of controllers. As a result, a set of basic emotions that controllers should have is defined and tested.
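The idea of a controller whose decision-making is modulated by an emotion-like state can be sketched minimally. This is not the paper's actual set of basic emotions; the "frustration" variable, gains, and thresholds below are assumptions for illustration.

```python
# Illustrative sketch (not the paper's definitions): a proportional
# controller whose gain is modulated by a simple "emotional state".
# Sustained tracking error raises a frustration level, which boosts
# control effort; small error lets the controller calm down again.

class EmotionalController:
    def __init__(self, kp=0.5, frustration_gain=0.1):
        self.kp = kp
        self.frustration = 0.0  # 0 = calm, grows with sustained error
        self.frustration_gain = frustration_gain

    def step(self, reference, measurement):
        error = reference - measurement
        # "Emotion" update: large errors build frustration, small ones decay it
        if abs(error) > 0.1:
            self.frustration = min(1.0, self.frustration + self.frustration_gain)
        else:
            self.frustration = max(0.0, self.frustration - self.frustration_gain)
        # Frustration boosts the effective gain (up to 2x here)
        effective_kp = self.kp * (1.0 + self.frustration)
        return effective_kp * error

ctrl = EmotionalController()
u = ctrl.step(reference=1.0, measurement=0.0)
```

The design choice is that the emotional state is persistent across steps, so the controller's response to the same error depends on its recent history, mirroring the role the abstract attributes to emotions in decision-making.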

    Shhh…listen to your pebble: Mindfulness Education: The Relationship Between Children, Imagination, and Nature

    A culture of education embedded in mind-body learning experiences and mindfulness approaches to knowledge on multiple levels through awareness has the potential to cultivate versatile and flexible social individuals who are mentally, emotionally, and intellectually “capable.” Since children experience the world primarily through their bodies, movement and interaction with the environment are a means for them to explore, find, understand, and fortify the self to better regulate their physical, emotional, and social realms so that they develop into social individuals who find value in themselves and in others. Therefore, focusing on strengthening children's mind-body connection through mindfulness-based practices is a topic worth investigating. Accordingly, this thesis addresses the processes of awareness in preschool children through movement and interaction with nature. The nature component is crucial because it grounds the self inside a world that is boundless yet intimate and engulfing yet comforting. Therefore, the theme of nature was incorporated throughout the thesis project. Awareness and openness, particularly in preschool children, begin in noticing and caring not only for themselves but for others – people, animals, and “things.” Thus I created three activities – nature walks, stone building, and mindful movement – while observing the ways children behaved throughout each activity. The gathered observations were analyzed and related back to the importance and benefits of mindfulness education. The goal was to foster intentional attention and a sense of awareness while strengthening the mind-body connection. This study was qualitative, multidisciplinary, and experiential – an exploration based on observations of ongoing activities in the classroom – and was not intended to measure outcomes but rather to document and record the process, including the children's verbal, physical, and symbolic expression of their experiences.

    A Systematic Review of Adaptivity in Human-Robot Interaction

    As the field of social robotics grows, a consensus has emerged around designing and implementing robotic systems capable of adapting to user actions. These actions may be based on users' emotions, personality, or memory of past interactions. We therefore believe it is important to review past research on adaptive robots that have been utilised in various social environments. In this paper, we present a systematic review of the reported adaptive interactions across a number of domain areas in Human-Robot Interaction and also give future directions that can guide the design of future adaptive social robots. We conjecture that this will help towards achieving long-term applicability of robots in various social domains.

    Sympathy for the devil: On the neural mechanisms of threat and distress reactivity


    Mirroring and recognizing emotions through facial expressions for a Robokind platform

    Integrated Master's dissertation in Industrial Electronics and Computers Engineering. Facial expressions play an important role during human social interaction, enabling communicative cues, ascertaining the level of interest or signalling the desire to take a speaking turn. They also give continuous feedback indicating that the information conveyed has been understood. However, certain individuals have difficulties in social interaction, in particular in verbal and non-verbal communication (e.g. emotions and gestures). Autism Spectrum Disorders (ASD) are a special case of social impairment. Individuals affected by ASD are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been proven to encourage the promotion of social interaction and skills in children with ASD. Following this trend, in this work a robotic platform is used as a mediator in social interaction activities with children with special needs. The main purpose of this dissertation is to develop a system capable of automatically detecting emotions through facial expressions and interfacing it with a robotic platform in order to allow social interaction with children with special needs. The proposed experimental setup uses the Intel RealSense 3D camera and the Zeno R50 Robokind robotic platform. This layout has two subsystems, a Mirroring Emotion System (MES) and an Emotion Recognition System (ERS). The first subsystem (MES) is capable of synthesizing human emotions through facial expressions, on-line. The other subsystem (ERS) is able to recognize human emotions through facial features in real time. MES extracts the user's facial Action Units (AUs) and sends the data to the robot, allowing on-line imitation. ERS uses the Support Vector Machine (SVM) technique to automatically classify the emotion expressed by the user in real time.
    Finally, the proposed subsystems, MES and ERS, were evaluated in a laboratory, controlled environment in order to check the integration and operation of the systems. Then, both subsystems were tested in a school environment in different configurations. The results of these preliminary tests allowed us to detect some constraints of the system, as well as to validate its adequacy in an intervention setting.
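The ERS pipeline described above (facial Action Unit intensities fed to an SVM classifier) can be sketched with scikit-learn. The AU feature layout, the synthetic training vectors, and the two-class labels below are illustrative assumptions; the actual system extracts AUs from the Intel RealSense camera and classifies a fuller set of emotions.

```python
# Minimal sketch of the ERS idea: classify emotions from Action Unit
# (AU) intensity vectors with a linear SVM. Data here is synthetic
# and the two AUs / two labels are illustrative assumptions only.
from sklearn.svm import SVC

# Each row: [AU12 (lip corner puller), AU4 (brow lowerer)] intensities
X_train = [
    [0.9, 0.1], [0.8, 0.2], [0.7, 0.0],   # smile-like patterns -> "happy"
    [0.1, 0.9], [0.0, 0.8], [0.2, 0.7],   # frown-like patterns -> "angry"
]
y_train = ["happy", "happy", "happy", "angry", "angry", "angry"]

clf = SVC(kernel="linear")   # linear SVM, a common choice for AU features
clf.fit(X_train, y_train)

# Classify a new frame's AU vector
prediction = clf.predict([[0.85, 0.05]])[0]
```

In the real-time setting, each camera frame would yield one AU vector, so `clf.predict` runs once per frame and the predicted label drives the robot's response.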