448 research outputs found

    Generación de expresiones faciales basadas en emociones para criaturas virtuales

    Get PDF
    This research studies human behaviors, specifically facial expressions, since these are a fundamental aspect of the emotions humans generate; indeed, it is debatable whether the face can express anything that is not an emotion. The thesis introduces some basic concepts of Artificial Intelligence, such as intelligent agents and bio-inspired computing, in order to make a virtual creature represent five basic emotions (joy, sadness, fear, anger, and disgust) on its face. These emotions are triggered by stimuli found in a virtual environment and rendered as facial expressions. The proposal of this thesis is thus to model a virtual creature embedded with an initial knowledge base grounded in Neuroscience, and to generate stimuli with diverse characteristics, taking as a case study the colors associated with the basic emotions. The virtual creature and the stimuli are then placed together so that the stimuli excite the creature, which generates the facial expression dictated by its initial knowledge base, producing behavior that more closely resembles that of a real person.
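
    The stimulus -> emotion -> facial expression pipeline described above lends itself to a minimal Python sketch. Note that the color-to-emotion mapping and the expression labels below are hypothetical placeholders, not the thesis's actual Neuroscience-based knowledge base.

        # Minimal sketch of the pipeline in the abstract: a color stimulus is
        # looked up in an initial knowledge base, and the resulting basic
        # emotion is rendered as a named facial expression. All mappings are
        # illustrative assumptions, not the thesis's data.

        # Hypothetical initial knowledge base: color stimulus -> basic emotion.
        KNOWLEDGE_BASE = {
            "yellow": "joy",
            "blue": "sadness",
            "black": "fear",
            "red": "anger",
            "green": "disgust",
        }

        # Each basic emotion is rendered as a named facial expression.
        EXPRESSIONS = {
            "joy": "smile, raised cheeks",
            "sadness": "lowered brows, downturned mouth",
            "fear": "raised brows, widened eyes",
            "anger": "furrowed brows, tightened lips",
            "disgust": "wrinkled nose, raised upper lip",
        }

        class VirtualCreature:
            def __init__(self, knowledge_base):
                self.knowledge_base = knowledge_base

            def react(self, color_stimulus):
                """Map an environmental color stimulus to a facial expression."""
                emotion = self.knowledge_base.get(color_stimulus)
                return EXPRESSIONS[emotion] if emotion else "neutral"

        creature = VirtualCreature(KNOWLEDGE_BASE)
        print(creature.react("red"))  # -> "furrowed brows, tightened lips"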

    What makes a social robot good at interacting with humans?

    Get PDF
    This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. It also reflects on the current design of social robots as a means of interaction with humans and reports potential solutions to several important questions around the future design of these robots. The specific questions explored are: “Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?”; “Do social robots need to have animated faces for humans to interact well with them?”; “Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?”; and “Do social robots need to have the capability to make physical gestures for humans to interact well with them?”. The paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may form between humans and robots. Facets surrounding human acceptance of social robots, as well as ethical and moral concerns, are also discussed.

    In the Blink of an Eye: Event-based Emotion Recognition

    Full text link
    We introduce a wearable single-eye emotion recognition device and a real-time approach to recognizing emotions from partial observations of an emotion that is robust to changes in lighting conditions. At the heart of our method is a bio-inspired event-based camera setup and a newly designed lightweight Spiking Eye Emotion Network (SEEN). Compared to conventional cameras, event-based cameras offer a higher dynamic range (up to 140 dB vs. 80 dB) and a higher temporal resolution. Thus, the captured events can encode rich temporal cues under challenging lighting conditions. However, these events lack texture information, posing problems in decoding temporal information effectively. SEEN tackles this issue from two different perspectives. First, we adopt convolutional spiking layers to take advantage of the spiking neural network's ability to decode pertinent temporal information. Second, SEEN learns to extract essential spatial cues from corresponding intensity frames and leverages a novel weight-copy scheme to convey spatial attention to the convolutional spiking layers during training and inference. We extensively validate and demonstrate the effectiveness of our approach on a specially collected Single-eye Event-based Emotion (SEE) dataset. To the best of our knowledge, our method is the first eye-based emotion recognition method that leverages event-based cameras and spiking neural networks.
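
    As a rough illustration of the two ingredients named above, the sketch below pairs a convolutional leaky integrate-and-fire (spiking) layer with a conventional convolution branch whose weights are copied across. The layer sizes, LIF constants, and this PyTorch formulation are assumptions for illustration, not the paper's actual SEEN architecture.

        # Sketch: (1) a convolutional spiking layer that integrates binned
        # event frames over time, and (2) a weight-copy step that shares the
        # spatial filters of an intensity-frame conv branch with the spiking
        # branch, in the spirit of the weight-copy scheme described above.
        # All sizes and constants are illustrative assumptions.
        import torch
        import torch.nn as nn

        class SpikingConv(nn.Module):
            """Convolution followed by leaky integrate-and-fire dynamics."""
            def __init__(self, in_ch, out_ch, beta=0.9, threshold=1.0):
                super().__init__()
                self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
                self.beta = beta            # membrane leak factor
                self.threshold = threshold  # spike threshold

            def forward(self, event_frames):
                # event_frames: (T, B, C, H, W) sequence of binned events
                mem, spikes = 0.0, []
                for x in event_frames:
                    mem = self.beta * mem + self.conv(x)      # integrate
                    spike = (mem >= self.threshold).float()   # fire
                    mem = mem - spike * self.threshold        # soft reset
                    spikes.append(spike)
                return torch.stack(spikes)

        frame_branch = nn.Conv2d(2, 8, 3, padding=1)  # intensity frames
        spike_branch = SpikingConv(2, 8)              # event frames

        # Weight-copy: convey filters learned on intensity frames to the
        # spiking branch.
        spike_branch.conv.weight.data.copy_(frame_branch.weight.data)
        spike_branch.conv.bias.data.copy_(frame_branch.bias.data)

        events = torch.rand(5, 1, 2, 32, 32)  # T=5 steps, 2 polarity channels
        out = spike_branch(events)            # (5, 1, 8, 32, 32) spike trains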

    High Efficiency Real-Time Sensor and Actuator Control and Data Processing

    Get PDF
    Advances in sensor and actuator technology foster the use of large multi-transducer networks in many different fields. The increasing complexity of such networks poses problems in data processing, especially when high efficiency is required for real-time applications. In fact, multi-transducer data processing usually consists of the interconnection and co-operation of several modules devoted to different tasks. Multi-transducer network modules often handle tasks such as control, data acquisition, data filtering, interfaces, feature selection, and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may introduce module interconnection and co-operation issues. To help deal with these problems, the author presents a software library architecture for dynamic and efficient management of multi-transducer data processing and control techniques. The framework’s base architecture and the implementation details of several extensions are described. Starting from the base models available in the framework core, dedicated models for control processes and neural network tools have been derived. The Facial Automaton for Conveying Emotion (FACE) has been used as a test field for the control architecture.
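
    The core idea above, a common module interface that lets heterogeneous processing techniques be interconnected into one pipeline, can be sketched in Python as follows. The class and method names are hypothetical, not the library's actual API.

        # Sketch of a modular multi-transducer processing chain: every module
        # implements one interface, so filtering, feature selection, and
        # pattern analysis stages can be freely interconnected. Names are
        # illustrative assumptions, not the framework's real API.
        from abc import ABC, abstractmethod

        class ProcessingModule(ABC):
            """Common interface for heterogeneous processing techniques."""
            @abstractmethod
            def process(self, data):
                ...

        class MovingAverageFilter(ProcessingModule):
            """A data-filtering stage: smooths raw sensor samples."""
            def __init__(self, window=3):
                self.window = window

            def process(self, data):
                out = []
                for i in range(len(data)):
                    chunk = data[max(0, i - self.window + 1): i + 1]
                    out.append(sum(chunk) / len(chunk))
                return out

        class ThresholdClassifier(ProcessingModule):
            """A pattern-analysis stage: labels each smoothed sample."""
            def __init__(self, threshold):
                self.threshold = threshold

            def process(self, data):
                return ["active" if x > self.threshold else "idle" for x in data]

        class Pipeline:
            """Interconnects modules so each one's output feeds the next."""
            def __init__(self, modules):
                self.modules = modules

            def process(self, data):
                for module in self.modules:
                    data = module.process(data)
                return data

        samples = [0.1, 0.9, 0.8, 0.2, 1.1, 1.0]  # e.g. one sensor channel
        pipeline = Pipeline([MovingAverageFilter(2), ThresholdClassifier(0.5)])
        print(pipeline.process(samples))  # ['idle', 'idle', 'active', ...]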

    Expressive social exchange between humans and robots

    Get PDF
    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 253-264).
    Sociable humanoid robots are natural and intuitive for people to communicate with and to teach. We present recent advances in building an autonomous humanoid robot, Kismet, that can engage humans in expressive social interaction. We outline a set of design issues and a framework that we have found to be of particular importance for sociable robots. Having a human-in-the-loop places significant social constraints on how the robot aesthetically appears, how its sensors are configured, its quality of movement, and its behavior. Inspired by infant social development, psychology, ethology, and evolutionary perspectives, this work integrates theories and concepts from these diverse viewpoints to enable Kismet to enter into natural and intuitive social interaction with a human caregiver, reminiscent of parent-infant exchanges. Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to people through gaze direction, facial expression, body posture, and vocalizations. We present the implementation of Kismet's social competencies and evaluate each with respect to: 1) the ability of naive subjects to read and interpret the robot's social cues, 2) the robot's ability to perceive and appropriately respond to naturally offered social cues, 3) the robot's ability to elicit interaction scenarios that afford rich learning potential, and 4) how this produces a rich, flexible, dynamic interaction that is physical, affective, and social. Numerous studies with naive human subjects are described that provide the data upon which we base our evaluations.
    by Cynthia L. Breazeal, Sc.D.

    Design of a Virtual Assistant to Improve Interaction Between the Audience and the Presenter

    Get PDF
    This article presents a novel design of a Virtual Assistant as part of a human-machine interaction system that improves communication between the presenter and the audience, and that can be used in education or general presentations (e.g., auditoriums with 200 people). The main goal of the proposed model is the design of a framework of interaction that increases the audience's level of attention during key aspects of the presentation. In this manner, the collaboration between the presenter and the Virtual Assistant could improve the level of learning among the public. The design of the Virtual Assistant relies on non-anthropomorphic forms with ‘live’ characteristics, generating an intuitive and self-explanatory interface. A set of intuitive and useful virtual interactions supporting the presenter was designed and validated with various types of audiences through a psychological study based on a discrete emotions questionnaire, confirming the adequacy of the proposed solution. The human-machine interaction system supporting the Virtual Assistant should automatically recognize the attention level of the audience from audiovisual resources and synchronize the Virtual Assistant with the presentation. The system involves a complex artificial intelligence architecture embracing perception of high-level features from audio and video, knowledge representation and reasoning for pervasive and affective computing, and reinforcement learning to teach the intelligent agent to decide on the best strategy to increase the audience's level of attention.
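
    The reinforcement-learning step at the end can be illustrated with a minimal epsilon-greedy bandit that picks an interaction strategy and learns from the measured change in audience attention. The strategy names and the bandit formulation are assumptions for illustration; the article's actual architecture is considerably richer.

        # Sketch: an agent chooses a Virtual Assistant strategy and updates a
        # running value estimate from the observed attention gain. Strategy
        # names and the reward model are illustrative assumptions.
        import random

        STRATEGIES = ["highlight_slide", "ask_question", "animate_assistant"]

        class AttentionBandit:
            def __init__(self, strategies, epsilon=0.1):
                self.epsilon = epsilon
                self.values = {s: 0.0 for s in strategies}  # reward estimates
                self.counts = {s: 0 for s in strategies}

            def choose(self):
                if random.random() < self.epsilon:            # explore
                    return random.choice(list(self.values))
                return max(self.values, key=self.values.get)  # exploit

            def update(self, strategy, reward):
                """Reward = measured increase in audience attention."""
                self.counts[strategy] += 1
                n = self.counts[strategy]
                self.values[strategy] += (reward - self.values[strategy]) / n

        agent = AttentionBandit(STRATEGIES)
        for _ in range(100):
            s = agent.choose()
            # Placeholder for the attention estimate that the audio/video
            # perception pipeline would provide.
            gain = random.gauss(0.2 if s == "ask_question" else 0.0, 0.1)
            agent.update(s, gain)
        print(max(agent.values, key=agent.values.get))  # likely "ask_question"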

    Emotions, behaviour and belief regulation in an intelligent guide with attitude

    Get PDF
    Abstract unavailable; please refer to the PDF.