
    Audio-Motor Integration for Robot Audition

    In the context of robotics, audio signal processing in the wild amounts to dealing with sounds recorded by a system that moves and whose actuators produce noise. This creates additional challenges in sound source localization, signal enhancement and recognition. But the specificity of such platforms also brings interesting opportunities: can information about the robot actuators' states be meaningfully integrated into the audio processing pipeline to improve performance and efficiency? While robot audition has grown to become an established field, methods that explicitly use motor-state information as a complementary modality to audio are scarcer. This chapter proposes a unified view of this endeavour, referred to as audio-motor integration. A literature review and two learning-based methods for audio-motor integration in robot audition are presented, with application to single-microphone sound source localization and ego-noise reduction on real data.

    Classification of Known and Unknown Environmental Sounds Based on Self-Organized Space Using a Recurrent Neural Network

    Our goal is to develop a system that learns and classifies environmental sounds for robots working in the real world. In the real world, two main restrictions apply to learning. (i) Robots have to learn using only a small amount of data, in a limited time, because of hardware restrictions. (ii) The system has to adapt to unknown data, since it is virtually impossible to collect samples of all environmental sounds. We used a neuro-dynamical model to build a prediction and classification system. This neuro-dynamical model can self-organize sound classes into parameters by learning samples. The sound classification space constructed from these parameters is structured according to the sound-generation dynamics and forms clusters not only for known classes but also for unknown ones. The proposed system searches this sound classification space to perform classification. In the experiment, we evaluated the classification accuracy for both known and unknown sound classes.
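    The known/unknown decision described in this abstract can be sketched as a nearest-cluster rule with a rejection threshold: a sound whose parameters fall far from every self-organized cluster is reported as unknown. The function name, toy cluster centers, and threshold below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def classify_with_unknown(point, centers, labels, threshold):
        """Assign `point` to the nearest cluster center; report 'unknown'
        when even the nearest cluster is farther than `threshold`."""
        dists = np.linalg.norm(centers - point, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] > threshold:
            return "unknown"
        return labels[nearest]

    # Toy example: two known sound classes in a 2-D parameter space.
    centers = np.array([[0.0, 0.0], [5.0, 5.0]])
    labels = ["bell", "door"]
    print(classify_with_unknown(np.array([0.2, -0.1]), centers, labels, 1.0))   # near "bell"
    print(classify_with_unknown(np.array([10.0, -8.0]), centers, labels, 1.0))  # far from both clusters
    ```

    The threshold controls the trade-off between misclassifying novel sounds as known and rejecting noisy samples of known classes.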

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
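    The core idea of a fuzzy model of emotion like FLAME is that appraisal variables (e.g. how desirable and how expected an event is) are mapped through membership functions and combined by fuzzy rules into emotion intensities. The membership shapes, variable names, and the single rule below are a minimal illustrative sketch, not FLAME's actual rule base:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function: 0 outside [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def joy_degree(desirability, expectation):
        """Toy fuzzy rule: joy is high when the event is highly desirable
        AND at least somewhat expected (min acts as the fuzzy AND)."""
        desirable_high = tri(desirability, 0.4, 1.0, 1.6)
        expected_some = tri(expectation, 0.0, 0.6, 1.2)
        return min(desirable_high, expected_some)

    print(joy_degree(0.9, 0.6))
    ```

    A full system would evaluate a rule per emotion and defuzzify the resulting intensities into a single estimated emotional state.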

    Design of an Integrated Selective-Attention Mechanism in a Behavior-Based Architecture for Autonomous Robots

    Population aging across the world leads us to seriously consider integrating service robots into our daily lives to ease the demand for caregiving. However, no service robot currently exists that is advanced enough to act as a true assistant to people losing their autonomy. One of the problems holding back the development of such robots is software integration. Indeed, it is difficult to integrate the many perception and action capabilities needed to interact naturally and appropriately with a person in a real environment, as the limits of the computing resources available on a robotic platform are quickly reached. Even though the human brain has capabilities superior to a computer's, it too has limits on its information-processing capacity. To cope with these limits, humans manage their cognitive capacities with the help of selective attention. Selective attention allows them, for example, to ignore certain stimuli in order to concentrate resources on those useful to the task. Since robots could greatly benefit from such a mechanism, the objective of this thesis is to develop a control architecture integrating a selective-attention mechanism in order to reduce the computational load demanded by the robot's various processing modules. The control architecture is based on the behavior-based approach and is named HBBA, for Hybrid Behavior-Based Architecture. To meet this objective, the humanoid robot IRL-1 was designed to allow the integration of multiple perception and action capabilities on a single platform, serving as an experimental testbed that can benefit from selective-attention mechanisms. The implemented capabilities make it possible to interact with IRL-1 through different modalities.
IRL-1 can be physically guided by perceiving external forces through the elastic actuators used in the steering of its omnidirectional platform. Vision, motion and audition were integrated into an augmented telepresence interface. In addition, the influence of reaction delays to sounds in the environment could be examined. This implementation validated the use of HBBA as a working basis for the robot's decision-making, and explored the limits of the processing capacities of the modules on the robot. A selective-attention mechanism was then developed within HBBA. This mechanism combines the activation of processing modules with perceptual filtering, i.e., the ability to modulate the quantity of stimuli used by the processing modules in order to adapt processing to the available computing resources. The results obtained demonstrate the benefits such a mechanism brings in allowing the robot to optimize the use of its computing resources to satisfy its goals. This work provides a foundation on which even more advanced capabilities can now be integrated, thereby making efficient progress toward the design of domestic robots that can assist us in our daily lives.

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and especially for the computational intelligence community. Recent theories in autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, which are merged with natural language understanding and refined motor controls. The work includes three studies: (1) the use of generic manipulation of objects using the NMFT algorithm, successfully testing the extension of the NMFT to control robot behaviour; (2) a study of the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive, and linguistic skills through individual and social learning. The robot is able to learn to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Microphone Array Processing Based on Bayesian Methods

    Doctoral thesis, Doctor of Informatics, Kyoto University, Graduate School of Informatics, Department of Intelligence Science and Technology. Examining committee: Prof. Hiroshi G. Okuno, Prof. Tatsuya Kawahara, Assoc. Prof. Marco Cuturi, and Lecturer Kazuyoshi Yoshii.

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed, and their roles within society are gradually expanding. Robots endowed with social abilities are intended for a variety of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this research aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), exhibiting distinctive personality and character, and learning social competencies. In this doctoral thesis we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work was carried out in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots in a natural manner.
In the second, we further investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots, and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, we focused on identifying and developing the basic behavioral modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different kinds of emotions and to display natural, humanlike body language according to the task at hand and the environmental conditions. The different development stages of our social robots were validated through public performances. Exposing our robots to the public in these performances became an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. Just as robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.

    Design and Experimental Evaluation of a Context-aware Social Gaze Control System for a Humanlike Robot

    Nowadays, social robots are increasingly being developed for a variety of human-centered scenarios in which they interact with people. For this reason, they should possess the ability to perceive and interpret human non-verbal/verbal communicative cues in a humanlike way. In addition, they should be able to autonomously identify the most important interactional target at the proper time by exploring the perceptual information, and exhibit a believable behavior accordingly. Employing a social robot with such capabilities has several positive outcomes for human society. This thesis presents a multilayer context-aware gaze control system that has been implemented as part of a humanlike social robot. Using this system, the robot is able to mimic human perception, attention, and gaze behavior in a dynamic multiparty social interaction. The system enables the robot to direct its gaze appropriately, at the right time, toward the environmental targets and the humans who are interacting with each other and with the robot. For this reason, the attention mechanism of the gaze control system is based on features that have been proven to guide human attention: verbal and non-verbal cues, proxemics, the effective field of view, the habituation effect, and low-level visual features. The gaze control system uses skeleton tracking and speech recognition, facial expression recognition, and salience detection to implement these features. As part of a pilot evaluation, the gaze behavior of 11 participants was collected with a professional eye-tracking device while they were watching a video of two-person interactions. By analyzing the participants' average gaze behavior, the importance of human-relevant features in triggering human attention was determined. Based on this finding, the parameters of the gaze control system were tuned to imitate human behavior in selecting environmental features.
The comparison between the human gaze behavior and the gaze behavior of the developed system running on the same videos shows that the proposed approach is promising, as it replicated human gaze behavior 89% of the time.
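    A feature-based attention mechanism of this kind can be reduced to scoring each candidate target as a weighted sum of its cue strengths and gazing at the highest-scoring one. The cue names, weights, and data layout below are illustrative assumptions, not the thesis's actual parameters (which were tuned against the eye-tracking data):

    ```python
    def select_gaze_target(targets, weights):
        """Score each candidate as a weighted sum of its cue strengths
        (missing cues count as 0) and return the highest-scoring name."""
        def score(t):
            return sum(weights[cue] * t["cues"].get(cue, 0.0) for cue in weights)
        return max(targets, key=score)["name"]

    # Hypothetical cue weights: speech dominates, as in human attention.
    weights = {"speech": 0.4, "motion": 0.25, "proximity": 0.2, "salience": 0.15}
    targets = [
        {"name": "person_A", "cues": {"speech": 0.9, "proximity": 0.5}},
        {"name": "person_B", "cues": {"motion": 0.8, "salience": 0.6}},
    ]
    print(select_gaze_target(targets, weights))  # the active speaker wins
    ```

    A fuller model would also decay each target's score over time (the habituation effect) so the robot does not fixate on a single target indefinitely.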