
    Classification of Known and Unknown Environmental Sounds Based on Self-Organized Space Using a Recurrent Neural Network

    Our goal is to develop a system that learns and classifies environmental sounds for robots working in the real world, where two main restrictions apply to learning. (i) Robots have to learn from only a small amount of data in a limited time because of hardware restrictions. (ii) The system has to adapt to unknown data, since it is virtually impossible to collect samples of all environmental sounds. We used a neuro-dynamical model to build a prediction and classification system. This model self-organizes sound classes into parameters by learning samples. The sound classification space constructed from these parameters is structured by the sound generation dynamics and forms clusters not only for known classes but also for unknown ones. The proposed system classifies sounds by searching this space. In the experiment, we evaluated classification accuracy for both known and unknown sound classes.
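    The open-set classification step described above can be sketched as a nearest-centroid search in the self-organized parameter space, with a distance threshold separating known from unknown classes. The class names, parameter vectors, and threshold below are illustrative; in the paper these parameters come from a trained neuro-dynamical model.

```python
import numpy as np

# Hypothetical per-sample parameter vectors, grouped by sound class
# (stand-ins for the self-organized parameters of the trained model).
known = {
    "door": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "bell": np.array([[0.1, 0.9], [0.2, 0.8]]),
}

def classify(p, threshold=0.3):
    """Nearest-centroid search in the parameter space; a distance
    above the threshold is flagged as an unknown class."""
    best, best_d = None, float("inf")
    for label, pts in known.items():
        d = np.linalg.norm(pts.mean(axis=0) - p)
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= threshold else "unknown"

print(classify(np.array([0.85, 0.15])))  # near the "door" centroid
print(classify(np.array([0.5, 0.5])))    # far from both clusters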

    Audio-Motor Integration for Robot Audition

    In the context of robotics, audio signal processing in the wild amounts to dealing with sounds recorded by a system that moves and whose actuators produce noise. This creates additional challenges in sound source localization, signal enhancement, and recognition. But the specificity of such platforms also brings interesting opportunities: can information about the robot actuators' states be meaningfully integrated into the audio processing pipeline to improve performance and efficiency? While robot audition has grown into an established field, methods that explicitly use motor-state information as a modality complementary to audio are scarcer. This chapter proposes a unified view of this endeavour, referred to as audio-motor integration. A literature review and two learning-based methods for audio-motor integration in robot audition are presented, with application to single-microphone sound source localization and ego-noise reduction on real data.
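    One simple way motor-state information can enter the pipeline, as a minimal sketch only (the two-state setup, template values, and function names are illustrative, not the chapter's actual method), is ego-noise reduction by spectral subtraction with a motor-state-indexed noise template:

```python
import numpy as np

# Per-motor-state mean noise magnitude spectra, learned offline
# (illustrative values over 4 frequency bins).
templates = {
    "idle":   np.array([0.1, 0.1, 0.1, 0.1]),
    "moving": np.array([0.8, 0.6, 0.4, 0.2]),
}

def denoise(magnitude, motor_state):
    """Spectral subtraction of the state's ego-noise template,
    floored at zero to avoid negative magnitudes."""
    return np.maximum(magnitude - templates[motor_state], 0.0)

obs = np.array([1.0, 0.9, 0.5, 0.3])
print(denoise(obs, "moving"))  # the motor state selects the template
```

    The motor state acts as a cheap side channel: instead of estimating the noise blindly from audio alone, the actuator state indexes directly into a learned noise model.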

    Sound Representation and Classification Benchmark for Domestic Robots

    We address the problem of sound representation and classification and present results of a comparative study in the context of a domestic robotic scenario. A dataset of sounds was recorded in realistic conditions (background noise, presence of several sound sources, reverberations, etc.) using the humanoid robot NAO. An extended benchmark is carried out to test a variety of representations combined with several classifiers. We provide results obtained with the annotated dataset and assess the methods quantitatively on the basis of their classification scores, computation times and memory requirements. The annotated dataset is publicly available at https://team.inria.fr/perception/nard/.
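    The benchmark protocol described above (several classifiers compared on the same features, scored for accuracy and timing) can be sketched as follows; the synthetic features stand in for real audio representations such as MFCCs, and the two classifiers are examples, not the paper's full list.

```python
import numpy as np
from time import perf_counter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for two well-separated sound classes,
# 13-dimensional features (e.g. MFCC-like vectors).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 13)), rng.normal(3, 1, (50, 13))])
y = np.array([0] * 50 + [1] * 50)

# Fit each classifier and report accuracy plus training time.
for clf in (KNeighborsClassifier(3), SVC(kernel="rbf")):
    t0 = perf_counter()
    clf.fit(X, y)
    acc = clf.score(X, y)
    print(type(clf).__name__, round(acc, 2), f"{perf_counter() - t0:.4f}s")
```

    A real run would add cross-validation and memory measurements, which the paper also reports.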

    Design of an Integrated Selective Attention Mechanism in a Behavior-Based Architecture for Autonomous Robots

    The aging of the population worldwide leads us to seriously consider integrating service robots into our daily lives to ease caregiving demands. However, no service robot currently exists that is advanced enough to act as a true assistant to people losing their autonomy. One of the problems holding back the development of such robots is software integration: it is difficult to integrate the many perception and action capabilities needed to interact naturally and appropriately with a person in a real environment, as the limits of the computing resources available on a robotic platform are quickly reached. Although the human brain has capabilities superior to a computer's, it too has limits on its information-processing capacity. To cope with these limits, humans manage their cognitive capacities with the help of selective attention, which, for example, lets them ignore certain stimuli in order to concentrate resources on those useful to the task at hand. Since robots could greatly benefit from such a mechanism, the objective of this thesis is to develop a control architecture integrating a selective attention mechanism in order to reduce the computational load demanded by the robot's various processing modules. The control architecture used is based on the behavior-based approach and is named HBBA, for Hybrid Behavior-Based Architecture. To meet this objective, the humanoid robot IRL-1 was designed to allow the integration of multiple perception and action capabilities on a single platform, serving as an experimental platform that can benefit from selective attention mechanisms. The implemented capabilities make it possible to interact with IRL-1 through different modalities.
IRL-1 can be physically guided by perceiving external forces through the elastic actuators used to steer its omnidirectional platform. Vision, motion and audition were integrated into an augmented telepresence interface, and the influence of reaction delays to sounds in the environment could be examined. This implementation validated the use of HBBA as a working basis for the robot's decision making and explored the limits of the processing capacities of the modules on the robot. A selective attention mechanism was then developed within HBBA. This mechanism combines the activation of processing modules with perceptual filtering, i.e. the ability to modulate the quantity of stimuli used by the processing modules in order to adapt processing to the available computing resources. The results obtained demonstrate the benefits of such a mechanism in allowing the robot to optimize the use of its computing resources to satisfy its goals. This work provides a foundation on which it is now possible to continue integrating even more advanced capabilities, and thus to make effective progress toward the design of domestic robots that can assist us in our daily lives.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
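    The core of such a fuzzy appraisal can be sketched in a few lines: membership functions grade game events, and fired rules are aggregated into an emotion intensity. The membership shapes, rule choices, and the `joy` mapping below are illustrative, loosely in the spirit of FLAME rather than the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def joy(health_gain, kills):
    """Fire two illustrative rules and aggregate by max."""
    r1 = tri(health_gain, 0, 50, 100)  # "moderate health gain -> joy"
    r2 = tri(kills, 0, 3, 6)           # "a few kills -> joy"
    return max(r1, r2)

print(round(joy(50, 0), 2))  # → 1.0 (peak of the health-gain rule)
print(round(joy(0, 1), 2))   # → 0.33 (weak firing of the kills rule)
```

    Because the inputs are ordinary game-state variables, no physiological sensing is required, which is the point the paper makes.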

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and especially for the computational intelligence community. Recent theories in autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, merged with natural language understanding and refined motor controls. The work includes three studies: (1) the use of generic manipulation of objects using the NMFT algorithm, successfully testing the extension of NMFT to control robot behaviour; (2) the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive and linguistic skills through individual and social learning. The robot is able to learn to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Bayesian Microphone Array Processing (ベイズ法によるマイクロフォンアレイ処理)

    Doctoral thesis (Doctor of Informatics), Kyoto University, Graduate School of Informatics, Department of Intelligence Science and Technology. Committee: Prof. Hiroshi Okuno (chair), Prof. Tatsuya Kawahara, Assoc. Prof. Marco Cuturi Cameto, Lecturer Kazuyoshi Yoshii.

    Towards using Cough for Respiratory Disease Diagnosis by leveraging Artificial Intelligence: A Survey

    Cough acoustics contain multitudes of vital information about pathomorphological alterations in the respiratory system. Reliable and accurate detection of cough events by investigating the underlying latent cough features, and the resulting disease diagnosis, can play an indispensable role in revitalizing healthcare practices. The recent application of Artificial Intelligence (AI) and advances in ubiquitous computing for respiratory disease prediction have created an auspicious trend and a myriad of future possibilities in the medical domain. In particular, there is an expeditiously emerging trend of machine learning (ML) and deep learning (DL) based diagnostic algorithms exploiting cough signatures. The enormous body of literature on cough-based AI algorithms demonstrates that these models can play a significant role in detecting the onset of a specific respiratory disease. However, it is pertinent to collect the information from all relevant studies in an exhaustive manner so that medical experts and AI scientists can analyze the decisive role of AI/ML. This survey offers a comprehensive overview of cough data-driven ML/DL detection and preliminary diagnosis frameworks, along with a detailed list of significant features. We investigate the mechanism that causes cough and the latent cough features of the respiratory modalities. We also analyze customized cough monitoring applications and their AI-powered recognition algorithms. Challenges and prospective future research directions to develop practical, robust, and ubiquitous solutions are also discussed in detail. (30 pages, 12 figures, 9 tables)
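    The basic shape of the cough-signature classifiers surveyed above can be sketched as a supervised binary model over acoustic feature vectors. The synthetic features and the choice of logistic regression below are illustrative stand-ins; real systems in the survey use MFCCs and spectral/temporal cough features with a range of ML/DL models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for cough vs. non-cough acoustic feature vectors
# (8-dimensional, two well-separated clusters).
rng = np.random.default_rng(1)
cough = rng.normal(2.0, 0.5, (40, 8))
other = rng.normal(0.0, 0.5, (40, 8))
X = np.vstack([cough, other])
y = np.array([1] * 40 + [0] * 40)

# Train a binary detector: 1 = cough event, 0 = other sound.
clf = LogisticRegression().fit(X, y)
print("train accuracy:", clf.score(X, y))
```

    Disease-level diagnosis then builds a second classification stage on top of the detected cough segments.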

    Life patterns : structure from wearable sensors

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2003. Includes bibliographical references (leaves 123-129). By Brian Patrick Clarkson.
    In this thesis I develop and evaluate computational methods for extracting life's patterns from wearable sensor data. Life patterns are the recurring events in daily behavior, such as those induced by the regular cycle of night and day, weekdays and weekends, work and play, eating and sleeping. My hypothesis is that since a "raw, low-level" wearable sensor stream is intimately connected to the individual's life, it provides the means to directly match similar events, statistically model habitual behavior, and highlight hidden structures in a corpus of recorded memories. I approach the problem of computationally modeling daily human experience as a task of statistical data mining, similar to the earlier efforts of speech researchers searching for the building blocks that were believed to make up speech. First we find the atomic, immutable events that mark the succession of our daily activities. These are like the "phonemes" of our lives, though they don't necessarily take on the same finite and discrete nature. Since our activities and behaviors operate at multiple time-scales from seconds to weeks, we look at how these events combine into sequences, and then sequences of sequences, and so on. These are the words, sentences and grammars of an individual's daily experience. I have collected 100 days of wearable sensor data from an individual's life. I show through quantitative experiments that clustering, classification, and prediction are feasible on a data set of this nature. I give methods and results for determining the similarity between memories recorded at different moments in time, which allow me to associate almost every moment of an individual's life to another similar moment. I present models that accurately and automatically classify the sensor data into location and activity. Finally, I show how to use the redundancies in an individual's life to predict his actions from his past behavior.
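    The prediction-from-redundancy idea in this abstract can be sketched, under simplifying assumptions, as a first-order Markov model over discretized activity labels: the next state is predicted as the most frequent successor of the current one. The tiny activity sequence below is illustrative, not the thesis's actual 100-day dataset.

```python
from collections import Counter, defaultdict

# Toy daily-activity sequence standing in for discretized sensor data.
days = ["home", "work", "gym", "home", "work", "gym", "home", "work"]

# Count first-order transitions between consecutive states.
trans = defaultdict(Counter)
for cur, nxt in zip(days, days[1:]):
    trans[cur][nxt] += 1

def predict(state):
    """Predict the most frequent successor of the given state."""
    return trans[state].most_common(1)[0][0]

print(predict("home"))  # → "work" ("work" always follows "home" above)
print(predict("work"))  # → "gym"
```

    The thesis goes further by composing such events hierarchically (sequences of sequences), but the redundancy exploited is the same.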