29 research outputs found

    User-friendly robot environment for creation of social scenarios

    No full text
    This paper proposes a user-friendly framework that lets users with minimal programming knowledge design robot behaviors. It is a step towards an end-user platform for domain specialists who create social scenarios, i.e. scenarios that do not require high-precision movement but do require frequent redesign of the robot's behavior. A handshaking experiment shows how convincingly robot behavior can be constructed in this framework.

    Humanoid robots are retrieving emotion from motion analysis

    Get PDF
    This paper presents a real-time hand-waving application built on a parallel framework. Analysis of 15 video fragments demonstrates that acceleration and frequency are relevant parameters for emotion classification of hand waving. The solution will be used in human-robot interaction aimed at training social behavioral skills in autistic children in a natural environment.
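    The abstract above identifies acceleration and frequency as the relevant motion features. A minimal illustrative sketch of how such features might feed a coarse emotion classifier follows; the function name, labels, and thresholds are hypothetical, not taken from the paper.

    ```python
    # Hypothetical sketch: map two motion features of a waving hand,
    # frequency (Hz) and peak acceleration (m/s^2), to a coarse emotion
    # label. Thresholds are illustrative placeholders only.

    def classify_waving(frequency_hz: float, peak_accel: float) -> str:
        """Classify a waving motion from its frequency and peak acceleration."""
        if frequency_hz > 2.5 and peak_accel > 8.0:
            return "excited"   # fast, forceful waving
        if frequency_hz < 1.0 and peak_accel < 3.0:
            return "calm"      # slow, gentle waving
        return "neutral"

    print(classify_waving(3.0, 10.0))  # fast, forceful waving
    print(classify_waving(0.5, 2.0))   # slow, gentle waving
    ```

    A real system would extract these features from video tracking, as the paper's parallel framework does in real time.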

    Interplay between natural and artificial intelligence in training autistic children with robots

    No full text
    The need to understand and model human-like behavior and intelligence has been embraced by a multidisciplinary community for several decades. Success so far has been shown in solutions for a concrete task or competence, and these solutions are seldom a truly multidisciplinary effort. In this paper we analyze the needs and opportunities for combining artificial intelligence and bio-inspired computation within an application domain that provides a cluster of solutions instead of searching for a solution to a single task. We analyze applications that train children with autism spectrum disorder (ASD) with a humanoid robot, because such applications must include multidisciplinary effort and, at the same time, there is a clear need for better models of human-like behavior that can be tested in real-life scenarios through these robots. We designed, implemented, and carried out three robot interventions based on applied behavior analysis (ABA). All interventions aim to promote self-initiated social behavior in children with ASD. We found that standardizing the robot training scenarios and using unified robot platforms can enable the integration of multiple intelligent and bio-inspired algorithms for the creation of tailored, domain-specific robot skills and competencies. This approach might set a new trend in how artificial and bio-inspired robot applications develop. We suggest that social computing techniques are a pragmatic route to standardized training scenarios and can therefore enable the replacement of perceivably intelligent robot behaviors with truly intelligent ones.

    The Web as an Autobiographical Agent

    No full text
    Abstract. A reward-based autobiographical-memory approach has been applied to a Web search agent. The approach rests on an analogy between exploring the Web and a robot exploring its environment, and branches off from a method currently being developed for autonomous agents that learn novel environments and consolidate the learned information for efficient further use. The paper describes a model of an agent with "autobiographical memories", inspired by studies on the neurobiology of human memory, experiments on search-path categorisation by the model, and its application to Web agent design.

    Active audition for humanoid

    No full text
    In this paper, we present an active audition system for the humanoid robot "SIG the humanoid". The audition system of a highly intelligent humanoid requires localization of sound sources and identification of the meanings of sounds in the auditory scene. The active audition reported in this paper focuses on improved sound-source tracking by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization, aligning its microphones orthogonal to the sound source and capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise, which the system must adaptively cancel using the motor control signals. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, enables sound-source tracking in a variety of conditions.
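    The abstract above describes turning the head so the microphones sit orthogonal to the sound source, i.e. facing it, where localization is most accurate. A minimal geometric sketch of that alignment step follows; it is an illustration of the idea, not the SIG implementation, and the function name is hypothetical.

    ```python
    # Illustrative sketch: given a sound source's estimated azimuth relative
    # to the current head orientation (degrees, positive = to the right),
    # compute the signed head rotation that brings the source to azimuth 0,
    # placing it orthogonal to the interaural (microphone) axis.

    def head_turn_to_face(source_azimuth_deg: float) -> float:
        """Return the signed head rotation (degrees) that faces the source."""
        # Normalize into (-180, 180] so the head takes the shorter way around.
        turn = source_azimuth_deg % 360.0
        if turn > 180.0:
            turn -= 360.0
        return turn

    print(head_turn_to_face(30.0))   # source slightly right: turn right 30°
    print(head_turn_to_face(350.0))  # source slightly left: turn left 10°
    ```

    In the paper's system this motion itself generates motor noise, which is why the cancellation step using motor control signals is needed alongside the alignment.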