2 research outputs found

    Human action recognition oriented to humanoid robots action reproduction

    Our research aims at providing a humanoid robot with the ability to observe, learn, and reproduce actions performed by humans in order to acquire new skills. In other words, we want to apply artificial intelligence techniques to automatically recognize a human activity so that a humanoid robot is able to reproduce it. The system must not only distinguish between different actions, but also represent them in a form that allows the robot to reproduce the motion trajectories shown by the demonstrator and to learn new skills. Since the final system is going to be integrated into an autonomous humanoid robot (specifically the Aldebaran NAO), we are working with an RGB-D sensor (Microsoft Kinect) that can easily be used with it. This objective also imposes strict real-time constraints on the action recognition algorithm: we have opted for a probabilistic approach that offers good modeling and fast recognition performance.
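
    The abstract does not spell out the probabilistic model, so the Python snippet below is a minimal illustrative sketch only, not the authors' actual method: it classifies a clip of skeleton features by comparing per-class Gaussian log-likelihoods. The feature layout, action labels, and all names in the code are hypothetical.

    import numpy as np

    # Hypothetical feature layout: each frame is a vector of 3D joint coordinates
    # (e.g. 15 joints * 3 = 45 values) from an RGB-D skeleton tracker.
    class GaussianActionModel:
        """One diagonal Gaussian per action class over per-frame skeleton features."""
        def fit(self, frames):
            X = np.asarray(frames, dtype=float)   # shape (n_frames, n_features)
            self.mean = X.mean(axis=0)
            self.var = X.var(axis=0) + 1e-6       # avoid zero variance
            return self

        def log_likelihood(self, frames):
            X = np.asarray(frames, dtype=float)
            ll = -0.5 * (np.log(2 * np.pi * self.var) + (X - self.mean) ** 2 / self.var)
            return ll.sum()                       # sum over frames and features

    def classify(sequence, models):
        """Return the action label whose model best explains the observed clip."""
        return max(models, key=lambda label: models[label].log_likelihood(sequence))

    # Toy usage with synthetic data standing in for two recorded actions.
    rng = np.random.default_rng(0)
    train = {"wave": rng.normal(0.0, 1.0, size=(200, 45)),
             "point": rng.normal(2.0, 1.0, size=(200, 45))}
    models = {label: GaussianActionModel().fit(data) for label, data in train.items()}
    test_clip = rng.normal(2.0, 1.0, size=(30, 45))   # a short observed clip
    print(classify(test_clip, models))                # expected: "point"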

    Robot Learning by observing human actions

    Nowadays, robots are entering our lives: one can see them in industry, in offices, and even in homes. The more robots come into contact with people, the more demand grows for new capabilities and features that make robots able to act when needed, help humans, or act as companions. It therefore becomes essential to have a quick and easy way to teach robots new skills. That is the aim of Robot Learning from Demonstration, a paradigm that allows new tasks to be programmed into a robot directly through demonstrations. This thesis proposes a novel approach to Robot Learning from Demonstration able to learn new skills from natural demonstrations carried out by naive users. To this end, we introduce a novel Robot Learning from Demonstration framework with new approaches in all of its functional sub-units: from data acquisition to motion elaboration, from information modeling to robot control. A novel method is described for extracting 3D motion flow information from both RGB and depth data acquired with recently introduced consumer RGB-D cameras. The motion data are computed over time to recognize and classify human actions. We also describe new techniques to remap human motion onto robotic joints. Our methods allow people to interact naturally with robots by re-targeting whole-body movements in an intuitive way. We develop algorithms for both humanoid and manipulator motion and test them in different situations. Finally, we improve the modeling techniques by using a probabilistic method, the Donut Mixture Model, which is able to manage the several interpretations that different people can produce while performing a task. The estimated model can also be updated directly using new attempts carried out by the robot, a feature that is very important for rapidly obtaining correct robot trajectories from only a few human demonstrations. A further contribution of this thesis is the creation of a number of new virtual models of the different robots we used to test our algorithms. All the developed models are ROS-compliant, so they can be used to foster research in the field by the whole community of this widespread robotics framework. Moreover, a new 3D dataset has been collected to compare different action recognition algorithms; it contains both RGB-D information coming directly from the sensor and skeleton data provided by a skeleton tracker.
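
    The abstract mentions remapping human motion to robot joints but gives no equations; the sketch below is a rough illustration only, assuming joint angles have already been estimated from the RGB-D skeleton, and it uses hypothetical joint names and limits rather than any real robot's values or the thesis' actual retargeting method.

    import numpy as np

    # Hypothetical joint limits (radians), loosely inspired by a small humanoid arm;
    # real robot limits and the thesis' remapping rules may differ.
    ROBOT_LIMITS = {
        "shoulder_pitch": (-2.0, 2.0),
        "shoulder_roll":  (-0.3, 1.3),
        "elbow_yaw":      (-2.0, 2.0),
        "elbow_roll":     ( 0.0, 1.5),
    }

    def retarget(human_angles):
        """Map human joint angles (radians) onto the robot by clamping to its limits."""
        return {joint: float(np.clip(angle, *ROBOT_LIMITS[joint]))
                for joint, angle in human_angles.items()}

    # Usage: one arm pose estimated from a demonstrator's skeleton.
    human_pose = {"shoulder_pitch": 1.1, "shoulder_roll": 1.6,
                  "elbow_yaw": -0.4, "elbow_roll": 2.1}
    print(retarget(human_pose))
    # shoulder_roll and elbow_roll are clamped to the robot's reachable range.

    A complete retargeting pipeline would also have to handle differences in limb proportions and in velocity limits, which this sketch deliberately omits.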