17 research outputs found

    Peculiarities of the Electronic Educational and Methodological Complex under the Point-rating Technology

    At present, electronic educational and methodological complexes (EMC) occupy an important place in e-learning, in light of the tasks set by the Bologna Declaration. The authors put forward new EMC components needed to organize the educational process and to determine labour intensity by module and by type of student activity. They also suggest a technology for determining the rating of the grade in the credit-module system.
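A point-rating grade of this kind can be sketched as a labour-intensity-weighted average over modules. The module names, point values, and credit weights below are illustrative assumptions, not figures from the paper:

```python
# Hypothetical sketch of a point-rating computation in a credit-module
# system: each module's score (0-100) is weighted by its labour
# intensity expressed in credit units, yielding a final rating.
def rating_grade(module_scores, module_credits):
    """Weighted rating over modules; credits encode labour intensity."""
    total_credits = sum(module_credits.values())
    return sum(module_scores[m] * module_credits[m]
               for m in module_scores) / total_credits

scores = {"lectures": 80, "labs": 90, "exam": 70}   # points per activity type (assumed)
credits = {"lectures": 2, "labs": 3, "exam": 5}     # labour intensity per module (assumed)
print(rating_grade(scores, credits))  # 78.0
```

The weighting ensures that activity types with higher labour intensity contribute proportionally more to the final rating.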

    Developmental Learning for Object Perception

    The goal of this work is to design a visual system for a humanoid robot. Taking inspiration from children's perception and following the principles of developmental robotics, the robot should detect and learn objects from interactions with people and from its own experiments.

    From passive to interactive object learning and recognition through self-identification on a humanoid robot

    Service robots, working in evolving human environments, need the ability to continuously learn to recognize new objects. Ideally, they should act as humans do, by observing their environment and interacting with objects, without specific supervision. Taking inspiration from infant development, we propose a developmental approach that enables a robot to progressively learn object appearances in a social environment: first through observation only, then through active object manipulation. We focus on incremental, continuous, and unsupervised learning that does not require prior knowledge about the environment or the robot. In the first phase, we analyse the visual space and detect proto-objects as units of attention that are learned and recognized as possible physical entities. The appearance of each entity is represented as a multi-view model based on complementary visual features. In the second phase, entities are classified into three categories: parts of the body of the robot, parts of a human partner, and manipulable objects. The categorization approach is based on mutual information between the visual and proprioceptive data, and on the motion behaviour of entities. The ability to categorize entities is then used during interactive object exploration to improve the previously acquired object models. The proposed system is implemented and evaluated with an iCub and a Meka robot learning 20 objects. The system is able to recognize objects with 88.5% success and to create coherent representation models that are further improved by interactive learning.
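The mutual-information criterion can be illustrated with a minimal sketch: an entity whose visual motion is strongly informative about the robot's proprioception is likely part of the robot's own body. The binarized signals and the decision threshold here are assumptions for the example, not the authors' implementation:

```python
# Illustrative: categorize a tracked entity by the mutual information
# between its visual motion and the robot's proprioceptive motion.
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (in bits) between two discretized signals of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# 1 = "moving", 0 = "still", sampled per frame (toy data)
joint_motion  = [0, 1, 1, 0, 1, 0, 1, 1]   # proprioception: arm commanded to move
entity_motion = [0, 1, 1, 0, 1, 0, 1, 1]   # visual motion of the tracked entity
if mutual_information(entity_motion, joint_motion) > 0.5:   # assumed threshold
    print("likely a robot body part")
```

An entity that moves independently of the joints (a human partner, or an unmanipulated object) would score near zero MI and fall into the other categories.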

    Object learning through active exploration

    This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture in which the visual representation of objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture and its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.

    Learning to recognize objects through curiosity-driven manipulation with the iCub humanoid robot

    In this paper we address the problem of learning to recognize objects by manipulation in a developmental robotics scenario. From a life-long learning perspective, a humanoid robot should be capable of improving its knowledge of objects through active perception. Our approach stems from the cognitive development of infants, exploiting active curiosity-driven manipulation to improve perceptual learning of objects. These functionalities are implemented as perception, control, and active exploration modules as part of the Cognitive Architecture of the MACSi project. We integrate these functionalities into an active perception system that learns to recognize objects through manipulation, combining a bottom-up vision system, a control system for a complex robot, and a top-down interactive exploration method that actively chooses an exploration strategy for collecting data and decides whether interacting with humans is profitable or not. Experimental results show that the humanoid robot iCub can learn to recognize 3D objects by manipulation and in interaction with teachers, by choosing the adequate exploration strategy to enhance competence progress and by focusing its efforts on the most complex tasks. Thus the learner can learn interactively with humans by actively self-regulating its requests for help.
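Competence-progress-driven strategy selection can be sketched as follows: the learner prefers the exploration strategy whose recognition competence has recently improved the most. The window size, strategy names, and competence values are assumptions for illustration, not the MACSi implementation:

```python
# Minimal sketch of curiosity-driven exploration-strategy selection
# based on competence progress: pick the strategy (e.g. manipulating
# alone vs. asking a human) with the largest recent improvement.
from collections import deque

class StrategySelector:
    def __init__(self, strategies, window=4):
        # keep a short history of competence (e.g. recognition accuracy)
        self.history = {s: deque(maxlen=window) for s in strategies}

    def record(self, strategy, competence):
        self.history[strategy].append(competence)

    def progress(self, strategy):
        h = self.history[strategy]
        return h[-1] - h[0] if len(h) >= 2 else float("inf")  # try untested first

    def choose(self):
        return max(self.history, key=self.progress)

sel = StrategySelector(["manipulate", "ask_human"])
for c in (0.40, 0.55, 0.70):
    sel.record("manipulate", c)      # competence rising quickly
for c in (0.60, 0.62, 0.63):
    sel.record("ask_human", c)       # competence nearly plateaued
print(sel.choose())  # manipulate
```

Selecting by progress rather than by absolute competence keeps the learner focused on tasks where learning is currently happening, and lets it fall back on human help when its own progress stalls.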

    Approche développementale de la perception pour un robot humanoïde

    No full text
    Future service robots will need the ability to work in unpredictable human environments. These robots should be able to learn autonomously, without constant supervision, in order to adapt to the environment, different users, and changing circumstances. Exploration of unstructured environments requires continuous detection of new objects and learning about them, ideally like a child, through curiosity-driven interactive exploration. Our research aims to design a developmental approach that enables a humanoid robot to perceive its close environment. We take inspiration from human perception in terms of its functionalities and from infant development in terms of the way of learning, and we propose an approach that enables a humanoid robot to explore its environment progressively, like a child, through physical actions and social interaction. Following the principles of developmental robotics, we focus on incremental, continuous, and autonomous learning that does not require prior knowledge about the environment or the robot. The perceptual system starts from segmentation of the visual space into proto-objects as units of attention. The appearance of each proto-object is characterized by complementary low-level features based on color and texture. These low-level features are integrated into more complex features and then into a multi-view model that is learned incrementally and associated with one physical entity. Entities are then classified into three categories: parts of the robot's body, human parts, and manipulable objects. The categorization approach is based on mutual information between the sensory data and proprioception, and also on the motion behavior of physical entities. Once the robot is able to categorize entities, it focuses on interactive object exploration. During interaction, the information acquired about an object's appearance is integrated into its model. Thus, interactive learning enhances the knowledge about object appearances and improves the informativeness of object models. The implemented active perceptual system is evaluated on an iCub humanoid robot, learning 20 objects through interaction with a human partner and the robot's own actions. Our system is able to recognize objects with 88.5% success and to create coherent representation models that are further improved by interactive learning.

    Developmental Approach for Interactive Object Discovery

    We present a visual system for a humanoid robot that supports efficient online learning and recognition of various elements of the environment. Taking inspiration from children's perception and following the principles of developmental robotics, our algorithm requires no image databases, predefined objects, or face/skin detectors. The robot explores the visual space through interactions with people and its own experiments. Object detection is based on the hypothesis of coherent motion and appearance during manipulation. A hierarchical object representation is constructed from SURF points and the color of superpixels, which are grouped into local geometric structures and form the basis of a multiple-view object model. The learning algorithm accumulates the statistics of feature occurrences and identifies objects using a maximum-likelihood approach and temporal coherency. The proposed visual system is implemented on the iCub robot and shows an average recognition rate of 0.85 for 10 objects after 30 minutes of interaction.
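Maximum-likelihood recognition from accumulated feature-occurrence statistics can be sketched as follows. Each object model counts how often each quantized feature (e.g. a SURF visual word or a superpixel color) was observed; a new view is labelled with the model under which its features are most likely. The feature vocabulary and the additive smoothing are assumptions for illustration, not the paper's exact formulation:

```python
# Hedged sketch: per-object feature-occurrence counts, recognition by
# maximum likelihood with additive (Laplace) smoothing.
import math
from collections import Counter

class FeatureModel:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def learn(self, features):            # accumulate occurrence statistics
        self.counts.update(features)
        self.total += len(features)

    def log_likelihood(self, features, vocab_size, alpha=1.0):
        # smoothed log P(features | object), assuming feature independence
        return sum(math.log((self.counts[f] + alpha) /
                            (self.total + alpha * vocab_size))
                   for f in features)

models = {"ball": FeatureModel(), "cup": FeatureModel()}
models["ball"].learn(["red", "round", "round", "shiny"])   # toy quantized features
models["cup"].learn(["white", "handle", "round"])

view = ["red", "round"]                    # features extracted from a new view
best = max(models, key=lambda o: models[o].log_likelihood(view, vocab_size=6))
print(best)  # ball
```

Temporal coherency, as mentioned in the abstract, would additionally smooth the decision over consecutive frames rather than labelling each view independently.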

    Improving object learning through manipulation and robot self-identification

    We present a developmental approach that allows a humanoid robot to continuously and incrementally learn entities through interaction with a human partner in a first stage, before categorizing these entities into objects, humans, or robot parts and using this knowledge to improve object models by manipulation in a second stage. This approach does not require prior knowledge about the appearance of the robot, the human, or the objects. The proposed perceptual system segments the visual space into proto-objects, analyses their appearance, and associates them with physical entities. Entities are then classified based on mutual information with proprioception and on motion statistics. The ability to discriminate between the robot's parts and a manipulated object then makes it possible to update the object model with newly observed object views during manipulation. We evaluate our system on an iCub robot, showing that the self-identification method is independent of the appearance of the robot's hands by having the robot wear differently colored gloves. Interactive object learning using self-identification improves object recognition accuracy with respect to learning through observation only.
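A minimal sketch (an assumption for illustration, not the paper's code) of why self-identification helps interactive learning: once features belonging to the robot's own hand can be identified, they are excluded before a manipulation view is merged into the object's appearance model, so the model is not polluted by the hand or glove:

```python
# Illustrative: merge a manipulation view into an object model,
# discarding features attributed to the robot's hand by the
# self-identification step.
def update_object_model(model, view_features, hand_features):
    """Append the view's features to the model, minus hand features."""
    clean = [f for f in view_features if f not in hand_features]
    model.extend(clean)
    return model

model = ["red", "round"]                           # appearance model so far
view = ["red", "round", "shiny", "glove_blue"]     # view observed during manipulation
hand = {"glove_blue"}                              # features identified as the hand
print(update_object_model(model, view, hand))
```

Because the hand is identified by behavior (mutual information with proprioception) rather than by a fixed appearance, the filtering still works when the glove color changes.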

    Mind the regularized GAP, for human action classification and semi-supervised localization based on visual saliency

    This work addresses the issue of image classification and localization of human actions based on visual data acquired from RGB sensors. Our approach is inspired by the success of deep learning in image classification. In this paper, we describe our method and how the concept of Global Average Pooling (GAP) applies in the context of semi-supervised class localization. We benchmark it against Class Activation Mapping as introduced in (Zhou et al., 2016), propose a regularization over the GAP maps to enhance the results, and study whether a combination of these two ideas can yield better classification accuracy. The models are trained and tested on the Stanford 40 Action dataset (Yao et al., 2011), depicting people performing 40 different actions such as drinking, cooking, or watching TV. Compared to the aforementioned baseline, our model improves classification accuracy by 5.3 percentage points, achieves a localization accuracy of 50.3%, and drastically diminishes the computation needed to retrieve the class saliency from the base convolutional model.
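The GAP/CAM mechanism the abstract builds on can be sketched in a few lines: the class score is the dot product of the classifier weights with the globally averaged feature maps, and the same weights re-projected onto the spatial maps give a saliency map that localizes the class. Shapes and values below are toy assumptions, not the Stanford 40 setup:

```python
# Minimal sketch of Class Activation Mapping with Global Average
# Pooling (after Zhou et al., 2016), on random stand-in feature maps.
import numpy as np

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))        # C=8 feature maps of size 7x7 (last conv layer)
W = rng.random((40, 8))              # linear classifier: 40 action classes x C

gap = fmaps.mean(axis=(1, 2))        # Global Average Pooling: one value per map
scores = W @ gap                     # class scores (pre-softmax)
cls = int(scores.argmax())

# Class Activation Map: weight each spatial map by the winning class's weights
cam = np.tensordot(W[cls], fmaps, axes=1)        # 7x7 saliency map
y, x = np.unravel_index(cam.argmax(), cam.shape)
print(f"class {cls}, most salient cell at ({y}, {x})")
```

Note the identity that makes CAM cheap: since GAP and the linear layer commute, the mean of the CAM equals the class score, so localization reuses the classifier's weights with no extra training.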