4 research outputs found

    From passive to interactive object learning and recognition through self-identification on a humanoid robot

    Service robots, working in evolving human environments, need the ability to continuously learn to recognize new objects. Ideally, they should act as humans do, observing their environment and interacting with objects without specific supervision. Taking inspiration from infant development, we propose a developmental approach that enables a robot to progressively learn object appearances in a social environment: first only through observation, then through active object manipulation. We focus on incremental, continuous, and unsupervised learning that does not require prior knowledge about the environment or the robot. In the first phase, we analyse the visual space and detect proto-objects as units of attention that are learned and recognized as possible physical entities. The appearance of each entity is represented as a multi-view model based on complementary visual features. In the second phase, entities are classified into three categories: parts of the robot's body, parts of a human partner, and manipulable objects. The categorization approach is based on mutual information between the visual and proprioceptive data, and on the motion behaviour of entities. The ability to categorize entities is then used during interactive object exploration to improve the previously acquired object models. The proposed system is implemented and evaluated with an iCub and a Meka robot learning 20 objects. The system recognizes objects with 88.5% success and creates coherent representation models that are further improved by interactive learning.
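    The abstract's self-identification step hinges on mutual information between what the robot sees an entity doing and what it feels itself doing. The sketch below illustrates that idea only in outline; the function names, thresholds, and three-way rule are assumptions for illustration, not the paper's actual method or API.

    ```python
    # Hedged sketch: labelling a tracked entity by the mutual information (MI)
    # between its visual motion and the robot's proprioception. High MI suggests
    # the entity is part of the robot's own body. All names and thresholds here
    # are illustrative assumptions.
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def discretize(signal, bins=8):
        """Bin a continuous 1-D signal so MI can be estimated from counts."""
        edges = np.histogram_bin_edges(signal, bins=bins)
        return np.digitize(signal, edges[1:-1])

    def categorize_entity(entity_motion, joint_velocities, mi_threshold=0.5):
        """Classify a proto-object from per-frame signals.

        entity_motion    : array of image-plane speeds of the tracked entity
        joint_velocities : array of joint-velocity norms over the same frames
        """
        mi = mutual_info_score(discretize(entity_motion),
                               discretize(joint_velocities))
        if mi > mi_threshold:
            return "robot_body"            # motion explained by the robot's own commands
        elif np.mean(entity_motion) > 0.1: # moves, but independently of the robot
            return "human_partner"
        return "manipulable_object"        # mostly static until acted upon
    ```

    In practice the decision would be accumulated over many observations rather than a single window, but the sketch shows why correlated visual and proprioceptive streams single out the robot's own body parts.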

    Sensitive manipulation

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 161-172). This thesis presents an effective alternative to the traditional approach to robotic manipulation. In our approach, manipulation is guided mainly by tactile feedback rather than vision. The motivation comes from the fact that manipulating an object implies coming into contact with it; consequently, directly sensing physical contact seems more important than vision for controlling the interaction between the object and the robot. In this work, the traditional approach of a highly precise arm and vision system controlled by a model-based architecture is replaced by one that uses a low-mechanical-impedance arm with dense tactile sensing and exploration capabilities, run by a behavior-based architecture. The robot OBRERO has been built to implement this approach. New tactile sensing technology has been developed and mounted on the robot's hand. These sensors are biologically inspired and offer features more adequate for manipulation than those of state-of-the-art tactile sensors. The robot's limb was built with compliant actuators, which present low mechanical impedance, to make the interaction between the robot and the environment safer than with a traditional high-stiffness arm. A new actuator was created to fit within the hand's size constraints. The reduced precision of OBRERO's limb is compensated by the exploration capability provided by the tactile sensors, the actuators, and the software architecture. The success of this approach is shown by picking up objects in an unmodelled environment. This task, simple for humans, has been a challenge for robots. The robot can deal with new, unmodelled objects. OBRERO can gently come into contact with, explore, lift, and place an object in a different location. It can also detect slippage and external forces acting on an object while it is held. Each of these steps is done using tactile feedback. This task can be done with very light objects, with no fixtures, and on slippery surfaces. By Eduardo Rafael Torres Jara. Ph.D.
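    The abstract describes contact and slip detection from dense tactile sensing as the backbone of the manipulation behaviors. The following is a minimal sketch of that kind of tactile monitor, assuming normalized fingertip pressure readings; the thresholds, class name, and data source are illustrative assumptions, not OBRERO's actual software.

    ```python
    # Hedged sketch of a tactile contact / slip monitor over an array of
    # fingertip (taxel) pressure readings. Thresholds and the source of the
    # readings are assumptions for illustration only.
    import numpy as np
    from collections import deque

    CONTACT_THRESHOLD = 0.05   # normalized total pressure indicating contact
    SLIP_STD_THRESHOLD = 0.02  # rapid force fluctuation suggesting micro-slips

    class TactileMonitor:
        def __init__(self, window=20):
            self.history = deque(maxlen=window)  # recent total-pressure samples

        def update(self, pressures):
            """pressures: 1-D array of taxel readings from one fingertip."""
            total = float(np.sum(pressures))
            self.history.append(total)
            in_contact = total > CONTACT_THRESHOLD
            # Slip shows up as rapid fluctuation of contact force while holding.
            slipping = (in_contact
                        and len(self.history) == self.history.maxlen
                        and np.std(self.history) > SLIP_STD_THRESHOLD)
            return in_contact, slipping
    ```

    A behavior-based controller could consume these two flags directly, e.g. tightening the grasp while `slipping` is reported and backing off once stable contact is restored.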

    Semantische Objektmodellierung mittels multimodaler Interaktion

    A concept for interactive semantic object modelling is proposed. The flexible and extensible object representation enables the modelling of functional and semantic object information by representing properties that map human concepts and categories and that link objects to actions and to sensorially perceivable attributes. The interactive modelling system allows the intuitive creation of semantic object models.
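    To make the described representation concrete, here is a minimal sketch of an extensible object model that links human-level categories, sensor-observable attributes, and applicable actions. The field names and example values are assumptions for illustration, not the thesis's actual schema.

    ```python
    # Hedged sketch of an extensible semantic object representation.
    from dataclasses import dataclass, field

    @dataclass
    class SemanticObjectModel:
        name: str
        categories: set[str] = field(default_factory=set)            # human concepts, e.g. "mug", "container"
        attributes: dict[str, object] = field(default_factory=dict)  # sensor-observable properties
        actions: set[str] = field(default_factory=set)               # actions the object affords

        def supports(self, action: str) -> bool:
            """Check whether an action is linked to this object."""
            return action in self.actions

    # Example: a model assembled interactively from observation and dialogue.
    mug = SemanticObjectModel(
        name="mug_01",
        categories={"mug", "container"},
        attributes={"color": "blue", "filled": False},
        actions={"grasp", "fill", "pour"},
    )
    assert mug.supports("pour")
    ```

    The point of such a structure is that new categories, attributes, and actions can be added incrementally during interaction without changing the representation itself.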

    Memory-Based Active Visual Search for Humanoid Robots
