
    Self-supervised learning of depth-based navigation affordances from haptic cues

    This paper presents a ground vehicle capable of exploiting haptic cues to learn navigation affordances from depth cues. A simple pan-tilt telescopic antenna and a Kinect sensor, both fitted to the robot’s body frame, provide the required haptic and depth sensory feedback, respectively. With the antenna, the robot determines whether an object is traversable. The interaction outcome is then associated with the object’s depth-based descriptor. Later on, the robot uses this acquired knowledge to predict whether a newly observed object is traversable just by inspecting its depth-based appearance. A set of field trials shows the robot’s ability to progressively learn which elements of the environment are traversable.
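
    The abstract describes a self-supervised pipeline: the antenna probe provides a traversability label, the label is attached to the object’s depth-based descriptor, and the resulting examples are used to predict traversability of new objects from depth alone. The sketch below illustrates one way such a pipeline could be wired up; the histogram-style descriptor, the nearest-neighbour classifier, and all names are assumptions for illustration, not the paper’s actual method.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier


    def depth_descriptor(depth_patch: np.ndarray) -> np.ndarray:
        """Hypothetical depth-based descriptor: a normalized histogram of the
        object's depth values plus simple range statistics. The descriptor
        actually used in the paper is not specified in this abstract."""
        depths = depth_patch[np.isfinite(depth_patch)]
        hist, _ = np.histogram(depths, bins=16, range=(0.0, 4.0), density=True)
        stats = np.array([depths.mean(), depths.std(), depths.max() - depths.min()])
        return np.concatenate([hist, stats])


    class TraversabilityLearner:
        """Associates haptic interaction outcomes with depth descriptors and
        predicts traversability of newly observed objects from depth alone."""

        def __init__(self) -> None:
            self.descriptors = []   # depth-based appearance of probed objects
            self.outcomes = []      # 1 = traversable, 0 = not traversable
            self.classifier = KNeighborsClassifier(n_neighbors=1)

        def record_interaction(self, depth_patch: np.ndarray, traversable: bool) -> None:
            """Store the haptic outcome (from the antenna probe) as a
            self-supervised label for the object's depth descriptor."""
            self.descriptors.append(depth_descriptor(depth_patch))
            self.outcomes.append(int(traversable))
            if len(set(self.outcomes)) > 1:  # need both classes before fitting
                self.classifier.fit(np.vstack(self.descriptors), self.outcomes)

        def predict_traversable(self, depth_patch: np.ndarray) -> bool:
            """Predict traversability of a new object from depth appearance only
            (valid once at least one example of each outcome has been recorded)."""
            feature = depth_descriptor(depth_patch).reshape(1, -1)
            return bool(self.classifier.predict(feature)[0])
    ```
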