4 research outputs found

    Temporal accumulation of oriented visual features

    In this paper we present a framework for on-line accumulation of a model of a moving object (e.g., when manipulated by a robot). The proposed scheme is based on Bayesian filtering of local features, jointly filtering position, orientation, and appearance information. The work presented here is novel in two aspects: first, we use an estimation mechanism that iteratively updates not only geometrical information but also appearance information. Second, we propose a probabilistic version of the classical n-scan criterion that selects which features are preserved and which are discarded, while making use of the available uncertainty model. The accumulated representations have been used in three different contexts: pose estimation, robotic grasping, and a driver-assistance scenario.
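The joint filtering of geometry and appearance described above can be sketched in a few lines. This is not the paper's actual filter; it is a minimal illustration assuming independent per-dimension Gaussian estimates, precision-weighted fusion on each new observation, and a hypothetical `prune` function standing in for the probabilistic n-scan criterion (the thresholds `min_hits` and `max_var` are illustrative):

```python
import numpy as np

def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Precision-weighted fusion of two independent Gaussian estimates."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

class AccumulatedFeature:
    """One local feature whose geometry and appearance are refined over time."""

    def __init__(self, position, appearance, var=1.0):
        self.position = np.asarray(position, dtype=float)
        self.appearance = np.asarray(appearance, dtype=float)
        self.var_pos = float(var)   # positional uncertainty
        self.var_app = float(var)   # appearance uncertainty
        self.hits = 0               # confirmations, for n-scan-style pruning

    def update(self, obs_position, obs_appearance, obs_var=1.0):
        # Update geometry *and* appearance from a new observation.
        self.position, self.var_pos = fuse_gaussian(
            self.position, self.var_pos,
            np.asarray(obs_position, dtype=float), obs_var)
        self.appearance, self.var_app = fuse_gaussian(
            self.appearance, self.var_app,
            np.asarray(obs_appearance, dtype=float), obs_var)
        self.hits += 1

def prune(features, min_hits=3, max_var=0.5):
    """Uncertainty-aware stand-in for the n-scan test: keep features that
    were confirmed often enough and whose uncertainty has collapsed."""
    return [f for f in features if f.hits >= min_hits and f.var_pos <= max_var]
```

Each `update` shrinks the variance, so features repeatedly confirmed across frames survive pruning while spurious detections, which stay uncertain, are discarded.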

    Learning and recognition of objects inspired by early cognition

    In this paper, we present a unifying approach for learning and recognition of objects in unstructured environments through exploration. Taking inspiration from how young infants learn objects, we establish four principles for object learning. First, early object detection is based on an attention mechanism detecting salient parts of the scene. Second, motion of the object allows more accurate object localization. Third, acquiring multiple observations of the object through manipulation allows a more robust representation of the object. Last, object recognition benefits from a multi-modal representation. Using these principles, we developed a unifying method comprising visual attention, smooth pursuit of the object, and a multi-view, multi-modal object representation. Our results indicate the effectiveness of this approach and the improvement of the system when multiple observations are acquired from active object manipulation.
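The last two principles, pooling multiple observations and combining modalities, can be illustrated with a simple late-fusion sketch. This is not the paper's method; the modality names, the fixed fusion weight `w_visual`, and the independence assumption behind summing log-probabilities are all assumptions made here for illustration:

```python
import numpy as np

def multimodal_scores(visual_probs, haptic_probs, w_visual=0.6):
    """Late fusion of per-class recognition scores from two modalities.

    Log-probabilities are combined with a fixed convex weight (illustrative),
    so either modality alone still yields a ranking when the other is weak.
    """
    v = np.log(np.asarray(visual_probs, dtype=float))
    h = np.log(np.asarray(haptic_probs, dtype=float))
    return w_visual * v + (1.0 - w_visual) * h

def accumulate_views(per_view_probs):
    """Pool evidence over several observations of the same object
    (independence assumption: sum the per-view log-probabilities)."""
    return np.sum(np.log(np.asarray(per_view_probs, dtype=float)), axis=0)

def recognize(visual_probs, haptic_probs):
    """Return the index of the best-scoring object class."""
    return int(np.argmax(multimodal_scores(visual_probs, haptic_probs)))
```

Under this sketch, each additional view sharpens the pooled score, which is one way to read the reported improvement from acquiring multiple observations through active manipulation.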
