2 research outputs found

    On the Use of Intrinsic Motivation for Visual Saliency Learning

    The use of intrinsic motivation for learning sensori-motor properties has received a lot of attention over the last few years, but little work has addressed the use of intrinsic motivation for learning visual signals. In this paper, we propose to apply the main ideas of Intelligent Adaptive Curiosity (IAC) to the task of visual saliency learning. We present RL-IAC, an adapted version of IAC that uses reinforcement learning to handle time-consuming displacements while actively learning saliency based on local learning progress. We also introduce a backward evaluation to deal with a learner that is shared between several regions. We demonstrate the good performance of RL-IAC compared to other exploration techniques, and we discuss the use of intrinsic motivation sources other than learning progress for this problem.
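    To make the learning-progress mechanism concrete, the Python sketch below illustrates IAC-style region selection under stated assumptions; the names Region, learning_progress and select_region are ours, not the paper's. Each region keeps a window of the saliency learner's prediction errors, learning progress is the drop in mean error over that window, and the next region to sample is picked greedily on progress with a small amount of random exploration.

import numpy as np

# Minimal sketch of IAC-style region selection driven by learning progress.
# The class and function names (Region, learning_progress, select_region) are
# illustrative and not taken from the RL-IAC implementation.

class Region:
    """Keeps a sliding window of prediction errors for one spatial region."""
    def __init__(self, window=20):
        self.window = window
        self.errors = []          # chronological prediction errors of the saliency learner

    def add_error(self, err):
        self.errors.append(err)

    def learning_progress(self):
        """Progress = decrease of the mean error between the two halves of the window."""
        if len(self.errors) < 2 * self.window:
            return 0.0
        recent = np.mean(self.errors[-self.window:])
        older = np.mean(self.errors[-2 * self.window:-self.window])
        return older - recent     # positive when the model is improving in this region

def select_region(regions, epsilon=0.1, rng=None):
    """Epsilon-greedy pick of the region with the highest learning progress."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(regions)))
    return int(np.argmax([r.learning_progress() for r in regions]))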

    Exploring to learn visual saliency: The RL-IAC approach

    Object localization and recognition on autonomous mobile robots is still an active research topic. In this context, we tackle the problem of learning a model of visual saliency directly on a robot. This model, learned and improved on the fly during the robot's exploration, provides an efficient tool for localizing relevant objects within their environment. The proposed approach includes two intertwined components. On the one hand, we describe a method for learning and incrementally updating a model of visual saliency from a depth-based object detector. This saliency model can also be exploited to produce bounding-box proposals around objects of interest. On the other hand, we investigate an autonomous exploration technique to efficiently learn such a saliency model. The proposed exploration, called Reinforcement Learning-Intelligent Adaptive Curiosity (RL-IAC), drives the robot's exploration so that the samples it selects are likely to improve the current saliency model. We then demonstrate that such a saliency model learned directly on a robot outperforms several state-of-the-art saliency techniques, and that RL-IAC drastically decreases the time required to learn a reliable saliency model.
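    As an illustration of how such an exploration loop might be wired together, the sketch below combines per-region learning progress with a simple Q-learning policy over region-to-region displacements. It is an assumption-laden approximation, not the paper's implementation: travel_time and observe_and_update are hypothetical callbacks standing in for the robot's navigation stack and for the saliency model update from the depth-based detector. The reward mixes learning progress with displacement cost, so the policy avoids long trips toward regions with little left to learn.

from collections import defaultdict, deque
import numpy as np

# Minimal sketch of an RL-IAC-style exploration loop, assuming two hypothetical
# callbacks that the real robot stack would provide:
#   travel_time(i, j)     -> time cost of driving from region i to region j
#   observe_and_update(j) -> grabs samples in region j, updates the saliency model
#                            from the depth-based detector, and returns the
#                            resulting prediction error
def explore(n_regions, travel_time, observe_and_update,
            steps=1000, window=20, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    errors = defaultdict(lambda: deque(maxlen=2 * window))  # per-region error history
    Q = np.zeros((n_regions, n_regions))  # Q[i, j]: value of moving from region i to j
    current = 0
    for _ in range(steps):
        # Epsilon-greedy choice of the next region to visit.
        if rng.random() < epsilon:
            nxt = int(rng.integers(n_regions))
        else:
            nxt = int(np.argmax(Q[current]))
        errors[nxt].append(observe_and_update(nxt))
        hist = list(errors[nxt])
        # Learning progress: drop in mean prediction error across the error window.
        progress = 0.0
        if len(hist) == 2 * window:
            progress = float(np.mean(hist[:window]) - np.mean(hist[window:]))
        # Reward trades learning progress off against the time spent moving the robot.
        reward = progress - travel_time(current, nxt)
        Q[current, nxt] += alpha * (reward + gamma * Q[nxt].max() - Q[current, nxt])
        current = nxt
    return Q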