9 research outputs found
Learning to Look in Different Environments: An Active-Vision Model which Learns and Readapts Visual Routines
One of the main claims of the active vision framework is that finding data on the basis of task requirements is more efficient than reconstructing the whole scene by performing a complete visual scan. To be successful, this approach requires that agents learn visual routines to direct overt attention to the locations holding the information needed to accomplish the task. In ecological conditions, learning such visual routines is difficult due to the partial observability of the world, changes in the environment, and the fact that learning signals might be indirect. This paper uses a reinforcement-learning actor-critic model to study how visual routines can be formed, and then adapted when the environment changes, in a system endowed with a controllable gaze and reaching capabilities. Tests of the model show that: (a) the autonomously developed visual routines are strongly dependent on the task and the statistical properties of the environment; (b) when the statistics of the environment change, the performance of the system remains rather stable thanks to the re-use of previously discovered visual routines, while the visual exploration policy remains sub-optimal for a long time. We conclude that the model behaves robustly, but that the acquisition of an optimal visual exploration policy is particularly hard given its complex dependence on the statistical properties of the environment, illustrating another of the difficulties that adaptive active-vision agents must face.
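The abstract does not specify the model's architecture, so as a purely illustrative sketch of the actor-critic mechanism it names, the following toy learns which location to fixate when reward arrives only after looking at the (hypothetical) target location; all names, parameters, and the reward scheme are assumptions, not the paper's model:

```python
import numpy as np

def train_gaze_policy(n_locations=5, target=3, episodes=500,
                      alpha=0.1, beta=0.1, seed=0):
    """Toy tabular actor-critic: the actor picks a fixation location
    and receives reward 1 only when it fixates the target location."""
    rng = np.random.default_rng(seed)
    prefs = np.zeros(n_locations)  # actor: softmax action preferences
    value = 0.0                    # critic: value estimate of the start state
    for _ in range(episodes):
        probs = np.exp(prefs - prefs.max())
        probs /= probs.sum()
        action = rng.choice(n_locations, p=probs)
        reward = 1.0 if action == target else 0.0
        td_error = reward - value          # one-step episodes: no bootstrap term
        value += beta * td_error           # critic update
        onehot = np.zeros(n_locations)
        onehot[action] = 1.0
        prefs += alpha * td_error * (onehot - probs)  # policy-gradient step
    return prefs, value

prefs, value = train_gaze_policy()
```

Re-running the loop after moving `target` would crudely mimic the readaptation scenario the abstract studies: the old preferences bias exploration, so performance on the new environment starts from the previously learned routine rather than from scratch.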
Automatic Selection of Object Recognition Methods using Reinforcement Learning
Selecting which algorithms a mobile robot's computer vision system should use is a decision usually made a priori by the system developer, based on past experience and intuition, without systematically exploiting the information available in the images and in the visual process itself to learn, at execution time, which algorithm should be used. This paper presents a method that uses Reinforcement Learning to decide which algorithm should be used to recognize objects seen by a mobile robot in an indoor environment, based on simple attributes extracted on-line from the images, such as mean intensity and intensity deviation. Two state-of-the-art object recognition algorithms can be selected: the constellation method proposed by Lowe, together with its interest point detector and descriptor, the Scale-Invariant Feature Transform (SIFT); and the Vocabulary Tree approach of Nistér and Stewénius. A set of empirical evaluations was conducted using an image database acquired with a mobile robot in an indoor environment, and the results obtained show that the approach adopted here is very promising.
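The abstract's idea of learning which recognizer to run from simple image statistics can be sketched as a contextual bandit. The sketch below is an assumption-laden stand-in, not the paper's method: the recognizers are simulated by made-up success rates, and the state is a single brightness bucket derived from mean intensity:

```python
import numpy as np

RECOGNIZERS = ("SIFT", "VocabularyTree")  # the two candidate algorithms

def learn_selector(episodes=2000, eps=0.1, alpha=0.2, seed=1):
    """Epsilon-greedy contextual bandit: the state is a binary brightness
    bucket from the image's mean intensity, the action is which recognizer
    to run, and the reward is whether recognition succeeded."""
    rng = np.random.default_rng(seed)
    # Simulated success rates (purely illustrative numbers): assume SIFT
    # does better on bright images and the vocabulary tree on dark ones.
    success = {0: (0.3, 0.8),   # dark image:   (SIFT, VocabularyTree)
               1: (0.8, 0.3)}   # bright image: (SIFT, VocabularyTree)
    Q = np.zeros((2, len(RECOGNIZERS)))
    for _ in range(episodes):
        mean_intensity = rng.uniform(0.0, 255.0)    # stand-in image statistic
        s = int(mean_intensity > 128.0)             # discretize the state
        if rng.random() < eps:
            a = int(rng.integers(len(RECOGNIZERS)))  # explore
        else:
            a = int(np.argmax(Q[s]))                 # exploit
        r = float(rng.random() < success[s][a])      # simulated outcome
        Q[s, a] += alpha * (r - Q[s, a])             # incremental Q update
    return Q

Q = learn_selector()
```

Replacing the simulated success table with calls to the real recognizers, and the single brightness bucket with the paper's on-line intensity statistics, would recover the kind of execution-time selection the abstract describes.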