
    How can bottom-up information shape learning of top-down attention-control skills?

    How does bottom-up information affect the development of top-down attentional control skills during the learning of visuomotor tasks? Why is the eye's fovea so small? Strong evidence supports the idea that in humans foveation is mainly guided by task-specific skills, but how these are learned is still an important open problem. We designed and implemented a simulated neural eye-arm coordination model to study the development of attention control in a search-and-reach task involving simple coloured stimuli. The model is endowed with a hard-wired bottom-up saliency map and a top-down attention component that acquires task-specific knowledge about potential gaze targets and their spatial relations. This architecture achieves high performance very quickly. To explain this result, we argue that: (a) the interaction between bottom-up and top-down mechanisms supports the development of task-specific attention-control skills by allowing an efficient exploration of potentially useful gaze targets; (b) bottom-up mechanisms boost the exploitation of the initially limited task-specific knowledge by actively selecting areas where it can be suitably applied; (c) bottom-up processes shape object representations, their values, and their roles (these can change during learning, e.g. distractors can become useful attentional cues); (d) increasing the size of the fovea alleviates perceptual aliasing but at the same time increases input-processing costs and the number of trials required to learn. Overall, the results indicate that bottom-up attention mechanisms can play a relevant role in attention control, especially during the acquisition of new task-specific skills, but also during task performance.
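    To make the interaction between the two maps concrete, here is a minimal sketch, not the authors' model: the grid size, the colour-contrast saliency measure, and the value-update rule are all illustrative assumptions. A hard-wired bottom-up saliency map is summed with a learned top-down value map, the most promising location is foveated, and the top-down values are reinforced by reach success.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# gaze selection as bottom-up saliency + learned top-down values.
import numpy as np

GRID = (10, 10)            # discretised visual field (assumed size)
ALPHA = 0.1                # learning rate for top-down values

top_down = np.zeros(GRID)  # task-specific values, learned from reward

def bottom_up_saliency(image):
    """Hard-wired saliency: here, simple contrast against the mean."""
    return np.abs(image - image.mean())

def select_gaze(image):
    """Combine the two maps and foveate the most promising cell."""
    combined = bottom_up_saliency(image) + top_down
    return np.unravel_index(np.argmax(combined), GRID)

def update(gaze, reward):
    """Reinforce locations whose foveation led to task reward."""
    top_down[gaze] += ALPHA * (reward - top_down[gaze])

# One trial: look, reach, learn from the outcome (toy reward signal).
rng = np.random.default_rng(0)
image = rng.random(GRID)
gaze = select_gaze(image)
reward = 1.0 if image[gaze] > 0.9 else 0.0
update(gaze, reward)
```

    Early in learning the top-down map is near zero, so the bottom-up term dominates and drives exploration of candidate targets; as rewards accumulate, the learned values take over, which mirrors the division of labour argued for in points (a) and (b).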

    Ecological active vision: four bio-inspired principles to integrate bottom-up and adaptive top-down attention tested with a simple camera-arm robot

    Vision gives primates a wealth of information useful for manipulating the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only the latter generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bio-inspired key ingredients: 1) reinforcement-learning, fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up, periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects". The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how its principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
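    The following toy sketch shows how the four ingredients can fit into a single control loop on a one-dimensional strip of colour blobs. It is an assumption-laden illustration, not the paper's BITPIC implementation: the cell count, reward scheme, and epsilon-greedy policy are all invented for the example.

```python
# Toy sketch of the four BITPIC ingredients (all names and values
# are assumptions, not the paper's API).
import random

N_CELLS = 8
ALPHA, EPS = 0.2, 0.1

q = [0.0] * N_CELLS   # 1) RL, fovea-based top-down attention values
memory = {}           # 4) action-oriented memory: colour -> cell

def choose_fixation(target):
    if target in memory:                  # exploit a remembered location
        return memory[target]
    if random.random() < EPS:             # 3) bottom-up periphery-based
        return random.randrange(N_CELLS)  #    attention: explore blobs
    return max(range(N_CELLS), key=lambda c: q[c])

def trial(scene, target):
    cell = choose_fixation(target)
    reached = scene[cell] == target       # 2) vision-manipulation coupling:
    reward = 1.0 if reached else -0.1     #    the reach targets the fovea,
    q[cell] += ALPHA * (reward - q[cell]) #    and only reaches give feedback
    if reached:
        memory[target] = cell             # credit the visual action later
    return reached

random.seed(0)
scene = ["red", "blue", "green", "red", "blue", "green", "red", "blue"]
for _ in range(50):
    trial(scene, "green")
print("learned location of green:", memory.get("green"))
```

    Penalising unsuccessful fixations makes the greedy policy sweep past already-tried cells, so the target is found and stored in the action-oriented memory within a few trials; this is one simple way the coupling between visual and manipulation actions can resolve the credit-assignment problem the abstract describes.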