
    From perception to action and vice versa: a new architecture showing how perception and action can modulate each other simultaneously

    Presented at: 6th European Conference on Mobile Robots (ECMR), Sep 25-27, 2013, Barcelona, Spain.
    Artificial vision systems cannot process all the information they receive from the world in real time, because doing so is expensive and inefficient in terms of computational cost. However, inspired by biological perception systems, it is possible to develop an artificial attention model able to select only the relevant part of the scene, as human vision does. From the Automated Planning point of view, a relevant area can be seen as one where the objects involved in the execution of a plan are located. Thus, the planning system should guide the attention model to track relevant objects. At the same time, the perceived objects may constrain the plan or provide new information suggesting that the current plan be modified. Therefore, a plan that is being executed should be adapted or recomputed taking into account the information actually perceived from the world. In this work, we introduce an architecture that creates a symbiosis between the planning and attention modules of a robotic system, linking visual features with high-level behaviours. The architecture is based on the interaction of an oversubscription planner, which produces plans constrained by the information perceived from the vision system, and an object-based attention system able to focus on the objects relevant to the plan being executed.
    Funding: Spanish MINECO projects TIN2008-06196, TIN2012-38079-C03-03 and TIN2012-38079-C03-02; Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
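The planner/attention symbiosis described in this abstract can be illustrated with a toy sketch. This is not the authors' architecture: the functions, the goal/utility representation, and the idea of dropping unperceivable goals are all simplifying assumptions standing in for a real oversubscription planner and attention model.

```python
def replan(goals, visible_objects):
    """Toy 'oversubscription' step: keep only the goals whose objects are
    currently perceivable, ordered by utility (highest first)."""
    feasible = [g for g in goals if g["object"] in visible_objects]
    return sorted(feasible, key=lambda g: -g["utility"])

def attend(scene, plan):
    """Object-based attention: restrict perception to the objects
    mentioned by the current plan."""
    relevant = {g["object"] for g in plan}
    return {name: pos for name, pos in scene.items() if name in relevant}

# Example: the 'cup' goal is dropped because no cup is perceived.
scene = {"box": (1, 2), "ball": (3, 0)}
goals = [{"object": "cup", "utility": 5},
         {"object": "box", "utility": 3},
         {"object": "ball", "utility": 4}]
plan = replan(goals, scene.keys())   # goals for 'ball', then 'box'
focus = attend(scene, plan)          # perception narrowed to those objects
```

The loop structure, where perception constrains the plan and the plan in turn directs perception, is the "vice versa" of the title.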

    The reentry hypothesis: linking eye movements to visual perception

    Cortical organization of vision appears to be divided into perception and action. Models of vision have generally assumed that eye movements serve to select a scene for perception, so that action and perception are sequential processes. We suggest a less distinct separation. According to our model, oculomotor areas responsible for planning an eye movement, such as the frontal eye field, influence perception prior to the eye movement. The activity reflecting the planning of an eye movement reenters the ventral pathway and sensitizes all cells within the movement field, so that the planned action determines perception. We demonstrate the performance of the computational model in a visual search task that demands an eye movement toward a target.
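The core mechanism, pre-saccadic reentrant activity multiplicatively sensitizing ventral-pathway cells inside the planned movement field, can be sketched as a simple gain modulation. The function name, the array encoding, and the gain strength are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def reentrant_gain(feedforward, movement_field, strength=0.5):
    """Toy gain model: responses inside the planned saccade's movement
    field are boosted before the eye actually moves."""
    return feedforward * (1.0 + strength * movement_field)

responses = np.array([1.0, 1.0, 1.0, 1.0])   # uniform feedforward drive
field = np.array([0.0, 0.2, 1.0, 0.1])       # reentry signal peaks at target
modulated = reentrant_gain(responses, field)
# The unit at the planned saccade target (index 2) now responds most strongly,
# so the planned action shapes perception before the movement occurs.
```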

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
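The claim that feedback can bias categorisation admits a minimal numeric sketch: combine bottom-up evidence with a weighted top-down expectation before taking the winner. The function, weighting scheme, and numbers below are assumptions for illustration, not the article's cortical model.

```python
import numpy as np

def categorise(bottom_up, top_down, beta=0.3):
    """Toy feedback-biased categorisation: add a scaled top-down
    expectation to the feedforward evidence, then pick the winner."""
    combined = bottom_up + beta * top_down
    return int(np.argmax(combined))

evidence = np.array([0.48, 0.52])      # stimulus weakly favours category 1
expectation = np.array([1.0, 0.0])     # feedback expects category 0

no_feedback = categorise(evidence, np.zeros(2))    # -> 1 (stimulus alone)
with_feedback = categorise(evidence, expectation)  # -> 0 (biased by feedback)
```

An ambiguous stimulus is thus pulled toward the expected category, which is the biasing effect the abstract describes.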

    Towards Active Event Recognition

    Directing robot attention to recognise activities and to anticipate events such as goal-directed actions is a crucial skill for human-robot interaction. Unfortunately, issues such as intrinsic time constraints, the spatially distributed nature of the relevant information sources, and the existence of a multitude of unobservable states affecting the system, such as latent intentions, have long rendered the achievement of such skills a rather elusive goal. The problem tests the limits of current attention-control systems: it requires an integrated solution for tracking, exploration and recognition, which have traditionally been treated as separate problems in active vision. We propose a probabilistic generative framework, based on a mixture of Kalman filters and information-gain maximisation, that uses predictions in both recognition and attention control. This framework can efficiently use the observations of one element in a dynamic environment to provide information on other elements, and consequently enables guided exploration. Interestingly, the sensor-control policy, derived directly from first principles, represents the intuitive trade-off between finding the most discriminative clues and maintaining overall awareness. Experiments on a simulated humanoid robot observing a human executing goal-oriented actions demonstrated improvements in recognition time and precision over baseline systems.
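The combination of Kalman filtering with information-gain-driven sensor control can be illustrated with independent 1-D filters: for a Gaussian-tracked element, the entropy reduction from one observation is 0.5·log((p+r)/r), so the policy attends to whichever element that quantity is largest for. This is a minimal sketch under strong simplifying assumptions (independent scalar states, random-walk dynamics), not the paper's mixture model.

```python
import math

class Kalman1D:
    """Minimal 1-D Kalman filter with random-walk dynamics."""
    def __init__(self, x=0.0, p=1.0, q=0.1, r=0.2):
        self.x, self.p, self.q, self.r = x, p, q, r  # state, var, proc/meas noise

    def predict(self):
        self.p += self.q                   # uncertainty grows between looks

    def update(self, z):
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)                # observing shrinks uncertainty

def expected_info_gain(f):
    """Expected entropy reduction from observing one Gaussian-tracked element."""
    return 0.5 * math.log((f.p + f.r) / f.r)

def choose_target(filters):
    """Sensor-control policy: look at the most informative element."""
    return max(range(len(filters)), key=lambda i: expected_info_gain(filters[i]))

# The more uncertain element (index 1) is the most informative to look at.
filters = [Kalman1D(p=0.1), Kalman1D(p=2.0)]
for f in filters:
    f.predict()
target = choose_target(filters)   # -> 1
```

Because unattended elements accumulate process noise, their expected gain rises over time, which reproduces the trade-off between chasing discriminative clues and maintaining overall awareness.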

    Egocentric Spatial Representation in Action and Perception

    Neuropsychological findings used to motivate the “two visual systems” hypothesis have been taken to endanger a pair of widely accepted claims about spatial representation in visual experience. The first is the claim that visual experience represents 3-D space around the perceiver using an egocentric frame of reference. The second is the claim that there is a constitutive link between the spatial contents of visual experience and the perceiver’s bodily actions. In this paper, I carefully assess three main sources of evidence for the two visual systems hypothesis and argue that the best interpretation of the evidence is in fact consistent with both claims. I conclude with some brief remarks on the relation between visual consciousness and rational agency.