Robot vision systems inspired by human vision must employ mechanisms similar to those that have proven crucial to human visual performance. One such mechanism is attentive perception. Findings from vision science suggest that attentive perception requires several properties: a retina with a fovea-periphery distinction; an attention mechanism that can be directed both mechanically and internally; an extensive set of visual primitives that enable different representation modes; an integration mechanism that can infer the appropriate visual information despite eye, head, body, and target motion; and, finally, memory for guiding eye movements and modeling the environment. In this paper we present an attentively "perceiving" robot called APES. The novelty of this system stems from the fact that it incorporates all of these properties simultaneously. As we explain, original approaches must be taken to realize each of these properties so that they can be integrated within a single attentive perception framework.
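To make the first property concrete, the fovea-periphery distinction can be illustrated with a minimal sketch (not the paper's actual retina model): full resolution is kept inside a circular foveal region, while the periphery is block-averaged to simulate low acuity. The function name `foveate` and all parameters are illustrative assumptions.

```python
import numpy as np

def foveate(image, cx, cy, fovea_radius, pool=4):
    """Crude fovea-periphery representation (illustrative sketch):
    full resolution inside the fovea, block-averaged outside it."""
    h, w = image.shape
    # Block-average the whole image to simulate the low-acuity periphery.
    ph, pw = h // pool * pool, w // pool * pool
    coarse = image[:ph, :pw].reshape(ph // pool, pool,
                                     pw // pool, pool).mean(axis=(1, 3))
    # Upsample the coarse periphery back to full size.
    periphery = np.repeat(np.repeat(coarse, pool, axis=0), pool, axis=1)
    out = np.zeros_like(image, dtype=float)
    out[:ph, :pw] = periphery
    # Restore full resolution inside a circular fovea centered at (cx, cy).
    yy, xx = np.ogrid[:h, :w]
    fovea = (yy - cy) ** 2 + (xx - cx) ** 2 <= fovea_radius ** 2
    out[fovea] = image[fovea]
    return out
```

Shifting `(cx, cy)` then plays the role of a saccade: the attention mechanism moves the high-resolution region over the scene while the periphery supplies coarse context.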