Effect of contextual information on object tracking
Local object information, such as the appearance and motion features of the object, is useful for object tracking in videos provided the object is not occluded by other elements in the scene. During occlusion, however, the local object information in the video frame does not properly represent the true properties of the object, which leads to tracking failure. We propose a framework that combines multiple cues, including the local object information, the background characteristics and group motion dynamics, to improve object tracking in challenging cluttered environments. The performance of the proposed tracking model is compared with the kernelised correlation filter (KCF) tracker. In the tested video sequences the proposed tracking model correctly tracked objects even when the KCF tracker failed because of occlusion and background noise.
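The fusion idea in this abstract can be illustrated with a minimal sketch: score each candidate object location under several cues and pick the candidate with the best weighted combination. The cue values, weights, and scenario below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def combine_cues(appearance, motion, background, weights=(0.5, 0.3, 0.2)):
    """Fuse per-candidate cue scores into a single confidence score.

    Each argument is a sequence of length n_candidates, where higher
    values mean a better match for that cue. Returns the index of the
    best candidate and the fused scores.
    """
    w_a, w_m, w_b = weights
    scores = (w_a * np.asarray(appearance, dtype=float)
              + w_m * np.asarray(motion, dtype=float)
              + w_b * np.asarray(background, dtype=float))
    return int(np.argmax(scores)), scores

# Toy occlusion case: the appearance cue alone is fooled by a
# similar-looking occluder at candidate 1, but the motion and
# background cues both favour candidate 2.
appearance = [0.4, 0.9, 0.3]   # occluder looks like the target at index 1
motion     = [0.2, 0.1, 0.8]   # motion prediction favours index 2
background = [0.3, 0.2, 0.7]   # background model also favours index 2
best, scores = combine_cues(appearance, motion, background)
# best == 2: the combined cues recover from the appearance failure
```

An appearance-only tracker (weights `(1, 0, 0)`) would have locked onto candidate 1 here, which is the failure mode the multi-cue combination is meant to avoid.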
Representation, space and Hollywood Squares: Looking at things that aren't there anymore
It has been argued that the human cognitive system is capable of using spatial indexes or oculomotor coordinates to relieve working memory load (Ballard, Hayhoe, Pook & Rao, 1997), track multiple moving items through occlusion (Scholl & Pylyshyn, 1999), or link incompatible cognitive and sensorimotor codes (Bridgeman & Huemer, 1998). Here we examine the use of such spatial information in memory for semantic information. Previous research has often focused on the role of task demands and the level of automaticity in the encoding of spatial location in memory tasks. We present five experiments where location is irrelevant to the task, and participants' encoding of spatial information is measured implicitly by their looking behavior during recall. In a paradigm developed from Spivey and Geng (submitted), participants were presented with pieces of auditory, semantic information as part of an event occurring in one of four regions of a computer screen. In front of a blank grid, they were asked a question relating to one of those facts. Under certain conditions it was found that during the question period participants made significantly more saccades to the empty region of space where the semantic information had been previously presented. Our findings are discussed in relation to previous research on memory and spatial location, the dorsal and ventral streams of the visual system, and the notion of a cognitive-perceptual system using spatial indexes to exploit the stability of the external world.
Perceptual Context in Cognitive Hierarchies
Cognition does not only depend on bottom-up sensor feature abstraction, but also relies on contextual information being passed top-down. Context is higher-level information that helps to predict belief states at lower levels. The main contribution of this paper is to provide a formalisation of perceptual context and its integration into a new process model for cognitive hierarchies. Several simple instantiations of a cognitive hierarchy are used to illustrate the role of context. Notably, we demonstrate the use of context in a novel approach to visually tracking the pose of rigid objects with just a 2D camera.
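The core mechanism described here, a higher level supplying context that helps predict belief states at a lower level, can be sketched as a Bayesian update in which top-down context acts as the prior and the bottom-up sensor reading supplies the likelihood. The two-state example and all numbers below are illustrative assumptions, not the paper's formalisation.

```python
def posterior(prior, likelihood):
    """Combine a top-down prior with a bottom-up likelihood by Bayes' rule."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Lower-level states: the tracked object is (visible, occluded).
ambiguous_likelihood = [0.5, 0.5]   # the sensor alone cannot decide
context_prior        = [0.9, 0.1]   # the higher level predicts "visible"

belief = posterior(context_prior, ambiguous_likelihood)
# belief == [0.9, 0.1]: context resolves the ambiguous observation
```

With a uniform (uninformative) prior the belief would simply mirror the likelihood; the point of the sketch is that top-down context changes the lower level's belief precisely when the sensor evidence is ambiguous.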
Online Context-based Object Recognition for Mobile Robots
This work proposes a robotic object recognition system that takes advantage, in an online fashion, of the contextual information latent in human-like environments. To fully leverage context, perceptual information is needed from (at least) a portion of the scene containing the objects of interest, which may not be entirely covered by a single sensor observation. Information from a larger portion of the scenario could still be considered by progressively registering observations, but this approach runs into difficulties under some circumstances, e.g. limited and heavily demanded computational resources, dynamic environments, etc. Instead, the proposed recognition system relies on an anchoring process for the fast registration and propagation of objects' features and locations beyond the current sensor frustum. In this way, the system builds a graph-based world model containing the objects in the scenario (both in the current and previously perceived shots), which is exploited by a Probabilistic Graphical Model (PGM) in order to leverage contextual information during recognition. We also propose a novel way to include the outcome of local object recognition methods in the PGM, which reduces the usually high CRF learning complexity. A demonstration of our proposal has been conducted on a dataset captured by a mobile robot in restaurant-like settings, showing promising results.