5,811 research outputs found
Visual motion processing and human tracking behavior
The accurate visual tracking of a moving object is a fundamental human skill that reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. To optimize tracking performance over time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in latency and accuracy by taking predictive cues into account, especially under variable visibility conditions and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis for testing theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
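The dynamic probabilistic inference the review invokes can be illustrated, in a deliberately minimal form, by a one-dimensional Kalman filter: an internal prediction of target position is fused with a noisy retinal measurement, weighted by their relative uncertainties. The function name, the random-walk target model, and the noise values below are illustrative assumptions, not anything specified by the review.

```python
def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter (hypothetical sketch).

    x, p : current position estimate and its variance
    z    : noisy measurement (the 'retinal' signal)
    q, r : assumed process and measurement noise variances
    """
    # Predict: random-walk target model, so the estimate carries over
    # while its uncertainty grows by the process noise q.
    p = p + q
    # Update: the Kalman gain k trades off prediction against measurement.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p
```

Iterating this step on a stream of measurements makes the estimate converge toward the true target position while the posterior variance settles at a small steady-state value, which is the sense in which prediction improves tracking under noisy input.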
Review of computer vision in intelligent environment design
This paper discusses and compares the use of vision-based and non-vision-based technologies in developing intelligent environments. By reviewing related projects that use vision-based techniques in intelligent environment design, we discuss and summarize the achieved functions, technical issues, and drawbacks of those projects, and propose potential solutions for future improvement, which points to the prospective direction of my PhD research.
Online Multi-Object Tracking Using CNN-based Single Object Tracker with Spatial-Temporal Attention Mechanism
In this paper, we propose a CNN-based framework for online MOT. The framework exploits the merits of single-object trackers in adapting appearance models and searching for the target in the next frame. Simply applying a single-object tracker to MOT runs into problems of computational efficiency and of drift caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI pooling to obtain individual features for each target. Online-learned target-specific CNN layers are used to adapt the appearance model of each target. Within the framework, we introduce a spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used to infer the spatial attention map, which is then applied to weight the features. In addition, the occlusion status can be estimated from the visibility map; it controls the online updating process via a weighted loss on training samples with different occlusion statuses in different frames, and can be regarded as a temporal attention mechanism. The proposed algorithm achieves 34.3% and 46.0% MOTA on the challenging MOT15 and MOT16 benchmarks, respectively.
Comment: Accepted at International Conference on Computer Vision (ICCV) 201
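The two attention mechanisms the abstract describes can be sketched very roughly as follows: the visibility map is normalized into a spatial attention map that reweights the target's features, and its mean value gates the online appearance update (the temporal side). The function names, the list-based feature layout, and the 0.5 threshold are all illustrative assumptions, not the paper's actual implementation.

```python
def spatial_attention(features, visibility):
    """Weight a per-target feature map by a spatial attention map derived
    from the learned visibility map (hypothetical simplification of STAM).
    features, visibility: H x W grids as lists of rows; visibility in [0, 1].
    """
    total = sum(sum(row) for row in visibility) or 1.0
    # Normalise visibility into an attention map, then reweight features
    # so occluded regions contribute little to the appearance model.
    return [[f * (v / total) for f, v in zip(frow, vrow)]
            for frow, vrow in zip(features, visibility)]

def is_occluded(visibility, threshold=0.5):
    """Temporal-attention cue: mean visibility below an assumed threshold
    marks the target occluded, suppressing the online model update."""
    cells = [v for row in visibility for v in row]
    return sum(cells) / len(cells) < threshold
```

In the real system both maps are produced and consumed by CNN layers; the sketch only shows the weighting-and-gating logic.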
Future Person Localization in First-Person Videos
We present a new task that predicts future locations of people observed in
first-person videos. Consider a first-person video stream continuously recorded
by a wearable camera. Given a short clip of a person that is extracted from the
complete stream, we aim to predict that person's location in future frames. To
facilitate this future person localization ability, we make the following three
key observations: a) First-person videos typically involve significant
ego-motion which greatly affects the location of the target person in future
frames; b) Scales of the target person act as a salient cue to estimate a
perspective effect in first-person videos; c) First-person videos often capture
people up-close, making it easier to leverage target poses (e.g., where they
look) for predicting their future locations. We incorporate these three
observations into a prediction framework with a multi-stream
convolution-deconvolution architecture. Experimental results reveal our method
to be effective on our new dataset as well as on a public social interaction
dataset.
Comment: Accepted to CVPR 201
Human robot interaction in a crowded environment
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision based human robot interaction is a major component of HRI, with which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3].
Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them, or, if an individual is receptive to the robot's interaction, it may approach the person.
Finally, if the user is moving in the environment, the system can analyse further to understand whether any help can be offered to assist this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
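The idea of fusing multiple visual cues in a Bayesian framework to score interaction intent can be sketched with a naive-Bayes combination of binary cues. The cue names, likelihood values, and prior below are invented for illustration; the thesis uses a richer, adaptive Bayesian network rather than this fixed table.

```python
# Hypothetical likelihoods of observing each binary cue given that the
# person does / does not intend to interact with the robot.
CUES = ["face_visible", "hand_raised", "facing_robot"]
P_CUE_GIVEN_INTENT = {"face_visible": 0.9, "hand_raised": 0.7, "facing_robot": 0.8}
P_CUE_GIVEN_NO_INTENT = {"face_visible": 0.4, "hand_raised": 0.1, "facing_robot": 0.3}

def intent_posterior(observed, prior=0.2):
    """Naive-Bayes fusion of independent binary cues into P(intent | cues)."""
    p_yes, p_no = prior, 1.0 - prior
    for cue in CUES:
        if observed.get(cue, False):
            p_yes *= P_CUE_GIVEN_INTENT[cue]
            p_no *= P_CUE_GIVEN_NO_INTENT[cue]
        else:
            p_yes *= 1.0 - P_CUE_GIVEN_INTENT[cue]
            p_no *= 1.0 - P_CUE_GIVEN_NO_INTENT[cue]
    # Normalise over the two hypotheses.
    return p_yes / (p_yes + p_no)
```

Observing all three cues pushes the posterior well above the prior, while observing none suppresses it, which is the qualitative behaviour a cue-fusion intent detector needs before contextual feedback refines it.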
FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation
One of the most popular approaches to multi-target tracking is
tracking-by-detection. Current min-cost flow algorithms which solve the data
association problem optimally have three main drawbacks: they are
computationally expensive, they assume that the whole video is given as a
batch, and they scale badly in memory and computation with the length of the
video sequence. In this paper, we address each of these issues, resulting in a
computationally and memory-bounded solution. First, we introduce a dynamic
version of the successive shortest-path algorithm which solves the data
association problem optimally while reusing computation, resulting in
significantly faster inference than standard solvers. Second, we address the
optimal solution to the data association problem when dealing with an incoming
stream of data (i.e., the online setting). Finally, we present our main
contribution, an approximate online solution with bounded memory and
computation, which is capable of handling videos of arbitrary length while
performing tracking in real time. We demonstrate the effectiveness of our
algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art
performance, while being significantly faster than existing solvers.
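At its core, the data-association problem the abstract refers to assigns detections to tracks so that total matching cost is minimal. A brute-force toy version for one frame is shown below; it is a stand-in to make the objective concrete, not the paper's method, which solves the same objective over whole trajectories with a dynamic successive-shortest-path (min-cost flow) algorithm that reuses computation across frames.

```python
from itertools import permutations

def associate(cost):
    """Brute-force min-cost assignment of n tracks to n detections.

    cost[i][j] is the (assumed) appearance/motion cost of linking track i
    to detection j. Returns the optimal permutation and its total cost.
    Exponential in n, so only viable for tiny illustrative instances.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost
```

Min-cost-flow formulations additionally model track births, deaths, and missed detections as extra edges, which is what makes the batch problem expensive and motivates the bounded-memory online approximation.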