A survey on 2d object tracking in digital video
This paper presents object tracking methods in video. Different algorithms based on rigid, non-rigid, and articulated object tracking are studied. The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. It is often the case that tracking objects in consecutive frames is supported by a prediction scheme: based on information extracted from previous frames and any high-level information that can be obtained, the state (location) of the object is predicted. An excellent framework for prediction is the Kalman filter, which additionally estimates the prediction error. In complex scenes, multiple hypotheses maintained by a particle filter can be used instead of a single hypothesis. Different techniques are given for different types of constraints in video.
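The predict/update cycle mentioned above can be sketched with a minimal one-dimensional Kalman filter. This is an illustrative sketch only: the random-walk motion model, the noise variances, and the measurement values are assumptions, not details from the survey.

```python
# Minimal 1-D Kalman filter (random-walk motion model) illustrating the
# predict/update cycle used to anticipate an object's location.
# All noise values and measurements are illustrative assumptions.

def kalman_step(x, p, z, q=0.05, r=0.2):
    """One predict/update cycle.
    x, p : state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed)
    """
    # Predict: under a random-walk model the state stays put,
    # but its uncertainty grows by the process noise.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)        # gain: how much to trust the measurement
    x = x + k * (z - x)    # corrected estimate
    p = (1 - k) * p        # reduced uncertainty after the measurement
    return x, p

x, p = 0.0, 1.0
for z in [1.0, 1.2, 0.9, 1.1]:   # noisy position measurements
    x, p = kalman_step(x, p, z)
```

After a few measurements the estimate settles near the true position while its variance shrinks; a multi-hypothesis particle filter generalizes this by carrying many weighted state samples instead of a single Gaussian.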
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
Analyzing Structured Scenarios by Tracking People and Their Limbs
The analysis of human activities is a fundamental problem in computer vision. Though complex, interactions between people and their environment often exhibit a spatio-temporal structure that can be exploited during analysis. This structure can be leveraged to mitigate the effects of missing or noisy visual observations caused, for example, by sensor noise, inaccurate models, or occlusion. Trajectories of people and their hands and feet, often sufficient for recognition of human activities, lead to a natural qualitative spatio-temporal description of these interactions.
This work introduces the following contributions to the task of human activity understanding: 1) a framework that efficiently detects and tracks multiple interacting people and their limbs, 2) an event recognition approach that integrates both logical and probabilistic reasoning in analyzing the spatio-temporal structure of multi-agent scenarios, and 3) an effective computational model of the visibility constraints imposed on humans as they navigate through their environment.
The tracking framework mixes probabilistic models with deterministic constraints and uses AND/OR search and lazy evaluation to efficiently obtain the globally optimal solution in each frame. Our high-level reasoning framework efficiently and robustly interprets noisy visual observations to deduce the events comprising structured scenarios. This is accomplished by combining First-Order Logic, Allen's Interval Logic, and Markov Logic Networks with an event hypothesis generation process that reduces the size of the ground Markov network.
When applied to outdoor one-on-one basketball videos, our framework tracks the players and, guided by the game rules, analyzes their interactions with each other and the ball, annotating the videos with the relevant basketball events that occurred.
Finally, motivated by studies of spatial behavior, we use a set of features from visibility analysis to represent spatial context in the interpretation of human spatial activities. We demonstrate the effectiveness of our representation on trajectories generated by humans in a virtual environment.
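The temporal reasoning above builds on Allen's Interval Logic, which relates pairs of time intervals. A minimal sketch of a few of Allen's 13 basic relations follows; the event names and interval bounds are illustrative assumptions, not the thesis's actual predicates.

```python
# Sketch of Allen's interval relations used for temporal reasoning about
# events (e.g. "dribble occurs before shoot"). Only a representative
# subset of the 13 basic relations is covered; bounds are illustrative.

def allen_relation(a, b):
    """Return the Allen relation of interval a = (start, end) to b."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"                  # a ends before b starts
    if a2 == b1:
        return "meets"                   # a ends exactly when b starts
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 > b1 and a2 < b2:
        return "during"                  # a lies strictly inside b
    if a1 < b1 < a2 < b2:
        return "overlaps"
    return "other"                       # remaining relations, omitted here

dribble = (0, 5)     # hypothetical event intervals
shoot = (6, 8)
print(allen_relation(dribble, shoot))   # → before
```

Qualitative relations like these, attached as predicates to tracked events, are what a Markov Logic Network can then weigh probabilistically against noisy observations.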
Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment
Parents fulfill a pivotal role in early childhood development of social and communication
skills. In children with autism, the development of these skills can be delayed. Applied
behavioral analysis (ABA) techniques have been created to aid in skill acquisition.
Among these, pivotal response treatment (PRT) has been empirically shown to foster
improvements. Research into PRT implementation has also shown that parents can be
trained to be effective interventionists for their children. The current difficulty in PRT
training is how to disseminate training to parents who need it, and how to support and
motivate practitioners after training.
Evaluation of the parents’ fidelity to implementation is often undertaken using video
probes that depict the dyadic interaction occurring between the parent and the child during
PRT sessions. These videos are time consuming for clinicians to process, and often result
in only minimal feedback for the parents. Current trends in technology could be utilized to
alleviate the manual cost of extracting data from the videos, affording greater
opportunities for providing clinician created feedback as well as automated assessments.
The naturalistic context of the video probes along with the dependence on ubiquitous
recording devices creates a difficult scenario for classification tasks. The domain of the
PRT video probes can be expected to have high levels of both aleatory and epistemic
uncertainty. Addressing these challenges requires examination of the multimodal data
along with implementation and evaluation of classification algorithms. This is explored
through the use of a new dataset of PRT videos.
The relationship between the parent and the clinician is important. The clinician can
provide support and help build self-efficacy in addition to providing knowledge and
modeling of treatment procedures. Facilitating this relationship along with automated
feedback not only provides the opportunity to present expert feedback to the parent, but
also allows the clinician to aid in personalizing the classification models. By utilizing a
human-in-the-loop framework, clinicians can aid in addressing the uncertainty in the
classification models by providing additional labeled samples. This will allow the system
to improve classification and provides a person-centered approach to extracting
multimodal data from PRT video probes.
Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
Visual Human Tracking and Group Activity Analysis: A Video Mining System for Retail Marketing
Thesis (PhD) - Indiana University, Computer Sciences, 2007
In this thesis we present a system for automatic human tracking and activity recognition from
video sequences. The problem of automated analysis of visual information in order to derive descriptors
of high-level human activities has intrigued the computer vision community for decades and is
considered to be largely unsolved. A part of this interest is derived from the vast range of applications
in which such a solution may be useful. We attempt to find efficient formulations of these tasks
as applied to extracting customer behavior information in a retail marketing context. Based on
these formulations, we present a system that visually tracks customers in a retail store and performs
a number of activity analysis tasks based on the output from the tracker.
In tracking we introduce new techniques for pedestrian detection, initialization of the body
model and a formulation of the temporal tracking as a global trans-dimensional optimization problem.
Initial human detection is addressed by a novel method for head detection, which incorporates
the knowledge of the camera projection model. The initialization of the human body model is addressed
by newly developed shape and appearance descriptors. Temporal tracking of customer
trajectories is performed by employing a human body tracking system designed as a Bayesian
jump-diffusion filter. This approach demonstrates the ability to overcome model dimensionality
ambiguities as people are leaving and entering the scene.
Following the tracking, we developed a two-stage group activity formulation based upon the
ideas from swarming research. For modeling purposes, all moving actors in the scene are viewed here as simplistic agents in the swarm. This allows us to effectively define a set of inter-agent interactions,
which combine to derive a distance metric used in further swarm clustering. This way, in the
first stage the shoppers that belong to the same group are identified by deterministically clustering
bodies to detect short term events and in the second stage events are post-processed to form clusters
of group activities with fuzzy memberships.
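The first, deterministic clustering stage can be sketched as grouping agents whose trajectories stay mutually close under some inter-agent distance metric. The trajectories, threshold, and plain Euclidean metric below are illustrative assumptions, not the thesis's actual interaction model.

```python
# Sketch of the first grouping stage: shoppers whose trajectories remain
# close on average are deterministically clustered into the same group.
# Positions, threshold, and the distance metric are illustrative.

def mean_distance(traj_a, traj_b):
    """Average Euclidean distance between two equally long 2-D trajectories."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(traj_a, traj_b)) / len(traj_a)

def cluster_groups(trajs, threshold=1.5):
    """Greedy single-link clustering: an agent joins a group if it is
    closer than `threshold` on average to any member of that group."""
    groups = []
    for i, t in enumerate(trajs):
        placed = False
        for g in groups:
            if any(mean_distance(t, trajs[j]) < threshold for j in g):
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups

trajs = [
    [(0, 0), (1, 0), (2, 0)],   # shopper 0
    [(0, 1), (1, 1), (2, 1)],   # shopper 1, walking alongside shopper 0
    [(9, 9), (9, 8), (9, 7)],   # shopper 2, elsewhere in the store
]
print(cluster_groups(trajs))    # → [[0, 1], [2]]
```

The second stage described above would then post-process such short-term groupings over time, assigning fuzzy memberships rather than the hard labels shown here.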
Quantitative analysis of the tracking subsystem shows an improvement over the state of the
art methods, if used under similar conditions. Finally, based on the output from the tracker, the
activity recognition procedure achieves over 80% correct shopper group detection, as validated by
the human-generated ground truth results.