270 research outputs found
A java framework for object detection and tracking, 2007
Object detection and tracking is an important problem in the automated analysis of video. There have been numerous approaches and technological advances for object detection and tracking in video analysis. As one of the most challenging and active research areas, it will continue to attract new algorithms, creating a demand for a system that can effectively collect, organize, group, document and implement these approaches. The purpose of this thesis is to develop a uniform object detection and tracking framework, capable of detecting and tracking multiple objects in the presence of occlusion. The object detection and tracking algorithms are classified into different categories and incorporated into the framework, which is implemented in Java. The framework can adapt to different object types and application domains, and is easy and convenient for developers to reuse. It also provides comprehensive descriptions of representative methods in each category, along with examples that aim to give developers or users who require a tracker for a certain application the ability to select the most suitable tracking algorithm for their particular needs.
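The selection mechanism such a framework needs, registering trackers by category and creating the most suitable one on demand, can be sketched as follows. The thesis's framework is in Java; this Python sketch and its category/tracker names are illustrative only, not taken from the thesis:

```python
class TrackerRegistry:
    """Minimal sketch of a categorized tracker framework: tracking
    algorithms are registered under a category and instantiated by
    name, so users can pick the most suitable one per application."""

    def __init__(self):
        self._trackers = {}

    def register(self, category, name, factory):
        # group algorithms by category, as the thesis's taxonomy does
        self._trackers.setdefault(category, {})[name] = factory

    def create(self, category, name, **kwargs):
        # instantiate the chosen tracker with its own parameters
        return self._trackers[category][name](**kwargs)

registry = TrackerRegistry()
# "point" and "kalman" are hypothetical category/tracker names:
registry.register("point", "kalman", lambda **kw: {"type": "kalman", **kw})
tracker = registry.create("point", "kalman", dt=1.0)
```

A real framework would return tracker objects rather than dicts; the registry pattern itself is the point here.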
Novel Texture-based Probabilistic Object Recognition and Tracking Techniques for Food Intake Analysis and Traffic Monitoring
More complex image understanding algorithms are increasingly practical in a host of emerging applications. Object tracking has value in surveillance and data farming; and object recognition has applications in surveillance, data management, and industrial automation. In this work we introduce an object recognition application in automated nutritional intake analysis and a tracking application intended for surveillance in low quality videos. Automated food recognition is useful for personal health applications as well as nutritional studies used to improve public health or inform lawmakers. We introduce a complete, end-to-end system for automated food intake measurement. Images taken by a digital camera are analyzed, plates and food are located, food type is determined by a neural network, the distance and angle of the food are determined and the 3D volume estimated, the results are cross-referenced with a nutritional database, and before and after meal photos are compared to determine nutritional intake. We compare against contemporary systems and provide detailed experimental results of our system's performance. Our tracking systems consider the problem of car and human tracking in potentially very low quality surveillance videos, from a fixed camera or a high-flying unmanned aerial vehicle (UAV). Our agile framework switches among different simple trackers to find the most applicable tracker based on the object and video properties. Our MAPTrack is an evolution of the agile tracker that uses soft switching to optimize between multiple pertinent trackers, and tracks objects based on motion, appearance, and positional data. In both cases we provide comparisons against trackers intended for similar applications, i.e., trackers that stress robustness in bad conditions, with competitive results.
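The abstract does not spell out MAPTrack's soft-switching formula; a minimal sketch of the general idea, confidence-weighted fusion of several trackers' position estimates instead of hard selection, might look like this (function name, weights, and estimates are illustrative):

```python
import numpy as np

def soft_switch(estimates, confidences):
    """Fuse (x, y) position estimates from several simple trackers.

    Soft switching weights each tracker's estimate by its normalized
    confidence, rather than hard-selecting one tracker per frame.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                      # normalize confidences to weights
    pts = np.asarray(estimates, dtype=float)
    return tuple(w @ pts)                # confidence-weighted average position

# three hypothetical trackers (motion, appearance, position) report:
fused = soft_switch([(10, 20), (12, 22), (11, 21)], [0.2, 0.5, 0.3])
```

With hard switching only the 0.5-confidence tracker's estimate would survive; soft switching blends all three in proportion to confidence.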
Modelling and tracking objects with a topology preserving self-organising neural network
Human gestures form an integral part in our everyday communication. We use
gestures not only to reinforce meaning, but also to describe the shape of objects,
to play games, and to communicate in noisy environments. Vision systems that
exploit gestures are often limited by inaccuracies inherent in handcrafted models.
These models are generated from a collection of training examples which requires
segmentation and alignment. Segmentation in gesture recognition typically involves manual intervention, a time-consuming process that is feasible only for a
limited set of gestures. Ideally gesture models should be automatically acquired
via a learning scheme that enables the acquisition of detailed behavioural knowledge only from topological and temporal observation.
The research described in this thesis is motivated by a desire to provide a framework for the unsupervised acquisition and tracking of gesture models. In any
learning framework, the initialisation of the shapes is crucial. Hence, it would
be beneficial to have a robust model, not prone to noise, that can automatically establish correspondences across the set of shapes. In the first part of this thesis, we develop a framework
for building statistical 2D shape models by extracting, labelling and corresponding
landmark points using only topological relations derived from competitive Hebbian learning. The method is based on the assumption that correspondences can
be addressed as an unsupervised classification problem where landmark points
are the cluster centres (nodes) in a high-dimensional vector space. The approach
is novel in that the network can be used in cases where the topological structure of
the input pattern is not known a priori, and thus no topology of fixed dimensionality is imposed onto the network.
In the second part, we propose an approach to minimise the user intervention
in the adaptation process, which otherwise requires specifying a priori the number of nodes
needed to represent an object, by utilising an automatic criterion for maximum
node growth. Furthermore, this model is used to represent motion in image sequences by initialising a suitable segmentation that separates the object of interest
from the background. The segmentation system assumes some tolerance to illumination changes, input images from ordinary cameras and webcams, backgrounds with low to medium clutter (extremely cluttered backgrounds are avoided), and objects at close range from the camera.
In the final part, we extend the framework for the automatic modelling and
unsupervised tracking of 2D hand gestures in a sequence of k frames. The aim
is to use the tracked frames as training examples in order to build the model and
maintain correspondences. To do that we add an active step to the Growing Neural Gas (GNG) network, which we call Active Growing Neural Gas (A-GNG); it takes into consideration not only the geometrical position of the nodes, but also the
underlying local feature structure of the image, and the distance vector between
successive images. The quality of our model is measured through the calculation
of the topographic product, a topology-preserving measure which quantifies how well the neighbourhood relations are preserved.
In our system we have applied specific restrictions on the velocity and the appearance of the gestures to reduce the difficulty of the motion analysis in the gesture representation. The proposed framework has been validated on applications
related to sign language. The work has great potential in Virtual Reality (VR) applications, where the learning and the representation of gestures become natural
without the need for expensive wearable cable sensors.
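The core GNG adaptation step that A-GNG extends, competitive Hebbian learning plus node movement, can be sketched in a few lines. The learning rates and 2D layout below are illustrative defaults, not the thesis's values, and growth/ageing steps are omitted:

```python
import numpy as np

def gng_adapt(nodes, edges, x, eps_b=0.2, eps_n=0.006):
    """One adaptation step of a (simplified) Growing Neural Gas network.

    nodes: (N, 2) array of node positions (the cluster centres);
    edges:  set of frozensets of node indices (the learned topology).
    Competitive Hebbian learning connects the two nodes nearest to the
    input x; the winner and its topological neighbours move toward x.
    """
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                # two nearest nodes
    edges.add(frozenset((int(s1), int(s2))))  # Hebbian edge between them
    nodes[s1] += eps_b * (x - nodes[s1])      # move winner toward input
    for e in edges:                           # move winner's neighbours
        if s1 in e:
            (n,) = e - {int(s1)}
            nodes[n] += eps_n * (x - nodes[n])
    return int(s1)

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = set()
winner = gng_adapt(nodes, edges, np.array([0.1, 0.1]))
```

Because edges are created from data rather than fixed in advance, no topology of fixed dimensionality is imposed, which is the property the thesis exploits.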
Semi-automatic object tracking in video sequences
In this paper we present a method for semi-automatic object tracking in video sequences using multiple features, together with a method for probabilistic relaxation that improves the tracking results, producing smooth and accurate tracked borders. Starting from a given initial position of the object in the first frame, the proposed method automatically tracks the object through the sequence, modeling the a posteriori probabilities of a set of features such as color, position and motion, depth, etc. III Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV); Red de Universidades con Carreras en Informática (RedUNCI).
Semi-automatic object tracking in video sequences
A method is presented for semi-automatic object tracking in video sequences using multiple features, together with a method for probabilistic relaxation that improves the tracking results, producing smooth and accurate tracked borders. Starting from a given initial position of the object in the first frame, the proposed method automatically tracks the object through the sequence, modelling the a posteriori probabilities of a set of features such as color, position and motion, depth, etc. Facultad de Informática.
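Assuming conditional independence of the features, combining per-feature likelihoods into an a posteriori object probability can be sketched as below. This is a simplified Bayes fusion of the kind the abstract alludes to, not the paper's exact relaxation scheme; the prior and likelihood values are illustrative:

```python
def combine_posteriors(feature_likelihoods, prior=0.5):
    """Combine per-feature likelihoods that a pixel belongs to the object.

    feature_likelihoods: list of (p(feature | object), p(feature | background))
    pairs for features such as color, position/motion, depth. Assuming the
    features are conditionally independent, the posterior is proportional
    to the prior times the product of the likelihoods.
    """
    p_obj, p_bg = prior, 1.0 - prior
    for l_obj, l_bg in feature_likelihoods:
        p_obj *= l_obj
        p_bg *= l_bg
    return p_obj / (p_obj + p_bg)   # normalized a posteriori probability

# color strongly supports the object, motion weakly supports background:
p = combine_posteriors([(0.9, 0.1), (0.4, 0.6)])
```

Probabilistic relaxation would then iteratively smooth these per-pixel posteriors using neighbouring pixels, which is what produces the smooth tracked borders.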
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework which puts in evidence the evolution of the area, with techniques
moving from heavily constrained motion capture scenarios towards more
challenging, realistic, "in the wild" videos. The proposed organization is
based on the representation used as input for the recognition task, emphasizing
the hypotheses assumed and thus the constraints imposed on the type of video
that each technique is able to address. Making these hypotheses and
constraints explicit makes the framework particularly useful for selecting a method for
a given application. Another advantage of the proposed organization is that it
allows categorizing the newest approaches seamlessly alongside traditional ones, while
providing an insightful perspective of the evolution of the action recognition
task up to now. That perspective is the basis for the discussion at the end of
the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables.
Object Tracking: Appearance Modeling And Feature Learning
Object tracking in real scenes is an important problem in computer vision due to the increasing use of tracking systems in various applications such as surveillance, security, monitoring and robotic vision. Object tracking is the process of locating objects of interest in every frame of a video. Many systems have been proposed to address the tracking problem, where the major challenges come from handling appearance variation during tracking caused by changing scale, pose, rotation, illumination and occlusion.
In this dissertation, we address these challenges by introducing several novel tracking techniques. First, we developed a multiple object tracking system that deals specifically with occlusion issues. The system depends on our improved KLT tracker for accurate and robust tracking during partial occlusion. In full occlusion, we applied a Kalman filter to predict the object's new location and connect the trajectory parts.
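The prediction step used to bridge full occlusion can be sketched with a constant-velocity Kalman model. The dissertation's exact state model is not given in the abstract, so the state layout below is an assumption, and the covariance update is omitted for brevity:

```python
import numpy as np

def predict_during_occlusion(x, steps, dt=1.0):
    """Predict an occluded object's location with a constant-velocity
    Kalman model, state x = [px, py, vx, vy].

    During full occlusion no measurements arrive, so only the Kalman
    prediction step x <- F x is applied; when the object reappears,
    the predicted track is reconnected to the new detections.
    """
    F = np.array([[1, 0, dt, 0],      # px += vx * dt
                  [0, 1, 0, dt],      # py += vy * dt
                  [0, 0, 1, 0],       # vx unchanged
                  [0, 0, 0, 1]], dtype=float)
    for _ in range(steps):
        x = F @ x
    return x[:2]                      # predicted (px, py)

# object last seen at (100, 50), moving 5 px/frame right, 2 px/frame down:
pos = predict_during_occlusion(np.array([100.0, 50.0, 5.0, 2.0]), steps=3)
```

A full filter would also propagate the covariance (P <- F P Fᵀ + Q) and resume measurement updates once the object is re-detected.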
Many tracking methods depend on a rectangle or an ellipse mask to segment and track objects. Typically, using a mask that is too large or too small will lead to loss of the tracked objects. Second, we present an object tracking system (SegTrack) that deals with partial and full occlusions by employing improved segmentation methods: a mixture of Gaussians and a silhouette segmentation algorithm. For re-identification, one or more feature vectors for each tracked object are used for matching after the target reappears.
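Re-identification by matching stored feature vectors can be sketched with cosine similarity. The threshold, feature dimensionality, and function name are illustrative, not SegTrack's actual values:

```python
import numpy as np

def reidentify(query, gallery, threshold=0.8):
    """Match a reappearing target against stored feature vectors.

    gallery maps track ids to feature vectors; returns the id whose
    vector has the highest cosine similarity to the query feature,
    or None if no similarity exceeds the threshold (new object).
    """
    best_id, best_sim = None, threshold
    q = query / np.linalg.norm(query)          # unit-normalize query
    for tid, feat in gallery.items():
        sim = float(q @ (feat / np.linalg.norm(feat)))
        if sim > best_sim:
            best_id, best_sim = tid, sim
    return best_id

gallery = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
match = reidentify(np.array([0.9, 0.1]), gallery)
```

Storing more than one feature vector per track, as the abstract mentions, simply means comparing the query against each and keeping the best score.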
Third, we propose a novel Bayesian Hierarchical Appearance Model (BHAM) for robust object tracking. Our idea is to model the appearance of a target as a combination of multiple appearance models, each covering the target's appearance changes under a certain situation (e.g. view angle). In addition, we built an object tracking system by integrating BHAM with background subtraction and the KLT tracker for static camera videos. For moving camera videos, we applied BHAM to cluster negative and positive target instances.
As tracking accuracy depends mainly on finding good discriminative features to estimate the target location, finally, we propose to learn good features for generic object tracking using online convolutional neural networks (OCNN). In order to learn discriminative and stable features for tracking, we propose a novel objective function to train the OCNN by penalizing the feature variations in consecutive frames; the tracker is built by integrating the OCNN with a
color-based multi-appearance model.
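The temporal-stability penalty on OCNN features, penalizing feature variation between consecutive frames, can be sketched as a mean-squared-difference term added to the tracking objective. The weighting `lam` is an illustrative hyperparameter and the exact form of the dissertation's objective is an assumption:

```python
import numpy as np

def temporal_feature_loss(feats, lam=0.1):
    """Penalty on feature variation across consecutive frames.

    feats: (T, D) array of feature vectors produced by the network
    for T consecutive frames. Returns lam times the mean squared
    difference between successive frames' features, encouraging
    features that stay stable while the target's identity does.
    """
    diffs = feats[1:] - feats[:-1]        # frame-to-frame feature changes
    return lam * float(np.mean(diffs ** 2))

feats = np.array([[1.0, 2.0],
                  [1.0, 2.0],
                  [2.0, 2.0]])
loss = temporal_feature_loss(feats)
```

In training, this term would be added to the discriminative tracking loss, so the network trades off locating the target against keeping its features smooth over time.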
Our experimental results on real-world videos show that our tracking systems have superior performance when compared with several state-of-the-art trackers. In the future, we plan to apply the Bayesian Hierarchical Appearance Model (BHAM) to multiple-object tracking.