
    Multi-Target Tracking and Occlusion Handling with Learned Variational Bayesian Clusters and a Social Force Model

    This paper considers the problem of tracking multiple humans in video. A solution is proposed that can deal with the challenges of a varying number of targets, occlusions and interactions when every target gives rise to multiple measurements. The developed novel framework combines a variational Bayesian clustering approach with a social force model. It can handle a time-varying number of targets and copes with complex inter-target occlusions by maintaining the identities of targets during their close physical interactions. It performs measurement-to-target association by automatically detecting measurement relevance. A variational Bayesian clustering technique clusters the measurements and thereby provides an elegant solution to the measurement-origin uncertainty problem. A particle filter is developed in which the clustering algorithm and the social force model enhance the prediction step. The performance of the proposed framework is evaluated on several sequences from publicly available data sets (AV16.3, CAVIAR and PETS2006), which demonstrate that the framework successfully initializes and tracks a variable number of human targets in the presence of complex occlusions. The proposed algorithm is compared with state-of-the-art techniques due to Khan et al., Laet et al. and Czyz et al. and is shown to achieve improved tracking performance.
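The social-force-enhanced prediction step described above can be sketched as follows. This is a minimal illustration of a Helbing-style exponential repulsion between targets, not the paper's actual model; the function names and the parameters `a` and `b` are hypothetical.

```python
import numpy as np

def social_force(positions, a=2.0, b=0.5):
    """Pairwise repulsive social forces between targets.

    positions: (N, 2) array of target locations.
    a (magnitude) and b (range) are illustrative values, not taken
    from the paper. Returns an (N, 2) array of force vectors.
    """
    n = positions.shape[0]
    forces = np.zeros_like(positions, dtype=float)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]   # vector pointing away from j
            dist = np.linalg.norm(diff) + 1e-9   # avoid division by zero
            # exponentially decaying repulsion along the separation direction
            forces[i] += a * np.exp(-dist / b) * diff / dist
    return forces

def predict(positions, velocities, dt=0.1):
    """One particle-prediction step: constant velocity plus a
    social-force correction that pushes close targets apart."""
    f = social_force(positions)
    new_vel = velocities + dt * f
    return positions + dt * new_vel, new_vel
```

Because the force decays exponentially with distance, well-separated targets move with nearly constant velocity while close pairs repel, which is what lets the tracker keep identities apart during close interactions.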

    Enhanced particle PHD filtering for multiple human tracking

    PhD Thesis. Video-based single human tracking has found wide application, but multiple human tracking is more challenging and enhanced processing techniques are required to estimate the positions and number of targets in each frame. In this thesis, the particle probability hypothesis density (PHD) filter is therefore the focus, due to its ability to estimate both localization and cardinality information for multiple human targets. To improve the tracking performance of the particle PHD filter, a number of enhancements are proposed. The Student's-t distribution is employed within the state and measurement models of the PHD filter to replace the Gaussian distribution because of its heavier tails, and thereby better predict particles with larger amplitudes. Moreover, the variational Bayesian approach is utilized to estimate the relationship between the measurement noise covariance matrix and the state model, and a joint multi-dimensional Student's-t distribution is exploited. In order to obtain more observable measurements, a backward retrodiction step is employed to increase the measurement set, building upon the concept of a smoothing algorithm. To make further improvement, an adaptive step is used to combine the forward filtering and backward retrodiction filtering operations through the similarities of measurements achieved over discrete time. As such, the errors in delayed measurements generated by false alarms and environment noise are avoided. In the final work, information describing human behaviour, captured in a social force model, is employed to aid particle sampling in the prediction step of the particle PHD filter. A novel social force model is proposed based on the exponential function. Furthermore, a Markov chain Monte Carlo (MCMC) step is utilized to resample the predicted particles, and the acceptance ratio is calculated from the results of the social force model to achieve more robust prediction. Then, a one-class support vector machine (OCSVM), trained on human features, is applied in the measurement model of the PHD filter to mitigate noise from the environment and to achieve better tracking performance. The proposed improvements of the particle PHD filter are evaluated on benchmark datasets such as CAVIAR, PETS2009 and TUD, assessed with quantitative and global evaluation measures, and compared with state-of-the-art techniques to confirm the improvement in multiple human tracking performance.
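The use of Student's-t process noise in the particle PHD filter's prediction step can be illustrated with a minimal sketch. The constant-velocity state model and all parameter values here are assumptions for illustration, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_particles_t(particles, dt=1.0, df=3.0, scale=0.5):
    """Prediction step with Student's-t process noise.

    The t distribution's heavier tails (compared with a Gaussian)
    occasionally produce large perturbations, so particles better
    cover abrupt target manoeuvres.

    particles: (N, 4) array of [x, y, vx, vy] states.
    df (degrees of freedom) and scale are illustrative values.
    """
    # constant-velocity transition matrix
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    noise = scale * rng.standard_t(df, size=particles.shape)
    return particles @ F.T + noise
```

As `df` grows the t distribution approaches a Gaussian, so the degrees-of-freedom parameter directly controls how much probability mass is placed on large, manoeuvre-like jumps.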

    Generic multiple object tracking

    Multiple object tracking is an important problem in the computer vision community due to its applications, including but not limited to visual surveillance, crowd behavior analysis and robotics. The difficulties of this problem lie in several challenges such as frequent occlusion, interaction and high-degree articulation. In recent years, data-association-based approaches have been successful in tracking multiple pedestrians on top of specific kinds of object detectors, and are thus type-specific. This may constrain their application in scenarios where type-specific object detectors are unavailable. In view of this, I investigate in this thesis tracking multiple objects without ready-to-use, type-specific object detectors. More specifically, the problem of multiple object tracking is generalized to tracking targets of a generic type: objects to be tracked are no longer constrained to be of a specific kind. This problem is termed Generic Multiple Object Tracking (GMOT) and is handled by three approaches presented in this thesis. In the first approach, a generic object detector is learned based on manual annotation of only one initial bounding box. The detector is then employed to regularize the online learning procedure of multiple trackers, one specialized to each object. More specifically, multiple trackers are learned simultaneously with shared features and are guided to keep close to the detector. Experimental results show considerable improvement on this problem compared with state-of-the-art methods. The second approach treats detection and tracking of multiple generic objects as a bi-label propagation procedure, which consists of class label propagation (detection) and object label propagation (tracking). In particular, clustered Multi-Task Learning (cMTL) is employed along with spatio-temporal consistency to address the online detection problem. The tracking problem is addressed by associating existing trajectories with new detection responses, considering appearance, motion and context information. The advantages of this approach are verified by extensive experiments on several public data sets. The two aforementioned approaches handle GMOT in an online manner. In contrast, a batch method is proposed in the third work. It dynamically clusters given detection hypotheses into groups corresponding to individual objects. Inspired by the success of topic models in tackling textual tasks, a Dirichlet Process Mixture Model (DPMM) is utilized to address the tracking problem in cooperation with so-called must-links and cannot-links, which are proposed to avoid physical collision. Moreover, two kinds of representations, superpixels and the Deformable Part Model (DPM), are introduced to track both rigid and non-rigid objects. The effectiveness of the proposed method is demonstrated with experiments on public data sets.
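As a rough illustration of clustering detection hypotheses into a data-driven number of objects, the sketch below uses DP-means, a hard-assignment approximation to DPMM inference. The thesis's actual inference and its must-link/cannot-link constraints are omitted, and the threshold `lam` is a hypothetical parameter.

```python
import numpy as np

def dp_means(points, lam):
    """DP-means: a hard-assignment approximation to DPMM clustering.

    A detection farther than lam from every existing cluster centre
    spawns a new cluster, so the number of clusters (objects) need
    not be fixed in advance. A single sequential pass is shown for
    brevity; real DP-means iterates to convergence.
    """
    centers = [points[0].copy()]
    labels = []
    for p in points:
        d = [np.linalg.norm(p - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] > lam:
            centers.append(p.copy())   # open a new cluster for this point
            j = len(centers) - 1
        labels.append(j)
        # recompute the assigned centre as the mean of its members
        members = [q for q, l in zip(points[:len(labels)], labels) if l == j]
        centers[j] = np.mean(members, axis=0)
    return np.array(labels), np.array(centers)
```

The single threshold plays the role of the DP concentration parameter: lowering it creates more clusters, which mirrors how the DPMM lets the data decide how many objects are present.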

    Graphical models for visual object recognition and tracking

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 277-301). We develop statistical methods which allow effective visual detection, categorization, and tracking of objects in complex scenes. Such computer vision systems must be robust to wide variations in object appearance, the often small size of training databases, and ambiguities induced by articulated or partially occluded objects. Graphical models provide a powerful framework for encoding the statistical structure of visual scenes, and developing corresponding learning and inference algorithms. In this thesis, we describe several models which integrate graphical representations with nonparametric statistical methods. This approach leads to inference algorithms which tractably recover high-dimensional, continuous object pose variations, and learning procedures which transfer knowledge among related recognition tasks. Motivated by visual tracking problems, we first develop a nonparametric extension of the belief propagation (BP) algorithm. Using Monte Carlo methods, we provide general procedures for recursively updating particle-based approximations of continuous sufficient statistics. Efficient multiscale sampling methods then allow this nonparametric BP algorithm to be flexibly adapted to many different applications. As a particular example, we consider a graphical model describing the hand's three-dimensional (3D) structure, kinematics, and dynamics. This graph encodes global hand pose via the 3D position and orientation of several rigid components, and thus exposes local structure in a high-dimensional articulated model. Applying nonparametric BP, we recover a hand tracking algorithm which is robust to outliers and local visual ambiguities. Via a set of latent occupancy masks, we also extend our approach to consistently infer occlusion events in a distributed fashion. In the second half of this thesis, we develop methods for learning hierarchical models of objects, the parts composing them, and the scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. Adapting these transformed Dirichlet processes to images taken with a binocular stereo camera, we learn integrated, 3D models of object geometry and appearance. This leads to a Monte Carlo algorithm which automatically infers 3D scene structure from the predictable geometry of known object categories. by Erik B. Sudderth. Ph.D.
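The particle-based message updates underlying nonparametric BP can be caricatured with importance weighting and resampling. This sketch omits the kernel smoothing and multiscale sampling the thesis develops; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_message(x_samples, y_samples, pair_potential, likelihood):
    """One caricature of a nonparametric BP message update.

    Candidate particles for node y are scored by their average
    pairwise compatibility with the particles of neighbour x and by
    the local evidence, then resampled in proportion to that score.
    The kernel-density smoothing of the true message is omitted.
    """
    w = np.array([
        likelihood(y) * np.mean([pair_potential(x, y) for x in x_samples])
        for y in y_samples
    ])
    w = w / w.sum()                                  # normalize to a distribution
    idx = rng.choice(len(y_samples), size=len(y_samples), p=w)
    return y_samples[idx]                            # resampled particle set
```

Resampling concentrates y's particles in regions compatible with both its neighbour and its own evidence, which is the essential behaviour that lets particle-based BP track continuous, high-dimensional pose variables.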

    Pedestrian Attribute Recognition: A Survey

    Recognizing pedestrian attributes is an important task in the computer vision community because it plays an important role in video surveillance. Many algorithms have been proposed to handle this task. The goal of this paper is to review existing works, whether using traditional methods or based on deep learning networks. Firstly, we introduce the background of pedestrian attribute recognition (PAR, for short), including the fundamental concepts of pedestrian attributes and the corresponding challenges. Secondly, we introduce existing benchmarks, including popular datasets and evaluation criteria. Thirdly, we analyse the concepts of multi-task learning and multi-label learning, and explain the relations between these two learning paradigms and pedestrian attribute recognition. We also review some popular network architectures that have been widely applied in the deep learning community. Fourthly, we analyse popular solutions for this task, such as attribute grouping, part-based models, \emph{etc}. Fifthly, we show some applications which take pedestrian attributes into consideration and achieve better performance. Finally, we summarize this paper and give several possible research directions for pedestrian attribute recognition. The project page of this paper can be found at \url{https://sites.google.com/view/ahu-pedestrianattributes/}.
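The multi-label formulation the survey discusses is typically trained with one independent sigmoid per attribute and a binary cross-entropy loss. A minimal numpy sketch, not tied to any specific PAR model, is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_label_bce(logits, targets, eps=1e-9):
    """Per-attribute binary cross-entropy.

    Pedestrian attribute recognition is usually posed as multi-label
    classification: one independent sigmoid per attribute (e.g.
    gender, backpack, hat), so several attributes can be active for
    the same pedestrian at once.

    logits, targets: (batch, num_attributes) arrays; targets in {0, 1}.
    """
    p = sigmoid(logits)
    loss = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    return loss.mean()
```

This is what distinguishes the multi-label view from plain multi-class classification: a softmax would force the attributes to compete, whereas independent sigmoids let each attribute be predicted on its own.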

    Human Trajectory Prediction for Robot Navigation

    Our lives are becoming increasingly influenced by robots. They are no longer limited to working in factories and increasingly appear in spaces shared with humans, to deliver goods and parcels, ferry medications, or keep elderly people company. Therefore, they need to perceive, analyze, and predict the behavior of surrounding people and take collision-free, socially acceptable actions. In this thesis, we address the problem of (short-term) human trajectory prediction, to enable mobile robots, such as Pepper, to navigate crowded environments. We propose a novel socially-aware approach for the prediction of multiple pedestrians. Our model is designed and trained based on Generative Adversarial Networks, which learn the multi-modal distribution of plausible predictions for each pedestrian. Additionally, we use a modified version of this model to perform data-driven crowd simulation. Predicting the location of occluded pedestrians is another problem discussed in this dissertation. We also carried out a study on common human trajectory datasets, and a list of quantitative metrics is suggested to assess prediction complexity in those datasets.
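Quantitative evaluation of trajectory predictors conventionally relies on Average and Final Displacement Error (ADE/FDE). A minimal sketch of these standard metrics (not the complexity metrics the thesis proposes) is:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error, the standard metrics for
    short-term human trajectory prediction.

    pred, gt: (T, 2) arrays of predicted and ground-truth positions
    over T future time steps. Returns (ADE, FDE) in the same units
    as the coordinates (typically metres).
    """
    d = np.linalg.norm(pred - gt, axis=1)   # per-step Euclidean error
    return d.mean(), d[-1]
```

For multi-modal models such as GANs, the convention is to sample several predictions per pedestrian and report the minimum ADE/FDE over the samples, which rewards covering the distribution of plausible futures.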

    WATCHING PEOPLE: ALGORITHMS TO STUDY HUMAN MOTION AND ACTIVITIES

    Nowadays human motion analysis is one of the most active research topics in computer vision, receiving increasing attention from both the industrial and scientific communities. The growing interest in human motion analysis is motivated by the increasing number of promising applications, ranging from surveillance, human–computer interaction and virtual reality to healthcare, sports, computer games and video conferencing, just to name a few. The aim of this thesis is to give an overview of the various tasks involved in visual motion analysis of the human body and to present the related issues and possible solutions. In this thesis, visual motion analysis is categorized into three major areas related to the interpretation of human motion: tracking of human motion using a virtual pan-tilt-zoom (vPTZ) camera, recognition of human actions, and segmentation of human behaviors. In the field of human motion tracking, a virtual environment for PTZ cameras (vPTZ) is presented to overcome the mechanical limitations of PTZ cameras. The vPTZ is built on equirectangular images acquired by 360° cameras and allows not only the development of pedestrian tracking algorithms but also the comparison of their performance. On the basis of this virtual environment, three novel pedestrian tracking algorithms for 360° cameras were developed, two of which adopt a tracking-by-detection approach while the third adopts a Bayesian approach. The action recognition problem is addressed by an algorithm that represents actions in terms of multinomial distributions over frequent sequential patterns of different lengths. Frequent sequential patterns are series of data descriptors that occur many times in the data. The proposed method learns a codebook of frequent sequential patterns by means of an apriori-like algorithm; an action is then represented with a Bag-of-Frequent-Sequential-Patterns approach. In the last part of this thesis a methodology to semi-automatically annotate behavioral data, given a small set of manually annotated data, is presented. The resulting methodology is not only effective in the semi-automated annotation task but can also be used in the presence of abnormal behaviors, as demonstrated empirically by testing the system on data collected from children affected by neuro-developmental disorders.
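The Bag-of-Frequent-Sequential-Patterns idea can be approximated with a much simpler scheme: counting contiguous descriptor n-grams and keeping the frequent ones. This sketch stands in for the thesis's apriori-like miner (which handles general sequential patterns, not just contiguous ones); names and thresholds are hypothetical.

```python
from collections import Counter

def frequent_patterns(sequences, length, min_support):
    """Build a codebook of frequent patterns.

    Counts contiguous descriptor n-grams across all training
    sequences and keeps those occurring at least min_support times:
    a simplified stand-in for an apriori-style frequent-sequential-
    pattern miner.
    """
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def bag_of_patterns(seq, codebook, length):
    """Represent one action clip as occurrence counts over the
    codebook (the Bag-of-Frequent-Sequential-Patterns histogram)."""
    vec = {p: 0 for p in codebook}
    for i in range(len(seq) - length + 1):
        p = tuple(seq[i:i + length])
        if p in vec:
            vec[p] += 1
    return vec
```

The resulting histogram is a fixed-length representation regardless of clip duration, which is what makes it usable as input to a standard classifier for action recognition.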

    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to deal with the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while being robust to false alarms and spatially inaccurate detections. We first present a motion segmentation framework that exploits the long-range motion of point trajectories and the large spatial support of image regions. We show the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical-flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations. We use the motion segmentability of body parts to re-rank a set of body-joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.