355 research outputs found

    An Accelerated Correlation Filter Tracker

    Recent visual object tracking methods have witnessed continuous improvement in the state of the art with the development of efficient discriminative correlation filters (DCF) and robust deep neural network features. Despite the outstanding performance achieved by this combination, existing advanced trackers suffer from the high computational complexity of deep feature extraction and online model learning. We propose an accelerated ADMM optimisation method obtained by adding a momentum term to the optimisation sequence iterates and by relaxing the impact of the error between the DCF parameters and their norm. The proposed optimisation method is applied to an innovative formulation of the DCF design, which seeks the most discriminative spatially regularised feature channels. A further speed-up is achieved by an adaptive initialisation of the filter optimisation process. The significantly faster convergence of the DCF is demonstrated by establishing the equivalence of the optimisation process with a continuous dynamical system whose convergence properties can readily be derived. Experimental results obtained on several well-known benchmarking datasets demonstrate the efficiency and robustness of the proposed ACFT method, with a tracking accuracy comparable to that of state-of-the-art trackers.
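The momentum idea behind the acceleration can be illustrated on a toy quadratic objective, a stand-in for the regularised DCF least-squares problem. This is a minimal sketch of Nesterov-style extrapolation of the iterates, not the paper's actual ADMM; all names, constants and the problem itself are illustrative.

```python
import numpy as np

# Toy strongly convex objective f(x) = 0.5 x^T A x - b^T x, a stand-in
# for the regularised DCF least-squares problem (values are illustrative).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A = M.T @ M + 0.1 * np.eye(20)          # symmetric positive definite
b = rng.standard_normal(20)
L = np.linalg.eigvalsh(A).max()         # Lipschitz constant of the gradient

def grad(x):
    return A @ x - b

def solve(momentum, tol=1e-8, max_iter=5000):
    """Gradient iterations; with momentum=True, add Nesterov-style
    extrapolation of the iterates, i.e. the 'momentum on the sequence
    iterates' device used to accelerate the optimisation."""
    x = np.zeros(20)
    y = x.copy()                         # extrapolated point
    t = 1.0
    for k in range(1, max_iter + 1):
        x_new = y - grad(y) / L          # gradient step at the extrapolated point
        if momentum:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum term
            t = t_new
        else:
            y = x_new
        if np.linalg.norm(x_new - x) < tol:
            return k
        x = x_new
    return max_iter

iters_plain = solve(momentum=False)
iters_accel = solve(momentum=True)
print(iters_plain, iters_accel)          # iteration counts to reach the tolerance
```

On this toy problem the momentum variant reaches the stopping tolerance in noticeably fewer iterations, mirroring the faster convergence the abstract establishes via the dynamical-system argument.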

    BTLD+:A BAYESIAN APPROACH TO TRACKING LEARNING DETECTION BY PARTS

    The contribution proposed in this thesis focuses on this particular instance of the visual tracking problem, referred to as Adaptive Appearance Tracking. We proposed different approaches based on the Tracking Learning Detection (TLD) decomposition proposed in [55]. TLD decomposes visual tracking into three components, namely the tracker, the learner and the detector. The tracker and the detector are two competitive processes for target localization based on complementary sources of information. The former searches for local features between consecutive frames in order to localize the target; the latter exploits an on-line appearance model to detect confident hypotheses over the entire image. The learner selects the final solution among the provided hypotheses. It updates the target appearance model and, if necessary, reinitializes the tracker and bootstraps the detector's appearance model. In particular, we investigated different approaches to enforce the stability of TLD. First, we replaced the tracker component with a novel one based on MCMC particle filtering; afterwards, we proposed a robust appearance modeling component able to characterize deformable objects in static images; finally, we integrated a component able to incorporate local visual feature learning into the whole approach, leading to a coupled layered representation of the target appearance.
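The TLD decomposition described above can be sketched as a minimal control loop. The three components are stubbed with toy logic, and the fusion rule (picking the more confident hypothesis) is a deliberate simplification of the learner's actual role; all names are illustrative.

```python
# Minimal skeleton of the Tracking-Learning-Detection (TLD) decomposition.
# The three components are stubs; the real system uses local features,
# an online appearance model and P-N learning in their place.

class Tracker:
    """Frame-to-frame localisation from local features (stub)."""
    def __init__(self, box):
        self.box = box
    def track(self, frame):
        x, y, w, h = self.box
        return (x + 1, y, w, h), 0.6        # (hypothesis, confidence): drifts right

class Detector:
    """Whole-image detection from an online appearance model (stub)."""
    def __init__(self, model_box):
        self.model_box = model_box
    def detect(self, frame):
        return self.model_box, 0.8          # (hypothesis, confidence)

class Learner:
    """Selects the final solution among the provided hypotheses; model
    updates and bootstrapping of the detector are elided here."""
    def fuse(self, tracked, detected):
        return max([tracked, detected], key=lambda hyp_conf: hyp_conf[1])

tracker = Tracker(box=(10, 10, 20, 20))
detector = Detector(model_box=(12, 10, 20, 20))
learner = Learner()

for frame in range(3):                      # dummy frame loop
    t_hyp = tracker.track(frame)
    d_hyp = detector.detect(frame)
    box, conf = learner.fuse(t_hyp, d_hyp)
    tracker.box = box                       # re-initialise the tracker on the fused box
print(box, conf)
```

Here the detector's more confident hypothesis wins each frame and re-initialises the tracker, illustrating how the two competitive localisation processes interact through the learner.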

    Object Tracking and Mensuration in Surveillance Videos

    This thesis focuses on tracking and mensuration in surveillance videos. The first part of the thesis discusses several object tracking approaches based on the different properties of the tracking targets. For airborne videos, where the targets are usually small and of low resolution, an approach that builds motion models for the foreground and background is proposed, in which the foreground target is simplified as a rigid object. For relatively high-resolution targets, non-rigid models are applied. An active contour-based algorithm is introduced that decomposes tracking into three parts: estimating the affine transform parameters between successive frames using particle filters; detecting the contour deformation using a probabilistic deformation map; and regulating the deformation by projecting the updated model onto a trained shape subspace. An active appearance Markov chain (AAMC) is also proposed, integrating statistical models of shape, appearance and motion. In the AAMC model, a Markov chain represents the switching of motion phases (poses), and several pairwise active appearance model (P-AAM) components characterize the shape, appearance and motion information for the different motion phases. The second part of the thesis covers video mensuration, for which we have proposed a height-measuring algorithm with less human supervision, more flexibility and improved robustness. From videos acquired by an uncalibrated stationary camera, we first recover the vanishing line and the vertical vanishing point of the scene. We then apply a single-view mensuration algorithm to each of the frames to obtain height measurements. Finally, using the least median of squares (LMedS) as the cost function, we apply the Robbins-Monro stochastic approximation (RMSA) technique to obtain the optimal estimate.
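The Robbins-Monro step used for the final height estimate can be sketched on a simpler instance: recovering a constant height from noisy per-frame measurements. The measurement model, values and step-size schedule below are illustrative, not the thesis's actual LMedS formulation.

```python
import random

# Minimal sketch of Robbins-Monro stochastic approximation (RMSA):
# find the root theta of E[g(theta, Y)] = 0 with g = theta - Y, i.e.
# recover the underlying height from noisy per-frame measurements.
# TRUE_HEIGHT and the noise level are illustrative demo values.

random.seed(0)
TRUE_HEIGHT = 1.75  # metres (assumed ground truth for the demo)

def noisy_measurement():
    # a per-frame single-view height measurement corrupted by noise
    return TRUE_HEIGHT + random.gauss(0.0, 0.05)

theta = 0.0                             # initial estimate
for n in range(1, 5001):
    a_n = 1.0 / n                       # steps with sum a_n = inf, sum a_n^2 < inf
    y = noisy_measurement()
    theta = theta - a_n * (theta - y)   # Robbins-Monro update

print(round(theta, 2))
```

With the 1/n step size this recursion is exactly the running mean of the measurements, so the estimate settles close to the underlying height; the thesis replaces the squared-error objective with an LMedS cost for robustness to outlier frames.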

    Visual Tracking Algorithms using Different Object Representation Schemes

    Visual tracking, being one of the fundamental, most important and challenging areas in computer vision, has attracted much attention in the research community during the past decade due to its broad range of real-life applications. Even after three decades of research, it still remains a challenging problem in view of the complexities involved in the target search due to intrinsic and extrinsic appearance variations of the object. Existing trackers fail to track the object when there is a considerable amount of appearance variation and when the object undergoes severe occlusion, scale change, out-of-plane rotation, motion blur, fast motion, in-plane rotation, out-of-view motion or illumination variation, either individually or simultaneously. In order to obtain reliable and improved tracking performance, the appearance variations should be handled carefully: the appearance model should adapt to the intrinsic appearance variations and be robust enough against the extrinsic ones. The objective of this thesis is to develop visual object tracking algorithms that address the deficiencies of existing algorithms and enhance the tracking performance, by investigating different object representation schemes to model the object appearance and then devising mechanisms to update the observation models. First, a tracking algorithm based on a global appearance model using robust coding, and its collaboration with a local model, is proposed. A global PCA subspace is used to model the global appearance of the object, and the optimum PCA basis coefficients and the global weight matrix are estimated by developing an iteratively reweighted robust coding (IRRC) technique. This global model is combined with the local model to exploit their individual merits.
Global and local robust coding distances are introduced to find the candidate sample having an appearance similar to that of the sample reconstructed from the subspace, and these distances are used to define the observation likelihood. A robust occlusion map generation scheme and a mechanism to update both the global and local observation models are developed. Quantitative and qualitative performance evaluations on OTB-50 and VOT2016, two popular benchmark datasets, demonstrate that the proposed algorithm with histogram of oriented gradients (HOG) features generally performs better than the state-of-the-art methods considered. In spite of its good performance, there is a need to improve the tracking performance for some of the challenging attributes of OTB-50 and VOT2016. A second tracking algorithm is developed to provide improved performance for the above-mentioned challenging attributes. The algorithm is designed based on a structural local 2DDCT sparse appearance model and an occlusion handling mechanism. In the structural local 2DDCT sparse appearance model, the energy compaction property of the transform is exploited to reduce the size of the dictionary as well as that of the candidate samples in the object representation, so that the computational cost of the l_1-minimization used can be reduced. This strategy is in contrast to the existing models that use raw pixels. A holistic image reconstruction procedure is presented from the overlapped local patches obtained from the dictionary and the sparse codes, and the reconstructed holistic image is then used for robust occlusion detection and occlusion map generation. The occlusion map thus obtained is used for developing a novel observation model update mechanism to avoid model degradation. A patch occlusion ratio is employed in the calculation of the confidence score to improve the tracking performance.
Quantitative and qualitative performance evaluations on the two above-mentioned benchmark datasets demonstrate that this second proposed tracking algorithm generally performs better than several state-of-the-art methods and than the first proposed tracking method. Despite the improved performance of this second algorithm, there are still some challenging attributes of OTB-50 and VOT2016 for which the performance needs to be improved. Finally, a third tracking algorithm is proposed by developing a scheme for collaboration between discriminative and generative appearance models. The discriminative model is used to estimate the position of the target, and a new generative model is used to find the remaining affine parameters of the target. In the generative model, robust coding is extended to two dimensions and employed in the bilateral two-dimensional PCA (2DPCA) reconstruction procedure to handle non-Gaussian or non-Laplacian residuals by developing an IRRC technique. A 2D robust coding distance is introduced to differentiate the candidate sample from the one reconstructed from the subspace, and is used to compute the observation likelihood in the generative model. A method of generating a robust occlusion map from the weights obtained during the IRRC technique and a novel update mechanism of the observation model, for both the kernelized correlation filters and the bilateral 2DPCA subspace, are developed. Quantitative and qualitative performance evaluations on the two datasets demonstrate that this algorithm with HOG features generally outperforms the state-of-the-art methods and the other two proposed algorithms for most of the challenging attributes.
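The IRRC idea that recurs in all three algorithms — re-estimating coding coefficients under residual-dependent weights so that outlier pixels such as occlusions are down-weighted — can be sketched as classical iteratively reweighted least squares. The dictionary, weight function and scale estimate below are illustrative toy choices, not the thesis's exact formulation.

```python
import numpy as np

# Toy robust coding: recover coefficients c with y = D c on clean pixels,
# while the first 10 entries of y are corrupted (simulated occlusion).
rng = np.random.default_rng(1)
D = rng.standard_normal((100, 10))          # toy dictionary (basis)
c_true = rng.standard_normal(10)
y = D @ c_true
y[:10] += 50.0                              # occluded pixels = gross outliers

c = np.linalg.lstsq(D, y, rcond=None)[0]    # ordinary least-squares start
for _ in range(20):
    r = y - D @ c                                   # residuals
    sigma = np.median(np.abs(r)) / 0.6745 + 1e-9    # robust scale estimate
    w = 1.0 / (1.0 + (r / (2.0 * sigma)) ** 2)      # Cauchy-style weights
    Dw = D * w[:, None]                             # rows scaled by weights
    c = np.linalg.solve(D.T @ Dw, Dw.T @ y)         # weighted least-squares update

print(np.linalg.norm(c - c_true))           # residual error after reweighting
```

As the reweighting iterates, the occluded entries receive near-zero weight and the coefficients converge to the clean solution; in the tracker, the same weights also supply the occlusion map used to gate model updates.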

    Adaptive Kernel Density Approximation and Its Applications to Real-Time Computer Vision

    Density-based modeling of visual features is very common in computer vision research due to the uncertainty of observed data, so an accurate and simple density representation is essential to improve the quality of the overall system. Even though various methods, either parametric or non-parametric, have been proposed for density modeling, there is a significant trade-off between flexibility and computational complexity. Therefore, a new compact and flexible density representation is necessary, and this dissertation provides a solution that alleviates these problems as follows. First, we describe a compact and flexible representation of probability density functions using a mixture of Gaussians, called Kernel Density Approximation (KDA). In this framework, the number of Gaussian components as well as the weight, mean and covariance of each Gaussian component are determined automatically by a mean-shift mode-finding procedure and curvature fitting. An original density function estimated by kernel density estimation is simplified into a compact mixture of Gaussians by the proposed method; memory requirements are dramatically reduced while incurring only a small amount of error. In order to adapt to variations of visual features, a sequential kernel density approximation is proposed, in which a sequential update of the density function is performed in linear time. Second, kernel density approximation is incorporated into a Bayesian filtering framework to design a Kernel-based Bayesian Filter (KBF). Particle filters have inherent limitations such as degeneracy or loss of diversity, which are mainly caused by sampling from a discrete proposal distribution. In kernel-based Bayesian filtering, every relevant probability density function is continuous, and the posterior is simplified by kernel density approximation so as to propagate a compact form of the density function from step to step.
Since the proposal distribution is continuous in this framework, the problems of conventional particle filters are alleviated. The sequential kernel density approximation technique is naturally applied to background modeling and to target appearance modeling for object tracking. The kernel-based Bayesian filtering framework is also applied to object tracking, where it shows improved performance with a smaller number of samples. We demonstrate the performance of kernel density approximation and its applications through various simulations and experiments with real videos.
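The mean-shift mode-finding step that KDA relies on to place Gaussian components can be sketched in one dimension: ascend the kernel density estimate from each sample and collect the distinct fixed points, which become the component means. The data, bandwidth and subsampling below are illustrative.

```python
import numpy as np

# Toy data with two underlying modes; a Gaussian KDE over it is the
# density that mean-shift ascends (bandwidth h is illustrative).
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-3.0, 0.4, 200),
                       rng.normal( 2.0, 0.5, 300)])
h = 0.5                                               # kernel bandwidth

def mean_shift(x, tol=1e-6, max_iter=500):
    """Ascend the Gaussian-KDE surface from x to its local mode."""
    for _ in range(max_iter):
        w = np.exp(-0.5 * ((data - x) / h) ** 2)      # Gaussian kernel weights
        x_new = np.sum(w * data) / np.sum(w)          # weighted mean = shifted point
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Run mean-shift from a subsample of starting points; distinct fixed
# points (up to rounding) are the detected modes.
modes = {round(mean_shift(x0), 1) for x0 in data[::25]}
print(sorted(modes))
```

All starts within the same basin of attraction converge to the same fixed point, so the set of rounded fixed points recovers the two modes; in KDA each such mode seeds a Gaussian component whose covariance comes from curvature fitting.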

    Robust Face Tracking in Video Sequences

    This work presents a detailed analysis and discussion of a novel face tracking system that utilizes multiple appearance models together with a tracking-by-detection framework, and that can aid a video-based face recognition system by providing the face locations (regions of interest, ROIs) of specific individuals for every frame. A face recognition system can use the ROIs provided by the face tracker to accumulate evidence of a person being present in a video, in order to identify a person of interest already enrolled in the face recognition system. The primary task of a face tracker is to find the location of a face present in an image by using its location information from the previous frame. The search is done by finding the region that maximizes the possibility of a face being present in the frame, by comparing each region with a face appearance model. However, during this search, several external factors inhibit the performance of a face tracker. These external factors are termed tracking nuisances, and usually appear in the form of illumination variation, background clutter, motion blur, partial occlusion, etc. Thus, the main challenge for a face tracker is to find the best region in spite of the frequent appearance changes of the face during the tracking process. Since it is not possible to control these nuisances, robust face appearance models are designed and developed such that they are less affected by the nuisances and can still track a face successfully in such scenarios. Although a single face appearance model can be used for tracking a face, it cannot tackle all the tracking nuisances. Hence, the proposed method utilizes multiple face appearance models, so that different appearance models can facilitate tracking in the presence of different tracking nuisances. In addition, the proposed method incorporates the tracking-by-detection methodology by employing a face detector that outputs a bounding box for every frame. The face detector thus aids the face tracker in tackling the tracking nuisances, and also aids in the re-initialization of the tracker after tracking drift. Finally, the precision of the tracker is further improved by generating face candidates around the face tracking output and choosing the best among them. Thus, in the proposed method, face tracking is formulated as selecting the face candidate that maximizes the similarity over all the appearance models.
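The final formulation — selecting the face candidate that maximises the similarity over all appearance models — can be sketched as follows. The two toy "models", their similarity functions and the product fusion rule are illustrative stand-ins for the thesis's actual appearance models.

```python
import math

# Toy appearance models: each returns a similarity score in (0, 1].
def colour_model(candidate):
    return 1.0 / (1.0 + abs(candidate[0] - 100))   # prefers candidates near x = 100

def texture_model(candidate):
    return 1.0 / (1.0 + abs(candidate[1] - 50))    # prefers candidates near y = 50

models = [colour_model, texture_model]

def generate_candidates(box, radius=2):
    """Face candidates on a small grid around the tracker's output box."""
    x, y, w, h = box
    return [(x + dx, y + dy, w, h)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)]

def best_candidate(tracker_box):
    """Pick the candidate maximising the combined (product) similarity
    over all appearance models."""
    return max(generate_candidates(tracker_box),
               key=lambda c: math.prod(m(c) for m in models))

print(best_candidate((99, 51, 30, 30)))
```

Starting from a slightly off tracker output, the candidate grid contains the box both models agree on, and the product fusion selects it; any model-fusion rule (sum, weighted product, etc.) slots into the same `key` function.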

    Selected topics in video coding and computer vision

    Video applications ranging from multimedia communication to computer vision have been extensively studied in the past decades. However, the emergence of new applications continues to raise questions that are only partially answered by existing techniques. This thesis studies three selected topics related to video: intra prediction in block-based video coding, pedestrian detection and tracking in infrared imagery, and multi-view video alignment. In the state-of-the-art video coding standard H.264/AVC, intra prediction is defined on a hierarchical quad-tree based block partitioning structure, which fails to exploit the geometric constraints of edges. We propose a geometry-adaptive block partitioning structure and a new intra prediction algorithm named geometry-adaptive intra prediction (GAIP). A new texture prediction algorithm named geometry-adaptive intra displacement prediction (GAIDP) is also developed by extending the original intra displacement prediction (IDP) algorithm with the geometry-adaptive block partitions. Simulations on various test sequences demonstrate that the intra coding performance of H.264/AVC can be significantly improved by incorporating the proposed geometry-adaptive algorithms. In recent years, due to the decreasing cost of thermal sensors, pedestrian detection and tracking in infrared imagery has become a topic of interest for night vision and all-weather surveillance applications. We propose a novel approach for detecting and tracking pedestrians in infrared imagery based on a layered representation of infrared images. Pedestrians are detected from the foreground layer by a Principal Component Analysis (PCA) based scheme using the appearance cue. To facilitate the task of pedestrian tracking, we formulate the problem of shot segmentation and present a graph matching-based tracking algorithm.
Simulations with both the OSU Infrared Image Database and the WVU Infrared Video Database are reported to demonstrate the accuracy and robustness of our algorithms. Multi-view video alignment is a process that facilitates the fusion of non-synchronized multi-view video sequences for various applications, including automatic video-based surveillance and video metrology. In this thesis, we propose an accurate multi-view video alignment algorithm that iteratively aligns two sequences in space and time. To achieve an accurate sub-frame temporal alignment, we generalize the existing phase-correlation algorithm to the 3-D case. We also present a novel method to obtain the ground truth of the temporal alignment by using supplementary audio signals sampled at a much higher rate. The accuracy of our algorithm is verified by simulations using real-world sequences.
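Phase correlation, which the thesis generalises to the 3-D case for sub-frame temporal alignment, can be sketched in 1-D: the normalised cross-power spectrum of two shifted signals has an inverse FFT that peaks at the shift. The signal and shift below are illustrative.

```python
import numpy as np

# 1-D phase correlation: recover the shift between a signal and its
# delayed copy from the phase of the cross-power spectrum.
rng = np.random.default_rng(3)
a = rng.standard_normal(256)
shift = 17
b = np.roll(a, shift)                       # b is a circularly delayed copy of a

A, B = np.fft.fft(a), np.fft.fft(b)
R = np.conj(A) * B                          # cross-power spectrum
R /= np.abs(R) + 1e-12                      # normalise: keep only the phase
corr = np.fft.ifft(R).real
estimated = np.argmax(corr)                 # peak index gives the shift
print(estimated)
```

For an integer circular shift the correlation surface is a delta at the true offset; the sub-frame (non-integer) case the thesis targets is handled by interpolating around the peak, and the 3-D generalisation applies the same construction over two spatial axes and time.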