
    Extended Object Tracking: Introduction, Overview and Applications

    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates on how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, where four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted. Comment: 30 pages, 19 figures.
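
    As a rough illustration of the random matrix idea mentioned in the abstract, the sketch below runs one simplified measurement update: the object centroid is refined with a Kalman step over the mean of the detections, and the elliptic extent is blended with the scatter of the detections. The constant-velocity state, the blending weight alpha and all numerical values are illustrative assumptions, not the article's exact equations.

```python
# Simplified random-matrix-style update for one extended object:
# centroid via a Kalman filter, extent as a 2x2 SPD matrix.
import numpy as np

def random_matrix_update(x, P, X, Z, R, alpha=20.0):
    """x: 4x1 state [px, py, vx, vy], P: 4x4 covariance, X: 2x2 extent,
    Z: n x 2 array of detections, R: 2x2 sensor noise."""
    n = Z.shape[0]
    H = np.array([[1.0, 0, 0, 0],
                  [0, 1.0, 0, 0]])          # observe position only
    z_bar = Z.mean(axis=0)                   # measurement centroid
    d = Z - z_bar
    Z_spread = d.T @ d                       # scatter of detections about centroid

    # Kalman update of the centroid; detections are spread over the extent,
    # so the effective measurement noise is (X + R) / n.
    S = H @ P @ H.T + (X + R) / n
    K = P @ H.T @ np.linalg.inv(S)
    innov = z_bar - (H @ x).ravel()
    x_new = x + K @ innov.reshape(2, 1)
    P_new = P - K @ S @ K.T

    # Simplified extent update: blend the predicted extent with the detection
    # spread (a full random-matrix update also folds in the innovation term).
    X_new = (alpha * X + Z_spread) / (alpha + n)
    return x_new, P_new, X_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros((4, 1)); P = np.eye(4); X = np.eye(2); R = 0.1 * np.eye(2)
    Z = rng.multivariate_normal([1.0, 2.0], [[2.0, 0.3], [0.3, 0.5]], size=30)
    x, P, X = random_matrix_update(x, P, X, Z, R)
    print("centroid:", x[:2].ravel(), "\nextent:\n", X)
```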

    On the Hardware/Software Design and Implementation of a High Definition Multiview Video Surveillance System


    Object Tracking in Distributed Video Networks Using Multi-Dimentional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have come a long way, from huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. Under varying external conditions, such algorithms perform poorly because they inherently assume that attribute values remain constant. Our approach assumes a variable environment in which the attribute values recorded for an object are prone to variability. Variations in the accuracy of object attribute values are addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) are also addressed by applying error corrections, such as shadow elimination, to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute comparison.
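
    A minimal sketch of the weighted, multi-dimensional signature matching described above: each attribute carries a weight reflecting how reliably the local sensor measures it, and an object is matched to the candidate with the smallest weighted distance. The attribute names, weights and matching threshold below are hypothetical, not values from the thesis.

```python
# Weighted multi-attribute signature matching (illustrative sketch).
import numpy as np

def signature_distance(sig_a, sig_b, weights):
    """Weighted distance between two object signatures (dicts mapping an
    attribute name to a feature vector). Higher-confidence attributes count more."""
    total, weight_sum = 0.0, 0.0
    for attr, w in weights.items():
        diff = np.linalg.norm(np.asarray(sig_a[attr]) - np.asarray(sig_b[attr]))
        total += w * diff
        weight_sum += w
    return total / weight_sum

def match_object(query, candidates, weights, threshold=0.5):
    """Return the id of the best-matching candidate, or None if nothing is close."""
    best_id, best_dist = None, float("inf")
    for cid, sig in candidates.items():
        dist = signature_distance(query, sig, weights)
        if dist < best_dist:
            best_id, best_dist = cid, dist
    return best_id if best_dist < threshold else None

# Example: color is down-weighted at a camera location with poor lighting.
weights = {"color_hist": 0.3, "height": 1.0, "aspect_ratio": 0.8}
query = {"color_hist": [0.2, 0.5, 0.3], "height": [1.75], "aspect_ratio": [0.4]}
candidates = {
    "obj_1": {"color_hist": [0.25, 0.45, 0.3], "height": [1.74], "aspect_ratio": [0.41]},
    "obj_2": {"color_hist": [0.6, 0.2, 0.2], "height": [1.55], "aspect_ratio": [0.6]},
}
print(match_object(query, candidates, weights))   # expected: obj_1
```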

    Real time multiple camera person detection and tracking

    As the amount of video data grows larger every day, so do the efforts to create intelligent systems able to perceive, understand and extrapolate useful information from it; in particular, object detection and tracking systems have been a widely researched area in the past few years. In the present work we develop a real-time, multiple-camera, multiple-person detection and tracking system prototype, using static, overlapping, fish-eye top-view cameras. The goal is to create a system able to intelligently and automatically extrapolate object trajectories from surveillance footage. To solve these problems we employ different types of techniques, namely a combination of the representational power of deep neural networks, which have been yielding outstanding results in computer vision problems over the last few years, with more classical, already established object tracking algorithms, in order to represent and track the target objects. In particular, we split the problem into two sub-problems: single-camera multiple-object tracking and multiple-camera multiple-object tracking, which we tackle in a modular manner. Our long-term motivation is to deploy this system in commercial settings, such as shopping areas or airports, as a further step towards intelligent visual surveillance systems.
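
    The modular split described above might look roughly like the sketch below: each camera runs its own detector and single-camera tracker, and a fusion step merges per-camera tracks whose positions, projected onto a common ground plane, coincide. The Track structure, the greedy grouping and the merge radius are illustrative assumptions, not the prototype's actual components.

```python
# Cross-camera track fusion on a shared ground plane (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    camera_id: int
    ground_xy: tuple            # position projected onto the common ground plane

def merge_cross_camera(tracks, radius=0.5):
    """Greedily group per-camera tracks whose ground-plane positions lie within
    `radius` metres of each other; each group becomes one global identity."""
    groups, used = [], set()
    for i, t in enumerate(tracks):
        if i in used:
            continue
        group = [t]
        used.add(i)
        for j, u in enumerate(tracks[i + 1:], start=i + 1):
            if j in used or u.camera_id == t.camera_id:
                continue
            dx = t.ground_xy[0] - u.ground_xy[0]
            dy = t.ground_xy[1] - u.ground_xy[1]
            if (dx * dx + dy * dy) ** 0.5 < radius:
                group.append(u)
                used.add(j)
        groups.append(group)
    return groups

# Two overlapping cameras see the same person near (2.0, 3.0) on the floor plan.
tracks = [Track(1, camera_id=0, ground_xy=(2.0, 3.0)),
          Track(7, camera_id=1, ground_xy=(2.2, 3.1)),
          Track(2, camera_id=0, ground_xy=(8.0, 1.0))]
for g in merge_cross_camera(tracks):
    print([(t.camera_id, t.track_id) for t in g])
```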

    Robust real-time tracking in smart camera networks


    Occlusion reasoning for multiple object visual tracking

    Thesis (Ph.D.)--Boston University. Occlusion reasoning for visual object tracking in uncontrolled environments is a challenging problem. It becomes significantly more difficult when dense groups of indistinguishable objects are present in the scene, causing frequent inter-object interactions and occlusions. We present several practical solutions that tackle inter-object occlusions for video surveillance applications. In particular, this thesis proposes three methods. First, we propose "reconstruction-tracking," an online multi-camera spatial-temporal data association method for tracking large groups of objects imaged at low resolution. As a variant of the well-known Multiple-Hypothesis-Tracker, our approach localizes the positions of objects in 3D space from possibly occluded observations across multiple camera views and performs temporal data association in 3D. Second, we develop "track linking," a class of offline batch-processing algorithms for long-term occlusions, where the decision has to be made based on the observations from the entire tracking sequence. We construct a graph representation to characterize occlusion events and propose an efficient graph-based, combinatorial algorithm to resolve occlusions. Third, we propose a novel Bayesian framework in which detection and data association are combined into a single module and solved jointly. Almost all traditional tracking systems address the detection and data association tasks separately, in sequential order. Such a design implies that the output of the detector has to be reliable in order for the data association to work. Our framework takes advantage of the often complementary nature of the two subproblems, which not only avoids the error-propagation issue from which traditional "detection-tracking" approaches suffer but also eschews common heuristics such as non-maximum suppression of hypotheses, by modeling the likelihood of the entire image. The thesis describes a substantial number of experiments, involving challenging and notably distinct simulated and real data, including infrared and visible-light data sets that we recorded ourselves or took from publicly available sources. In these videos, the number of objects ranges from a dozen to a hundred per frame, in both monocular and multiple views. The experiments demonstrate that our approaches achieve results comparable to those of state-of-the-art approaches.
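
    As an illustration of the track-linking idea in the second contribution, the sketch below links tracklets that end during an occlusion to tracklets that start afterwards by solving a small assignment problem over a simple gap cost. The cost function (time gap plus predicted-position error) and the gating threshold are illustrative assumptions rather than the thesis's graph formulation.

```python
# Linking tracklets across occlusion gaps via assignment (illustrative sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_tracklets(ended, started, max_cost=5.0):
    """ended: dicts with 't', 'xy', 'vel' at the tracklet's last frame;
    started: dicts with 't', 'xy' at the tracklet's first frame.
    Returns a list of (ended_index, started_index) links."""
    cost = np.full((len(ended), len(started)), 1e6)
    for i, e in enumerate(ended):
        for j, s in enumerate(started):
            dt = s["t"] - e["t"]
            if dt <= 0:                       # a link must go forward in time
                continue
            # predict where the ended tracklet would be after dt frames
            pred = np.asarray(e["xy"]) + dt * np.asarray(e["vel"])
            cost[i, j] = 0.1 * dt + np.linalg.norm(pred - np.asarray(s["xy"]))
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

ended = [{"t": 10, "xy": (5.0, 5.0), "vel": (1.0, 0.0)}]
started = [{"t": 14, "xy": (9.2, 5.1)}, {"t": 12, "xy": (0.0, 0.0)}]
print(link_tracklets(ended, started))   # links the occluded track to its continuation
```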

    Detection-assisted Object Tracking by Mobile Cameras

    Tracking-by-detection is a class of new tracking approaches that exploits recent developments in object detection algorithms. This type of approach performs object detection on each frame and uses data association algorithms to associate the new observations with existing targets. Inspired by the core idea of the tracking-by-detection framework, we propose a new framework, called detection-assisted tracking, in which the object detection algorithm assists the tracking algorithm only when such help is necessary; thus object detection, a very time-consuming task, is performed only when needed. The proposed framework is also able to handle complicated scenarios where cameras are allowed to move and occlusion or multiple similar objects exist. We also port the core component of the proposed framework, the detector, onto embedded smart cameras. Contrary to traditional scenarios, where smart cameras are assumed to be static, we allow the smart cameras to move around in the scene. Our approach employs a histogram of oriented gradients (HOG) object detector for foreground detection, to enable more robust detection on mobile platforms; traditional background subtraction methods are not suitable for mobile platforms, where the background changes constantly. Advisers: Senem Velipasalar and Mustafa Cenk Gursoy.
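
    A rough sketch of the detection-assisted idea, assuming OpenCV: a lightweight tracker follows the target frame to frame, and the expensive HOG person detector is invoked only when the tracker loses confidence, rather than on every frame. The re-detection policy and tracker choice here are illustrative assumptions, not the thesis's method; tracker constructor names also vary across OpenCV versions.

```python
# Detection-assisted tracking loop: cheap tracking, on-demand HOG detection.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_person(frame):
    """Run the HOG detector and return the first person box (x, y, w, h), if any."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return tuple(int(v) for v in boxes[0]) if len(boxes) else None

def run(video_path):
    cap = cv2.VideoCapture(video_path)
    tracker, box = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is not None:
            ok, box = tracker.update(frame)          # cheap per-frame tracking
        if tracker is None or not ok:
            box = detect_person(frame)               # expensive HOG detection,
            if box is not None:                      # run only when tracking fails
                tracker = cv2.TrackerKCF_create()    # name depends on cv2 version
                tracker.init(frame, box)
        if box is not None:
            x, y, w, h = (int(v) for v in box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detection-assisted tracking", frame)
        if cv2.waitKey(1) == 27:                     # Esc to quit
            break
    cap.release()

# run("surveillance.mp4")  # hypothetical input file
```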