87,661 research outputs found

    Person tracking with non-overlapping multiple cameras

    Monitoring and tracking targets is an important task in any surveillance system; when the targets are human, the problem becomes one of person identification and tracking. At present, a large-scale smart video surveillance system is an essential component of any commercial or public campus. Since the field of view (FOV) of a single camera is limited, monitoring a large area requires multiple cameras at different locations. This paper proposes a novel model for tracking a person across multiple non-overlapping cameras. A reference signature of the person is built at the start of tracking and matched against signatures subsequently captured by other cameras within the specified area of observation, using a support vector machine (SVM) trained between each pair of cameras. Experiments on the Wide Area Re-identification Dataset (WARD) and a real-time scenario use color, shape and texture features for person re-identification.
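    The camera-pair SVM idea can be sketched in a few lines. The following is a minimal illustration rather than the paper's pipeline: it assumes a per-channel colour histogram as the person signature (the paper also uses shape and texture features), trains an SVM on absolute differences of signature pairs labelled same/different, and scores new detections against the stored reference signature. The function names and synthetic training data are hypothetical.

```python
# Hedged sketch: cross-camera person re-identification with a camera-pair SVM.
import numpy as np
from sklearn.svm import SVC

def colour_signature(patch, bins=8):
    """Per-channel colour histogram of a person patch (H x W x 3, uint8)."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 255),
                          density=True)[0] for c in range(3)]
    return np.concatenate(hists)

def pair_feature(sig_a, sig_b):
    """Symmetric input for the camera-pair SVM: absolute signature difference."""
    return np.abs(sig_a - sig_b)

# Training data for one camera pair: synthetic 24-dim signatures stand in for
# colour_signature() applied to real person patches from cameras 1 and 2.
rng = np.random.default_rng(0)
base = rng.random((20, 24))                        # 20 hypothetical identities
same = [pair_feature(s, s + 0.02 * rng.standard_normal(24)) for s in base]
diff = [pair_feature(base[i], base[(i + 1) % 20]) for i in range(20)]
X = np.vstack(same + diff)
y = np.array([1] * 20 + [0] * 20)                  # 1 = same person, 0 = not

svm = SVC(kernel="rbf", probability=True).fit(X, y)

# Tracking: the reference signature built at the start is compared with every
# new signature reported by the second camera; the highest probability wins.
reference = base[0]
candidates = [base[0] + 0.02 * rng.standard_normal(24), base[5]]
scores = [svm.predict_proba(pair_feature(reference, c)[None])[0, 1]
          for c in candidates]
print("match probabilities:", scores)
```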

    Real-time people tracking in a camera network

    Visual tracking is a fundamental key to the recognition and analysis of human behaviour. In this thesis we present an approach to track several subjects using multiple cameras in real time. The tracking framework employs a numerical Bayesian estimator, also known as a particle filter, which has been developed for parallel implementation on a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single tracking unit, we represent the human body by a parametric ellipsoid in a 3D world. The elliptical boundary can be projected rapidly, several hundred times per subject per frame, onto any image for comparison with the image data within a likelihood model. By adding variables that encode visibility and persistence to the state vector, we tackle the problems of distraction and short-period occlusion. However, subjects may also disappear for longer periods due to blind spots between the cameras' fields of view. To recognise a desired subject after such a long period, we add coloured texture to the ellipsoid surface, which is learnt and retained during the tracking process. This texture signature improves the recall rate from 60% to 70-80% compared to state-only data association. Compared to a standard Central Processing Unit (CPU) implementation, there is a significant speed-up ratio.
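    A particle filter of the kind described reduces to a predict / weight / resample loop over a population of state hypotheses; the thesis evaluates the weights by projecting a 3D ellipsoid into each camera image and runs the loop in parallel on a GPU. The sketch below is a CPU-only, single-target stand-in with a hypothetical Gaussian likelihood around a noisy 2D observation, shown only to illustrate the loop structure.

```python
# Hedged sketch: the predict / weight / resample loop of a particle filter,
# with a toy likelihood in place of the ellipsoid-projection image likelihood.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.normal(0.0, 1.0, size=(n_particles, 2))   # (x, y) ground plane

def likelihood(particles, observation, sigma=0.5):
    """Stand-in for the image likelihood: a Gaussian around a noisy
    2D observation instead of projecting the ellipsoid into each camera."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma**2)

true_pos = np.array([0.0, 0.0])
for t in range(50):
    true_pos = true_pos + np.array([0.1, 0.05])            # subject walks
    obs = true_pos + rng.normal(0.0, 0.2, size=2)          # noisy measurement

    particles += rng.normal(0.0, 0.15, size=particles.shape)  # predict step
    w = likelihood(particles, obs)
    w /= w.sum()                                           # weight and normalise

    idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
    particles = particles[idx]

estimate = particles.mean(axis=0)
print("estimate:", estimate, "true position:", true_pos)
```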

    Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model

    Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on a Sum of Anisotropic Gaussians (SAG), and a pose-fitting energy that is smooth and analytically differentiable, making fast gradient-based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from the literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets. Comment: 8 pages, accepted version of paper published at 3DV 201
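    The key property claimed above is that the pose-fitting energy is smooth and analytically differentiable, so it can be minimised with plain gradient descent. The sketch below illustrates that idea with an un-normalised overlap between isotropic Gaussians; the paper's model is anisotropic, attached to a hand skeleton, and optimises joint angles rather than free centres, so everything here is an illustrative assumption.

```python
# Hedged sketch: a smooth, analytically differentiable Gaussian-overlap energy
# minimised by gradient descent on the model Gaussians' centres.
import numpy as np

def overlap(mu_a, s_a, mu_b, s_b):
    """Un-normalised overlap of two isotropic Gaussians (closed form, smooth)."""
    var = s_a**2 + s_b**2
    return np.exp(-0.5 * np.sum((mu_a - mu_b) ** 2) / var)

def energy_and_grad(model_mu, model_s, obs_mu, obs_s):
    """Negative total overlap and its analytic gradient w.r.t. model centres."""
    E, grad = 0.0, np.zeros_like(model_mu)
    for i, (m, sm) in enumerate(zip(model_mu, model_s)):
        for o, so in zip(obs_mu, obs_s):
            ov = overlap(m, sm, o, so)
            E -= ov
            grad[i] += ov * (m - o) / (sm**2 + so**2)   # d(-ov)/dm
    return E, grad

rng = np.random.default_rng(2)
obs_mu = rng.random((5, 3))                              # "observed" centres
obs_s = np.full(5, 0.2)
model_mu = obs_mu + 0.3 * rng.standard_normal((5, 3))    # misaligned model
model_s = np.full(5, 0.2)

E0, _ = energy_and_grad(model_mu, model_s, obs_mu, obs_s)
for _ in range(200):                                     # plain gradient descent
    _, g = energy_and_grad(model_mu, model_s, obs_mu, obs_s)
    model_mu -= 0.02 * g
E1, _ = energy_and_grad(model_mu, model_s, obs_mu, obs_s)
print(f"energy before: {E0:.3f}  after: {E1:.3f}")       # energy decreases
```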

    Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking

    Current multi-person localisation and tracking systems have an over-reliance on the use of appearance models for target re-identification, and almost no approaches employ a complete deep learning solution for both objectives. We present a novel, complete deep learning framework for multi-person localisation and tracking. In this context we first introduce a lightweight sequential Generative Adversarial Network architecture for person localisation, which overcomes issues related to occlusions and noisy detections typically found in a multi-person environment. In the proposed tracking framework we build upon recent advances in pedestrian trajectory prediction and propose a novel data association scheme based on predicted trajectories. This removes the need for computationally expensive person re-identification systems based on appearance features and generates human-like trajectories with minimal fragmentation. The proposed method is evaluated on multiple public benchmarks, including both static and dynamic cameras, and achieves outstanding performance, especially among other recently proposed deep neural network based approaches. Comment: To appear in IEEE Winter Conference on Applications of Computer Vision (WACV), 201
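    The data-association idea above can be illustrated without any learned components: predict where each existing track should appear next, then match detections to predictions by solving a linear assignment on spatial distance. The sketch below uses a constant-velocity predictor as a hypothetical stand-in for the paper's learned trajectory-prediction model, and SciPy's Hungarian solver for the assignment.

```python
# Hedged sketch: appearance-free data association by predicted position.
import numpy as np
from scipy.optimize import linear_sum_assignment

def predict(tracks):
    """Constant-velocity prediction of each track's next (x, y) position."""
    return np.array([t[-1] + (t[-1] - t[-2]) for t in tracks])

def associate(tracks, detections, gate=2.0):
    """Match detections to tracks; returns a list of (track_idx, det_idx)."""
    pred = predict(tracks)
    cost = np.linalg.norm(pred[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # Hungarian assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Two tracks, each a short history of (x, y) positions.
tracks = [np.array([[0.0, 0.0], [1.0, 0.0]]),         # moving right
          np.array([[5.0, 5.0], [5.0, 4.0]])]         # moving down
detections = np.array([[5.1, 3.0], [2.1, 0.1]])       # unordered new detections

for track_idx, det_idx in associate(tracks, detections):
    print(f"detection {det_idx} -> track {track_idx}")
```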

    Human Centered Hardware Modeling and Collaboration

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high-resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

    Parametric tracking with spatial extraction across an array of cameras

    Video surveillance is a rapidly growing area that has been fuelled by increased concerns over security and safety in both public and private areas. With heightened security concerns, the use of video surveillance systems spread over a large area is becoming the norm. Surveillance of a large area requires a number of cameras to be deployed, which presents problems for human operators: the need to monitor numerous screens makes an operator less effective in monitoring, observing or tracking groups or targets of interest. In such situations, the application of computer systems can prove highly effective in assisting human operators. The overall aim of this thesis was to investigate different methods for tracking a target across an array of cameras. This required a set of parameters to be identified that could be passed between cameras as the target moved in and out of their fields of view. Initial investigations focussed on identifying the most effective colour space to use. A normalized cross-correlation method with a reference image was used initially to track the target of interest. A second method investigated the use of histogram similarity, in which a reference target’s histogram or pixel distribution was used as the means of tracking. Finally, a method was investigated that used the relationships between the colour regions that make up a whole target: an experimental method was developed that used the information between colour regions, such as the vector and colour difference, as a means of tracking a target. This method was tested on single-camera and multiple-camera configurations and shown to be effective. In addition to the experimental tracking method investigated, additional data can be extracted to estimate a spatial map of a target as it is tracked across an array of cameras. For each method investigated, the experimental results are presented in this thesis, and it has been demonstrated that minimal data exchange can be used to track a target across an array of cameras. In addition to tracking a target, the spatial position of the target of interest could be estimated as it moves across the array.
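    Of the methods summarised above, the histogram-similarity approach lends itself to a compact illustration: the reference target's colour histogram is the only parameter that needs to be handed from camera to camera, and each candidate detection in the next camera is scored against it. The sketch below uses a joint RGB histogram and the Bhattacharyya coefficient on synthetic patches; the thesis evaluates several colour spaces and similarity measures, so these are illustrative choices only.

```python
# Hedged sketch: histogram hand-over between cameras, scored with the
# Bhattacharyya coefficient on synthetic person patches.
import numpy as np

def colour_histogram(patch, bins=8):
    """Normalised joint RGB histogram of a person patch (H x W x 3, uint8)."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 255),) * 3)
    return hist.ravel() / hist.sum()

def similarity(h_ref, h_cand):
    """Bhattacharyya coefficient: 1.0 for identical histograms, 0.0 for disjoint."""
    return float(np.sum(np.sqrt(h_ref * h_cand)))

rng = np.random.default_rng(3)

def synth_patch(base_colour):
    """Synthetic person patch: a dominant clothing colour plus pixel noise."""
    patch = np.tile(np.array(base_colour, dtype=np.int16), (64, 32, 1))
    patch = patch + rng.integers(-20, 21, patch.shape)
    return np.clip(patch, 0, 255).astype(np.uint8)

reference    = synth_patch((180, 40, 40))    # target seen in camera 1 (reddish)
same_person  = synth_patch((180, 40, 40))    # the same target in camera 2
other_person = synth_patch((40, 40, 180))    # a different, bluish target

h_ref = colour_histogram(reference)          # the only data passed between cameras
for name, patch in [("same person", same_person), ("other person", other_person)]:
    print(name, similarity(h_ref, colour_histogram(patch)))
```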