5 research outputs found

    Multi-person tracking with overlapping cameras in complex, dynamic environments

    No full text
    This paper presents a multi-camera system to track multiple persons in complex, dynamic environments. Position measurements are obtained by carving out the space defined by foreground regions in the overlapping camera views and projecting these onto blobs on the ground plane. Person appearance is described in terms of the colour histograms, in the various camera views, of three vertical body regions (head-shoulder, torso, legs). The assignment of measurements to tracks (modelled by Kalman filters) is done in a non-greedy, global fashion based on ground plane position and colour appearance. The advantage of the proposed approach is that the decision on correspondences across cameras is delayed until it can be performed at the object level, where it is more robust. We demonstrate the effectiveness of the proposed approach using data from three cameras overlooking a complex outdoor setting (a train platform), containing a significant amount of lighting and background change.
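The global, non-greedy assignment step described above can be sketched as minimising a combined position/appearance cost over all track-to-measurement pairings. The cost weights, field names, Bhattacharyya-style histogram distance, and brute-force search below are illustrative assumptions, not the paper's actual implementation (which models tracks with Kalman filters and would use a proper assignment solver):

```python
import itertools
import math

def histogram_distance(h1, h2):
    # Bhattacharyya-style distance between two normalised colour histograms.
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return math.sqrt(max(0.0, 1.0 - bc))

def assignment_cost(track, meas, w_pos=1.0, w_app=5.0):
    # Illustrative cost: weighted sum of ground-plane distance and
    # colour-appearance distance. The weights are made up for this sketch.
    dx = track["pos"][0] - meas["pos"][0]
    dy = track["pos"][1] - meas["pos"][1]
    return w_pos * math.hypot(dx, dy) + w_app * histogram_distance(
        track["hist"], meas["hist"])

def global_assignment(tracks, measurements):
    """Return the measurement indices (one per track) minimising total cost.

    Brute force over permutations, which is fine for a handful of targets;
    in practice one would use the Hungarian algorithm instead."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(measurements))):
        cost = sum(assignment_cost(t, measurements[j])
                   for t, j in zip(tracks, perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best
```

The point of the global formulation is that a measurement that is close in position but wrong in appearance loses out to a jointly consistent pairing, which a greedy nearest-neighbour pass can get wrong.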

    Global Optimisation of Multi-Camera Moving Object Detection

    Get PDF
    An important task in intelligent video surveillance is to detect multiple pedestrians. These pedestrians may be occluded by each other in a camera view. To overcome this problem, multiple cameras can be deployed to provide complementary information, and homography mapping has been widely used for the association and fusion of multi-camera observations. The intersection regions of the foreground projections usually indicate the locations of moving objects. However, many false positives may be generated from the intersections of non-corresponding foreground regions. In this thesis, an algorithm for multi-camera pedestrian detection is proposed. The first stage of this work is to propose pedestrian candidate locations on the top view. Two approaches are proposed in this stage. The first is a top-down approach based on the probabilistic occupancy map framework: the ground plane is discretised into a grid, and the likelihood of pedestrian presence at each location is estimated by comparing a rectangle, of the average size of a pedestrian standing there, with the foreground silhouettes in all camera views. The second is a bottom-up approach based on multi-plane homography mapping: the foreground regions in all camera views are projected and overlaid in the top view according to the multi-plane homographies, and the potential locations of pedestrians are estimated from the intersection regions. In the second stage, borrowing the idea from the Quine-McCluskey (QM) method for logic function minimisation, essential candidates are first identified, each of which covers at least a significant part of the foreground that is not covered by the other candidates. Non-essential candidates are then selected to cover the remaining foreground by a repeated process that alternates between merging redundant candidates and finding emerging essential candidates.
Then, as an alternative to the QM method, Petrick's method is used to find the minimum set of pedestrian candidates covering all the foreground regions. These two methods are non-iterative and can greatly increase the computational speed. No similar work has been proposed before. Experiments on benchmark video datasets have demonstrated the good performance of the proposed algorithm in comparison with other state-of-the-art methods for pedestrian detection.
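The essential/non-essential candidate selection described above can be viewed as a set-cover problem: each candidate covers a set of foreground elements, essentials are those that uniquely cover something, and the remainder is covered afterwards. The sketch below is a simplified greedy variant under an assumed data structure (a dict mapping candidate name to its covered elements); the thesis itself uses Quine-McCluskey-style minimisation and Petrick's method, not this greedy pass:

```python
def select_candidates(candidates):
    """candidates: dict mapping candidate name -> set of foreground
    elements (e.g. pixel ids) that candidate covers."""
    uncovered = set().union(*candidates.values())
    selected = []
    # 1. Essential candidates: each covers some element no other candidate covers.
    for name, cov in candidates.items():
        others = set().union(*(c for n, c in candidates.items() if n != name))
        if cov - others:
            selected.append(name)
            uncovered -= cov
    # 2. Non-essential candidates: greedily cover what remains,
    #    taking the candidate with the largest remaining coverage first.
    while uncovered:
        best = max((n for n in candidates if n not in selected),
                   key=lambda n: len(candidates[n] & uncovered),
                   default=None)
        if best is None or not candidates[best] & uncovered:
            break  # remaining elements are not coverable by any candidate
        selected.append(best)
        uncovered -= candidates[best]
    return selected
```

Greedy set cover is only an approximation; the appeal of the QM/Petrick formulation in the thesis is that it yields a minimum cover non-iteratively.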

    Robust moving object detection by information fusion from multiple cameras

    Get PDF
    Moving object detection is an essential process before tracking and event recognition in video surveillance can take place. To monitor a wider field of view and avoid occlusions in pedestrian tracking, multiple cameras are usually used, and homography can be employed to associate the camera views. Foreground regions detected in each camera view are projected into a virtual top view according to the homography for a plane. The intersection regions of the foreground projections indicate the locations of moving objects on that plane, and homography mapping for a set of parallel planes at different heights can increase the robustness of the detection. However, homography mapping is very time-consuming, and the intersections of non-corresponding foreground regions can cause false-positive detections. In this thesis, a real-time moving object detection algorithm using multiple cameras is proposed. Unlike pixelwise homography mapping, which projects binary foreground images, the approach used in this thesis is to approximate the contour of each foreground region with a polygon and only transmit and project the polygon vertices. The foreground projections are rebuilt from the projected polygons in the reference view. The experimental results show that this method runs in real time and generates results similar to those obtained using full foreground images. To identify the false-positive detections, both geometric information and colour cues are utilised. The former is a height matching algorithm based on the geometry between the camera views; the latter is a colour matching algorithm based on the Mahalanobis distance between the colour distributions of two foreground regions. Since height matching is unreliable in scenarios with adjacent pedestrians, and colour matching cannot handle occluded pedestrians, the two algorithms are combined to improve the robustness of the foreground intersection classification.
The robustness of the proposed algorithm is demonstrated on real-world image sequences.
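The vertex-projection idea described above, i.e. mapping only the polygon vertices of a foreground contour through a planar homography instead of warping the whole binary mask, can be sketched as follows. The 3x3 matrix `H` used in the usage example is an illustrative pure translation, not a calibrated camera-to-ground homography:

```python
def project_polygon(vertices, H):
    """Map a list of (x, y) image points into the reference (top) view
    via a 3x3 planar homography H, given as a nested list.

    Each point is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, and dehomogenised by the third coordinate."""
    out = []
    for x, y in vertices:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

# Usage: a translation-only homography shifts every vertex by (2, 3).
H = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
top_view = project_polygon([(0, 0), (1, 0), (1, 1)], H)
```

The saving comes from the data volume: a foreground region of thousands of pixels is reduced to a handful of vertices, and the projected region is rebuilt in the top view by rasterising the projected polygon.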