
    Semantic Background Subtraction

    We introduce the notion of semantic background subtraction, a novel framework for motion detection in video sequences. The key innovation consists in leveraging object-level semantics to address the variety of challenging scenarios for background subtraction. Our framework combines the information of a semantic segmentation algorithm, expressed as a per-pixel probability, with the output of any background subtraction algorithm to reduce the false positive detections produced by illumination changes, dynamic backgrounds, strong shadows, and ghosts. In addition, it maintains a fully semantic background model to improve the detection of camouflaged foreground objects. Experiments conducted on the CDNet dataset show that our framework significantly improves almost all background subtraction algorithms on the CDNet leaderboard, reducing the mean overall error rate of all 34 algorithms (resp. of the best 5 algorithms) by roughly 50% (resp. 20%). Note that a C++ implementation of the framework is available at http://www.telecom.ulg.ac.be/semantic
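
    As a rough illustration of how such a per-pixel combination might work, here is a minimal Python sketch; the two-rule structure, the threshold names tau_bg and tau_fg, and their values are assumptions made for illustration, not the authors' exact method (their reference implementation, linked above, is in C++).

```python
import numpy as np

def semantic_bgs(bgs_mask, sem_prob, sem_bg_model, tau_bg=0.05, tau_fg=0.3):
    """Combine any BGS output with per-pixel semantic probabilities.

    bgs_mask     -- boolean foreground mask from an arbitrary BGS algorithm
    sem_prob     -- per-pixel probability of a foreground-relevant semantic
                    class (e.g. person, car) from a segmentation network
    sem_bg_model -- per-pixel semantic probability stored for the background
    tau_bg, tau_fg -- illustrative thresholds (assumed names and values)
    """
    out = bgs_mask.copy()
    # Rule 1: negligible semantic evidence -> force background. This is what
    # would suppress false positives caused by illumination changes, dynamic
    # backgrounds, strong shadows, and ghosts.
    out[sem_prob <= tau_bg] = False
    # Rule 2: semantic probability far above the semantic background model
    # -> force foreground, recovering camouflaged objects.
    out[(sem_prob - sem_bg_model) >= tau_fg] = True
    # Everywhere else, the decision of the underlying BGS algorithm stands.
    return out
```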

    An edge-based approach for robust foreground detection

    Foreground segmentation is an essential task in many image processing applications and a common first step for obtaining foreground objects from the background. Many techniques exist, but shadows and changes in illumination still make the segmentation of foreground objects from the background challenging. In this paper, we present a powerful framework for the detection of moving objects in real-time video processing applications under varying lighting conditions. The novel approach is based on a combination of edge detection and recursive smoothing techniques. We use edge dependencies as statistical features of foreground and background regions and define the foreground as the regions containing moving edges. The background is described by short- and long-term estimates. Experiments demonstrate the robustness of our method in the presence of lighting changes compared to other widely used background subtraction techniques.
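
    A minimal sketch of the general idea, assuming Canny as the edge detector and exponential running averages as the recursive short- and long-term background estimates; the function names, learning rates, and thresholds below are illustrative assumptions rather than the paper's actual design.

```python
import cv2
import numpy as np

def update_background(gray, short_bg, long_bg, alpha_s=0.25, alpha_l=0.01):
    # Recursive smoothing: a fast short-term and a slow long-term running
    # average of the scene (learning rates are assumed values).
    short_bg = (1.0 - alpha_s) * short_bg + alpha_s * gray
    long_bg = (1.0 - alpha_l) * long_bg + alpha_l * gray
    return short_bg, long_bg

def moving_edges(gray, short_bg, long_bg, t=40):
    # Foreground = regions containing "moving" edges: edges present in the
    # current frame but absent from both background estimates. Because edges
    # depend on local gradients, the mask is fairly robust to global
    # lighting changes.
    e_now = cv2.Canny(gray, t, 3 * t)
    e_short = cv2.Canny(short_bg.astype(np.uint8), t, 3 * t)
    e_long = cv2.Canny(long_bg.astype(np.uint8), t, 3 * t)
    bg_edges = cv2.bitwise_or(e_short, e_long)
    return cv2.bitwise_and(e_now, cv2.bitwise_not(bg_edges))
```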

    Background Subtraction with Real-time Semantic Segmentation

    Accurate and fast foreground object extraction is very important for object tracking and recognition in video surveillance. Although many background subtraction (BGS) methods have been proposed in the recent past, it is still regarded as a tough problem due to the variety of challenging situations that occur in real-world scenarios. In this paper, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our proposed framework consists of two components: a traditional BGS segmenter $\mathcal{B}$ and a real-time semantic segmenter $\mathcal{S}$. The BGS segmenter $\mathcal{B}$ aims to construct background models and segment foreground objects. The real-time semantic segmenter $\mathcal{S}$ is used to refine the foreground segmentation outputs as feedback for improving the model updating accuracy. $\mathcal{B}$ and $\mathcal{S}$ work in parallel on two threads. For each input frame $I_t$, the BGS segmenter $\mathcal{B}$ computes a preliminary foreground/background (FG/BG) mask $B_t$. At the same time, the real-time semantic segmenter $\mathcal{S}$ extracts the object-level semantics $S_t$. Then, some specific rules are applied to $B_t$ and $S_t$ to generate the final detection $D_t$. Finally, the refined FG/BG mask $D_t$ is fed back to update the background model. Comprehensive experiments evaluated on the CDnet 2014 dataset demonstrate that our proposed method achieves state-of-the-art performance among all unsupervised background subtraction methods while operating in real time, and even performs better than some deep learning based supervised algorithms. In addition, our proposed framework is very flexible and has the potential for generalization.
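
    A minimal Python sketch of one RTSS iteration, assuming hypothetical bgs and segmenter objects with segment/predict/update methods; the fusion thresholds and rules below are placeholders, since the abstract does not spell out the specific rules applied to $B_t$ and $S_t$.

```python
from concurrent.futures import ThreadPoolExecutor

def rtss_step(frame, bgs, segmenter, pool, tau_bg=0.05, tau_fg=0.3):
    # Run the BGS segmenter and the semantic segmenter in parallel threads.
    fut_b = pool.submit(bgs.segment, frame)        # preliminary FG/BG mask B_t
    fut_s = pool.submit(segmenter.predict, frame)  # object-level semantics S_t
    b_t, s_t = fut_b.result(), fut_s.result()

    # Fuse B_t and S_t into the final detection D_t (rules are assumed,
    # in the spirit of semantic background subtraction above).
    d_t = b_t.copy()
    d_t[s_t <= tau_bg] = False  # semantics vetoes spurious foreground
    d_t[s_t >= tau_fg] = True   # semantics recovers missed foreground

    # Feedback: the refined mask drives the background model update.
    bgs.update(frame, d_t)
    return d_t

# Usage sketch: create pool = ThreadPoolExecutor(max_workers=2) once, then
# call rtss_step(frame, bgs, segmenter, pool) for each incoming frame.
```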

    Total Variation Regularized Tensor RPCA for Background Subtraction from Compressive Measurements

    Background subtraction has been a fundamental and widely studied task in video analysis, with a wide range of applications in video surveillance, teleconferencing, and 3D modeling. Recently, motivated by compressive imaging, background subtraction from compressive measurements (BSCM) has become an active research task in video surveillance. In this paper, we propose a novel tensor-based robust PCA (TenRPCA) approach for BSCM that decomposes video frames into backgrounds with spatio-temporal correlations and foregrounds with spatio-temporal continuity in a tensor framework. In this approach, we use 3D total variation (TV) to enhance the spatio-temporal continuity of the foregrounds, and Tucker decomposition to model the spatio-temporal correlations of the video background. Based on this idea, we design a basic tensor RPCA model over the video frames, dubbed the holistic TenRPCA model (H-TenRPCA). To characterize the correlations among groups of similar 3D patches of the video background, we further design a patch-group-based tensor RPCA model (PG-TenRPCA) that jointly applies tensor Tucker decompositions to 3D patch groups for modeling the video background. Efficient algorithms using the alternating direction method of multipliers (ADMM) are developed to solve the proposed models. Extensive experiments on simulated and real-world videos demonstrate the superiority of the proposed approaches over existing state-of-the-art approaches. Comment: To appear in IEEE TI
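
    For concreteness, here is one plausible way to write the holistic model; the exact objective, regularization weight, and constraint form are assumptions inferred from the abstract, with $y$ the compressive measurements, $\Phi$ the measurement operator, $\mathcal{B}$ the background tensor, and $\mathcal{F}$ the foreground tensor.

```latex
% A plausible H-TenRPCA formulation (assumed, not the paper's exact model):
% data fidelity under compressive sampling, 3D total variation on the
% foreground, and a Tucker (low-multilinear-rank) background.
\begin{equation*}
\min_{\mathcal{B},\,\mathcal{F}}\;
  \tfrac{1}{2}\bigl\| y - \Phi(\mathcal{B} + \mathcal{F}) \bigr\|_2^2
  + \lambda\,\|\mathcal{F}\|_{\mathrm{TV_{3D}}}
\quad \text{s.t.} \quad
  \mathcal{B} = \mathcal{C} \times_1 U_1 \times_2 U_2 \times_3 U_3 .
\end{equation*}
```

    Under this kind of variable splitting, ADMM would alternate between a Tucker-factor update for the background, a TV-proximal step for the foreground, and multiplier updates.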