
    Background Modelling with Associated Confidence


    CVABS: Moving Object Segmentation with Common Vector Approach for Videos

    Background modelling is a fundamental step for several real-time computer vision applications, such as security and monitoring systems. An accurate background model helps detect the activity of moving objects in the video. In this work, we have developed a new subspace-based background modelling algorithm using the concept of the Common Vector Approach with Gram-Schmidt orthogonalization. Once the background model, which captures the characteristics common to different views of the same scene, is acquired, a smart foreground detection and background updating procedure is applied based on dynamic control parameters. A variety of experiments is conducted on different problem types related to dynamic backgrounds. Several types of metrics are utilized as objective measures, and the obtained visual results are judged subjectively. It was observed that the proposed method handles all problem types reported in the CDNet2014 dataset successfully by updating the background frames with a self-learning feedback mechanism. Comment: 12 pages, 4 figures, 1 table
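
    As a rough illustration of the Common Vector Approach step (not the authors' implementation; the function name, reference-frame choice, and tolerance are assumptions), a sketch could orthonormalise the difference subspace of a few flattened frames with Gram-Schmidt and remove that component from a reference frame:

```python
import numpy as np

def common_vector(frames, eps=1e-10):
    """Hypothetical sketch of the Common Vector Approach: orthonormalise the
    difference subspace of a set of frames with Gram-Schmidt and remove that
    component from a reference frame; the remainder is the common vector."""
    x = [f.astype(float).ravel() for f in frames]
    basis = []
    for xi in x[1:]:
        d = xi - x[0]                      # difference vector to the reference
        for b in basis:                    # Gram-Schmidt orthogonalisation
            d = d - np.dot(d, b) * b
        n = np.linalg.norm(d)
        if n > eps:                        # skip (numerically) dependent vectors
            basis.append(d / n)
    proj = sum(np.dot(x[0], b) * b for b in basis)
    return (x[0] - proj).reshape(frames[0].shape)   # background estimate
```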

    A unified 2D-3D video scene change detection framework for mobile camera platforms

    In this paper, we present a novel scene change detection algorithm for mobile camera platforms. Our approach integrates sparse 3D scene background modelling and dense 2D image background modelling into a unified framework. The 3D scene background modelling identifies clusters of 3D points that become inconsistent over time as the scene changes. The 2D image background modelling further confirms the scene changes by finding inconsistent appearances in a set of aligned images using the classical MRF background subtraction technique. We evaluate the performance of our proposed system on a number of challenging video datasets obtained from a camera placed on a moving vehicle, and the experiments show that our proposed method outperforms previous works in scene change detection, which suggests the feasibility of our approach.
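
    The 2D confirmation step relies on classical MRF background subtraction; a minimal sketch of that generic technique (not the paper's code; tau, beta, and the 4-neighbourhood below are assumed parameters) is an ICM relaxation of a two-label Potts model over the per-pixel difference to the background image:

```python
import numpy as np

def mrf_change_mask(diff, tau=20.0, beta=1.5, n_iter=5):
    """Illustrative sketch: two-label Potts-style MRF over the per-pixel
    difference to the background image, relaxed with ICM.

    diff : absolute difference between an aligned frame and the background.
    tau  : difference at which both labels cost the same.
    beta : weight of the neighbour-agreement (smoothness) term.
    """
    labels = (diff > tau).astype(np.int8)            # data-only initial guess
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode='edge')
        # number of 4-neighbours currently labelled "changed"
        n_changed = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        cost_bg = (diff - tau) + beta * n_changed        # cost of label 0
        cost_fg = (tau - diff) + beta * (4 - n_changed)  # cost of label 1
        labels = (cost_fg < cost_bg).astype(np.int8)
    return labels.astype(bool)
```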

    Mean-shift background image modelling

    Background modelling is widely used in computer vision for the detection of foreground objects in a frame sequence. The more accurate the background model, the more accurate the detection of the foreground objects. In this paper, we present an approach to background modelling based on a mean-shift procedure. The convergence properties of the mean-shift vector enable the system to achieve reliable background modelling. In addition, histogram-based computation and the new concept of local basins of attraction allow us to meet the stringent real-time requirements of video processing. ©2004 IEEE
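
    As a rough illustration of the underlying idea (ignoring the histogram-based computation and basins of attraction that provide the paper's real-time speed-up; names and parameters are assumptions), the background value of each pixel can be taken as the dominant mode of its intensity history found by a mean-shift iteration:

```python
import numpy as np

def mean_shift_mode(samples, bandwidth=10.0, n_iter=20, tol=1e-3):
    """Find the dominant mode of one pixel's intensity history with a
    flat-kernel mean-shift iteration (illustrative parameters)."""
    x = float(np.median(samples))                    # start near the data bulk
    for _ in range(n_iter):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break
        x_new = float(window.mean())                 # mean-shift update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def background_image(frames, bandwidth=10.0):
    """Per-pixel mode of a stack of grayscale frames, used as background."""
    stack = np.stack([f.astype(float) for f in frames])   # shape (T, H, W)
    h, w = stack.shape[1:]
    bg = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            bg[i, j] = mean_shift_mode(stack[:, i, j], bandwidth)
    return bg
```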

    Integrated region- and pixel-based approach to background modelling

    In this paper a new probabilistic method for background modelling is proposed, aimed at video surveillance tasks using a static monitoring camera. Recently, methods employing Time-Adaptive, Per Pixel, Mixture of Gaussians (TAPPMOG) modelling have become popular due to their intrinsically appealing properties. Nevertheless, they are not able per se to monitor global changes in the scene, because they model the background as a set of independent pixel processes. In this paper, we propose to integrate this kind of pixel-based information with higher-level region-based information, which also makes it possible to handle sudden changes of the background. These pixel- and region-based modules are naturally and effectively embedded in a probabilistic Bayesian framework called particle filtering, which allows multi-object tracking. Experimental comparison with a classic pixel-based approach reveals that the proposed method is highly effective in recovering from sudden global illumination changes of the background, as well as from limited non-uniform changes of the scene illumination.
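
    For context, a minimal sketch of the pixel-based TAPPMOG component (in the spirit of standard per-pixel Gaussian mixtures, not the paper's integrated region/particle-filter framework) might look as follows; all parameter values and the matching rule are illustrative assumptions:

```python
import numpy as np

class PixelMoG:
    """Minimal per-pixel mixture-of-Gaussians sketch in the spirit of TAPPMOG;
    all parameter values and the matching rule are illustrative assumptions."""

    def __init__(self, k=3, alpha=0.01, var0=225.0, w0=0.05,
                 match_sigma=2.5, bg_weight=0.7):
        self.w = np.full(k, 1.0 / k)           # component weights
        self.mu = np.linspace(0.0, 255.0, k)   # component means
        self.var = np.full(k, var0)            # component variances
        self.alpha, self.var0, self.w0 = alpha, var0, w0
        self.match_sigma, self.bg_weight = match_sigma, bg_weight

    def update(self, x):
        """Update the mixture with intensity x; return True if x is explained
        by one of the dominant (background) components."""
        d = np.abs(x - self.mu)
        matches = d < self.match_sigma * np.sqrt(self.var)
        if matches.any():
            k = int(np.argmin(np.where(matches, d, np.inf)))   # closest match
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
            self.w = (1.0 - self.alpha) * self.w
            self.w[k] += self.alpha
        else:
            k = int(np.argmin(self.w))          # replace the weakest component
            self.mu[k], self.var[k], self.w[k] = x, self.var0, self.w0
        self.w /= self.w.sum()
        # Background components: best weight-to-spread ratio up to bg_weight.
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.bg_weight)) + 1
        return bool(matches.any()) and k in set(order[:n_bg].tolist())
```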

    rSVDdpd: A Robust Scalable Video Surveillance Background Modelling Algorithm

    A basic algorithmic task in automated video surveillance is to separate background and foreground objects. Camera tampering, noisy videos, low frame rate, etc., pose difficulties in solving the problem. A general approach that classifies the tampered frames and performs subsequent analysis only on the remaining frames, after discarding the tampered ones, results in loss of information. Several robust methods based on robust principal component analysis (PCA) have been introduced to solve this problem. To date, considerable effort has been expended to develop robust PCA via Principal Component Pursuit (PCP) methods with reduced computational cost and visually appealing foreground detection. However, the convex optimizations used in these algorithms do not scale well to real-world large datasets due to large matrix inversion steps. Also, an integral component of these foreground detection algorithms is the singular value decomposition, which is not robust. In this paper, we present a new video surveillance background modelling algorithm based on a new robust singular value decomposition technique, rSVDdpd, which addresses both of these issues. We also demonstrate the superiority of our proposed algorithm on a benchmark dataset and a new real-life video surveillance dataset in the presence of camera tampering. Software code and additional illustrations are available on the accompanying website, rSVDdpd Homepage (https://subroy13.github.io/rsvddpd-home/).
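
    The generic low-rank/residual split that such algorithms build on can be sketched with a plain truncated SVD, as below; the paper's contribution, the robust rSVDdpd decomposition, is not reproduced here and would replace the np.linalg.svd call:

```python
import numpy as np

def svd_background_split(frames, rank=1):
    """Generic low-rank/residual split with a plain truncated SVD; a robust
    decomposition such as rSVDdpd would replace np.linalg.svd below."""
    t = len(frames)
    h, w = frames[0].shape
    M = np.stack([f.astype(float).ravel() for f in frames], axis=1)   # (H*W, T)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank background part
    S = M - L                                       # residual (foreground) part
    background = L.reshape(h, w, t).transpose(2, 0, 1)
    foreground = np.abs(S).reshape(h, w, t).transpose(2, 0, 1)
    return background, foreground
```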

    Non-parametric data-driven background modelling using conditional probabilities

    Background modelling is one of the main challenges in particle physics data analysis. Commonly employed strategies include the use of simulated events of the background processes and the fitting of parametric background models to the observed data. However, reliable simulations are not always available or may be extremely costly to produce. As a result, in many cases, uncertainties associated with the accuracy or sample size of the simulation are the limiting factor in the analysis sensitivity. At the same time, parametric models are limited by the a priori unknown functional form and parameter values of the background distribution. These issues become ever more pressing when large datasets become available, as is already the case at the CERN Large Hadron Collider, and when studying exclusive signatures involving hadronic backgrounds. Two novel and widely applicable non-parametric data-driven background modelling techniques are presented, which address these issues for a broad class of searches and measurements. The first, relying on ancestral sampling, uses data from a relaxed event selection to estimate a graph of conditional probability density functions of the variables used in the analysis, accounting for significant correlations. A background model is then generated by sampling events from this graph, before the full event selection is applied. In the second, a generative adversarial network is trained to estimate the joint probability density function of the variables used in the analysis. The training is performed on a relaxed event selection which excludes the signal region, and the network is conditioned on a blinding variable. Subsequently, the conditional probability density function is interpolated into the signal region to model the background. The application of each method to a benchmark analysis is presented in detail, and the performance is discussed. Comment: 33 pages, 18 figures
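
    A toy two-variable sketch of the ancestral-sampling idea (histogram-based conditionals and hypothetical variable names, not the paper's implementation, which handles a full graph of analysis variables) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def ancestral_background(x1, x2, n_samples, bins=30):
    """Toy two-variable sketch of ancestral sampling: estimate p(x1) and
    p(x2 | x1) with histograms from a relaxed selection, then sample x1
    followed by x2 along the graph. Variable names are hypothetical."""
    edges1 = np.histogram_bin_edges(x1, bins=bins)
    p1 = np.histogram(x1, bins=edges1)[0].astype(float)
    p1 /= p1.sum()
    edges2 = np.histogram_bin_edges(x2, bins=bins)
    # Conditional histograms of x2 within each x1 bin.
    idx1 = np.clip(np.digitize(x1, edges1) - 1, 0, bins - 1)
    cond = np.array([np.histogram(x2[idx1 == b], bins=edges2)[0]
                     for b in range(bins)], dtype=float) + 1e-9
    cond /= cond.sum(axis=1, keepdims=True)

    def draw(edges, probs, size):
        b = rng.choice(len(probs), size=size, p=probs)
        return rng.uniform(edges[b], edges[b + 1])      # uniform within the bin

    s1 = draw(edges1, p1, n_samples)
    b1 = np.clip(np.digitize(s1, edges1) - 1, 0, bins - 1)
    s2 = np.array([draw(edges2, cond[b], 1)[0] for b in b1])
    return s1, s2
```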

    Background modelling for γ\gamma-ray spectroscopy with INTEGRAL/SPI

    The coded-mask spectrometer-telescope SPI on board the INTEGRAL observatory records photons in the energy range between 20 and 8000 keV. A robust and versatile method to model the dominant instrumental background (BG) radiation is difficult to establish for such a telescope in the rapidly changing space environment. From long-term monitoring of SPI's Germanium detectors, we built up a spectral-parameter database which characterises the instrument response as well as the BG behaviour. We aim to build a self-consistent and broadly applicable BG model for typical science cases of INTEGRAL/SPI, based on this database. The general analysis method for SPI relies on distinguishing between illumination patterns on the 19-element Germanium detector array from BG and sky in a maximum-likelihood framework. We illustrate how the complete set of measurements, even including the exposures of the sources of interest, can be used to define a BG model. We apply our method to different science cases, including point-like, diffuse, continuum, and line emission, and evaluate its adequacy in each case. From likelihood values and the number of fitted parameters, we determine how strong the impact of the unknown BG variability is. We find that the number of fitted parameters, i.e. how often the BG has to be re-normalised, depends on the emission type (diffuse with many observations over a large sky region, or point-like with concentrated exposure around one source) and on the spectral energy range and bandwidth. A unique time scale, valid for all analysis issues, is not applicable for INTEGRAL/SPI, but must and can be inferred from the chosen data set. We conclude that our BG modelling method is usable in a large variety of INTEGRAL/SPI science cases and provides nearly systematics-free and robust results. Comment: 11 pages, 2 appendix pages, 9 figures, 4 appendix figures, 4 tables; based on the work of Diehl et al. (2018), Siegert (2017), and Siegert (2013).
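
    The pattern-fitting idea can be sketched, for a single pointing, as a two-amplitude Poisson maximum-likelihood fit over the 19 detectors; the patterns, the single-pointing simplification, and the optimiser choice below are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np
from scipy.optimize import minimize

def fit_sky_and_background(counts, sky_pattern, bg_pattern):
    """Sketch of the pattern-fitting idea for one pointing: model the counts
    in the 19 detectors as a_sky * sky_pattern + a_bg * bg_pattern and fit
    the two amplitudes by maximising the Poisson likelihood."""
    def neg_loglike(a):
        mu = np.clip(a[0] * sky_pattern + a[1] * bg_pattern, 1e-10, None)
        return np.sum(mu - counts * np.log(mu))   # -log L up to a constant
    res = minimize(neg_loglike, x0=[1.0, 1.0],
                   bounds=[(0.0, None), (0.0, None)], method='L-BFGS-B')
    return res.x                                  # fitted sky and BG amplitudes
```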