
    Foreground detection enhancement using Pearson correlation filtering

    Foreground detection algorithms are commonly employed as an initial module in video processing pipelines for automated surveillance. The masks produced by these algorithms are usually postprocessed in order to improve their quality. In this work, a postprocessing filter based on the Pearson correlation among the pixels in a neighborhood of the pixel at hand is proposed. The flow of information among pixels is controlled by the correlation that exists among them. This way, the filtering performance is enhanced with respect to several state-of-the-art proposals, as demonstrated on a selection of benchmark videos.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
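
    The core idea, letting mask information flow between a pixel and only those neighbors whose temporal behavior correlates with it, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the neighborhood radius, the binarization threshold, and the use of clipped temporal Pearson correlation as weights are assumptions made for the sketch:

```python
import numpy as np

def pearson_filter(mask, frames, radius=1, thresh=0.5):
    """Correlation-weighted smoothing of a foreground mask.

    mask   : (H, W) float array in [0, 1], raw detector output.
    frames : (T, H, W) float array of recent grayscale frames,
             used to estimate per-pixel temporal correlation.
    The neighborhood radius and threshold are illustrative
    choices, not values taken from the paper.
    """
    T, H, W = frames.shape
    out = np.zeros_like(mask, dtype=float)
    # Zero-mean, unit-norm temporal profiles, so a dot product
    # of two profiles is their Pearson correlation coefficient.
    prof = frames - frames.mean(axis=0)
    norm = np.sqrt((prof ** 2).sum(axis=0)) + 1e-12
    prof = prof / norm
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Clip to [0, 1] so anti-correlated
                        # neighbors carry no weight.
                        r = max(0.0, float((prof[:, y, x] * prof[:, ny, nx]).sum()))
                        w = r if (dy, dx) != (0, 0) else 1.0
                        num += w * mask[ny, nx]
                        den += w
            out[y, x] = num / den if den > 0 else mask[y, x]
    return (out >= thresh).astype(np.uint8)
```

    An isolated positive pixel surrounded by correlated background neighbors is averaged down below the threshold and removed, which is the intended denoising effect.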

    Foreground Detection in Camouflaged Scenes

    Foreground detection has been widely studied for decades due to its importance in many practical applications. Most of the existing methods assume foreground and background show visually distinct characteristics and thus the foreground can be detected once a good background model is obtained. However, there are many situations where this is not the case. Of particular interest in video surveillance is the camouflage case. For example, an active attacker camouflages by intentionally wearing clothes that are visually similar to the background. In such cases, even given a decent background model, it is not trivial to detect foreground objects. This paper proposes a texture guided weighted voting (TGWV) method which can efficiently detect foreground objects in camouflaged scenes. The proposed method employs the stationary wavelet transform to decompose the image into frequency bands. We show that the small and hardly noticeable differences between foreground and background in the image domain can be effectively captured in certain wavelet frequency bands. To make the final foreground decision, a weighted voting scheme is developed based on intensity and texture of all the wavelet bands with carefully designed weights. Experimental results demonstrate that the proposed method achieves superior performance compared to the current state-of-the-art results.
    Comment: IEEE International Conference on Image Processing, 201
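
    A rough sketch of the band-wise voting idea (not the paper's TGWV implementation): the code below substitutes a one-level undecimated Haar transform for the paper's stationary wavelet transform, and uses fixed illustrative band weights and a fixed difference threshold rather than the carefully designed weights described in the abstract:

```python
import numpy as np

def haar_bands(img):
    """One level of an undecimated (stationary) Haar transform.

    Returns approximation plus horizontal, vertical, and diagonal
    detail bands, all at full image resolution (np.roll wraps at
    the borders, which is fine for this sketch).
    """
    lo = lambda x, ax: (x + np.roll(x, -1, axis=ax)) / 2.0
    hi = lambda x, ax: (x - np.roll(x, -1, axis=ax)) / 2.0
    a = lo(lo(img, 0), 1)  # approximation (intensity-like)
    h = hi(lo(img, 0), 1)  # horizontal detail
    v = lo(hi(img, 0), 1)  # vertical detail
    d = hi(hi(img, 0), 1)  # diagonal detail (texture-like)
    return [a, h, v, d]

def weighted_vote_mask(frame, background,
                       weights=(0.5, 0.2, 0.2, 0.1), thresh=0.1):
    """Per-band foreground votes combined with fixed weights.

    Each band votes where it differs noticeably from the background
    model; the weights and threshold here are placeholders, not the
    intensity/texture-derived weights of the paper.
    """
    votes = np.zeros(frame.shape)
    for w, fb, bb in zip(weights, haar_bands(frame), haar_bands(background)):
        votes += w * (np.abs(fb - bb) > thresh)
    return (votes >= 0.5).astype(np.uint8)
```

    The point of the decomposition is that a camouflaged object invisible in the approximation band can still trigger votes in the detail (texture) bands, and vice versa.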

    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied for evaluating the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings in existing metrics and propose a full reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality in the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics that are developed to evaluate the perceived quality of reconstructed background images.
    Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated Database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (Email for permissions at: ashrotre at asu dot edu)
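
    The probability-summation pooling that the index builds on can be illustrated in isolation: an artifact is perceived if it is detected at any scale, so per-scale detection probabilities combine as P = 1 - prod(1 - p_s). The Weibull-style mapping from a per-scale distortion measure to a detection probability and its exponent below are generic psychophysics conventions, not the specific formulation of the paper:

```python
import numpy as np

def probability_summation(distortions, beta=3.0):
    """Pool per-scale distortion measures into one detection score.

    Each nonnegative distortion d_s is mapped to a detection
    probability p_s = 1 - exp(-d_s**beta) (a Weibull-style
    psychometric function; beta is an illustrative choice), and the
    scales are pooled assuming independent detection at each scale:
        P = 1 - prod_s (1 - p_s)
    The result is 0 when nothing is distorted and approaches 1 as
    any scale becomes clearly distorted.
    """
    d = np.asarray(distortions, dtype=float)
    p = 1.0 - np.exp(-np.power(d, beta))
    return 1.0 - np.prod(1.0 - p)
```

    This pooling is why a single badly reconstructed scale can dominate the final quality prediction even when the other scales look clean.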

    Constraining the epoch of reionization with the variance statistic: simulations of the LOFAR case

    Several experiments are underway to detect the cosmic redshifted 21-cm signal from neutral hydrogen from the Epoch of Reionization (EoR). Due to their very low signal-to-noise ratio, these observations aim for a statistical detection of the signal by measuring its power spectrum. We investigate the extraction of the variance of the signal as a first step towards detecting and constraining the global history of the EoR. The signal variance is the integral of the signal's power spectrum, and it is expected to be measured with high significance. We demonstrate this through results from a simulation and parameter estimation pipeline developed for the Low Frequency Array (LOFAR)-EoR experiment. We show that LOFAR should be able to detect the EoR in 600 hours of integration using the variance statistic. Additionally, the redshift ($z_r$) and duration ($\Delta z$) of reionization can be constrained assuming a parametrization. We use an EoR simulation with $z_r = 7.68$ and $\Delta z = 0.43$ to test the pipeline. We are able to detect the simulated signal with a significance of 4 standard deviations and extract the EoR parameters as $z_r = 7.72^{+0.37}_{-0.18}$ and $\Delta z = 0.53^{+0.12}_{-0.23}$ in 600 hours, assuming that systematic errors can be adequately controlled. We further show that the significance of detection and the constraints on EoR parameters can be improved by measuring the cross-variance of the signal by cross-correlating consecutive redshift bins.
    Comment: 13 pages, 14 figures, Accepted for publication in MNRA
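
    The statement that the variance is the integral (in the discrete case, the sum) of the power spectrum is Parseval's theorem, which a short numerical check makes concrete. This illustrates the statistic itself, not the LOFAR-EoR pipeline:

```python
import numpy as np

def variance_from_power_spectrum(field):
    """Recover the variance of a field from its power spectrum.

    With NumPy's unnormalized FFT convention, Parseval's theorem gives
        sum_k |F(k)|^2 = N * sum_x |f(x)|^2,
    so for a mean-subtracted field the variance equals the sum of
    |F(k)|^2 / N^2. This is why a variance measurement is the
    band-integrated counterpart of a power-spectrum measurement.
    """
    f = field - field.mean()
    power = np.abs(np.fft.fftn(f)) ** 2 / f.size  # unnormalized P(k)
    return power.sum() / f.size

# The two estimates agree to numerical precision on any test field,
# e.g. field = np.random.default_rng(0).normal(size=(64, 64)):
# variance_from_power_spectrum(field) matches np.var(field).
```

    Because the variance collapses the whole spectrum into one number, it can be detected at higher significance than any individual power-spectrum bin, which is the motivation given in the abstract.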