
    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied for evaluating the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings in existing metrics and propose a full reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality in the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics that are developed to evaluate the perceived quality of reconstructed background images.
    Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated Database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotre@asu.edu)
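    The abstract's pooling idea can be illustrated with a minimal sketch. This is not the authors' RBQI implementation: the per-scale error measure, the psychometric mapping, and the constants `alpha` and `beta` below are all hypothetical stand-ins, chosen only to show how probability summation combines per-scale distortion-detection probabilities into one score (distortion counts as perceived if it is detected at any scale).

    ```python
    import numpy as np

    def probability_summation(detection_probs):
        """Pool per-scale detection probabilities: P = 1 - prod(1 - p_s).

        The distortion is considered visible if it is detected
        at ANY of the scales (independent-detection assumption).
        """
        p = np.asarray(detection_probs, dtype=float)
        return 1.0 - np.prod(1.0 - p)

    def toy_rbqi(reference, reconstructed, n_scales=3, alpha=0.1, beta=3.0):
        """Toy multi-scale index in the spirit of RBQI (hypothetical sketch,
        NOT the published metric): normalized per-scale error is mapped to a
        detection probability with a psychometric function, then pooled by
        probability summation. Higher score means worse reconstruction."""
        ref = np.asarray(reference, dtype=float)
        rec = np.asarray(reconstructed, dtype=float)
        probs = []
        for _ in range(n_scales):
            err = np.mean(np.abs(ref - rec)) / 255.0      # normalized error
            probs.append(1.0 - np.exp(-((err / alpha) ** beta)))
            ref = ref[::2, ::2]                            # next (coarser) scale
            rec = rec[::2, ::2]
        return probability_summation(probs)
    ```

    An identical reference and reconstruction yield a score of exactly 0, and the score grows monotonically with the per-scale error.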

    Motion-Aware Graph Regularized RPCA for background modeling of complex scenes

    Computing a background model from a given sequence of video frames is a prerequisite for many computer vision applications. Recently, this problem has been posed as learning a low-dimensional subspace from high-dimensional data. Many contemporary subspace segmentation methods have been proposed to overcome the limitations of methods developed for simple background scenes. Unfortunately, because they discard motion information and do not preserve the intrinsic geometric structure of video data, most existing algorithms fail to recover a reliable low-rank component for complex scenes, such as those with a background largely occluded by foreground objects, superfluous frames caused by intermittently moving foreground objects, sudden lighting variations, and camera jitter. To overcome these difficulties, we propose a motion-aware graph regularization of the low-rank component for video background modeling. We compute optical flow and use this information to build a motion-aware matrix. To capture the locality and similarity information within a video, we compute inter-frame and intra-frame graphs, which we use to preserve geometric information in the low-rank component. Finally, we use the linearized alternating direction method with parallel splitting and adaptive penalty to combine the preceding steps and recover the background model. Experimental evaluations on challenging sequences demonstrate promising results over state-of-the-art methods.
    This research is supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) (IITP-2016-H8601-16-1002) supervised by the IITP (Institute for Information & communications Technology Promotion).
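    For context, the baseline model this paper extends is plain RPCA (principal component pursuit), which splits the frame matrix D into a low-rank background L and a sparse foreground S. The sketch below is a minimal inexact-ALM solver for that baseline only; it does not include the paper's motion-aware matrix or graph regularization terms, and the parameter defaults are standard textbook choices, not the authors' settings.

    ```python
    import numpy as np

    def shrink(X, tau):
        """Soft-thresholding: proximal operator of the L1 norm."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_shrink(X, tau):
        """Singular value thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(s, tau)) @ Vt

    def rpca(D, lam=None, n_iter=100, rho=1.5, tol=1e-7):
        """Baseline RPCA via inexact ALM: D ≈ L (low-rank background)
        + S (sparse foreground). Columns of D are vectorized frames."""
        m, n = D.shape
        if lam is None:
            lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
        mu = m * n / (4.0 * np.sum(np.abs(D)) + 1e-12)
        mu_max = mu * 1e6
        Y = np.zeros_like(D)                      # Lagrange multipliers
        S = np.zeros_like(D)
        norm_D = np.linalg.norm(D)
        for _ in range(n_iter):
            L = svd_shrink(D - S + Y / mu, 1.0 / mu)   # low-rank update
            S = shrink(D - L + Y / mu, lam / mu)       # sparse update
            R = D - L - S                              # residual
            Y += mu * R                                # dual ascent
            mu = min(rho * mu, mu_max)                 # adaptive penalty
            if np.linalg.norm(R) <= tol * norm_D:
                break
        return L, S
    ```

    On synthetic data that exactly fits the low-rank-plus-sparse model, this recovers both components accurately; the paper's contribution is precisely the extra regularization needed when real scenes violate these assumptions.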