
    Color Space Selection for Self-Organizing Map Based Foreground Detection in Video Sequences

    The selection of the best color space is a fundamental task in detecting foreground objects in scenes. In many situations, especially with dynamic backgrounds, neither the grayscale nor the RGB color space is the best choice for detecting foreground objects. Other standard color spaces, such as YCbCr or HSV, have been proposed for background modeling in the literature, although the best results have been achieved with different color spaces depending on the application, scene, algorithm, etc. In this work, a process for selecting the color space and weighting its color components is proposed to detect foreground objects in video sequences using self-organizing maps. Experimental results are also provided using well-known benchmark videos. (Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech)
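    The benefit of color component weighting can be illustrated with a small sketch. The RGB-to-YCbCr conversion below follows the standard ITU-R BT.601 formulas; the weight vector, the pixel values, and the helper names are illustrative assumptions, not the paper's actual selection process.

    ```python
    def rgb_to_ycbcr(r, g, b):
        """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    def weighted_distance(p, q, weights):
        """Weighted Euclidean distance: weights let chosen components dominate."""
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, q)) ** 0.5

    bg, shadow = (120, 110, 100), (60, 55, 50)   # same hue, half the brightness
    d_rgb = weighted_distance(bg, shadow, (1, 1, 1))
    # Down-weighting luma (Y) makes a cast shadow look close to the background,
    # so it is less likely to be misclassified as foreground.
    d_chroma = weighted_distance(rgb_to_ycbcr(*bg), rgb_to_ycbcr(*shadow), (0.1, 1, 1))
    print(d_chroma < d_rgb)  # True
    ```

    The same weighting scheme can be plugged into any per-pixel distance test, which is why the choice of space and weights can be tuned per scene.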

    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data—which, if presented in its raw format, is rather unwieldy and costly—have become driving forces for the development of more effective solutions to present video contents and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other.

    Foreground segmentation in depth imagery using depth and spatial dynamic models for video surveillance applications

    Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture of Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
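    The paper combines a mixture-of-Gaussians model with a Bayesian network; as a minimal sketch of only the background-subtraction half, the snippet below maintains a single running Gaussian per depth pixel (a deliberate simplification of the full mixture). The parameter values and the depth readings are illustrative assumptions.

    ```python
    import math

    def update_pixel(mean, var, value, alpha=0.05, k=2.5):
        """Running single-Gaussian background model for one depth pixel:
        a value more than k standard deviations from the mean is foreground;
        background values are blended into the running mean and variance."""
        is_fg = abs(value - mean) > k * math.sqrt(var)
        if not is_fg:
            mean = (1 - alpha) * mean + alpha * value
            var = (1 - alpha) * var + alpha * (value - mean) ** 2
        return is_fg, mean, var

    # Depth (mm) stays near 3000, then a person at 1500 mm enters the pixel.
    mean, var = 3000.0, 25.0
    for depth in (3001, 2999, 3002):
        fg, mean, var = update_pixel(mean, var, depth)
    fg, mean, var = update_pixel(mean, var, 1500)
    print(fg)  # True
    ```

    Note that depth sensors make this test unusually robust to lighting, which is exactly the property the paper exploits.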

    Recent Developments in Video Surveillance

    With surveillance cameras installed everywhere and continuously streaming thousands of hours of video, how can that huge amount of data be analyzed or even be useful? Is it possible to search those countless hours of videos for subjects or events of interest? Shouldn’t the presence of a car stopped at a railroad crossing trigger an alarm system to prevent a potential accident? In the chapters selected for this book, experts in video surveillance provide answers to these questions and other interesting problems, skillfully blending research experience with practical real-life applications. Academic researchers will find a reliable compilation of relevant literature in addition to pointers to current advances in the field. Industry practitioners will find useful hints about state-of-the-art applications. The book also provides directions for open problems where further advances can be pursued.

    Hierarchical improvement of foreground segmentation masks in background subtraction

    A plethora of algorithms have been defined for foreground segmentation, a fundamental stage in many computer vision applications. In this work, we propose a post-processing framework to improve the foreground segmentation performance of background subtraction algorithms. We define a hierarchical framework for extending segmented foreground pixels to undetected foreground object areas and for removing erroneously segmented foreground. Firstly, we create a motion-aware hierarchical image segmentation of each frame that prevents merging foreground and background image regions. Then, we estimate the quality of the foreground mask through the fitness of the binary regions in the mask and the hierarchy of segmented regions. Finally, the improved foreground mask is obtained as an optimal labeling by jointly exploiting foreground quality and spatial color relations in a pixel-wise fully-connected Conditional Random Field. Experiments are conducted over four large and heterogeneous datasets with varied challenges (CDNET2014, LASIESTA, SABS and BMC), demonstrating the capability of the proposed framework to improve background subtraction results. This work was partially supported by the Spanish Government (HAVideo, TEC2014-53176-R).
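    The core idea of extending foreground to whole segmented regions and clearing spurious detections can be sketched with a simple region-level vote. This is not the paper's CRF labeling: the region map, thresholds, and function name are illustrative stand-ins for the motion-aware hierarchical segmentation.

    ```python
    from collections import defaultdict

    def refine_mask(mask, regions, fill_thr=0.6, clear_thr=0.1):
        """Region-level cleanup: regions mostly covered by foreground are
        filled in, regions with only a few foreground pixels are cleared.
        `regions` holds a region id per pixel."""
        fg, total = defaultdict(int), defaultdict(int)
        h, w = len(mask), len(mask[0])
        for y in range(h):
            for x in range(w):
                total[regions[y][x]] += 1
                fg[regions[y][x]] += mask[y][x]
        out = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                ratio = fg[regions[y][x]] / total[regions[y][x]]
                if ratio >= fill_thr:
                    out[y][x] = 1      # fill holes inside a foreground region
                elif ratio <= clear_thr:
                    out[y][x] = 0      # remove isolated false positives
        return out

    # One object region (id 1) with a hole, one background region (id 0) with noise.
    regions = [[0, 0, 1, 1],
               [0, 0, 1, 1]]
    mask    = [[0, 1, 1, 0],
               [0, 0, 1, 1]]
    print(refine_mask(mask, regions, clear_thr=0.3))  # [[0, 0, 1, 1], [0, 0, 1, 1]]
    ```

    The paper replaces this hard vote with an optimal CRF labeling, but the region-consistency intuition is the same.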

    ViBe: A universal background subtraction algorithm for video sequences

    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based on the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudocode and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a version of our algorithm downscaled to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques. There is a dedicated web page for ViBe at http://www.telecom.ulg.ac.be/research/vibe.
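    The sample-set classification, random replacement, and neighbor propagation described above can be sketched in a few lines. The parameter values below are those commonly cited for ViBe (20 samples, radius 20, 2 matches, 1-in-16 subsampling); the tiny synthetic frames and helper names are illustrative only.

    ```python
    import random

    N_SAMPLES = 20      # samples kept per pixel
    RADIUS = 20         # intensity distance counting as a match
    MIN_MATCHES = 2     # matches needed to classify as background
    SUBSAMPLING = 16    # 1-in-16 chance of updating the model

    def init_model(frame):
        """Seed each pixel's sample set from its first observed value."""
        return [[[v] * N_SAMPLES for v in row] for row in frame]

    def classify_and_update(frame, model, rng=random):
        """Return a foreground mask (1 = foreground) and update the model in place."""
        h, w = len(frame), len(frame[0])
        mask = [[1] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                v = frame[y][x]
                matches = sum(1 for s in model[y][x] if abs(s - v) < RADIUS)
                if matches >= MIN_MATCHES:
                    mask[y][x] = 0  # background
                    # Random replacement: any stored sample may be swapped out,
                    # not the oldest one -- the paper's departure from FIFO models.
                    if rng.randrange(SUBSAMPLING) == 0:
                        model[y][x][rng.randrange(N_SAMPLES)] = v
                    # Spatial propagation: occasionally push the value into a
                    # randomly chosen neighbour's model.
                    if rng.randrange(SUBSAMPLING) == 0:
                        ny = min(max(y + rng.choice([-1, 0, 1]), 0), h - 1)
                        nx = min(max(x + rng.choice([-1, 0, 1]), 0), w - 1)
                        model[ny][nx][rng.randrange(N_SAMPLES)] = v
        return mask

    # A static 4x4 scene with one bright object appearing in the second frame.
    frame0 = [[10] * 4 for _ in range(4)]
    frame1 = [row[:] for row in frame0]
    frame1[1][2] = 200
    model = init_model(frame0)
    mask = classify_and_update(frame1, model)
    print(mask[1][2], mask[0][0])  # 1 0 -> object detected, static pixel is background
    ```

    Note that updates only happen on background-classified pixels, which is what keeps a genuinely moving object out of the model.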

    LaBGen: A method based on motion detection for generating the background of a scene

    Given a video sequence acquired with a fixed camera, the generation of the stationary background of the scene is a challenging problem which aims at computing a reference image for a motionless background. For that purpose, we developed our method named LaBGen, which emerged as the best one during the Scene Background Modeling and Initialization (SBMI) workshop organized in 2015, and the IEEE Scene Background Modeling Contest (SBMC) organized in 2016. LaBGen combines a pixel-wise temporal median filter and a patch selection mechanism based on motion detection. To detect motion, a background subtraction algorithm decides, for each frame, which pixels belong to the background. In this paper, we describe the LaBGen method extensively, evaluate it on the SBI 2016 dataset and compare its performance with other background generation methods. We also study its computational complexity, the performance sensitivity with respect to its parameters, and the stability of the predicted background image over time with respect to the chosen background subtraction algorithm. We provide an open source C++ implementation at http://www.telecom.ulg.ac.be/labgen.
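    The median-plus-motion idea can be sketched per pixel: take the median only over frames where the pixel was not flagged as moving. This is a simplification of LaBGen, which selects patches rather than individual pixels; the inputs and names below are illustrative stand-ins.

    ```python
    def generate_background(frames, motion_masks):
        """Median-style background generation over the frames in which each
        pixel was NOT flagged as moving. `frames` is a list of 2-D intensity
        grids; `motion_masks` has the same shape, with 1 marking motion as
        decided by some background subtraction step. Falls back to all
        observations when a pixel is always moving."""
        h, w = len(frames[0]), len(frames[0][0])
        bg = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = [f[y][x] for f, m in zip(frames, motion_masks) if m[y][x] == 0]
                if not vals:
                    vals = [f[y][x] for f in frames]
                bg[y][x] = sorted(vals)[len(vals) // 2]  # simple median pick
        return bg

    # A moving object crosses pixel (0, 1) in the second of three frames.
    frames = [[[10, 10]], [[10, 200]], [[10, 10]]]
    masks  = [[[0, 0]],  [[0, 1]],  [[0, 0]]]
    print(generate_background(frames, masks))  # [[10, 10]]
    ```

    Filtering by the motion mask is what lets the method recover a clean background even when an object lingers longer than half the sequence, where a plain temporal median would fail.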

    Detection and recovery from camera bump during automated inspection of automobiles on an assembly line

    This thesis details the steps taken to detect and compensate for camera bumps while performing part identification using VI at the BMW manufacturing plant and on the simulation testbed. For the system presented here to work, the user is required to record one video from the camera before the camera is bumped and another after it has been bumped. The premise behind the method suggested here is that the transformation between the backgrounds of the pre- and post-bump videos will be equal to the transformation in the foreground. A background extraction program is used to generate a background image from each of the pre- and post-bump videos. Feature tracking and matching is performed on the background images to find the transformation between them. This transformation is then applied to the templates extracted from the pre-bump video. An additional manual compensation step is needed in cases where the transformation in the background is not equal to the transformation in the foreground. The resultant transformation is applied to all the templates of the pre-bump video, and VI is seen to successfully identify parts with sufficient accuracy.
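    Estimating the background-to-background transformation can be sketched in its simplest form: a brute-force search for the translation minimizing the squared difference between the two background images. The thesis uses feature tracking and matching and may recover a richer transformation; the pure-translation search, grid size, and names below are illustrative assumptions.

    ```python
    def estimate_shift(pre_bg, post_bg, max_shift=3):
        """Brute-force the (dy, dx) translation minimising the mean squared
        difference between the pre- and post-bump background images."""
        h, w = len(pre_bg), len(pre_bg[0])
        best, best_cost = (0, 0), float("inf")
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                cost = n = 0
                for y in range(h):
                    for x in range(w):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:  # compare only the overlap
                            cost += (pre_bg[y][x] - post_bg[yy][xx]) ** 2
                            n += 1
                if n and cost / n < best_cost:
                    best_cost, best = cost / n, (dy, dx)
        return best

    # Synthetic background with a bright landmark, shifted by (1, 2) by the "bump".
    pre = [[0] * 8 for _ in range(8)]
    pre[2][2] = 255
    post = [[0] * 8 for _ in range(8)]
    post[3][4] = 255
    print(estimate_shift(pre, post))  # (1, 2)
    ```

    The recovered shift would then be applied to the pre-bump templates, exactly as the thesis applies its background transformation before re-running part identification.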