110,903 research outputs found

    Video object detection using fast and accurate change detection and thresholding

    Get PDF
    Video object detection is an important video processing technique. Change-detection and thresholding based video object detection techniques are widely used because of their efficiency. However, change detection and thresholding in real-world video sequences are challenging due to the complexity of video content and environmental artifacts. This thesis proposes a color-based change detection method and a video-content-adaptive thresholding method for accurate and fast video object detection. The proposed color-based change detection algorithm is based on the YUV color model, which has been shown to be a highly effective color model for object detection. First, frame differencing is carried out in each channel of a video frame. Then, the pixel intensities in both the gray-level channel Y and the color channels U and V of the difference frames are statistically modeled. Second, based on the statistical model of the gray levels in the Y channel, an entropy-based blocks-of-interest scatter estimation algorithm is proposed for locating the frame blocks that potentially contain moving objects; and based on the statistical models of the color intensities in the color channels, a statistical model of the maximum intensity between the U and V channels is obtained. Third, a significance test is applied to the detected blocks-of-interest in both the gray-level and color channels, based on the gray-level statistical model of the Y channel and the maximum-intensity statistical model of the U and V channels. The gray levels of pixels that are non-significant in the Y channel but significant in the U or V channel are then compensated according to their significance probabilities in the color channels. Finally, change masks are obtained by a thresholding algorithm. The proposed thresholding algorithm for change detection is based on a change-region scatter estimation algorithm and a video-content assessment algorithm that detects empty frames and estimates the strength of local unimportant changes. According to the proposed video-content assessment, the global threshold of a difference frame is computed discriminatively: for an empty frame, a noise-statistic-based thresholding algorithm with a low false-alarm rate is applied to obtain the threshold; otherwise, the global threshold is obtained by an optimum-thresholding-based, artifact-robust thresholding algorithm. Experimental results show that (1) with the support of the blocks-of-interest scatter estimation, the proposed change detection algorithm is efficient and robust across a variety of video content; (2) the proposed thresholding algorithm clearly outperforms the widely used intensity-distribution-based thresholding methods and is more efficient and more stable than state-of-the-art spatial-property-based thresholding methods for change detection; and (3) the video object detection technique combining the proposed change detection and thresholding algorithms is robust to artifacts and varied video content, and is especially suitable for real-world online video applications such as video surveillance.
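    A minimal sketch of the core steps described above, assuming a zero-mean Gaussian noise model on the difference frames, a fixed significance level, and Otsu's method as a stand-in for the content-adaptive threshold; the entropy-based blocks-of-interest estimation and the empty-frame handling are omitted, and all names and constants are illustrative rather than the thesis's own:

        import cv2
        import numpy as np

        Z_CUTOFF = 2.33   # assumed one-sided z cut-off (roughly a 1% significance level)

        def change_mask(prev_bgr, curr_bgr):
            """Rough sketch: YUV frame differencing, per-channel significance test,
            chroma-based compensation of the luma difference, then thresholding."""
            prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
            curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
            diff = np.abs(curr - prev)                       # per-channel difference frame

            y = diff[..., 0]
            uv = np.maximum(diff[..., 1], diff[..., 2])      # maximum intensity between U and V

            def significant(d):
                # Robust noise-scale estimate (assumption), then a per-pixel z-test.
                sigma = np.median(d) / 0.6745 + 1e-6
                return d / sigma > Z_CUTOFF

            sig_y, sig_uv = significant(y), significant(uv)

            # Compensate luma where only the colour channels respond significantly.
            y_comp = np.where(~sig_y & sig_uv, np.maximum(y, uv), y)

            # Global threshold on the compensated difference (Otsu as a placeholder
            # for the thesis's video-content-adaptive threshold).
            _, mask = cv2.threshold(np.clip(y_comp, 0, 255).astype(np.uint8), 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return mask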

    Detecting Moving Objects and Shadows in the HSV Color Space Using Dynamic Thresholds

    Get PDF
    The detection of moving objects in a video sequence is an essential step in almost all computer vision systems. However, because of dynamic changes in natural scenes, motion detection becomes a more difficult task. In this work, we propose a new method for detecting moving objects that is robust to shadows, noise, and illumination changes. The detection phase of the proposed method is an adaptation of the MOG approach in which the foreground is extracted in the HSV color space. To keep shadows from being included during the detection process, we developed a new shadow removal technique based on dynamic thresholding of the detected foreground pixels. The threshold calculation model is established using two statistical analysis tools that take into account the degree of shadow in the scene and robustness to noise. Experiments on a set of video sequences show that the proposed method provides better results than existing methods that are limited to static thresholds.
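    A rough sketch of the general idea, using OpenCV's stock MOG2 subtractor in place of the authors' adapted MOG model; the dynamic shadow threshold below is derived from simple statistics of the V-channel ratio between the frame and the background estimate, which is an assumption for illustration rather than the paper's exact calculation model:

        import cv2
        import numpy as np

        # Stand-in for the paper's adapted MOG model; shadows are handled explicitly below.
        mog = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

        def detect_foreground(frame_bgr):
            fg = mog.apply(frame_bgr) > 0                          # raw foreground mask
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
            bg_hsv = cv2.cvtColor(mog.getBackgroundImage(), cv2.COLOR_BGR2HSV).astype(np.float32)

            # Shadowed background keeps its hue/saturation but is darker, so look at
            # the brightness (V) ratio between the current frame and the background.
            ratio = (hsv[..., 2] + 1.0) / (bg_hsv[..., 2] + 1.0)

            # Dynamic lower bound on the ratio from statistics of the detected
            # foreground pixels (mean minus one standard deviation), clipped to [0.3, 0.9].
            r = ratio[fg]
            low = float(np.clip(r.mean() - r.std(), 0.3, 0.9)) if r.size else 0.5

            shadow = (fg & (ratio >= low) & (ratio < 1.0)
                      & (np.abs(hsv[..., 0] - bg_hsv[..., 0]) < 10)    # hue difference (linear approx.)
                      & (np.abs(hsv[..., 1] - bg_hsv[..., 1]) < 40))   # saturation difference
            return (fg & ~shadow).astype(np.uint8) * 255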

    Hand and face segmentation using motion and colour cues in digital image sequences

    Get PDF
    In this paper, we present a hand and face segmentation algorithm using motion and color cues. The algorithm is proposed for the content-based representation of sign language image sequences, where the hands and face constitute a video object. Our hand and face segmentation algorithm consists of three stages, namely color segmentation, temporal segmentation, and video object plane generation. In color segmentation, we model the skin color as a normal distribution and classify each pixel as skin or non-skin based on its Mahalanobis distance. The aim of temporal segmentation is to localize moving objects in image sequences. A statistical variance test is employed to detect object motion between two consecutive images. Finally, the results from color and temporal segmentation are analyzed to yield a change detection mask. The performance of the algorithm is illustrated by simulations carried out on the silent test sequence.
    Nariman Habili; Cheng-Chew Lim; Alireza Moini
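    A compact sketch of the colour and motion tests described above; the chroma-space Gaussian parameters, the Mahalanobis cut-off, and the variance-test constants are placeholders rather than the paper's values:

        import cv2
        import numpy as np

        # Assumed skin-colour model: a bivariate Gaussian over the chroma components
        # (Cb, Cr). The mean, covariance, and cut-off below are placeholders, not the paper's.
        SKIN_MEAN = np.array([109.0, 152.0])
        SKIN_COV_INV = np.linalg.inv(np.array([[75.0, 30.0], [30.0, 85.0]]))
        MAHA_THRESH = 3.0

        def skin_mask(frame_bgr):
            """Classify each pixel as skin / non-skin by its Mahalanobis distance."""
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
            d = ycrcb[..., [2, 1]] - SKIN_MEAN               # (Cb, Cr) deviation per pixel
            maha_sq = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)
            return maha_sq < MAHA_THRESH ** 2

        def motion_mask(prev_bgr, curr_bgr, win=5, sigma_noise=2.5, factor=3.0):
            """Variance-test style temporal segmentation on the luma difference."""
            gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32)
            diff = gray(curr_bgr) - gray(prev_bgr)
            mean = cv2.blur(diff, (win, win))
            var = cv2.blur(diff * diff, (win, win)) - mean * mean
            # Declare a pixel "moving" when its local variance clearly exceeds the
            # assumed camera-noise variance (a stand-in for the paper's statistical test).
            return var > factor * sigma_noise ** 2

        def object_mask(prev_bgr, curr_bgr):
            """Combine colour and temporal cues into a change detection mask."""
            return skin_mask(curr_bgr) & motion_mask(prev_bgr, curr_bgr)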

    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Full text link
    Detecting camouflaged moving foreground objects has been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background because of the small differences between them and thus suffer from under-detection of the camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that the small differences in the image domain can be highlighted in certain wavelet bands. Then the likelihood of each wavelet coefficient being foreground is estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrated that the proposed method significantly outperformed existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.80 for the other state-of-the-art methods.
    Comment: 13 pages, accepted by IEEE TI
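    A short sketch of the wavelet-domain idea using PyWavelets: decompose the frame and a background estimate into sub-bands, score each coefficient against a simple Gaussian background model, and fuse the per-band likelihoods; the band weights, noise scale, and fusion rule here are assumptions, not the paper's formulation:

        import numpy as np
        import pywt

        def band_likelihood(frame_band, bg_band, bg_std):
            """Foreground likelihood of each coefficient, scored as the normalised
            deviation from a Gaussian background model for that band."""
            z = np.abs(frame_band - bg_band) / (bg_std + 1e-6)
            return 1.0 - np.exp(-0.5 * z * z)                # in [0, 1); larger = more foreground

        def camouflaged_foreground(frame_gray, bg_gray, bg_std=2.0, wavelet='haar', thresh=0.6):
            """Fuse per-band likelihoods from a single-level 2-D DWT of the frame
            and of a background estimate (e.g. a running average of past frames)."""
            f_ll, (f_lh, f_hl, f_hh) = pywt.dwt2(frame_gray.astype(np.float32), wavelet)
            b_ll, (b_lh, b_hl, b_hh) = pywt.dwt2(bg_gray.astype(np.float32), wavelet)

            bands = [(f_ll, b_ll), (f_lh, b_lh), (f_hl, b_hl), (f_hh, b_hh)]
            # Detail bands get more weight: small image-domain differences tend to
            # show up as texture differences in the high-frequency sub-bands.
            weights = [0.25, 1.0, 1.0, 1.0]

            fused = sum(w * band_likelihood(f, b, bg_std)
                        for w, (f, b) in zip(weights, bands)) / sum(weights)

            # Upsample the half-resolution likelihood map to frame size and threshold it.
            full = np.kron(fused, np.ones((2, 2)))[:frame_gray.shape[0], :frame_gray.shape[1]]
            return full > thresh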