5 research outputs found

    Novel Approach for Detection and Removal of Moving Cast Shadows Based on RGB, HSV and YUV Color Spaces

    Cast shadows affect computer vision tasks such as image segmentation, object detection, and tracking, since objects and their shadows share the same visual motion characteristics. This unavoidable problem degrades video surveillance system performance. The basic idea of this paper is to exploit the evidence that shadows darken the surface on which they are cast. We therefore propose a simple and accurate method for detecting moving cast shadows based on chromatic properties in the RGB, HSV, and YUV color spaces. The method requires no a priori assumptions about the scene or lighting source. Starting from a normalization step, we apply a Canny filter to detect the boundary between self-shadow and cast shadow; this step is applied only to the first sequence. We then separate the background from moving objects using an improved version of the Gaussian mixture model. To remove the unwanted shadows completely, we use three change estimators computed from the intensity ratio in the HSV color space, chromaticity properties in the RGB color space, and the brightness ratio in the YUV color space. Only pixels that satisfy the thresholds of all three estimators are labeled as shadow and removed. Experiments carried out on various video databases show that the proposed system is robust and efficient and can precisely remove shadows in a wide range of environments without any assumptions. Experimental results also show that our approach outperforms existing methods and can run in real-time systems.
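    The three-estimator decision described in the abstract can be sketched as follows. This is an illustrative sketch only: the threshold values, helper names, and the exact form of each estimator are assumptions, not the paper's actual formulation.

    ```python
    import colorsys

    def luma(rgb):
        """BT.601 luma, i.e. the Y channel of YUV."""
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b

    def is_shadow(bg_rgb, fg_rgb,
                  v_lo=0.4, v_hi=0.95,      # HSV intensity-ratio bounds (assumed)
                  chroma_tol=0.05,          # RGB chromaticity tolerance (assumed)
                  y_lo=0.4, y_hi=0.95):     # YUV brightness-ratio bounds (assumed)
        """Label a pixel shadow only if all three estimators agree."""
        # Estimator 1: intensity ratio in HSV (a shadow lowers V, keeps H/S similar)
        bv = colorsys.rgb_to_hsv(*bg_rgb)[2]
        fv = colorsys.rgb_to_hsv(*fg_rgb)[2]
        if bv == 0 or not (v_lo <= fv / bv <= v_hi):
            return False
        # Estimator 2: chromaticity in RGB (normalized channels nearly unchanged)
        bs, fs = sum(bg_rgb), sum(fg_rgb)
        if bs == 0 or fs == 0:
            return False
        if any(abs(f / fs - b / bs) > chroma_tol for f, b in zip(fg_rgb, bg_rgb)):
            return False
        # Estimator 3: brightness ratio in YUV (luma drops but stays positive)
        by = luma(bg_rgb)
        return by > 0 and y_lo <= luma(fg_rgb) / by <= y_hi

    # A uniformly darkened background pixel passes all three tests;
    # a differently colored pixel fails the chromaticity test.
    bg = (0.8, 0.6, 0.4)
    shadowed = (0.48, 0.36, 0.24)   # 0.6 * bg, i.e. pure darkening
    print(is_shadow(bg, shadowed))  # True
    ```

    Requiring agreement of all three estimators is what makes the combination conservative: a moving object that merely happens to be dark will typically fail the chromaticity or brightness-ratio check even if it passes the HSV one.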

    A statistical approach for shadow detection using spatio-temporal contexts

    Background subtraction is an important step used to segment moving regions in surveillance videos. However, cast shadows are often falsely labeled as foreground objects, which can severely degrade the accuracy of object localization and detection. Effective shadow detection is necessary for accurate foreground segmentation, especially in outdoor scenes. Based on characteristics of shadows such as luminance reduction, chromaticity consistency, and texture consistency, we introduce a nonparametric framework for modeling surface behavior under cast shadows. To each pixel we assign a potential shadow value with a confidence weight, indicating the probability that the pixel location is an actual shadow point. Given an observed RGB value for a pixel in a new frame, we use its recent spatio-temporal context to compute an expected shadow RGB value. The similarity between the observed and expected shadow RGB values determines whether a pixel position is a true shadow. Experimental results demonstrate the performance of the proposed method on a suite of standard indoor and outdoor video sequences.
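    The per-pixel model the abstract describes — an expected shadow RGB value plus a confidence weight, updated from recent observations and compared against new frames — can be sketched roughly as below. The class name, the exponential update rule, and the cosine-similarity threshold are all assumptions for illustration; the paper's actual nonparametric formulation differs.

    ```python
    import math

    class ShadowModel:
        """Per-pixel expected shadow RGB with a confidence weight (sketch)."""

        def __init__(self, alpha=0.1, sim_thresh=0.9):
            self.alpha = alpha              # learning rate (assumed)
            self.sim_thresh = sim_thresh    # similarity threshold (assumed)
            self.expected = {}              # pixel position -> expected shadow RGB
            self.weight = {}                # pixel position -> confidence weight

        def update(self, pos, shadow_rgb):
            """Blend a new candidate shadow observation into the model."""
            if pos not in self.expected:
                self.expected[pos] = shadow_rgb
                self.weight[pos] = self.alpha
            else:
                a = self.alpha
                e = self.expected[pos]
                self.expected[pos] = tuple((1 - a) * ei + a * si
                                           for ei, si in zip(e, shadow_rgb))
                self.weight[pos] = min(1.0, self.weight[pos] + a)

        def is_shadow(self, pos, observed_rgb):
            """Classify by cosine similarity between observed and expected RGB."""
            if pos not in self.expected or self.weight[pos] < 0.2:
                return False               # not enough evidence at this position
            e, o = self.expected[pos], observed_rgb
            dot = sum(ei * oi for ei, oi in zip(e, o))
            ne = math.sqrt(sum(ei * ei for ei in e))
            no = math.sqrt(sum(oi * oi for oi in o))
            if ne == 0 or no == 0:
                return False
            return dot / (ne * no) >= self.sim_thresh
    ```

    The confidence weight gates the decision: a position that has rarely been observed in shadow cannot produce a shadow label, which mirrors the abstract's idea that the probability of a true shadow grows with accumulated spatio-temporal evidence.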

    Shadow removal utilizing multiplicative fusion of texture and colour features for surveillance image

    Automated surveillance systems often identify shadows as parts of a moving object, which jeopardizes subsequent image processing tasks such as object identification and tracking. In this thesis, an improved shadow elimination method for an indoor surveillance system is presented. The developed method fuses several image processing techniques. First, the image is segmented using the Statistical Region Merging algorithm to obtain segmented potential shadow regions. Next, multiple shadow identification features, including Normalized Cross-Correlation, Local Color Constancy, and Hue-Saturation-Value shadow cues, are applied to the images to generate feature maps. These feature maps are used to identify and remove cast shadows according to the segmented regions. The video dataset used is the Autonomous Agents for On-Scene Networked Incident Management dataset, which covers both indoor and outdoor video scenes. Benchmarking results indicate that the developed method is on par with several commonly used shadow detection methods. It yields a mean score of 85.17% for the video sequence with the strongest shadow and a mean score of 89.93% for the video with the most complex textured background. This research contributes to the development and improvement of a functioning shadow eliminator that can cope with image noise and various illumination changes.
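    The multiplicative-fusion step described above — combining per-pixel feature maps from the NCC, Local Color Constancy, and HSV cues so that a pixel counts as shadow only when every cue agrees — can be sketched as follows. The map values, the 2x2 example maps, and the threshold are illustrative assumptions, not the thesis's actual data.

    ```python
    def fuse_feature_maps(maps, threshold=0.5):
        """Multiply per-pixel feature maps (each in [0, 1]) and threshold the product."""
        h, w = len(maps[0]), len(maps[0][0])
        mask = [[False] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                score = 1.0
                for fmap in maps:
                    score *= fmap[i][j]      # product is high only if ALL cues are high
                mask[i][j] = score >= threshold
        return mask

    # Three hypothetical 2x2 cue maps: only the top-left pixel scores high in all.
    ncc = [[0.9, 0.2], [0.8, 0.1]]
    lcc = [[0.8, 0.9], [0.3, 0.2]]
    hsv = [[0.95, 0.9], [0.9, 0.1]]
    print(fuse_feature_maps([ncc, lcc, hsv]))  # [[True, False], [False, False]]
    ```

    Multiplicative fusion is stricter than averaging: a single near-zero cue drives the product toward zero, so a pixel that looks shadow-like in color but not in texture (or vice versa) is rejected.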

    Vision-Based 2D and 3D Human Activity Recognition
