
    Object's shadow removal with removal validation

    In this paper we introduce a shadow detection and removal method for moving objects, especially humans and vehicles. An effective method is presented for detecting and removing shadows from foreground figures. We assume that the foreground figures have been extracted from the input image by some background subtraction method, and that a figure contains only one moving object, with or without its shadow. The homogeneity property of shadows is exploited in a novel way for shadow detection, using an image division technique. The process is followed by filtering, shadow removal, boundary removal and removal validation.
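    The division-based homogeneity idea can be illustrated with a minimal sketch (assuming a grayscale frame, a background estimate and a binary foreground mask; the window size and thresholds below are illustrative, not the paper's values):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def detect_shadow_mask(frame, background, fg_mask,
                               ratio_low=0.4, ratio_high=0.95, var_thresh=0.01):
            """Label foreground pixels whose frame/background intensity ratio is
            roughly constant and below 1 as candidate shadow pixels."""
            frame = frame.astype(np.float64) / 255.0
            background = background.astype(np.float64) / 255.0
            ratio = frame / np.maximum(background, 1e-3)   # image division step

            # A cast shadow darkens the background but keeps its texture,
            # so the ratio should sit in a band below 1.
            in_band = (ratio > ratio_low) & (ratio < ratio_high) & fg_mask

            # Homogeneity check: the ratio should vary little inside a small window.
            local_mean = uniform_filter(ratio, size=5)
            local_var = uniform_filter(ratio ** 2, size=5) - local_mean ** 2
            homogeneous = local_var < var_thresh

            return in_band & homogeneous

    Pixels flagged by such a test would be dropped from the foreground, and a removal validation pass could then check that no object texture was discarded along with the shadow.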

    AN ADAPTIVE BACKGROUND UPDATION AND GRADIENT BASED SHADOW REMOVAL METHOD

    Moving object segmentation is an important topic in computer vision and has been avidly pursued by researchers. Background subtraction is generally used for segmenting moving objects, but it may also classify shadows as part of the detected objects. Shadow detection and removal is therefore an important step applied after moving object segmentation. However, these methods are adversely affected by changing environmental conditions: they are vulnerable to sudden illumination changes and shadowing effects. In this work we propose a fast, efficient and adaptive background subtraction method, which periodically updates the background frame and gives better results, together with a shadow elimination method that removes shadows from the segmented objects with good discriminative power. Keywords: moving object segmentation
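    The adaptive background updating described here can be sketched as a selective running average (the update rule and the learning rate alpha are assumptions made for illustration, not the authors' exact formulation):

        import numpy as np

        def segment_foreground(background, frame, thresh=25):
            """Plain background subtraction with a fixed threshold."""
            diff = np.abs(frame.astype(np.float64) - background)
            return diff > thresh

        def update_background(background, frame, fg_mask, alpha=0.05):
            """Blend the current frame into the background model, but only at
            pixels classified as background, so moving objects and their
            shadows do not contaminate the model."""
            blended = (1.0 - alpha) * background + alpha * frame.astype(np.float64)
            return np.where(fg_mask, background, blended)

    Run periodically, such an update lets the model track gradual illumination change; the gradient-based shadow step would then operate on the resulting foreground mask.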

    Silhouette coverage analysis for multi-modal video surveillance

    In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors. The multi-modal object detector of the system can be split into two consecutive parts: registration and coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, Cartesian-to-polar transform and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated by maximising the binary correlation. The silhouette coverage analysis also starts with moving object silhouette extraction. It then uses the registration information, i.e., rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analysed over time to reduce false alarms and to improve object detection. Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further improves the detection results.
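    The rotation and scale recovery step can be illustrated with a small sketch that matches two 1D radial contour vectors by circular cross-correlation (the FFT-based correlation and the equal angular sampling of the contours are assumptions made for this illustration):

        import numpy as np

        def rotation_and_scale(radial_a, radial_b):
            """Estimate the rotation and scale between two silhouettes from their
            1D radial contour vectors (radius as a function of polar angle,
            sampled at the same number of equally spaced angles)."""
            a = radial_a - radial_a.mean()
            b = radial_b - radial_b.mean()
            # Circular cross-correlation computed via the FFT.
            corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            shift = int(np.argmax(corr))
            angle_deg = 360.0 * shift / len(a)
            # The ratio of mean radii gives a rough scale factor.
            scale = radial_a.mean() / max(radial_b.mean(), 1e-6)
            return angle_deg, scale

    The translation would then be recovered separately, for example by maximising the binary correlation between the rotated and scaled silhouette images, as the abstract describes.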

    Background subtraction based on Local Shape

    We present a novel approach to background subtraction that is based on the local shape of small image regions. In our approach, an image region centered on a pixel is modeled using the local self-similarity descriptor. We aim at obtaining a reliable change detection based on local shape change in an image when foreground objects are moving. The method first builds a background model and compares the local self-similarities between the background model and the subsequent frames to distinguish background and foreground objects. Post-processing is then used to refine the boundaries of moving objects. Results show that this approach is promising as the foregrounds obtained are complete, although they often include shadows. Comment: 4 pages, 5 figures, 3 tables
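    A heavily simplified sketch of the idea, comparing local self-similarity descriptors of the background model and of the current frame at the same location, could look as follows (the patch and region sizes, the SSD-to-similarity mapping and the decision threshold are illustrative assumptions):

        import numpy as np

        def local_self_similarity(img, y, x, patch=3, region=9):
            """Simplified local self-similarity descriptor: correlation surface of
            the small patch centred at (y, x) against every patch position inside
            a larger surrounding region. Assumes (y, x) lies at least region // 2
            pixels away from the image border."""
            p, r = patch // 2, region // 2
            center = img[y - p:y + p + 1, x - p:x + p + 1].astype(np.float64)
            surface = []
            for dy in range(-r + p, r - p + 1):
                for dx in range(-r + p, r - p + 1):
                    cand = img[y + dy - p:y + dy + p + 1,
                               x + dx - p:x + dx + p + 1].astype(np.float64)
                    surface.append(np.sum((center - cand) ** 2))
            surface = np.asarray(surface)
            # Map SSD values to similarities in (0, 1]; larger means more similar.
            return np.exp(-surface / (surface.max() + 1e-6))

        def changed(background_img, frame, y, x, thresh=0.5):
            """Flag a pixel as foreground when the local shape differs enough
            between the background model and the current frame."""
            d_bg = local_self_similarity(background_img, y, x)
            d_fr = local_self_similarity(frame, y, x)
            return np.linalg.norm(d_bg - d_fr) > thresh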

    Carried baggage detection and recognition in video surveillance with foreground segmentation

    Security cameras installed in public spaces or in private organizations continuously record video data with the aim of detecting and preventing crime. For that reason, video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis, have gained high interest in recent years. In this thesis, the primary focus is on two key aspects of video analysis: reliable moving object segmentation, and carried object detection and identification. A novel moving object segmentation scheme by background subtraction is presented, relying on background modelling based on multi-directional gradient and phase congruency. As a post-processing step, the detected foreground contours are refined by classifying the edge segments as belonging to either the foreground or the background. A contour completion technique based on anisotropic diffusion is also introduced, for the first time in this area. The proposed method targets cast shadow removal, invariance to gradual illumination change, and closed contour extraction. A state-of-the-art carried object detection method is employed as a benchmark algorithm. This method includes silhouette analysis by comparing human temporal templates with unencumbered human models. The implementation of the algorithm is improved by automatically estimating the viewing direction of the pedestrian and is extended by a carried luggage identification module. Because the temporal template is a frequency template and the information it provides is not sufficient, a colour temporal template is introduced. The standard steps of the state-of-the-art algorithm are approached from a different perspective, extended by colour information, resulting in more accurate carried object segmentation. The experiments conducted in this research show that the proposed closed foreground segmentation technique attains all the aforementioned goals. The incremental improvements applied to the state-of-the-art carried object detection algorithm reveal the full potential of the scheme, and the experiments demonstrate that the proposed carried object detection algorithm outperforms the state-of-the-art method.
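    The temporal (frequency) template at the heart of the benchmark method can be sketched as follows: aligned foreground silhouettes of the pedestrian are accumulated over time, and regions that are persistently foreground on the observed person but not on an unencumbered human model become carried object candidates (the function names and the threshold are illustrative assumptions, not the exact formulation used in the thesis):

        import numpy as np

        def temporal_template(silhouettes):
            """Frequency template: for each pixel, the fraction of frames in which
            it belonged to the (aligned) pedestrian silhouette."""
            stack = np.stack([s.astype(np.float64) for s in silhouettes], axis=0)
            return stack.mean(axis=0)

        def carried_object_candidates(person_template, model_template, thresh=0.6):
            """Pixels that are frequently foreground on the observed pedestrian but
            rarely foreground on the unencumbered model are candidate carried
            object (protrusion) regions."""
            return (person_template > thresh) & (model_template < 1.0 - thresh)

    A colour temporal template would additionally store, per pixel, representative colour information over the frames in which the pixel was foreground, which is what enables the colour-extended segmentation described above.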