Feature-based image patch classification for moving shadow detection
Moving object detection is a first step towards many computer vision applications, such as human interaction and tracking, video surveillance, and traffic monitoring systems. Accurate estimation of the target object’s size and shape is often required before higher-level tasks (e.g., object tracking or recognition) can be performed. However, these properties can be derived only when the foreground object is detected precisely. Background subtraction is a common technique to extract foreground objects from image sequences. The purpose of background subtraction is to detect changes in pixel values within a given frame. The main problem with background subtraction and other related object detection techniques is that cast shadows tend to be misclassified as either parts of the foreground objects (if objects and their cast shadows are bonded together) or independent foreground objects (if objects and shadows are separated). The reason for this phenomenon is the presence of similar characteristics between the target object and its cast shadow, i.e., shadows have similar motion, attitude, and intensity changes as the moving objects that cast them. Detecting shadows of moving objects is challenging because of problematic situations related to shadows, for example, chromatic shadows, shadow color blending, foreground-background camouflage, nontextured surfaces and dark surfaces. Various methods for shadow detection have been proposed in the literature to address these problems. Many of these methods use general-purpose image feature descriptors to detect shadows. These feature descriptors may be effective in distinguishing shadow points from the foreground object in a specific problematic situation; however, such methods often fail to distinguish shadow points from the foreground object in other situations.
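The background subtraction step described above can be sketched as a per-pixel difference against a static background model. This is a minimal illustration, not the paper's method: the threshold value and the toy frames are assumptions chosen for the example.

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Label pixels whose absolute difference from the background
    model exceeds a threshold as foreground (1) or background (0)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# Toy example: a uniform background with one brighter "object" region.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200            # a 2x2 moving object
mask = background_subtract(frame, background)
print(mask.sum())                # → 4 (foreground pixels detected)
```

Note that a cast shadow darkens background pixels by more than the threshold as well, which is exactly why it survives this test and gets misclassified as foreground.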
In addition, many of these moving shadow detection methods require prior knowledge of the scene conditions and/or impose strong assumptions, which make them excessively restrictive in practice. The aim of this research is to develop an efficient method capable of addressing possible environmental problems associated with shadow detection while simultaneously improving the overall accuracy and detection stability. In this research study, possible problematic situations for dynamic shadows are addressed and discussed in detail. On the basis of the analysis, a robust method, including change detection and shadow detection, is proposed to address these environmental problems. A new set of two local feature descriptors, namely, binary patterns of local color constancy (BPLCC) and light-based gradient orientation (LGO), is introduced to address the identified problematic situations by incorporating intensity, color, texture, and gradient information. The feature vectors are concatenated in a column-by-column manner to construct one dictionary for the objects and another dictionary for the shadows. A new sparse representation framework is then applied to find the nearest neighbor of the test image segment by computing a weighted linear combination of the reference dictionary. Image segment classification is then performed based on the similarity between the test image and the sparse representations of the two classes. The performance of the proposed framework on common shadow detection datasets is evaluated, and the method shows improved performance compared with state-of-the-art methods in terms of the shadow detection rate, discrimination rate, accuracy, and stability. By achieving these significant improvements, the proposed method demonstrates its ability to handle various problems associated with image processing and accomplishes the aim of this thesis.
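The dictionary-based classification idea above can be sketched in simplified form: build one dictionary of reference feature vectors per class and assign a test segment to the class whose dictionary reconstructs it with the smaller residual. The sketch below uses a plain least-squares fit as a stand-in for the paper's sparse representation framework, and the feature vectors are invented toy values, not BPLCC/LGO features.

```python
import numpy as np

def classify_by_reconstruction(x, dict_object, dict_shadow):
    """Assign x to the class whose dictionary reconstructs it with the
    smaller least-squares residual (a simplified stand-in for sparse
    representation-based classification)."""
    def residual(D):
        w, *_ = np.linalg.lstsq(D, x, rcond=None)
        return np.linalg.norm(x - D @ w)
    return "object" if residual(dict_object) < residual(dict_shadow) else "shadow"

# Toy dictionaries: each column is one reference feature vector.
D_obj = np.array([[1.0], [0.0], [0.0]])
D_shd = np.array([[0.0], [0.0], [1.0]])
print(classify_by_reconstruction(np.array([0.9, 0.1, 0.0]), D_obj, D_shd))  # → object
```

In the paper's framework the reconstruction weights are additionally constrained to be sparse, which makes the classification more robust when the two dictionaries share similar atoms.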
Spatio-Temporal Shadow Segmentation and Tracking
Shadow segmentation is a critical issue for systems aiming at extracting, tracking or recognizing objects in a given scene. Shadows can in fact modify the shape and colour of objects and therefore affect scene analysis and interpretation systems in many applications, such as video database search and retrieval, as well as video analysis in applications such as video surveillance. We present a shadow segmentation algorithm which includes two stages. The first stage extracts moving cast shadows in each frame of the sequence. The second stage tracks the extracted shadows in the subsequent frames. Tentative moving shadow regions are first identified based on spectral and geometrical properties of shadows. In order to confirm this tentative identification, shadow regions are then tracked over time. This second stage aims at exploiting the prior knowledge of a shadow detected in previous frames by evaluating its temporal behaviour. Shadow tracking is a difficult task, since colour, texture, and motion features in shadow regions cannot be used for solving the correspondence problem. Colour and texture change according to changes in the background's characteristics. The measurement of motion cannot be reliably computed for shadows. Therefore shadows may be described only by a limited amount of information. The proposed tracking algorithm makes use of this information and provides a reliability estimation of shadow recognition results of the first stage over time. This temporal analysis eliminates the possible ambiguities of the first stage and improves the efficiency of the overall shadow detection algorithm. The benefit of the proposed shadow segmentation and tracking algorithm is evaluated on both indoor and outdoor scenes. The obtained results are validated based on subjective as well as objective comparisons.
Foreground Detection in Camouflaged Scenes
Foreground detection has been widely studied for decades due to its importance in many practical applications. Most of the existing methods assume foreground and background show visually distinct characteristics and thus the foreground can be detected once a good background model is obtained. However, there are many situations where this is not the case. Of particular interest in video surveillance is the camouflage case. For example, an active attacker camouflages by intentionally wearing clothes that are visually similar to the background. In such cases, even given a decent background model, it is not trivial to detect foreground objects. This paper proposes a texture guided weighted voting (TGWV) method which can efficiently detect foreground objects in camouflaged scenes. The proposed method employs the stationary wavelet transform to decompose the image into frequency bands. We show that the small and hardly noticeable differences between foreground and background in the image domain can be effectively captured in certain wavelet frequency bands. To make the final foreground decision, a weighted voting scheme is developed based on intensity and texture of all the wavelet bands with carefully designed weights. Experimental results demonstrate that the proposed method achieves superior performance compared to the current state-of-the-art results. Comment: IEEE International Conference on Image Processing, 201
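The core observation above, that small image-domain differences become visible in wavelet detail bands, can be sketched with plain numpy. The paper uses the stationary wavelet transform; the sketch below substitutes a single undecimated Haar detail filter along rows as a minimal stand-in, and the ramp background and checkerboard texture are invented toy data.

```python
import numpy as np

def haar_detail_rows(img):
    """Undecimated single-level Haar detail coefficients along rows,
    a minimal stand-in for one band of the stationary wavelet transform."""
    return (img.astype(np.float64) - np.roll(img, -1, axis=1)) / 2.0

# Background: a smooth intensity ramp. Camouflaged foreground: the same
# ramp plus a faint +/-0.5 checkerboard texture in a 4x4 patch.
bg = np.tile(np.arange(8, dtype=np.float64), (8, 1))
camo = bg.copy()
camo[2:6, 2:6] += np.where(np.indices((4, 4)).sum(0) % 2 == 0, 0.5, -0.5)

# The faint texture alternates pixel to pixel, so the detail band
# difference reaches 0.5 inside the camouflaged patch.
energy = np.abs(haar_detail_rows(camo) - haar_detail_rows(bg))
print(energy.max())   # → 0.5, peaking inside the camouflaged region
```

Intuitively, a low-amplitude but high-frequency texture contributes little to raw intensities yet flips sign at every pixel, so a highpass (detail) band concentrates its energy, which is what the voting over wavelet bands then exploits.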
A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain
Detecting camouflaged moving foreground objects has been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from background due to the small differences between them and thus suffer from under-detection of the camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that the small differences in the image domain can be highlighted in certain wavelet bands. Then the likelihood of each wavelet coefficient being foreground is estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrated that the proposed method significantly outperformed existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.8 for the other state-of-the-art methods. Comment: 13 pages, accepted by IEEE TI
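The F-measure reported above is the harmonic mean of precision and recall over the detected foreground pixels. As a brief illustration (the pixel counts below are hypothetical, chosen only to reproduce a score of 0.87):

```python
def f_measure(tp, fp, fn):
    """F-measure (harmonic mean of precision and recall) as used to
    score foreground detection masks against ground truth."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one frame: 870 true-positive foreground
# pixels, 130 false positives, and 130 missed foreground pixels.
print(round(f_measure(870, 130, 130), 2))  # → 0.87
```

The harmonic mean penalizes an imbalance between the two rates, so a method cannot reach 0.87 by over-detecting (high recall, low precision) or under-detecting (the failure mode the abstract attributes to conventional methods).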
Interaction between high-level and low-level image analysis for semantic video object extraction
Survey of Object Detection Methods in Camouflaged Image
Camouflage is an attempt to conceal the signature of a target object within the background image. Camouflage detection (or decamouflaging) methods are used to detect foreground objects hidden in the background image. In this research paper, the authors present a survey of camouflage detection methods for different applications and areas.