A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain
Detecting camouflaged moving foreground objects has been known to be
difficult due to the similarity between the foreground objects and the
background. Conventional methods cannot distinguish the foreground from
background due to the small differences between them and thus suffer from
under-detection of the camouflaged foreground objects. In this paper, we
present a fusion framework to address this problem in the wavelet domain. We
first show that the small differences in the image domain can be highlighted in
certain wavelet bands. Then the likelihood of each wavelet coefficient being
foreground is estimated by formulating foreground and background models for
each wavelet band. The proposed framework effectively aggregates the
likelihoods from different wavelet bands based on the characteristics of the
wavelet transform. Experimental results demonstrated that the proposed method
significantly outperformed existing methods in detecting camouflaged foreground
objects. Specifically, the average F-measure for the proposed algorithm was
0.87, compared to 0.71 to 0.8 for the other state-of-the-art methods.
Comment: 13 pages, accepted by IEEE TI
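As a rough illustration of the claim above — that small image-domain differences become prominent in certain wavelet bands — here is a minimal one-level 2-D Haar decomposition in pure Python. This is a sketch, not the paper's implementation: the function name, the unnormalized averaging/differencing Haar variant, and the toy image are all illustrative assumptions.

```python
def haar_dwt2(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) bands.

    Uses the unnormalized averaging/differencing variant: low-pass is the
    pairwise mean, high-pass is half the pairwise difference.
    """
    h, w = len(img), len(img[0])
    # Transform rows: low-pass (average) half, then high-pass (difference) half.
    rows = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(w // 2)] +
            [(r[2 * j] - r[2 * j + 1]) / 2 for j in range(w // 2)]
            for r in img]
    # Transform columns the same way.
    cols = ([[(rows[2 * i][j] + rows[2 * i + 1][j]) / 2 for j in range(w)]
             for i in range(h // 2)] +
            [[(rows[2 * i][j] - rows[2 * i + 1][j]) / 2 for j in range(w)]
             for i in range(h // 2)])
    hh, hw = h // 2, w // 2
    LL = [row[:hw] for row in cols[:hh]]
    LH = [row[hw:] for row in cols[:hh]]
    HL = [row[:hw] for row in cols[hh:]]
    HH = [row[hw:] for row in cols[hh:]]
    return LL, LH, HL, HH

# A flat background (100) with a faint camouflaged stripe (102): the tiny
# difference is nearly invisible against 100 in the image and in LL, but the
# LH detail band contains *only* the stripe response.
img = [[100, 102, 100, 100] for _ in range(4)]
LL, LH, HL, HH = haar_dwt2(img)
```

Here LH comes out as [[-1.0, 0.0], [-1.0, 0.0]]: a 2% intensity difference that is buried in the image domain becomes a pure, isolated response in one detail band, which is what makes per-band likelihood modeling attractive.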
Full Reference Objective Quality Assessment for Reconstructed Background Images
With an increased interest in applications that require a clean background
image, such as video surveillance, object tracking, street view imaging and
location-based services on web-based maps, multiple algorithms have been
developed to reconstruct a background image from cluttered scenes.
Traditionally, statistical measures and existing image quality techniques have
been applied for evaluating the quality of the reconstructed background images.
Though these quality assessment methods have been widely used in the past,
their performance in evaluating the perceived quality of the reconstructed
background image has not been verified. In this work, we discuss the
shortcomings in existing metrics and propose a full reference Reconstructed
Background image Quality Index (RBQI) that combines color and structural
information at multiple scales using a probability summation model to predict
the perceived quality in the reconstructed background image given a reference
image. To compare the performance of the proposed quality index with existing
image quality assessment measures, we construct two different datasets
consisting of reconstructed background images and corresponding subjective
scores. The quality assessment measures are evaluated by correlating their
objective scores with human subjective ratings. The correlation results show
that the proposed RBQI outperforms all the existing approaches. Additionally,
the constructed datasets and the corresponding subjective scores provide a
benchmark to evaluate the performance of future metrics that are developed to
evaluate the perceived quality of reconstructed background images.
Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated Database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (Email for permissions at: ashrotreasuedu)
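The probability summation model mentioned above can be sketched as follows. The core assumption is that a distortion is noticed if it is detected at *any* scale, with detections at different scales treated as independent events; the function name and the exact pooling used by RBQI are assumptions here, not the published definition.

```python
from math import prod

def probability_summation(p_scales):
    """Pool per-scale distortion-detection probabilities into one score.

    If p_scales[s] is the probability that a difference between the
    reconstructed and reference background is visible at scale s, and the
    scales are treated as independent detectors, the overall detection
    probability is the complement of "missed at every scale".
    """
    return 1.0 - prod(1.0 - p for p in p_scales)

# Two scales each giving a 50% chance of noticing an artifact combine
# to a 75% overall detection probability.
score = probability_summation([0.5, 0.5])
```

A quality index built this way decreases as the pooled detection probability rises: the more likely any scale is to reveal a difference from the reference, the worse the reconstruction.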
Challenges in video based object detection in maritime scenario using computer vision
This paper discusses the technical challenges in maritime image processing
and machine vision problems for video streams generated by cameras. Even
well-documented problems such as horizon detection and registration of frames
in a video become very challenging in maritime scenarios, and more advanced
problems such as background subtraction and object detection in video streams
are harder still. The dynamic nature of the background, the unavailability of
static cues, the presence of small objects against distant backgrounds, and
illumination effects all contribute to the difficulties discussed here.
Evaluation of video based pedestrian and vehicle detection algorithms
Video based detection systems rely on the ability to detect moving objects in video streams. Such systems have applications in many fields, such as intelligent transportation and automated surveillance. Many approaches have been adopted for video based detection, and evaluating and selecting a suitable approach for pedestrian and vehicle detection is a challenging task. When evaluating object detection algorithms, many factors should be considered in order to cope with unconstrained environments, non-stationary backgrounds, different object motion patterns, and the variation in the types of objects being detected.
In this thesis, we implement and evaluate different video based detection algorithms used for pedestrian and vehicle detection. Video based pedestrian and vehicle detection involves object detection through background-foreground segmentation and object tracking. For background-foreground segmentation, frame differencing, background averaging, mixture of Gaussians and codebook methods were implemented. For object tracking, Mean-Shift tracking and Lucas-Kanade optical flow tracking algorithms were implemented.
The performance of each of these algorithms is evaluated in a comparative study. Based on their detection and tracking performance, the codebook algorithm is selected as the candidate algorithm for background-foreground segmentation, and Mean-Shift tracking is used to track the detected objects for pedestrian and vehicle detection.
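Of the segmentation methods listed above, background averaging is the simplest to sketch: maintain an exponential running average of past frames as the background model and flag pixels that differ from it by more than a threshold. This is a minimal pure-Python illustration under assumed parameter values (alpha, thresh); the thesis implementations and the codebook method it ultimately selects are more involved.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame.

    Small alpha adapts slowly, so brief foreground passes do not corrupt
    the background model.
    """
    return [[(1 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Mark pixels whose absolute difference from the background model
    exceeds `thresh` as foreground."""
    return [[abs(f - b) > thresh for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

# A single bright pixel appearing against a flat learned background.
bg = [[100.0, 100.0, 100.0], [100.0, 100.0, 100.0]]
frame = [[100.0, 100.0, 200.0], [100.0, 100.0, 100.0]]
mask = foreground_mask(bg, frame)  # only the changed pixel is flagged
bg = update_background(bg, frame)  # then fold the frame into the model
```

Per-frame, the detector runs the mask first and the update second, so a detected object does not immediately bleed into the background model it is being compared against.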
Foreground Detection in Camouflaged Scenes
Foreground detection has been widely studied for decades due to its
importance in many practical applications. Most of the existing methods assume
foreground and background show visually distinct characteristics and thus the
foreground can be detected once a good background model is obtained. However,
there are many situations where this is not the case. Of particular interest in
video surveillance is the camouflage case. For example, an active attacker
camouflages by intentionally wearing clothes that are visually similar to the
background. In such cases, even given a decent background model, it is not
trivial to detect foreground objects. This paper proposes a texture guided
weighted voting (TGWV) method which can efficiently detect foreground objects
in camouflaged scenes. The proposed method employs the stationary wavelet
transform to decompose the image into frequency bands. We show that the small
and hardly noticeable differences between foreground and background in the
image domain can be effectively captured in certain wavelet frequency bands. To
make the final foreground decision, a weighted voting scheme is developed based
on intensity and texture of all the wavelet bands with weights carefully
designed. Experimental results demonstrate that the proposed method achieves
superior performance compared to the current state-of-the-art results.
Comment: IEEE International Conference on Image Processing, 201
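The weighted-voting decision described above can be sketched per pixel as follows: each wavelet band casts a foreground/background vote, the votes are combined with per-band weights, and the pixel is declared foreground when the weighted score passes a threshold. The function name, the normalization, and the threshold value are illustrative assumptions; the paper's weight design is more carefully derived.

```python
def weighted_vote(band_votes, weights, thresh=0.5):
    """Fuse per-band foreground votes for one pixel.

    band_votes: 0/1 foreground vote from each wavelet band
    weights:    per-band reliability weights (e.g. larger for bands whose
                intensity/texture response separates foreground well)
    Returns True (foreground) if the weighted fraction of foreground
    votes exceeds `thresh`.
    """
    total = sum(weights)
    score = sum(w * v for w, v in zip(weights, band_votes)) / total
    return score > thresh

# Two reliable bands vote foreground, one weak band disagrees:
decision = weighted_vote([1, 1, 0], [0.5, 0.3, 0.2])  # foreground wins
```

The point of the weighting is that a camouflaged object may be invisible in most bands; uniform voting would drown out the few bands that do respond, while reliability weights let them dominate the decision.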