Hierarchical improvement of foreground segmentation masks in background subtraction
A plethora of algorithms have been defined for foreground
segmentation, a fundamental stage for many computer
vision applications. In this work, we propose a post-processing
framework to improve foreground segmentation performance of
background subtraction algorithms. We define a hierarchical
framework for extending segmented foreground pixels to undetected
foreground object areas and for removing erroneously
segmented foreground. Firstly, we create a motion-aware hierarchical
image segmentation of each frame that prevents merging
foreground and background image regions. Then, we estimate
the quality of the foreground mask through the fitness of the
binary regions in the mask and the hierarchy of segmented
regions. Finally, the improved foreground mask is obtained as
an optimal labeling by jointly exploiting foreground quality and
spatial color relations in a pixel-wise fully-connected Conditional
Random Field. Experiments are conducted over four large and
heterogeneous datasets with varied challenges (CDNET2014,
LASIESTA, SABS and BMC) demonstrating the capability of the
proposed framework to improve background subtraction results. This work was
partially supported by the Spanish Government (HAVideo, TEC2014-53176-R).
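The abstract's region-fitness idea can be illustrated with a much simpler stand-in: score each connected foreground region and discard low-scoring ones. The sketch below uses region size as a crude proxy for the paper's fitness measure (the real method combines mask fitness with a segmentation hierarchy and a fully-connected CRF); `min_region_size` is an illustrative parameter, not from the paper.

```python
import numpy as np
from scipy import ndimage

def refine_mask(mask, min_region_size=4):
    """Toy stand-in for region-fitness filtering: keep connected
    foreground regions whose size (a crude proxy for a fitness score)
    meets a threshold, and drop the rest as erroneous foreground."""
    labeled, num = ndimage.label(mask)
    refined = np.zeros_like(mask)
    for region_id in range(1, num + 1):
        region = labeled == region_id
        if region.sum() >= min_region_size:
            refined[region] = 1
    return refined

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 1:4] = 1  # a 9-pixel blob: passes the fitness proxy
mask[5, 5] = 1      # an isolated pixel: removed as noise
clean = refine_mask(mask)
```

In the actual framework this per-region scoring feeds a CRF-based optimal labeling rather than a hard threshold.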
Online Adaptation of Convolutional Neural Networks for Video Object Segmentation
We tackle the task of semi-supervised video object segmentation, i.e.
segmenting the pixels belonging to an object in the video using the ground
truth pixel mask for the first frame. We build on the recently introduced
one-shot video object segmentation (OSVOS) approach which uses a pretrained
network and fine-tunes it on the first frame. While achieving impressive
performance, at test time OSVOS uses the fine-tuned network in unchanged form
and is not able to adapt to large changes in object appearance. To overcome
this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS)
which updates the network online using training examples selected based on the
confidence of the network and the spatial configuration. Additionally, we add a
pretraining step based on objectness, which is learned on PASCAL. Our
experiments show that both extensions are highly effective and improve the
state of the art on DAVIS to an intersection-over-union score of 85.7%.
Comment: Accepted at BMVC 2017. This version contains minor changes for the
camera-ready version.
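The online example-selection step described above can be sketched as follows. Pixels the network is already very confident about become positive or negative training targets for the online update, while ambiguous pixels are ignored; the thresholds here are illustrative, not the paper's values.

```python
import numpy as np

def select_online_examples(fg_prob, pos_thresh=0.97, neg_thresh=0.1):
    """Sketch of confidence-based online example selection: very
    confident foreground pixels become positives (1), very confident
    background pixels become negatives (0), the rest are ignored (-1)
    when fine-tuning the network online."""
    targets = np.full(fg_prob.shape, -1, dtype=np.int8)  # -1 = ignore
    targets[fg_prob > pos_thresh] = 1
    targets[fg_prob < neg_thresh] = 0
    return targets

probs = np.array([[0.99, 0.50],
                  [0.05, 0.98]])
targets = select_online_examples(probs)
```

OnAVOS additionally filters the selected positives by spatial configuration (e.g. distance to the last confident prediction), which this sketch omits.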
Online Mutual Foreground Segmentation for Multispectral Stereo Videos
The segmentation of video sequences into foreground and background regions is
a low-level process commonly used in video content analysis and smart
surveillance applications. Using a multispectral camera setup can improve this
process by providing more diverse data to help identify objects despite adverse
imaging conditions. The registration of several data sources is however not
trivial if the appearance of objects produced by each sensor differs
substantially. This problem is further complicated when parallax effects cannot
be ignored when using close-range stereo pairs. In this work, we present a new
method to simultaneously tackle multispectral segmentation and stereo
registration. Using an iterative procedure, we estimate the labeling result for
one problem using the provisional result of the other. Our approach is based on
the alternating minimization of two energy functions that are linked through
the use of dynamic priors. We rely on the integration of shape and appearance
cues to find proper multispectral correspondences, and to properly segment
objects in low contrast regions. We also formulate our model as a frame
processing pipeline using higher order terms to improve the temporal coherence
of our results. Our method is evaluated under different configurations on
multiple multispectral datasets, and our implementation is available online.
Comment: Preprint accepted for publication in IJCV (December 2018).
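The alternating-minimization scheme described above has a simple generic skeleton: fix one variable, minimize the other's energy, and feed the provisional result back in as a prior. The energies below are illustrative quadratics over a small grid, not the paper's segmentation and registration terms.

```python
def alternating_minimization(E_seg, E_reg, xs, ys, x0, y0, steps=5):
    """Generic alternating-minimization skeleton: solve each problem
    given the other's provisional result, which acts as a dynamic
    prior, and iterate until the pair stabilizes."""
    x, y = x0, y0
    for _ in range(steps):
        x = min(xs, key=lambda c: E_seg(c, y))  # segmentation given registration
        y = min(ys, key=lambda c: E_reg(x, c))  # registration given segmentation
    return x, y

# Toy coupled energies: each variable prefers its own target (3 and 1)
# but is pulled toward the other variable's current estimate.
E_seg = lambda x, y: (x - 3) ** 2 + (x - y) ** 2
E_reg = lambda x, y: (y - 1) ** 2 + (x - y) ** 2
grid = range(5)
x, y = alternating_minimization(E_seg, E_reg, grid, grid, x0=0, y0=0)
```

The coupling term `(x - y) ** 2` plays the role of the dynamic prior linking the two energy functions.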
Improved foreground detection via block-based classifier cascade with probabilistic decision integration
Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
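The probabilistic decision-integration step can be sketched simply: each overlapping block casts a foreground/background vote for every pixel it covers, and the per-pixel foreground probability is the fraction of covering blocks that voted foreground. This is a minimal sketch of the integration idea only; the block classifier cascade itself is omitted, and the block positions and size are illustrative.

```python
import numpy as np

def integrate_block_decisions(block_decisions, image_shape, block):
    """Integrate binary block-level decisions into a pixel-level
    foreground probability map: for each pixel, average the votes of
    all overlapping blocks that cover it."""
    votes = np.zeros(image_shape, dtype=float)
    counts = np.zeros(image_shape, dtype=float)
    for (r, c), decision in block_decisions.items():
        votes[r:r + block, c:c + block] += decision
        counts[r:r + block, c:c + block] += 1.0
    # Pixels covered by no block keep probability 0.
    return votes / np.maximum(counts, 1.0)

# Two overlapping 2x2 blocks on a 3x3 image: one votes foreground,
# the other background; the shared pixel gets probability 0.5.
prob = integrate_block_decisions({(0, 0): 1, (1, 1): 0}, (3, 3), block=2)
```

Averaging overlapping votes is what yields smooth foreground contours without ad-hoc postprocessing of the final mask.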
Combining Background Subtraction Algorithms with Convolutional Neural Network
Accurate and fast extraction of foreground objects is a key prerequisite for a
wide range of computer vision applications such as object tracking and
recognition. Thus, numerous background subtraction methods for foreground
object detection have been proposed in recent decades. However, it is still
regarded as a tough problem due to a variety of challenges such as illumination
variations, camera jitter, dynamic backgrounds, shadows, and so on. Currently,
there is no single method that can handle all the challenges in a robust way.
In this letter, we try to solve this problem from a new perspective by
combining different state-of-the-art background subtraction algorithms to
create a more robust and more advanced foreground detection algorithm. More
specifically, an encoder-decoder fully convolutional neural network
architecture is trained to automatically learn how to leverage the
characteristics of different algorithms to fuse the results produced by
different background subtraction algorithms and output a more precise result.
Comprehensive experiments evaluated on the CDnet 2014 dataset demonstrate that
the proposed method outperforms each of the considered individual background
subtraction algorithms. Moreover, we show that our solution is more efficient
than other combination strategies.
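A natural baseline for the combination strategies mentioned above is a per-pixel majority vote over the individual algorithms' masks; the paper replaces this fixed rule with a learned encoder-decoder fusion network. The sketch below shows only the baseline, with illustrative toy masks.

```python
import numpy as np

def majority_vote(masks):
    """Baseline fusion of several background-subtraction masks: a pixel
    is foreground if more than half of the input masks mark it so."""
    stacked = np.stack(masks).astype(float)
    return (stacked.mean(axis=0) > 0.5).astype(np.uint8)

m1 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
m2 = np.array([[1, 1], [0, 1]], dtype=np.uint8)
m3 = np.array([[0, 0], [1, 1]], dtype=np.uint8)
fused = majority_vote([m1, m2, m3])
```

A learned fusion can outperform this vote because it can weight algorithms differently depending on local image context (e.g. shadows or dynamic background), which a fixed rule cannot.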