Shadow Optimization from Structured Deep Edge Detection
Local structures of shadow boundaries as well as complex interactions of
image regions remain largely unexploited by previous shadow detection
approaches. In this paper, we present a novel learning-based framework for
shadow region recovery from a single image. We exploit the local structures of
shadow edges by using a structured CNN learning framework. We show that using
the structured label information in the classification can improve the local
consistency of the results and avoid spurious labelling. We further propose and
formulate a shadow/bright measure to model the complex interactions among image
regions. The shadow and bright measures of each patch are computed from the
shadow edges detected in the image. Using the global interaction constraints on
patches, we formulate a least-squares optimization problem for shadow
recovery that can be solved efficiently. Our shadow recovery method
achieves state-of-the-art results on the major shadow benchmark databases
collected under various conditions.
Comment: 8 pages. CVPR 201
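The patch-level formulation above can be illustrated with a toy least-squares solve. Everything here (the unary term tying each patch's scale to its bright/shadow ratio, the chain-neighbor smoothness constraints, and all variable names) is an illustrative assumption, not the paper's exact system; it only shows the general shape of "global interaction constraints on patches → one efficient least-squares solve".

```python
import numpy as np

# Toy sketch (illustrative assumptions, not the paper's formulation):
# each patch i has a shadow measure s[i] and a bright measure b[i];
# we solve for per-patch recovery scales x under linear constraints
# A x ~= c in the least-squares sense.
rng = np.random.default_rng(0)
n_patches = 6
s = rng.random(n_patches)          # shadow measures per patch
b = rng.random(n_patches)          # bright measures per patch

rows, targets = [], []
for i in range(n_patches):
    # Unary term: tie x[i] to the patch's bright/shadow ratio.
    r = np.zeros(n_patches)
    r[i] = 1.0
    rows.append(r)
    targets.append(b[i] / (s[i] + 1e-6))
for i in range(n_patches - 1):
    # Pairwise term: encourage neighboring patches to share a scale.
    r = np.zeros(n_patches)
    r[i], r[i + 1] = 1.0, -1.0
    rows.append(r)
    targets.append(0.0)

A = np.vstack(rows)
c = np.array(targets)
x, *_ = np.linalg.lstsq(A, c, rcond=None)  # closed-form, efficient solve
print(x.shape)  # (6,)
```

Stacking unary and pairwise constraints into one overdetermined linear system is what makes the recovery a single `lstsq` call rather than an iterative optimization.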
Direction-aware Spatial Context Features for Shadow Detection
Shadow detection is a fundamental and challenging task, since it requires an
understanding of global image semantics and there are various backgrounds
around shadows. This paper presents a novel network for shadow detection by
analyzing image context in a direction-aware manner. To achieve this, we first
formulate the direction-aware attention mechanism in a spatial recurrent neural
network (RNN) by introducing attention weights when aggregating spatial context
features in the RNN. By learning these weights through training, we can recover
direction-aware spatial context (DSC) for detecting shadows. This design is
developed into the DSC module and embedded in a CNN to learn DSC features at
different levels. Moreover, a weighted cross entropy loss is designed to make
the training more effective. We employ two common shadow detection benchmark
datasets and perform various experiments to evaluate our network. Experimental
results show that our network outperforms state-of-the-art methods and achieves
97% accuracy and 38% reduction on balance error rate.
Comment: Accepted for oral presentation in CVPR 2018. The journal version of this paper is arXiv:1805.0463
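The weighted cross-entropy loss mentioned above addresses the class imbalance between shadow and non-shadow pixels. A minimal NumPy sketch follows; the inverse-class-frequency weighting used here is an assumption for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def weighted_bce(logits, target, eps=1e-6):
    """Class-balanced binary cross-entropy (sketch, assumed weighting).

    Shadow pixels are typically the minority class, so they receive the
    larger weight (the fraction of non-shadow pixels), and vice versa.
    """
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid
    n = target.size
    pos = target.sum()
    w_pos = (n - pos) / n                      # up-weight rare shadow pixels
    w_neg = pos / n                            # down-weight common background
    w = target * w_pos + (1 - target) * w_neg
    return -np.mean(w * (target * np.log(p + eps)
                         + (1 - target) * np.log(1.0 - p + eps)))

logits = np.array([[2.0, -1.0], [0.5, -3.0]])
target = np.array([[1.0, 0.0], [0.0, 0.0]])    # one shadow pixel in four
loss = weighted_bce(logits, target)
```

Without the weighting, a network can reach low loss by predicting "non-shadow" everywhere; the per-class weights make that trivial solution expensive.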
Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network
In recent years, various shadow detection methods from a single image have
been proposed and used in vision systems; however, most of them are not
appropriate for robotic applications due to their expensive time complexity.
This paper introduces a fast shadow detection method using a deep learning
framework, with a time cost that is appropriate for robotic applications. In
our solution, we first obtain a shadow prior map with the help of a
multi-class support vector machine using statistical features. Then, we use
a semantic-aware patch-level Convolutional Neural Network that efficiently
trains on shadow examples by combining the original image and the shadow prior map.
Experiments on benchmark datasets demonstrate the proposed method significantly
decreases the time complexity of shadow detection, by one or two orders of
magnitude compared with state-of-the-art methods, without losing accuracy.
Comment: 6 pages, 5 figures. Submitted to IROS 201
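"Combining the original image and the shadow prior map" typically means stacking them as extra input channels for the patch CNN. The sketch below shows that stacking; the exact channel layout is an assumption.

```python
import numpy as np

# Sketch: feed the patch CNN a 4-channel input built from the RGB patch
# plus the SVM-stage shadow prior (channel ordering is an assumption).
h, w = 32, 32
rgb_patch = np.random.rand(h, w, 3)     # stand-in for an image patch
prior_patch = np.random.rand(h, w, 1)   # stand-in for the shadow prior map
x = np.concatenate([rgb_patch, prior_patch], axis=-1)
print(x.shape)  # (32, 32, 4)
```

Concatenating the prior as a channel lets the first convolutional layer weigh image evidence against the prior directly, instead of fusing the two signals in a separate post-processing step.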
A fully-convolutional neural network for background subtraction of unseen videos
Background subtraction is a basic task in computer vision and video
processing, often applied as a pre-processing step for object tracking,
people recognition, etc. Recently, a number of successful
background-subtraction algorithms have been proposed; however, nearly all
of the top-performing ones are supervised. Crucially, their success relies
upon the availability of some annotated frames of the test video during
training. Consequently, their performance on completely “unseen” videos is
undocumented in the literature. In this work, we propose a new, supervised
background-subtraction algorithm for unseen videos (BSUV-Net) based on a
fully-convolutional neural network. The input to our network consists of
the current frame and two background frames captured at different time
scales, along with their semantic segmentation maps. In order to reduce the
chance of overfitting, we also introduce a new data-augmentation technique
which mitigates the impact of illumination difference between the
background frames and the current frame. On the CDNet-2014 dataset,
BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos
in terms of several metrics, including F-measure, recall and precision.
Comment: Accepted manuscript
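The network input described above (current frame, two background frames, and a semantic segmentation map for each) can be sketched as a simple channel stack. The ordering and the assumption of one segmentation channel per frame are illustrative; only the set of inputs comes from the abstract.

```python
import numpy as np

def stack_inputs(cur, bg_recent, bg_empty, seg_cur, seg_recent, seg_empty):
    """Stack three RGB frames and their segmentation maps channel-wise.

    3 frames x (3 RGB + 1 segmentation channel) = 12 input channels.
    Channel ordering here is an assumption for illustration.
    """
    return np.concatenate([bg_empty, seg_empty,
                           bg_recent, seg_recent,
                           cur, seg_cur], axis=-1)

h, w = 240, 320

def rgb():   # random stand-in for an RGB frame
    return np.random.rand(h, w, 3)

def seg():   # random stand-in for a one-channel segmentation map
    return np.random.rand(h, w, 1)

x = stack_inputs(rgb(), rgb(), rgb(), seg(), seg(), seg())
print(x.shape)  # (240, 320, 12)
```

Because the two background frames come from different time scales, the stack gives the network both a long-term "empty scene" reference and a recent one, which is what lets a single feed-forward pass separate foreground from background on a video it has never seen.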