Drone Shadow Tracking
Aerial videos taken by a drone flying close to the surface may contain the
drone's shadow projected onto the scene, which degrades the aesthetic quality
of the footage. When other shadows are present, shadow removal cannot be
applied directly, and the drone's shadow must first be tracked. Tracking a
drone's shadow in a video is, however, challenging: the shadow's varying size
and shape, and changes in the drone's orientation and altitude, all pose
difficulties, and the shadow can easily disappear over dark areas. A shadow
nevertheless has specific physical properties, beyond its geometric shape,
that can be leveraged. In this paper, we incorporate knowledge of the shadow's
physical properties, in the form of shadow detection masks, into a
correlation-based tracking algorithm. We capture a test set of aerial videos
under different settings and compare our results to those of a
state-of-the-art tracking algorithm.
Comment: 5 pages, 4 figures
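The mask-gating idea can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the shadow detector here is a hypothetical darkness threshold standing in for physics-based shadow masks, and the tracker is a plain FFT cross-correlation rather than a full correlation-filter tracker.

```python
import numpy as np

def shadow_mask(frame):
    # Hypothetical shadow cue: a simple darkness threshold stands in
    # for a real physics-based shadow detector.
    return (frame.mean(axis=2) < 0.5).astype(float)

def correlation_response(gray, template):
    # Circular cross-correlation via FFT, a stand-in for the response
    # map of a correlation-filter tracker.
    F = np.fft.fft2(gray)
    T = np.fft.fft2(template, s=gray.shape)
    return np.real(np.fft.ifft2(F * np.conj(T)))

def track_shadow(frame, template):
    # Locate the template, gating the response with the shadow mask
    # so peaks outside shadow regions are suppressed.
    gray = frame.mean(axis=2)
    resp = correlation_response(gray - gray.mean(), template)
    resp *= shadow_mask(frame)
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Gating the response map with the mask, rather than post-filtering the detected position, keeps the tracker from locking onto dark non-shadow structures in the first place.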
Learning Spatial-Aware Regressions for Visual Tracking
In this paper, we analyze the spatial information of deep features, and
propose two complementary regressions for robust visual tracking. First, we
propose a kernelized ridge regression model wherein the kernel value is defined
as the weighted sum of similarity scores of all pairs of patches between two
samples. We show that this model can be formulated as a neural network and thus
can be efficiently solved. Second, we propose a fully convolutional neural
network with spatially regularized kernels, through which the filter kernel
corresponding to each output channel is forced to focus on a specific region of
the target. Distance transform pooling is further exploited to determine the
effectiveness of each output channel of the convolution layer. The outputs of
the kernelized ridge regression model and the fully convolutional neural
network are combined to obtain the final response. Experimental results on two
benchmark datasets validate the effectiveness of the proposed method.
Comment: To appear in CVPR201
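A minimal numpy sketch of kernelized ridge regression with such a patch-pair kernel is given below. The uniform patch weights and the RBF patch similarity are simplifying assumptions; the paper learns the weighting and solves the model efficiently as a neural network.

```python
import numpy as np

def patches(img, p=4):
    # Split a square grayscale image into non-overlapping p x p patches,
    # each flattened to a vector.
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H, p) for j in range(0, W, p)])

def patch_kernel(a, b, p=4, sigma=1.0):
    # Kernel value between two samples: similarity scores (here an RBF)
    # over ALL pairs of patches, combined with uniform weights.
    Pa, Pb = patches(a, p), patches(b, p)
    d2 = ((Pa[:, None, :] - Pb[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean()

def krr_fit(samples, labels, lam=0.1):
    # Closed-form kernelized ridge regression: (K + lam*I) alpha = y.
    n = len(samples)
    K = np.array([[patch_kernel(samples[i], samples[j]) for j in range(n)]
                  for i in range(n)])
    return np.linalg.solve(K + lam * np.eye(n), labels)

def krr_predict(alpha, samples, query):
    # Response for a candidate: weighted sum of kernel values to the
    # training samples.
    k = np.array([patch_kernel(s, query) for s in samples])
    return k @ alpha
```

In a tracking loop, `samples` would be target and background patches from earlier frames and `query` a candidate window in the current frame; the candidate with the highest response wins.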
A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain
Detecting camouflaged moving foreground objects has been known to be
difficult due to the similarity between the foreground objects and the
background. Conventional methods cannot distinguish the foreground from
background due to the small differences between them and thus suffer from
under-detection of the camouflaged foreground objects. In this paper, we
present a fusion framework to address this problem in the wavelet domain. We
first show that the small differences in the image domain can be highlighted in
certain wavelet bands. Then the likelihood of each wavelet coefficient being
foreground is estimated by formulating foreground and background models for
each wavelet band. The proposed framework effectively aggregates the
likelihoods from different wavelet bands based on the characteristics of the
wavelet transform. Experimental results demonstrate that the proposed method
significantly outperforms existing methods in detecting camouflaged foreground
objects: the average F-measure of the proposed algorithm is 0.87, compared to
0.71-0.80 for the other state-of-the-art methods.
Comment: 13 pages, accepted by IEEE TI
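The wavelet-domain idea can be illustrated with a one-level Haar decomposition. The fixed band weights and absolute-difference likelihoods below are simplifications assumed for the sketch; the paper estimates per-band foreground/background models and aggregates likelihoods according to the transform's characteristics.

```python
import numpy as np

def haar2d(x):
    # One-level 2-D Haar transform: returns LL, LH, HL, HH bands,
    # each at half resolution.
    a = (x[0::2] + x[1::2]) / 2      # row averages
    d = (x[0::2] - x[1::2]) / 2      # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def foreground_likelihood(frame, background, weights=(1.0, 2.0, 2.0, 2.0)):
    # Per-band absolute difference against a background model, fused
    # with fixed weights that emphasize the detail bands, where small
    # image-domain differences are highlighted.
    bands_f = haar2d(frame)
    bands_b = haar2d(background)
    score = sum(w * np.abs(f - b)
                for w, f, b in zip(weights, bands_f, bands_b))
    return score / sum(weights)

def detect(frame, background, thresh=0.01):
    # Binary foreground mask at half resolution.
    return foreground_likelihood(frame, background) > thresh
```

Even when a camouflaged object differs from the background only by a faint texture, that texture shows up as energy in the detail bands, which the fusion weights amplify.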