Sparsely Aggregated Convolutional Networks
We explore a key architectural aspect of deep convolutional neural networks:
the pattern of internal skip connections used to aggregate outputs of earlier
layers for consumption by deeper layers. Such aggregation is critical to
facilitate training of very deep networks in an end-to-end manner. This is a
primary reason for the widespread adoption of residual networks, which
aggregate outputs via cumulative summation. While subsequent works investigate
alternative aggregation operations (e.g. concatenation), we focus on an
orthogonal question: which outputs to aggregate at a particular point in the
network. We propose a new internal connection structure which aggregates only a
sparse set of previous outputs at any given depth. Our experiments demonstrate
this simple design change offers superior performance with fewer parameters and
lower computational requirements. Moreover, we show that sparse aggregation
allows networks to scale more robustly to 1000+ layers, thereby opening future
avenues for training long-running visual processes.
Comment: Accepted to ECCV 201
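The core design question here — which earlier outputs to aggregate at a given depth — can be made concrete with a small sketch. One plausible sparse pattern (an assumption for illustration; the paper's exact rule may differ) aggregates predecessors at exponentially spaced offsets, so the number of aggregated outputs grows logarithmically rather than linearly with depth:

```python
def sparse_predecessors(i):
    """Indices of earlier layer outputs aggregated at depth i under an
    exponential sparsity pattern (offsets 1, 2, 4, ...). This pattern is
    an assumption for illustration, not necessarily the paper's rule."""
    preds, offset = [], 1
    while offset <= i:
        preds.append(i - offset)
        offset *= 2
    return preds

# Dense aggregation (as in DenseNet-style concatenation) at depth i
# touches all i earlier outputs; this sparse pattern touches only
# about log2(i) + 1 of them.
print(sparse_predecessors(8))  # offsets 1, 2, 4, 8 -> layers [7, 6, 4, 0]
```

At 1000+ layers the gap is dramatic: a dense scheme would aggregate ~1000 predecessors per layer, while a logarithmic pattern aggregates about 11, which is consistent with the abstract's claim of fewer parameters and lower computational cost.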
Towards High Performance Video Object Detection
There has been significant progress in image object detection in recent
years. Nevertheless, video object detection has received little attention,
although it is more challenging and more important in practical scenarios.
Building upon recent works, this work proposes a unified approach based on
the principle of multi-frame end-to-end learning of features and cross-frame
motion. Our approach extends prior works with three new techniques and steadily
pushes forward the performance envelope (speed-accuracy tradeoff) towards
high-performance video object detection.
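A common pattern in the prior works this abstract builds on is to run the expensive feature extractor only on sparse key frames and propagate those features to other frames using cross-frame motion. The toy sketch below illustrates that idea only; the function names, the gradient "backbone", and the integer-shift motion model are all simplifications, not the paper's actual components:

```python
import numpy as np

def extract_features(frame):
    # Stand-in for an expensive backbone CNN (hypothetical): here,
    # just a horizontal gradient of the frame.
    return np.diff(frame, axis=1, prepend=frame[:, :1])

def warp(features, flow_x):
    # Toy motion model: shift the feature map by an integer
    # horizontal flow instead of bilinear flow-guided warping.
    return np.roll(features, flow_x, axis=1)

frames = [np.arange(12.0).reshape(3, 4) + t for t in range(3)]
key_feat = extract_features(frames[0])  # backbone runs on key frame only
prop_feat = warp(key_feat, flow_x=1)    # cheap propagation to a later frame
```

The speed-accuracy tradeoff mentioned in the abstract then comes from choices such as how often key frames are taken and how features from multiple frames are combined.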
Deep Contrast Learning for Salient Object Detection
Salient object detection has recently witnessed substantial progress due to
powerful features extracted using deep convolutional neural networks (CNNs).
However, existing CNN-based methods operate at the patch level instead of the
pixel level. Resulting saliency maps are typically blurry, especially near the
boundary of salient objects. Furthermore, image patches are treated as
independent samples even when they are overlapping, giving rise to significant
redundancy in computation and storage. In this CVPR 2016 paper, we propose an
end-to-end deep contrast network to overcome the aforementioned limitations.
Our deep network consists of two complementary components, a pixel-level fully
convolutional stream and a segment-wise spatial pooling stream. The first
stream directly produces a saliency map with pixel-level accuracy from an input
image. The second stream extracts segment-wise features very efficiently, and
better models saliency discontinuities along object boundaries. Finally, a
fully connected CRF model can be optionally incorporated to improve spatial
coherence and contour localization in the fused result from these two streams.
Experimental results demonstrate that our deep model significantly improves the
state of the art.
Comment: To appear in CVPR 201
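The complementary roles of the two streams can be sketched in miniature: the pixel-level stream produces a dense saliency map, while the segment-wise stream smooths saliency within superpixels, sharpening values at segment (and hence object) boundaries. The equal-weight fusion and the helper below are illustrative assumptions; the paper fuses the streams learnably and can add a CRF on top:

```python
import numpy as np

def segment_stream(pixel_saliency, segments):
    # Broadcast each segment's mean saliency back to its pixels --
    # a stand-in for the segment-wise spatial pooling stream.
    out = np.zeros_like(pixel_saliency)
    for s in np.unique(segments):
        mask = segments == s
        out[mask] = pixel_saliency[mask].mean()
    return out

pixel = np.array([[0.2, 0.8],
                  [0.4, 0.6]])   # pixel-level stream output (toy values)
segs = np.array([[0, 1],
                 [0, 1]])        # superpixel labels
fused = 0.5 * pixel + 0.5 * segment_stream(pixel, segs)
```

Because the segment term is constant within each superpixel, the fused map stays sharp exactly where segment labels change, which mirrors the abstract's claim that the second stream better models saliency discontinuities along object boundaries.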