Driving Scene Perception Network: Real-time Joint Detection, Depth Estimation and Semantic Segmentation
As demand for high-level autonomous driving has increased in recent years,
and visual perception is one of the critical capabilities for enabling it,
in this paper we introduce an efficient approach for
simultaneous object detection, depth estimation and pixel-level semantic
segmentation using a shared convolutional architecture. The proposed network
model, which we named Driving Scene Perception Network (DSPNet), uses
multi-level feature maps and multi-task learning to improve the accuracy and
efficiency of object detection, depth estimation and image segmentation tasks
from a single input image. The resulting network model uses less than
850 MiB of GPU memory and achieves 14.0 fps on an NVIDIA GeForce GTX 1080 with a
1024x512 input image, and both precision and efficiency are improved over the
combination of single-task models.
Comment: 9 pages, 7 figures, WACV'1
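The abstract's central idea, computing one shared feature representation and reusing it for detection, depth, and segmentation heads, can be sketched as follows. This is an illustrative toy, not DSPNet itself: the shapes, the single linear "backbone", and the three linear heads are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the shared convolutional backbone: one projection,
# computed once per image and reused by all three task heads.
W_shared = rng.standard_normal((64, 16)) * 0.1
W_det = rng.standard_normal((16, 4)) * 0.1   # box head: (x, y, w, h)
W_dep = rng.standard_normal((16, 1)) * 0.1   # depth head: one scalar here
W_seg = rng.standard_normal((16, 3)) * 0.1   # segmentation head: 3 class logits

def forward(image_patch):
    # The expensive shared features are computed only once...
    feats = np.tanh(image_patch.ravel() @ W_shared)
    # ...then each lightweight task head reads from the same features.
    return feats @ W_det, feats @ W_dep, feats @ W_seg

boxes, depth, seg_logits = forward(rng.random((8, 8)))
```

The efficiency claim in the abstract rests on exactly this structure: three separate single-task networks would each recompute their own backbone, while the multi-task model amortizes that cost across tasks.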
Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery
Change detection is one of the central problems in earth observation and was
extensively investigated over recent decades. In this paper, we propose a novel
recurrent convolutional neural network (ReCNN) architecture, which is trained
to learn a joint spectral-spatial-temporal feature representation in a unified
framework for change detection in multispectral images. To this end, we bring
together a convolutional neural network (CNN) and a recurrent neural network
(RNN) into one end-to-end network. The former is able to generate rich
spectral-spatial feature representations, while the latter effectively analyzes
temporal dependency in bi-temporal images. In comparison with previous
approaches to change detection, the proposed network architecture possesses
three distinctive properties: 1) It is end-to-end trainable, in contrast to
most existing methods whose components are separately trained or computed; 2)
it naturally harnesses spatial information that has been proven to be
beneficial to change detection task; 3) it is capable of adaptively learning
the temporal dependency between multitemporal images, unlike most algorithms
that use fairly simple operations such as image differencing or stacking. As far as
we know, this is the first time that a recurrent convolutional network
architecture has been proposed for multitemporal remote sensing image analysis.
The proposed network is validated on real multispectral data sets. Both visual
and quantitative analyses of the experimental results demonstrate the
competitive performance of the proposed model.
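The CNN-into-RNN composition the abstract describes can be sketched minimally: a shared convolutional branch (stood in for here by one weight matrix) extracts a spectral-spatial feature from each acquisition date, and a recurrent cell then consumes the two dates as a length-2 sequence. The shapes, the tanh Elman-style cell, and the two-class change/no-change output are assumptions for illustration, not the paper's actual layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "CNN" stand-in: the same weights process both dates.
W_cnn = rng.standard_normal((16, 8)) * 0.1   # 4x4 patch -> 8-d feature
# Elman-style recurrent cell over the temporal dimension (an assumption).
W_xh = rng.standard_normal((8, 4)) * 0.1
W_hh = rng.standard_normal((4, 4)) * 0.1
W_out = rng.standard_normal((4, 2)) * 0.1    # no-change / change logits

def cnn(patch):
    return np.tanh(patch.ravel() @ W_cnn)

def recnn(patch_t1, patch_t2):
    h = np.zeros(4)
    # Bi-temporal images form a sequence of length 2 for the RNN,
    # letting it learn the temporal dependency end to end.
    for feat in (cnn(patch_t1), cnn(patch_t2)):
        h = np.tanh(feat @ W_xh + h @ W_hh)
    return int((h @ W_out).argmax())          # 0 = no change, 1 = change

pred = recnn(rng.random((4, 4)), rng.random((4, 4)))
```

The contrast with classical change detection is visible here: instead of a fixed rule like `patch_t2 - patch_t1`, the recurrent cell's learned weights decide how the two dates are compared.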
PDANet: Pyramid Density-aware Attention Net for Accurate Crowd Counting
Crowd counting, i.e., estimating the number of people in a crowded area, has
attracted much interest in the research community. Although many attempts have
been reported, crowd counting remains an open real-world problem due to the
vast scale variations in crowd density within the area of interest, and severe
occlusion among the crowd. In this paper, we propose a novel Pyramid
Density-Aware Attention-based network, abbreviated as PDANet, which leverages
attention, pyramid scale features, and two-branch decoder modules for
density-aware crowd counting. The PDANet utilizes these modules to extract
different scale features, focus on the relevant information, and suppress the
misleading ones. We also address the variation of crowdedness levels among
different images with an exclusive Density-Aware Decoder (DAD). For this
purpose, a classifier evaluates the density level of the input features and
then passes them to the corresponding high and low crowded DAD modules.
Finally, we generate an overall density map by considering the summation of low
and high crowded density maps as spatial attention. Meanwhile, we employ two
losses to create a precise density map for the input scene. Extensive
evaluations conducted on the challenging benchmark datasets well demonstrate
the superior performance of the proposed PDANet in terms of the accuracy of
counting and generated density maps over well-known state-of-the-art methods.
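The density-aware routing the abstract describes, a classifier sending features to either a high- or low-crowded decoder, then summing the two density maps and integrating the result into a count, can be sketched as below. The mean-based "classifier", the scalar decoders, and the threshold are all hypothetical stand-ins, not PDANet's layers.

```python
import numpy as np

def low_decoder(feats):
    """Stand-in decoder tuned for sparse crowds (an assumption)."""
    return np.clip(feats, 0, None) * 0.5

def high_decoder(feats):
    """Stand-in decoder tuned for dense crowds (an assumption)."""
    return np.clip(feats, 0, None) * 2.0

def pdanet_count(feats, threshold=1.0):
    # Stand-in density classifier: routes to the matching decoder.
    dense = feats.mean() > threshold
    low = low_decoder(feats) if not dense else np.zeros_like(feats)
    high = high_decoder(feats) if dense else np.zeros_like(feats)
    density_map = low + high       # summation of low/high density maps
    return density_map.sum()       # crowd count = integral of the map

count = pdanet_count(np.full((4, 4), 2.0))
```

The final line reflects the standard crowd-counting convention the abstract relies on: once a per-pixel density map exists, the people count is simply its sum over the image.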