Deep Network for Simultaneous Decomposition and Classification in UWB-SAR Imagery
Classifying buried and obscured targets of interest from other natural and
manmade clutter objects in the scene is an important problem for the U.S. Army.
Targets of interest are often represented by signals captured using
low-frequency (UHF to L-band) ultra-wideband (UWB) synthetic aperture radar
(SAR) technology. This technology has been used in various applications,
including ground penetration and sensing-through-the-wall. However, the
technology still faces significant issues: low-resolution SAR imagery in this
particular frequency band, low radar cross sections (RCS), objects that are
small relative to the radar wavelength, and heavy interference. The
classification problem was first, and only partially, addressed by the sparse
representation-based classification (SRC) method, which can separate noise from
signals and exploit cross-channel information. Despite promising results,
SRC-related methods struggle to represent nonlinear relations and to scale
to larger training sets. In this paper, we propose a Simultaneous
Decomposition and Classification Network (SDCN) to suppress noise interference
and enhance classification accuracy. The network contains two jointly trained
sub-networks: the decomposition sub-network handles denoising, while the
classification sub-network discriminates targets from confusers. Experimental
results show significant improvements over a network without decomposition and
SRC-related methods.
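The joint training of the two sub-networks can be illustrated with a deliberately tiny numpy sketch: linear stand-ins replace the convolutional sub-networks, and a single combined loss (reconstruction plus classification) updates both at once. All shapes, the learning rate, and the linear models are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean signals, a noisy observation, and binary target/confuser
# labels. Dimensions are arbitrary illustrative choices.
d, n = 8, 64
x_clean = rng.normal(size=(n, d))
x_noisy = x_clean + 0.3 * rng.normal(size=(n, d))
y = (x_clean[:, 0] > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(d, d))  # decomposition (denoising) sub-network
w2 = rng.normal(scale=0.1, size=d)       # classification sub-network

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.01
losses = []
for _ in range(200):
    z = x_noisy @ W1.T                   # denoised estimate
    p = sigmoid(z @ w2)                  # target-vs-confuser probability
    recon = ((z - x_clean) ** 2).mean()
    bce = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()
    losses.append(recon + bce)           # one joint loss trains both parts
    # Gradients of the joint loss with respect to both sub-networks
    dz = 2 * (z - x_clean) / (n * d) + np.outer(p - y, w2) / n
    dW1 = dz.T @ x_noisy
    dw2 = z.T @ (p - y) / n
    W1 -= lr * dW1
    w2 -= lr * dw2

print(f"joint loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The key design point mirrored here is that the denoiser is not trained in isolation: the classification term's gradient also flows into `W1`, so the decomposition learns features that help discrimination, not just reconstruction.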
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high
resolution imagery (< 10cm) requires statistical models able to learn high
level concepts from spatial data, with large appearance variations.
Convolutional Neural Networks (CNNs) achieve this goal by learning
discriminatively a hierarchy of representations of increasing abstraction.
In this paper, we present a CNN-based system relying on a
downsample-then-upsample architecture. Specifically, it first learns a rough
spatial map of high-level representations by means of convolutions and then
learns to upsample them back to the original resolution by deconvolutions. By
doing so, the CNN learns to densely label every pixel at the original
resolution of the image. This results in many advantages, including i)
state-of-the-art numerical accuracy, ii) improved geometric accuracy of
predictions and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter
resolution datasets, involving semantic labeling of aerial images of 9cm and
5cm resolution, respectively. These datasets are composed of many large and
fully annotated tiles allowing an unbiased evaluation of models making use of
spatial information. We do so by comparing two standard CNN architectures to
the proposed one: standard patch classification; prediction of local label
patches employing only convolutions; and full patch labeling employing
deconvolutions. All the systems compare favorably with or outperform a
state-of-the-art baseline relying on superpixels and powerful appearance
descriptors. The proposed full patch labeling CNN outperforms these models by a
large margin, also showing a very appealing inference time.
Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 201
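The downsample-then-upsample idea can be sketched shape-wise in numpy: average pooling stands in for the learned convolutional downsampling, and nearest-neighbour repetition stands in for the learned deconvolutions; the point is simply that the output recovers one prediction per input pixel. The factor, image size, and channel count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def downsample(x, factor=4):
    # Average pooling: a stand-in for the strided convolution stack that
    # produces a coarse map of high-level representations.
    h, w, c = x.shape
    x = x[:h - h % factor, :w - w % factor]
    return x.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def upsample(x, factor=4):
    # Nearest-neighbour repetition: a stand-in for the learned deconvolutions
    # that map coarse features back to the original resolution.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

image = np.random.rand(64, 64, 3)
coarse = downsample(image)            # (16, 16, 3) coarse feature map
scores = upsample(coarse)             # back to (64, 64, 3)
labels = scores.argmax(axis=-1)       # treat channels as class scores:
print(labels.shape)                   # one class index per pixel, (64, 64)
```

Because the decoder restores the encoder's spatial reduction exactly, every pixel of the input gets a label, which is what "dense labeling at the original resolution" requires; in the real system both stages are learned rather than fixed.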