Semantics-Aligned Representation Learning for Person Re-identification
Person re-identification (reID) aims to match person images to retrieve the
ones with the same identity. This is a challenging task, as the images to be
matched are generally semantically misaligned due to the diversity of human
poses and capture viewpoints, incompleteness of the visible bodies (due to
occlusion), etc. In this paper, we propose a framework that drives the reID
network to learn semantics-aligned feature representation through delicate
supervision designs. Specifically, we build a Semantics Aligning Network (SAN)
which consists of a base network as encoder (SA-Enc) for re-ID, and a decoder
(SA-Dec) for reconstructing/regressing the densely semantics-aligned full
texture image. We jointly train the SAN under the supervisions of person
re-identification and aligned texture generation. Moreover, at the decoder,
besides the reconstruction loss, we add Triplet ReID constraints over the
feature maps as perceptual losses. The decoder is discarded at inference, so
our scheme is computationally efficient. Ablation studies demonstrate the
effectiveness of our design. We achieve state-of-the-art performance on the
benchmark datasets CUHK03, Market1501, MSMT17, and the
partial person reID dataset Partial REID. Code for our proposed method is
available at:
https://github.com/microsoft/Semantics-Aligned-Representation-Learning-for-Person-Re-identification.
Comment: Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20); code has been released
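As an illustration of the setup just described, the following is a minimal PyTorch sketch of an encoder-decoder in the SAN style. The class name, layer sizes, identity count, and loss weighting are all assumptions for exposition; the authors' actual implementation is in the repository above.

```python
import torch
import torch.nn as nn

class SAN(nn.Module):
    """Semantics Aligning Network sketch: reID encoder + texture-regression
    decoder. Layer sizes and the ID count are illustrative, not the paper's.
    """
    def __init__(self, feat_dim=2048, num_ids=751):
        super().__init__()
        # SA-Enc: stand-in for any backbone producing a global reID embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_ids)
        # SA-Dec: regresses the densely semantics-aligned texture image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        fmap = self.encoder(x)
        feat = self.pool(fmap).flatten(1)   # reID embedding
        logits = self.classifier(feat)      # ID supervision
        texture = self.decoder(fmap)        # aligned-texture regression
        return feat, logits, texture

# Joint training objective (weights are placeholders, not the paper's):
#   loss = id_ce(logits, labels) + triplet(feat, labels)
#        + l1(texture, aligned_texture_gt)
#        + triplet-style perceptual losses on decoder feature maps
# At inference only the encoder and pooling run, so the decoder adds no cost.
```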
Recurrent Convolutional Neural Networks for Scene Parsing
Scene parsing is the task of assigning a label to every pixel in an image
according to the class it belongs to. To ensure good visual coherence and high
class accuracy, it is essential for a scene parser to capture long-range
dependencies in the image. In a feed-forward architecture, this can be achieved
simply by considering a sufficiently large input context patch around each
pixel to be labeled. We propose an approach consisting of a
recurrent convolutional neural network which allows us to consider a large
input context, while limiting the capacity of the model. Contrary to most
standard approaches, our method does not rely on any segmentation methods, nor
any task-specific features. The system is trained in an end-to-end manner over
raw pixels, and models complex spatial dependencies with low inference cost. As
the context size increases with the built-in recurrence, the system identifies
and corrects its own errors. Our approach yields state-of-the-art performance
on both the Stanford Background Dataset and the SIFT Flow Dataset, while
remaining very fast at test time.
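A hedged PyTorch sketch of the core idea follows: a convolutional network with shared weights is unrolled over several steps, so the effective context grows while model capacity stays fixed. The layer shapes and the feedback-by-concatenation scheme are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RecurrentParser(nn.Module):
    """Recurrent CNN for scene parsing: one conv net with shared weights,
    unrolled a few times so the receptive field grows per step.
    A sketch under assumed shapes; the paper's exact layers differ.
    """
    def __init__(self, num_classes=8, hidden=32):
        super().__init__()
        # Input = RGB image concatenated with previous per-pixel class scores.
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, hidden, 5, padding=2), nn.ReLU(),
            nn.Conv2d(hidden, num_classes, 5, padding=2),
        )
        self.num_classes = num_classes

    def forward(self, image, steps=3):
        b, _, h, w = image.shape
        scores = image.new_zeros(b, self.num_classes, h, w)
        for _ in range(steps):
            # Shared weights each step: capacity stays fixed while the
            # context grows, letting the model identify and correct its
            # own earlier labeling mistakes.
            scores = self.net(torch.cat([image, scores], dim=1))
        return scores

# Usage: logits = RecurrentParser()(torch.randn(1, 3, 64, 64), steps=3)
```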
Learning to Segment Breast Biopsy Whole Slide Images
We trained and applied an encoder-decoder model to semantically segment
breast biopsy images into biologically meaningful tissue labels. Since
conventional encoder-decoder networks cannot be applied directly on large
biopsy images and the differently sized structures in biopsies present novel
challenges, we propose four modifications: (1) an input-aware encoding block to
compensate for information loss, (2) a new dense connection pattern between
encoder and decoder, (3) dense and sparse decoders to combine multi-level
features, and (4) a multi-resolution network that fuses the results of
encoder-decoders run on different resolutions. Our model outperforms a
feature-based approach and conventional encoder-decoders from the literature.
We use semantic segmentations produced with our model in an automated diagnosis
task and obtain higher accuracies than a baseline approach that employs an SVM
for feature-based segmentation, both using the same segmentation-based
diagnostic features.
Comment: Added more WSI images in appendix
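Of the four modifications, the multi-resolution fusion (4) is the easiest to sketch. Below is a hedged PyTorch illustration in which `base_net_factory`, the scale set, and the 1x1 fusion layer are assumptions standing in for the paper's encoder-decoders and dense/sparse decoders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResSegmenter(nn.Module):
    """Fuse segmentations from encoder-decoders run at several resolutions.
    `base_net_factory` must build a net mapping 3 channels to num_classes.
    """
    def __init__(self, base_net_factory, num_classes, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        # One encoder-decoder per resolution (weights could also be shared).
        self.branches = nn.ModuleList(base_net_factory() for _ in scales)
        self.fuse = nn.Conv2d(num_classes * len(scales), num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for scale, branch in zip(self.scales, self.branches):
            xi = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode='bilinear', align_corners=False)
            yi = branch(xi)
            # Upsample each branch's label scores back to full resolution.
            outs.append(F.interpolate(yi, size=(h, w), mode='bilinear',
                                      align_corners=False))
        return self.fuse(torch.cat(outs, dim=1))

# Toy usage with a one-layer stand-in for the encoder-decoder:
net = MultiResSegmenter(lambda: nn.Conv2d(3, 4, 3, padding=1), num_classes=4)
out = net(torch.randn(1, 3, 64, 64))   # -> shape (1, 4, 64, 64)
```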
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high
resolution imagery (< 10 cm) requires statistical models able to learn
high-level concepts from spatial data with large appearance variations.
Convolutional Neural Networks (CNNs) achieve this goal by learning
discriminatively a hierarchy of representations of increasing abstraction.
In this paper we present a CNN-based system relying on a
downsample-then-upsample architecture. Specifically, it first learns a rough
spatial map of high-level representations by means of convolutions and then
learns to upsample them back to the original resolution by deconvolutions. By
doing so, the CNN learns to densely label every pixel at the original
resolution of the image. This results in many advantages, including i)
state-of-the-art numerical accuracy, ii) improved geometric accuracy of
predictions and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter
resolution datasets, involving semantic labeling of aerial images of 9cm and
5cm resolution, respectively. These datasets are composed of many large and
fully annotated tiles allowing an unbiased evaluation of models making use of
spatial information. We do so by comparing two standard CNN architectures with
the proposed one: standard patch classification and prediction of local label
patches employing only convolutions, versus full patch labeling employing
deconvolutions. All the systems compare favorably with or outperform a
state-of-the-art baseline relying on superpixels and powerful appearance
descriptors. The proposed full patch labeling CNN outperforms these models by a
large margin, while also showing a very appealing inference time.
Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 2017
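A minimal PyTorch sketch of the downsample-then-upsample idea, assuming illustrative depths and channel widths rather than the paper's configuration: convolutions produce a coarse map of high-level features, and transposed convolutions learn to upsample it back to the input resolution.

```python
import torch
import torch.nn as nn

class DownUpNet(nn.Module):
    """Downsample-then-upsample CNN for dense labeling.
    A sketch with illustrative depths, not the paper's exact configuration.
    """
    def __init__(self, num_classes=6):
        super().__init__()
        self.down = nn.Sequential(   # learn a rough map of high-level features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(     # learned upsampling via deconvolutions
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))   # dense labels at the input resolution

# One pass densely labels a whole tile, which is what makes inference fast
# compared with classifying one context patch per pixel.
# Usage: out = DownUpNet()(torch.randn(1, 3, 64, 64))  # -> (1, 6, 64, 64)
```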