Exploring Context with Deep Structured models for Semantic Segmentation
State-of-the-art semantic image segmentation methods are mostly based on
training deep convolutional neural networks (CNNs). In this work, we propose to
improve semantic segmentation with the use of contextual information. In
particular, we explore `patch-patch' context and `patch-background' context in
deep CNNs. We formulate deep structured models by combining CNNs and
Conditional Random Fields (CRFs) for learning the patch-patch context between
image regions. Specifically, we formulate CNN-based pairwise potential
functions to capture semantic correlations between neighboring patches.
Efficient piecewise training of the proposed deep structured model is then
applied to avoid repeated, expensive CRF inference during backpropagation. For
capturing the patch-background context, we show that a
network design with traditional multi-scale image inputs and sliding pyramid
pooling is very effective for improving performance. We perform a comprehensive
evaluation of the proposed method and achieve new state-of-the-art performance
on a number of challenging semantic segmentation datasets including ,
-, , -, -,
-, and datasets. Particularly, we report an
intersection-over-union score of on the - dataset.
Comment: 16 pages. Accepted to IEEE T. Pattern Analysis & Machine
Intelligence, 2017. Extended version of arXiv:1504.0101
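The patch-patch model above can be illustrated with a minimal numpy sketch of a CRF energy over patches: a unary table from the segmentation CNN plus, per neighboring-patch edge, a K-by-K table of CNN-predicted pairwise potentials. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unary_energy(unary, labels):
    """Sum of unary potentials for a labeling.

    unary: (N, K) per-node potentials (e.g. negative CNN log-scores).
    labels: (N,) label indices, one per patch/node.
    """
    return unary[np.arange(len(labels)), labels].sum()

def pairwise_energy(pairwise, edges, labels):
    """Sum of pairwise potentials over neighboring patch pairs.

    pairwise: (E, K, K) tables; entry (e, a, b) is the potential for
              edge e taking labels (a, b) -- in the paper this table
              would come from a CNN applied to the two patches.
    edges: (E, 2) node-index pairs.
    """
    i, j = edges[:, 0], edges[:, 1]
    return pairwise[np.arange(len(edges)), labels[i], labels[j]].sum()

def crf_energy(unary, pairwise, edges, labels):
    """Total CRF energy: lower is a more compatible labeling."""
    return unary_energy(unary, labels) + pairwise_energy(pairwise, edges, labels)
```

Piecewise training, as the abstract notes, fits the unary and pairwise terms separately, so an energy like this never has to be minimized inside the backpropagation loop.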
GFF: Gated Fully Fusion for Semantic Segmentation
Semantic segmentation generates a comprehensive understanding of scenes by
densely predicting the category of each pixel. High-level features from Deep
Convolutional Neural Networks already demonstrate their effectiveness in
semantic segmentation tasks; however, the coarse resolution of high-level
features often leads to inferior results for small or thin objects where
detailed information is important. It is natural to import low-level
features to compensate for the detail lost in high-level features.
Unfortunately, simply combining multi-level features suffers from the
semantic gap among them. In this paper, we propose a new architecture, named
Gated Fully Fusion (GFF), to selectively fuse features from multiple levels
using gates in a fully connected way. Specifically, features at each level are
enhanced by higher-level features with stronger semantics and lower-level
features with more details, and gates are used to control the propagation of
useful information which significantly reduces the noises during fusion. We
achieve state-of-the-art results on four challenging scene parsing datasets,
including Cityscapes, Pascal Context, COCO-stuff, and ADE20K.
Comment: accepted by AAAI-2020 (oral)
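The gating idea can be rendered as a simplified numpy sketch. The actual GFF module is richer (gates come from 1x1 convolutions and features are aligned across resolutions); here, as an illustrative assumption, each level keeps its own features where its gate is high and imports gated features from every other level where its gate is low.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fully_fuse(features, gate_logits):
    """Fuse L feature maps of identical shape with per-pixel gates.

    features: list of L arrays, each (H, W, C), assumed already resized
              to a common resolution (a simplifying assumption here).
    gate_logits: list of L arrays, each (H, W, 1); in the paper these
                 would be produced by learned 1x1 convolutions.
    Returns one fused map per level: level l's own features, weighted up
    by its gate, plus other levels' features admitted through their gates
    where level l's gate is low -- suppressing noisy propagation.
    """
    gates = [sigmoid(g) for g in gate_logits]
    fused = []
    for l, (x_l, g_l) in enumerate(zip(features, gates)):
        others = sum(gates[m] * features[m]
                     for m in range(len(features)) if m != l)
        fused.append((1.0 + g_l) * x_l + (1.0 - g_l) * others)
    return fused
```

With all gate logits at zero (gates of 0.5), every level receives an even blend of its own and the other levels' half-gated features.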
Improving Spatial Codification in Semantic Segmentation
This paper explores novel approaches for improving the spatial codification
for the pooling of local descriptors to solve the semantic segmentation
problem. We propose to partition the image into three regions for each object
to be described: Figure, Border and Ground. This partition aims at minimizing
the influence of the image context on the object description and vice versa by
introducing an intermediate zone around the object contour. Furthermore, we
also propose a richer visual descriptor of the object by applying a Spatial
Pyramid over the Figure region. Two novel Spatial Pyramid configurations are
explored: Cartesian-based and crown-based Spatial Pyramids. We test these
approaches with state-of-the-art techniques and show that they improve upon
Figure-Ground based pooling in the Pascal VOC 2011 and 2012 semantic
segmentation challenges.
Comment: Paper accepted at the IEEE International Conference on Image
Processing, ICIP 2015. Quebec City, 27-30 September. Project page:
https://imatge.upc.edu/web/publications/improving-spatial-codification-semantic-segmentatio
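The Figure/Border/Ground partition can be sketched with plain numpy morphology. The band width and the dilation/erosion construction below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dilate(mask):
    """One step of 4-neighbour binary dilation (no SciPy dependency)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion via the duality erode(m) = ~dilate(~m)."""
    return ~dilate(~mask)

def figure_border_ground(mask, border_px=2):
    """Partition an image into Figure / Border / Ground masks.

    mask: (H, W) boolean object mask. The Border is a band of roughly
    2 * border_px pixels straddling the object contour, the intermediate
    zone that decouples object description from image context.
    """
    d = e = mask
    for _ in range(border_px):
        d, e = dilate(d), erode(e)
    figure = e           # shrunken object interior
    border = d & ~e      # band around the contour
    ground = ~d          # context away from the object
    return figure, border, ground
```

Local descriptors would then be pooled separately within each of the three masks; a spatial pyramid over the Figure mask refines the object's own description further.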