Deformable Convolutional Networks
Convolutional neural networks (CNNs) are inherently limited in their ability to
model geometric transformations due to the fixed geometric structures of their
building modules. In this work, we introduce two new modules to enhance the
transformation modeling capacity of CNNs, namely, deformable convolution and
deformable RoI pooling. Both are based on the idea of augmenting the spatial
sampling locations in the modules with additional offsets and learning the
offsets from target tasks, without additional supervision. The new modules can
readily replace their plain counterparts in existing CNNs and can be easily
trained end-to-end by standard back-propagation, giving rise to deformable
convolutional networks. Extensive experiments validate the effectiveness of our
approach on sophisticated vision tasks of object detection and semantic
segmentation. The code will be released.
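The core idea above, augmenting the regular sampling grid of a convolution with learned per-location offsets and sampling via bilinear interpolation, can be sketched as follows. This is a minimal single-channel illustration, not the paper's implementation: the function names, the dense `offsets` layout, and the naive loops are all choices made here for clarity.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample img (H, W) at fractional coords (y, x); zero outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                # Weight each corner by its proximity to (y, x).
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deformable_conv2d(img, weight, offsets):
    """Single-channel deformable convolution sketch.

    img:     (H, W) input
    weight:  (k, k) kernel
    offsets: (H, W, k*k, 2) learned (dy, dx) offset per sampling point
    """
    H, W = img.shape
    k = weight.shape[0]
    r = k // 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for idx, (ky, kx) in enumerate(np.ndindex(k, k)):
                dy, dx = offsets[i, j, idx]
                # Regular grid position, shifted by the learned offset.
                y = i + (ky - r) + dy
                x = j + (kx - r) + dx
                acc += weight[ky, kx] * bilinear_sample(img, y, x)
            out[i, j] = acc
    return out
```

With all offsets set to zero this reduces exactly to an ordinary (zero-padded) convolution, which is why the modules can "readily replace their plain counterparts": the deformation is a learned perturbation around the standard grid.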
SPGNet: Semantic Prediction Guidance for Scene Parsing
Multi-scale context modules and single-stage encoder-decoder structures are
commonly employed for semantic segmentation. The multi-scale context module
refers to the operations to aggregate feature responses from a large spatial
extent, while the single-stage encoder-decoder structure encodes the high-level
semantic information in the encoder path and recovers the boundary information
in the decoder path. In contrast, multi-stage encoder-decoder networks have
been widely used in human pose estimation and show performance superior to
their single-stage counterparts. However, few attempts have been made to
bring this effective design to semantic segmentation. In this work, we propose
a Semantic Prediction Guidance (SPG) module which learns to re-weight the local
features through the guidance from pixel-wise semantic prediction. We find that
by carefully re-weighting features across stages, a two-stage encoder-decoder
network coupled with our proposed SPG module can significantly outperform its
one-stage counterpart with similar parameters and computations. Finally, we
report experimental results on the semantic segmentation benchmark Cityscapes,
in which our SPGNet attains 81.1% on the test set using only 'fine'
annotations. Comment: ICCV 2019
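One way to picture the re-weighting the abstract describes, local features gated per pixel by a semantic prediction, is the sketch below. This is an illustrative interpretation, not the actual SPG module: the softmax-probability input, the learned `proj` matrix, and the sigmoid gating are assumptions made here to show the general mechanism.

```python
import numpy as np

def softmax(logits, axis=0):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spg_reweight(features, seg_logits, proj):
    """Re-weight stage-1 features using a pixel-wise semantic prediction.

    features:   (C, H, W) local features from the first stage
    seg_logits: (K, H, W) pixel-wise class logits predicted at stage 1
    proj:       (C, K) learned projection from class probs to channel gates
    Returns gated features of shape (C, H, W).
    """
    probs = softmax(seg_logits, axis=0)              # (K, H, W) class beliefs
    gates = np.einsum('ck,khw->chw', proj, probs)    # per-pixel channel scores
    gates = 1.0 / (1.0 + np.exp(-gates))             # squash to (0, 1)
    return features * gates                          # suppress or pass features
```

Because the gates lie in (0, 1), the second stage receives a softly masked copy of the first-stage features, with the mask driven by what the first stage already believes about each pixel's class.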
Deep Fully Residual Convolutional Neural Network for Semantic Image Segmentation
Department of Computer Science and Engineering
The goal of semantic image segmentation is to partition the pixels of an image into semantically
meaningful parts and to classify those parts according to a predefined label set. Although object
recognition models have recently achieved remarkable performance, even surpassing the human ability to
recognize objects, semantic segmentation models still lag behind. One reason semantic segmentation is a
relatively hard problem is that it requires image understanding at the pixel level, taking global
context into account, as opposed to object recognition. Another challenge is transferring the knowledge
of an object recognition model to the task of semantic segmentation. In this thesis, we delineate some
of the main challenges we faced approaching semantic image segmentation with machine learning algorithms.
Our main focus was on how deep learning algorithms can be used for this task, since they require the
least amount of feature engineering and have been shown to scale to large datasets while exhibiting
remarkable performance. More precisely, we worked on a variation of convolutional
neural networks (CNN) suitable for the semantic segmentation task. We proposed a model called deep
fully residual convolutional networks (DFRCN) to tackle this problem. Utilizing residual learning makes
training of deep models feasible, which ultimately leads to a rich, powerful visual representation.
Our model also benefits from skip-connections, which ease the propagation of information from the
encoder module to the decoder module. This enables our model to have fewer parameters in the
decoder module while achieving better performance. We also benchmarked the effective variation
of the proposed model on a semantic segmentation benchmark.
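The two building blocks the abstract leans on, residual learning and encoder-to-decoder skip connections, can be sketched in a few lines. This is a generic illustration under assumptions made here (pointwise channel-mixing in place of full convolutions, concatenation-style skips), not the thesis's DFRCN architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Residual block on a (C, H, W) tensor; 1x1 (pointwise) channel mixing
    stands in for full convolutions to keep the sketch short."""
    h = np.einsum('oc,chw->ohw', w1, relu(x))
    h = np.einsum('oc,chw->ohw', w2, relu(h))
    return x + h  # identity shortcut: gradients flow directly through x

def decoder_step(dec_feat, enc_feat, w):
    """Skip connection: concatenate encoder features onto decoder features,
    then mix channels, letting the decoder reuse encoder boundary detail."""
    merged = np.concatenate([dec_feat, enc_feat], axis=0)  # (Cd+Ce, H, W)
    return np.einsum('oc,chw->ohw', w, merged)
```

The identity shortcut is what makes very deep stacks trainable (the block only has to learn a residual on top of `x`), and the concatenation skip is what lets a lighter decoder recover spatial detail it would otherwise have to re-learn.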
We first make a thorough review of current high-performance models and the problems one might
face when trying to replicate such models, which mainly arise from a lack of sufficient published
information. Then, we describe our own novel method, which we call the deep fully residual
convolutional network (DFRCN). We show that our method exhibits state-of-the-art performance on a
challenging benchmark for aerial image segmentation.