Representation Separation for Semantic Segmentation with Vision Transformers
Vision transformers (ViTs), which encode an image as a sequence of patches, bring
new paradigms for semantic segmentation. We present an efficient framework of
representation separation at the local-patch and global-region levels for
semantic segmentation with ViTs. It targets the peculiar over-smoothness of
ViTs in semantic segmentation, and therefore differs from current popular
paradigms of context modeling and from most existing related methods, which
reinforce the advantage of attention. We first present a decoupled two-pathway
network in which a second pathway enhances and passes down local-patch
discrepancy, complementary to the global representations of transformers. We
then propose a spatially adaptive separation module that obtains more separated
deep representations, and a discriminative cross-attention that yields more
discriminative region representations through novel auxiliary supervisions. The
proposed methods achieve impressive results: 1) incorporated with large-scale
plain ViTs, our methods achieve new state-of-the-art performance on five widely
used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU
on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the
decoupled two-pathway network even surpass well-designed high-resolution ViTs
on Cityscapes; 4) the representations improved by our framework transfer
favorably to images with natural corruptions. The code will be released
publicly. Comment: 17 pages, 13 figures. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible.
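The decoupled two-pathway idea (a local pathway carrying patch-level discrepancy alongside the global transformer representation) can be sketched loosely as follows. The function names, the neighborhood-mean form of "discrepancy", and the concatenation-based fusion are hypothetical illustrations for intuition, not the paper's actual modules:

```python
import numpy as np

def local_patch_pathway(feats, k=3):
    """Hypothetical local pathway: each patch feature minus the mean of its
    k x k neighborhood, emphasizing local-patch discrepancy that global
    attention tends to smooth away."""
    D, H, W = feats.shape
    r = k // 2
    padded = np.pad(feats, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.empty_like(feats)
    for y in range(H):
        for x in range(W):
            nbhd = padded[:, y:y + k, x:x + k]
            out[:, y, x] = feats[:, y, x] - nbhd.mean(axis=(1, 2))
    return out

def two_pathway_fuse(vit_feats):
    """Concatenate the global ViT representation with the local-discrepancy
    pathway, giving the segmentation head both kinds of information."""
    local = local_patch_pathway(vit_feats)
    return np.concatenate([vit_feats, local], axis=0)
```

On a spatially constant feature map the local pathway outputs zeros, so the fused representation only adds information where neighboring patches actually differ; this is the complementary role the abstract describes for the second pathway.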
Convolutional CRFs for semantic segmentation
For the challenging semantic image segmentation task, the best-performing models
have traditionally combined the structured modelling capabilities of Conditional Random
Fields (CRFs) with the feature-extraction power of CNNs. In more recent works, however,
CRF post-processing has fallen out of favour. We argue that this is mainly due to the slow
training and inference speeds of CRFs, as well as the difficulty of learning the internal
CRF parameters. To overcome both issues we propose to add the assumption of conditional
independence to the framework of fully-connected CRFs. This allows us to reformulate the
inference in terms of convolutions, which can be implemented highly efficiently on GPUs.
Doing so speeds up inference and training by two orders of magnitude. All parameters of
the convolutional CRFs can easily be optimized using backpropagation. Towards the goal
of facilitating further CRF research, we have made our implementations publicly available
- …
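The core reformulation (restricting pairwise message passing to a local window with a feature-dependent Gaussian kernel, so that it becomes a convolution-like operation) might be sketched as follows. The function name, the Potts-style compatibility, and the naive loop-based implementation are illustrative assumptions; the actual ConvCRF realizes this as an efficient batched GPU convolution:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_crf_message_pass(unary, feats, k=3, theta=1.0, compat=1.0, n_iters=5):
    """Mean-field inference where, under the conditional-independence
    assumption, message passing only spans a k x k local window.
    unary: (C, H, W) class logits; feats: (D, H, W) guidance features
    (e.g. pixel color/position) defining the Gaussian pairwise kernel."""
    C, H, W = unary.shape
    r = k // 2
    q = softmax(unary, axis=0)
    for _ in range(n_iters):
        msg = np.zeros_like(q)
        for y in range(H):
            for x in range(W):
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        ny, nx = y + dy, x + dx
                        if (dy == 0 and dx == 0) or not (0 <= ny < H and 0 <= nx < W):
                            continue
                        d = feats[:, y, x] - feats[:, ny, nx]
                        w = np.exp(-(d @ d) / (2 * theta ** 2))
                        msg[:, y, x] += w * q[:, ny, nx]
        # Potts-style compatibility: same-label support from similar
        # neighbors raises that label's logit before renormalizing.
        q = softmax(unary + compat * msg, axis=0)
    return q
```

Because the kernel weights depend only on a fixed local window, the inner double loop over (dy, dx) is exactly a convolution with a spatially varying kernel, which is what makes the GPU implementation two orders of magnitude faster than fully-connected CRF inference; the kernel parameters (here theta and compat) are the quantities the abstract notes can be learned by backpropagation.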