Distance Guided Channel Weighting for Semantic Segmentation
Recent works have achieved great success in improving the performance of
multiple computer vision tasks by using deep neural networks to capture
features with a high channel count. However, many channels of the extracted
features are not discriminative and contain a lot of redundant information.
In this paper, we address this issue by introducing the Distance Guided
Channel Weighting (DGCW) module. The DGCW module is constructed in a
pixel-wise context extraction manner: it enhances the discriminativeness of
features by weighting the channels of each pixel's feature vector when
modeling its relationship with other pixels. It makes full use of the highly
discriminative information in feature maps while ignoring the
low-discriminative information, and it also captures long-range dependencies.
Furthermore, by incorporating the DGCW module into a baseline segmentation
network, we propose the Distance Guided Channel Weighting Network (DGCWNet).
We conduct extensive experiments to demonstrate the effectiveness of DGCWNet.
In particular, it achieves 81.6% mIoU on Cityscapes with only the
fine-annotated data for training, and it also performs well on two other
semantic segmentation datasets, i.e., Pascal Context and ADE20K. Code will be
available soon at
https://github.com/LanyunZhu/DGCWNet
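The abstract does not spell out how the channel weights are computed or how distance guides them, so the following PyTorch sketch is only one plausible reading, not the authors' implementation: a hypothetical 1x1 convolution (`weight_pred`) predicts a per-pixel channel weight vector, and the reweighted features drive a non-local-style affinity that supplies long-range context.

```python
import torch
import torch.nn as nn

class DGCWSketch(nn.Module):
    """Hypothetical sketch: reweight each pixel's channels before
    computing pairwise (non-local) affinities, so low-discriminative
    channels contribute less to the long-range context."""

    def __init__(self, channels):
        super().__init__()
        # assumption: a 1x1 conv predicts per-pixel channel weights
        self.weight_pred = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        w_ch = torch.sigmoid(self.weight_pred(x))        # (B, C, H, W)
        xw = (x * w_ch).flatten(2)                       # (B, C, HW)
        # affinity between every pair of pixels (long-range dependencies)
        affinity = torch.softmax(xw.transpose(1, 2) @ xw, dim=-1)  # (B, HW, HW)
        # aggregate each pixel's context from all other pixels
        context = (x.flatten(2) @ affinity.transpose(1, 2)).view(b, c, h, w)
        return x + context                               # residual fusion

feats = torch.randn(2, 64, 32, 32)
print(DGCWSketch(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```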
Patch-based Progressive 3D Point Set Upsampling
We present a detail-driven deep neural network for point set upsampling. A
high-resolution point set is essential for point-based rendering and surface
reconstruction. Inspired by the recent success of neural image super-resolution
techniques, we progressively train a cascade of patch-based upsampling networks
on different levels of detail end-to-end. We propose a series of architectural
design contributions that lead to a substantial performance boost. The effect
of each technical contribution is demonstrated in an ablation study.
Qualitative and quantitative experiments show that our method significantly
outperforms the state-of-the-art learning-based and optimization-based
approaches, both in terms of handling low-resolution inputs and revealing
high-fidelity details.
Comment: accepted to CVPR 2019; code available at https://github.com/yifita/P3
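As a rough illustration of the progressive idea only (not the paper's patch-based architecture), the sketch below chains 2x upsampling stages, each duplicating every point and predicting small per-copy displacements; the real method extracts learned features from local patches and trains the cascade end-to-end across levels of detail. `UpsampleStage` and its MLP are assumptions.

```python
import torch
import torch.nn as nn

class UpsampleStage(nn.Module):
    """Hypothetical single 2x stage: duplicate each point and predict
    a small displacement for each copy from per-point features."""

    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * dim),      # two offsets per input point
        )

    def forward(self, pts):                  # pts: (B, N, 3)
        b, n, d = pts.shape
        offsets = self.mlp(pts).view(b, n, 2, d)
        return (pts.unsqueeze(2) + offsets).reshape(b, 2 * n, d)

# progressive cascade: each stage consumes the previous stage's output,
# so three 2x stages give 8x upsampling overall
cascade = nn.ModuleList([UpsampleStage() for _ in range(3)])
pts = torch.rand(1, 256, 3)
for stage in cascade:
    pts = stage(pts)
print(pts.shape)  # torch.Size([1, 2048, 3])
```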
GFF: Gated Fully Fusion for Semantic Segmentation
Semantic segmentation generates comprehensive understanding of scenes through
densely predicting the category for each pixel. High-level features from Deep
Convolutional Neural Networks already demonstrate their effectiveness in
semantic segmentation tasks; however, the coarse resolution of high-level
features often leads to inferior results for small/thin objects where
detailed information is important. It is natural to consider importing
low-level features to compensate for the detailed information lost in
high-level features. Unfortunately, simply combining multi-level features
suffers from the
semantic gap among them. In this paper, we propose a new architecture, named
Gated Fully Fusion (GFF), to selectively fuse features from multiple levels
using gates in a fully connected way. Specifically, features at each level are
enhanced by higher-level features with stronger semantics and lower-level
features with more details, and gates are used to control the propagation of
useful information, which significantly reduces the noise during fusion. We
achieve state-of-the-art results on four challenging scene parsing datasets:
Cityscapes, Pascal Context, COCO-Stuff, and ADE20K.
Comment: accepted by AAAI-2020 (oral)
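A minimal sketch of what gated fusion across levels could look like, assuming all level features have already been resized to a common shape and channel count: each level predicts a sigmoid gate map, keeps its own information, and receives messages from every other level weighted by both gates. The gating formula here is an assumption inferred from the abstract, not the paper's exact equation.

```python
import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    """Hypothetical gated fusion of num_levels feature maps, each of
    shape (B, C, H, W). A level keeps its own features and receives
    other levels' features weighted by both its own gate and the
    sender's gate."""

    def __init__(self, channels, num_levels):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_levels)
        )

    def forward(self, feats):                 # list of (B, C, H, W)
        g = [torch.sigmoid(gate(f)) for gate, f in zip(self.gates, feats)]
        fused = []
        for i, fi in enumerate(feats):
            out = (1 + g[i]) * fi             # preserve the level's own features
            for j, fj in enumerate(feats):
                if j != i:
                    # message from level j, suppressed where level i is
                    # already confident and level j is not
                    out = out + (1 - g[i]) * g[j] * fj
            fused.append(out)
        return fused

levels = [torch.randn(2, 64, 32, 32) for _ in range(4)]
fused = GatedFusionSketch(64, 4)(levels)
print(len(fused), fused[0].shape)  # 4 torch.Size([2, 64, 32, 32])
```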
Attention Mechanisms for Object Recognition with Event-Based Cameras
Event-based cameras are neuromorphic sensors capable of efficiently encoding
visual information in the form of sparse sequences of events. Being
biologically inspired, they are commonly used to exploit some of the
computational and power consumption benefits of biological vision. In this
paper, we focus on a specific feature of vision: visual attention. We propose
two attentive models for event-based vision: an algorithm that tracks event
activity within the field of view to locate regions of interest, and a fully
differentiable attention procedure based on the DRAW neural model. We
highlight the strengths and weaknesses of the proposed methods on four
datasets, the Shifted N-MNIST, Shifted MNIST-DVS, CIFAR10-DVS and N-Caltech101
collections, using the Phased LSTM recognition network as a baseline
reference model, and obtain improvements in terms of both translation and
scale invariance.
Comment: WACV 2019 camera-ready submission
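The first of the two attentive models, activity-based tracking, can be illustrated with a simple decaying event-count map: the window with the highest accumulated activity becomes the region of interest. The function below, with its `window` and `decay` parameters, is a hypothetical sketch under those assumptions, not the authors' algorithm.

```python
import numpy as np

def attend_to_activity(events, shape=(128, 128), window=32, decay=0.9,
                       activity=None):
    """Hypothetical activity-based attention: keep a decaying per-pixel
    event count, then return the top-left corner of the most active
    window-by-window crop, plus the updated activity map."""
    if activity is None:
        activity = np.zeros(shape, dtype=np.float32)
    activity *= decay                        # fade old activity
    for x, y in events:                      # events: iterable of (x, y)
        activity[y, x] += 1.0                # vote for this location
    # integral image gives the total activity inside every window position
    padded = np.pad(activity.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sums = (padded[window:, window:] - padded[:-window, window:]
            - padded[window:, :-window] + padded[:-window, :-window])
    top, left = np.unravel_index(np.argmax(sums), sums.shape)
    return (top, left), activity

# usage with a synthetic burst of events centered near x=90, y=40
rng = np.random.default_rng(0)
evts = np.clip(rng.normal((90, 40), 5, size=(200, 2)).astype(int), 0, 127)
roi, act = attend_to_activity(evts)
print(roi)  # top-left corner of the most active 32x32 window
```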