Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting
This paper proposes a weakly- and self-supervised deep convolutional neural
network (WSSDCNN) for content-aware image retargeting. Our network takes a
source image and a target aspect ratio, and then directly outputs a retargeted
image. Retargeting is performed through a shift map, which is a pixel-wise
mapping from the source to the target grid. Our method implicitly learns an
attention map, which leads to a content-aware shift map for image retargeting.
As a result, discriminative parts in an image are preserved, while background
regions are adjusted seamlessly. In the training phase, pairs of an image and
its image-level annotation are used to compute content and structure losses. We
demonstrate the effectiveness of our proposed method for a retargeting
application with insightful analyses.Comment: 10 pages, 11 figures. To appear in ICCV 2017, Spotlight Presentatio
Efficient Yet Deep Convolutional Neural Networks for Semantic Segmentation
Semantic segmentation with deep convolutional neural networks poses a
complex challenge for any GPU-intensive task: computing millions of
parameters results in huge memory consumption. Moreover, extracting finer
features and conducting supervised training tend to increase the complexity.
Since the introduction of the Fully Convolutional Neural Network, which uses
finer strides and deconvolutional layers for upsampling, it has been the
go-to approach for image segmentation tasks. In this paper, we propose two
segmentation architectures that not only need one-third of the parameters to
compute but also give better accuracy than similar architectures. The model
weights were transferred from popular networks such as VGG19 and VGG16,
which were trained on the ImageNet classification data-set. We then
transform all the fully connected layers into convolutional layers and use
dilated convolution to decrease the parameters. Lastly, we add finer strides
and attach four skip architectures that are element-wise summed with the
deconvolutional layers in steps. We train and test on different sparse and
fine data-sets such as PASCAL VOC2012, PASCAL-Context, and NYUDv2, and show
how much better our model performs on these tasks. Our model also has a
faster inference time and consumes less memory for training and testing on
NVIDIA Pascal GPUs, making it a more efficient and less memory-consuming
architecture for pixel-wise segmentation.
Comment: 8 pages
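The parameter saving from dilated convolution comes from covering a large receptive field with a small kernel: a k×k kernel with dilation d sees the same window as a dense (d·(k−1)+1)×(d·(k−1)+1) kernel while keeping only k·k weights. A minimal NumPy sketch of the operation (the helper name and toy inputs are mine, not the paper's implementation):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2-D cross-correlation with a dilated kernel.

    A 3x3 kernel with dilation 2 covers a 5x5 receptive field
    (25 positions) using only 9 weights.
    """
    kh, kw = kernel.shape
    eff_h = dilation * (kh - 1) + 1   # effective receptive-field height
    eff_w = dilation * (kw - 1) + 1   # effective receptive-field width
    out_h = x.shape[0] - eff_h + 1
    out_w = x.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Strided slicing skips the "holes" introduced by dilation.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.ones((7, 7))
k = np.ones((3, 3))                    # 9 weights ...
y = dilated_conv2d(x, k, dilation=2)   # ... but a 5x5 receptive field
```

In a framework such as PyTorch the same effect is a single `dilation` argument on the convolution layer, which is how converted fully-connected layers keep their receptive field while shedding most of their weights.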