Contour Detection from Deep Patch-level Boundary Prediction
In this paper, we present a novel approach for contour detection with
Convolutional Neural Networks. A multi-scale CNN learning framework is designed
to automatically learn the most relevant features for contour patch detection.
Our method uses patch-level measurements to create contour maps with
overlapping patches. We show the proposed CNN is able to detect large-scale
contours in an image efficiently. We further propose a guided filtering method
to refine the contour maps produced from large-scale contours. Experimental
results on the major contour benchmark databases demonstrate the effectiveness
of the proposed technique. We show our method can achieve good detection of
both fine-scale and large-scale contours.

Comment: IEEE International Conference on Signal and Image Processing 201
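As a rough illustration of the patch-level pipeline described above, the sketch below merges overlapping patch-level boundary scores into a dense contour map. All names are hypothetical, and plain averaging stands in for the paper's fusion and guided-filter refinement:

```python
import numpy as np

def aggregate_patch_scores(patch_scores, image_shape, patch_size):
    """Average overlapping patch-level contour scores into a dense map.

    patch_scores: dict mapping the (row, col) top-left corner of each patch
    to a (patch_size, patch_size) array of boundary probabilities, e.g. the
    output of a patch-level CNN.
    """
    acc = np.zeros(image_shape, dtype=float)  # summed scores per pixel
    cnt = np.zeros(image_shape, dtype=float)  # how many patches cover each pixel
    for (r, c), score in patch_scores.items():
        acc[r:r + patch_size, c:c + patch_size] += score
        cnt[r:r + patch_size, c:c + patch_size] += 1.0
    cnt[cnt == 0] = 1.0                       # uncovered pixels stay zero
    return acc / cnt

# Toy example: two overlapping 2x2 patches on a 3x3 image.
scores = {(0, 0): np.ones((2, 2)), (1, 1): 3.0 * np.ones((2, 2))}
contour_map = aggregate_patch_scores(scores, (3, 3), 2)
```

Pixels covered by several patches receive the mean of their patch-level predictions, which is one simple way to turn overlapping patch measurements into a single contour map.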
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
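The three-step loop above can be sketched as follows. For a self-contained example, the 2-D FFT stands in for the shearlet dictionary and zero-order-hold upsampling for the FIR initial estimate; both are assumptions, not the paper's actual choices:

```python
import numpy as np

def refine_interpolation(lowres, scale=2, iters=5, thresh=1.0):
    """Sparsity-promoting refinement of an interpolated image (sketch)."""
    # (a) initial high-resolution estimate (zero-order hold in place of FIR filtering)
    init = np.kron(lowres, np.ones((scale, scale)))
    h, w = init.shape
    # Frequency band already determined by the low-resolution data.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = (np.abs(fy) < 0.5 / scale) & (np.abs(fx) < 0.5 / scale)
    init_f = np.fft.fft2(init)
    x = init.copy()
    for _ in range(iters):
        # (b) promote sparsity by hard-thresholding transform coefficients
        c = np.fft.fft2(x)
        c[np.abs(c) < thresh] = 0.0
        # (c) keep the known low-frequency band from the initial estimate;
        # only the high frequencies come from the sparse approximation
        c[low] = init_f[low]
        x = np.real(np.fft.ifft2(c))
    return x
```

The data-consistency step in (c) guarantees the refined image never contradicts the low-frequency content of the initial estimate, while the thresholding in (b) pushes the missing high frequencies toward a sparse solution.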
Learning to Extract Motion from Videos in Convolutional Neural Networks
This paper shows how to extract dense optical flow from videos with a
convolutional neural network (CNN). The proposed model constitutes a potential
building block for deeper architectures to allow using motion without resorting
to an external algorithm, e.g. for recognition in videos. We derive our network
architecture from signal processing principles to provide desired invariances
to image contrast, phase and texture. We constrain weights within the network
to enforce strict rotation invariance and substantially reduce the number of
parameters to learn. We demonstrate end-to-end training on only 8 sequences of
the Middlebury dataset, orders of magnitude less than competing CNN-based
motion estimation methods, and obtain comparable performance to classical
methods on the Middlebury benchmark. Importantly, our method outputs a
distributed representation of motion that allows representing multiple,
transparent motions, and dynamic textures. Our contributions on network design
and rotation invariance offer insights nonspecific to motion estimation.
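One way to picture the weight tying is a filter bank generated from a single base kernel by 90-degree rotations: pooling over the four orientations makes the response equivariant to 90-degree image rotations, and the orientation filters share one set of parameters. This is a hypothetical sketch in that spirit, not the paper's architecture:

```python
import numpy as np

def rotation_tied_bank(base_filter):
    """Build a 4-orientation bank from one learnable kernel.

    Tying the four 90-degree rotations to a single base kernel cuts the
    number of free orientation parameters by 4x.
    """
    return np.stack([np.rot90(base_filter, k) for k in range(4)])

def bank_response(image, bank):
    """Max over orientations of valid 2-D cross-correlations."""
    kh, kw = bank.shape[1:]
    H = image.shape[0] - kh + 1
    W = image.shape[1] - kw + 1
    out = np.full((H, W), -np.inf)
    for f in bank:                      # pool over the tied orientations
        for i in range(H):
            for j in range(W):
                out[i, j] = max(out[i, j],
                                np.sum(image[i:i + kh, j:j + kw] * f))
    return out
```

Because the bank is closed under 90-degree rotation, rotating the input image simply rotates the pooled response map, which is the kind of built-in invariance that lets a network learn from very few training sequences.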