SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
Synthesizing realistic images from human drawn sketches is a challenging
problem in computer graphics and vision. Existing approaches either need exact
edge maps, or rely on retrieval of existing photographs. In this work, we
propose a novel Generative Adversarial Network (GAN) approach that synthesizes
plausible images from 50 categories including motorcycles, horses and couches.
We demonstrate a data augmentation technique for sketches which is fully
automatic, and we show that the augmented data is helpful to our task. We
introduce a new network building block suitable for both the generator and
discriminator which improves the information flow by injecting the input image
at multiple scales. Compared to state-of-the-art image translation methods, our
approach generates more realistic images and achieves significantly higher
Inception Scores.
Comment: Accepted to CVPR 201
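The multi-scale input injection idea above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual building block: the NHWC layout, the nearest-neighbour resize, and the name `inject_input` are all assumptions made for the sketch.

```python
import numpy as np

def inject_input(features, image):
    """Concatenate a resized copy of the input image onto a feature
    map along the channel axis (NHWC layout), so the network sees the
    input again at this scale. Illustrative sketch only."""
    n, h, w, _ = features.shape
    ih, iw = image.shape[1], image.shape[2]
    # nearest-neighbour downsample of the image to the feature resolution
    rows = np.arange(h) * ih // h
    cols = np.arange(w) * iw // w
    resized = image[:, rows][:, :, cols]
    return np.concatenate([features, resized], axis=-1)
```

Applying this at every scale of both the generator and discriminator is what lets information from the input sketch re-enter the computation repeatedly instead of only at the first layer.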
Creation of virtual worlds from 3D models retrieved from content aware networks based on sketch and image queries
The recent emergence of user-generated content requires new content creation tools that are both easy to learn and easy to use. These new tools should enable the user to construct new high-quality content with minimum effort; it is essential to allow existing multimedia content to be reused as building blocks when creating new content. In this work we present a new tool for automatically constructing virtual worlds with minimum user intervention. Users can create these worlds by drawing a simple sketch, or by using interactively segmented 2D objects from larger images. The system receives the sketch or the segmented image as a query, and uses it to find similar 3D models that are stored in a Content Centric Network. The user selects a suitable model from the retrieved models, and the system uses it to automatically construct a virtual 3D world.
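The query–retrieve–select–construct workflow described above can be outlined as a small pipeline. Every callable here is a hypothetical placeholder injected as a parameter; the abstract does not specify the system's actual interfaces.

```python
def build_world(query, retrieve, select, place):
    """Hypothetical outline of the sketch-to-world pipeline: retrieve
    candidate 3D models for a sketch or segmented-image query, let the
    user pick one, and place it in a virtual world. `retrieve`,
    `select`, and `place` are assumed callables, not the system's API."""
    candidates = retrieve(query)   # query the content network for similar 3D models
    model = select(candidates)     # user picks a suitable model
    return place(model)            # construct the virtual world around it
```

Keeping the retrieval, selection, and placement steps as separate stages mirrors the separation the abstract describes between the Content Centric Network and the world-construction front end.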
Low-rank SIFT: An Affine Invariant Feature for Place Recognition
In this paper, we present a novel affine-invariant feature based on SIFT,
leveraging the regular appearance of man-made objects. The feature achieves
full affine invariance without needing to simulate over affine parameter space.
Low-rank SIFT, as we name the feature, is based on our observation that local tilts, which are caused by changes of camera axis orientation, can be normalized by converting local patches to standard low-rank forms. Rotation, translation and scaling invariance can be achieved in ways similar to SIFT. As an extension of SIFT, our method adds a prior to solve the ill-posed affine parameter estimation problem and normalizes the parameters directly, and is applicable to objects with regular structures. Furthermore, owing to recent breakthroughs in convex optimization, such parameters can be computed efficiently. We demonstrate its effectiveness in place recognition as our major application. As extra contributions, we also describe our pipeline for constructing a geotagged building database from the ground up, as well as an efficient scheme for automatic feature selection.
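The idea of normalizing local tilt by driving a patch toward a low-rank form can be illustrated with a toy search over shear parameters, scoring each warp by its nuclear norm (the sum of singular values, a convex surrogate for rank). The discrete grid search and helper names here are assumptions for illustration; the paper computes the parameters via convex optimization rather than enumeration.

```python
import numpy as np

def nuclear_norm(patch):
    """Sum of singular values: low for near-low-rank patches."""
    return np.linalg.svd(patch, compute_uv=False).sum()

def best_tilt(patch, shears):
    """Toy tilt normalization: try a small set of horizontal shear
    factors and return the one whose warped patch has the lowest
    nuclear norm. Illustrative only, not the paper's solver."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best = None
    for s in shears:
        # inverse-warp with nearest-neighbour sampling, wrapping at the border
        src_x = ((xs + s * ys).astype(int)) % w
        score = nuclear_norm(patch[ys, src_x])
        if best is None or score < best[1]:
            best = (s, score)
    return best[0]
```

A regular structure such as a striped facade is (near) rank one when viewed fronto-parallel, so the unsheared warp wins; any residual shear raises the rank surrogate, which is the intuition behind normalizing to a standard low-rank form.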
Pointwise Convolutional Neural Networks
Deep learning with 3D data such as reconstructed point clouds and CAD models has received great research interest recently. However, the capability of using point clouds with convolutional neural networks has so far not been fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network is pointwise convolution, a new convolution operator that can be applied at each point of a point cloud. Our fully convolutional network design, while being surprisingly simple to implement, can yield competitive accuracy in both semantic segmentation and object recognition tasks.
Comment: 10 pages, 6 figures, 10 tables. Paper accepted to CVPR 201
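A convolution operator applied at each point can be sketched in a heavily simplified form: centre a kernel on every point, pool neighbour features per kernel region, and take a weighted sum. The two radial bins, the mean pooling, and all names here are illustrative assumptions; the paper's actual kernel layout differs.

```python
import numpy as np

def pointwise_conv(points, feats, weights, radius):
    """Toy pointwise convolution over a point cloud: for every point,
    average the (scalar) features of neighbours falling in each of two
    radial bins (inner/outer), then take a weighted sum. The binning
    scheme is an assumption, not the paper's exact kernel."""
    n = len(points)
    out = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        inner = d <= radius / 2                    # bin 0: close neighbours (includes self)
        outer = (d > radius / 2) & (d <= radius)   # bin 1: farther neighbours
        for k, mask in enumerate((inner, outer)):
            if mask.any():
                out[i] += weights[k] * feats[mask].mean()
    return out
```

Because the operator produces one output value per input point, stacking such layers keeps the point set fixed, which is what makes a fully convolutional design over raw point clouds straightforward.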