Polygonal Building Segmentation by Frame Field Learning
While state of the art image segmentation models typically output
segmentations in raster format, applications in geographic information systems
often require vector polygons. To help bridge the gap between deep network
output and the format used in downstream tasks, we add a frame field output to
a deep segmentation model for extracting buildings from remote sensing images.
We train a deep neural network that aligns a predicted frame field to ground
truth contours. This additional objective improves segmentation quality by
leveraging multi-task learning and provides structural information that later
facilitates polygonization; we also introduce a polygonization algorithm that
utilizes the frame field along with the raster segmentation. Our code is
available at https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning.
Comment: CVPR 2021 - IEEE Conference on Computer Vision and Pattern Recognition, Jun 2021, Pittsburgh / Virtual, United States.
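As a minimal sketch of the frame-field idea (not the paper's implementation): a frame of four directions {±u, ±v} can be stored as the coefficients (c0, c2) of the complex polynomial f(z) = z⁴ + c2·z² + c0 = (z² − u²)(z² − v²), and an alignment objective penalizes |f(t)| at the ground-truth contour tangent t, vanishing exactly when the tangent lies in the frame. The function name and exact loss form below are illustrative assumptions.

```python
import cmath

def frame_field_align_loss(c0: complex, c2: complex, tangent_angle: float) -> float:
    """Penalty for a frame field (encoded by c0, c2) not containing the
    ground-truth tangent direction.

    The frame {u, -u, v, -v} is the root set of f(z) = z^4 + c2*z^2 + c0,
    so |f(t)| with t = e^{i*theta} is zero exactly when the tangent is
    one of the four frame directions.
    """
    t = cmath.exp(1j * tangent_angle)
    return abs(t**4 + c2 * t**2 + c0)

# A frame aligned with the x/y axes has roots {1, -1, 1j, -1j},
# i.e. f(z) = z^4 - 1, so c2 = 0 and c0 = -1.
loss_aligned = frame_field_align_loss(-1 + 0j, 0j, 0.0)  # tangent along x: ~0
loss_rotated = frame_field_align_loss(-1 + 0j, 0j, 0.3)  # off-axis tangent: > 0
```

Minimizing such a term over the predicted (c0, c2) at every pixel is what lets the network treat the two crossing directions of a building corner symmetrically.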
Im2Vec: Synthesizing Vector Graphics without Vector Supervision
Vector graphics are widely used to represent fonts, logos, digital artworks, and graphic designs. But, while a vast body of work has focused on generative algorithms for raster images, only a handful of options exist for vector graphics. One can always rasterize the input graphic and resort to image-based generative approaches, but this negates the advantages of the vector representation. The current alternative is to use specialized models that require explicit supervision on the vector graphics representation at training time. This is not ideal because large-scale high quality vector-graphics datasets are difficult to obtain. Furthermore, the vector representation for a given design is not unique, so models that supervise on the vector representation are unnecessarily constrained. Instead, we propose a new neural network that can generate complex vector graphics with varying topologies, and only requires indirect supervision from readily-available raster training images (i.e., with no vector counterparts). To enable this, we use a differentiable rasterization pipeline that renders the generated vector shapes and composites them together onto a raster canvas. We demonstrate our method on a range of datasets, and provide comparisons with state-of-the-art SVG-VAE and DeepSVG, both of which require explicit vector graphics supervision. Finally, we also demonstrate our approach on the MNIST dataset, for which no ground-truth vector representation is available. Source code, datasets, and more results are available at geometry.cs.ucl.ac.uk/projects/2021/Im2Vec
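The key enabler here is a rasterizer whose output varies smoothly with the shape parameters, so a pixel-space loss can be backpropagated into the vector representation. A toy sketch of that property (this is not Im2Vec's pipeline; the circle shape and sigmoid occupancy are illustrative assumptions):

```python
import math

def soft_rasterize_circle(cx, cy, r, size=8, sharpness=10.0):
    """Toy differentiable rasterizer: render a filled circle as a soft
    occupancy grid. Each pixel is a sigmoid of the signed distance to
    the boundary, so the image changes smoothly with (cx, cy, r) --
    the property needed to train vector generators from raster losses.
    """
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            d = r - math.hypot(x + 0.5 - cx, y + 0.5 - cy)  # > 0 inside
            row.append(1.0 / (1.0 + math.exp(-sharpness * d)))
        img.append(row)
    return img

img = soft_rasterize_circle(4.0, 4.0, 2.5)
center_val = img[4][4]  # deep inside the circle: close to 1
corner_val = img[0][0]  # far outside: close to 0
```

A hard rasterizer would return 0/1 values whose gradient with respect to the shape parameters is zero almost everywhere; the sigmoid relaxation is what makes the raster-only supervision usable.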
Differential operators on sketches via alpha contours
A vector sketch is a popular and natural geometry representation depicting a 2D shape. When viewed from afar, the disconnected vector strokes of a sketch and the empty space around them visually merge into positive space and negative space, respectively. Positive and negative spaces are the key elements in the composition of a sketch and define what we perceive as the shape. Nevertheless, the notion of positive or negative space is mathematically ambiguous: while the strokes unambiguously indicate the interior or boundary of a 2D shape, the empty space may or may not belong to the shape's exterior.
For standard discrete geometry representations, such as meshes or point clouds, some of the most robust pipelines rely on discretizations of differential operators, such as the Laplace-Beltrami operator. Such discretizations are not available for vector sketches; defining them may enable numerous applications of classical methods to vector sketches. To do so, however, one needs to define the positive space of a vector sketch, i.e., the sketch shape.
Even though extracting this 2D sketch shape is mathematically ambiguous, we propose a robust algorithm, Alpha Contours, that constructs a conservative estimate: a 2D shape containing all the input strokes in its interior or on its boundary, and aligning tightly with the sketch. This allows us to define popular differential operators on vector sketches, such as the Laplacian and Steklov operators.
We demonstrate that our construction enables robust tools for vector sketches, such as As-Rigid-As-Possible sketch deformation and functional maps between sketches, as well as solving partial differential equations on a vector sketch.
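To see why a discrete Laplacian is the gateway to these classical tools, consider the simplest 1D case: an umbrella-operator Laplacian on a single open stroke enables Laplacian smoothing (a minimal sketch, not the Alpha Contours construction, which operates on the 2D sketch shape; the step size and iteration count are illustrative assumptions):

```python
def polyline_laplacian_smooth(points, lam=0.3, iters=10):
    """Laplacian smoothing of one open stroke with fixed endpoints.

    The umbrella operator L(p_i) = p_{i-1} + p_{i+1} - 2*p_i is the
    simplest discrete Laplacian; repeatedly stepping p_i += lam * L(p_i)
    diffuses away high-frequency wiggles, illustrating the kind of
    classical machinery a sketch Laplacian unlocks.
    points: list of (x, y) tuples.
    """
    pts = [list(p) for p in points]
    for _ in range(iters):
        new = [p[:] for p in pts]
        for i in range(1, len(pts) - 1):
            for k in range(2):
                lap = pts[i - 1][k] + pts[i + 1][k] - 2.0 * pts[i][k]
                new[i][k] = pts[i][k] + lam * lap
        pts = new
    return pts

stroke = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]  # zig-zag stroke
smoothed = polyline_laplacian_smooth(stroke)        # wiggles damped, endpoints kept
```

Operators such as the cotangent Laplacian on the extracted 2D shape play the same role for the deformation and functional-map applications mentioned above.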
Reconstruction of machine-made shapes from bitmap sketches
We propose a method for reconstructing 3D machine-made shapes from bitmap sketches by separating an input image into individual patches and jointly optimizing their geometry. We rely on two main observations: (1) human observers interpret sketches of man-made shapes as a collection of simple geometric primitives, and (2) sketch strokes often indicate occlusion contours or sharp ridges between those primitives. Using these observations, we design a system that takes a single bitmap image of a shape, estimates image depth and segmentation into primitives with neural networks, then fits primitives to the predicted depth while determining occlusion contours and aligning intersections with the input drawing via optimization. Unlike previous work, our approach does not require additional input, annotation, or templates, and does not require retraining for a new category of man-made shapes. Our method produces triangular meshes that display sharp geometric features and are suitable for downstream applications, such as editing, rendering, and shading.
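The primitive-fitting stage can be illustrated in its simplest form: least-squares fitting of a plane z = a·x + b·y + c to the predicted depth samples inside one segmented patch. This toy stand-in (the normal-equation solver and the planar primitive are illustrative assumptions, not the paper's optimization) shows the core computation:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples via
    the 3x3 normal equations A^T A p = A^T z with rows A_i = [x, y, 1].
    """
    # Accumulate A^T A and A^T z.
    m = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0, 0.0, 0.0]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for j in range(3):
                    m[r][j] -= f * m[col][j]
                rhs[r] -= f * rhs[col]
    return [rhs[i] / m[i][i] for i in range(3)]

# Noise-free depth samples from the plane z = 2x - y + 3.
pts = [(x, y, 2 * x - y + 3) for x in range(4) for y in range(4)]
a, b, c = fit_plane(pts)  # recovers a ~ 2, b ~ -1, c ~ 3
```

In the full system, such per-patch fits are coupled by the occlusion-contour and intersection-alignment terms, so all patches are optimized jointly rather than independently.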
Vectorization of Line Drawings via Polyvector Fields
Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.
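A hedged sketch of why polyvector fields help at junctions: storing the two crossing tangent directions u and v as coefficients of a complex polynomial, rather than as an ordered pair, makes the representation symmetric in u and v, so the field stays smooth where two curves cross. The directions can be recovered from the coefficients when needed (the encoding below follows the standard 4-polyvector construction; the function name is an illustrative assumption):

```python
import cmath

def frame_directions(c0: complex, c2: complex):
    """Recover the two (unoriented) directions u, v of a 4-polyvector
    stored as coefficients of f(z) = z^4 + c2*z^2 + c0
    = (z^2 - u^2)(z^2 - v^2).

    Swapping u and v leaves (c0, c2) unchanged, which is what lets
    polyvector-field methods interpolate smoothly across X- and
    T-junctions where two curve tangents cross.
    """
    # Quadratic formula in w = z^2: w^2 + c2*w + c0 = 0.
    disc = cmath.sqrt(c2 * c2 - 4 * c0)
    u2 = (-c2 + disc) / 2
    v2 = (-c2 - disc) / 2
    return cmath.sqrt(u2), cmath.sqrt(v2)

# Two tangents crossing at 90 degrees: u = 1, v = i,
# so c2 = -(u^2 + v^2) = 0 and c0 = u^2 * v^2 = -1.
u, v = frame_directions(-1 + 0j, 0j)  # recovers 1 and 1j
```

Tracing the two recovered direction families separately is what allows a vectorizer to pass curves cleanly through a junction instead of merging them into one corner.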