Dictionary Learning-based Inpainting on Triangular Meshes
The problem of inpainting consists of filling missing or damaged regions in images and videos so that the filled-in pattern does not produce artifacts that deviate from the original data. Beyond restoring missing data, inpainting can also be used to remove undesired objects. In this work, we address inpainting on surfaces with a new method based on dictionary learning and sparse coding. Our method learns the dictionary by subdividing the mesh into patches and rebuilds the mesh from the computed sparse codes via a reconstruction procedure inspired by the Non-local Means method. One advantage of our method is that it fills the missing regions while simultaneously removing noise and enhancing important features of the mesh. Moreover, the inpainting result is globally coherent, as the dictionary-based representation captures all the geometric information in the transformed domain. We present two variants of the method: a direct one, in which the model is reconstructed and restored directly from the representation in the transformed domain, and an adaptive one, in which the missing regions are recreated iteratively by successively propagating the sparse codes computed at the hole boundaries, which guide the local reconstructions. The adaptive variant produces better results for large regions because the sparse codes of the patches are adapted to the sparse codes of the boundary patches. Finally, we present and analyze experimental results that demonstrate the performance of our method compared to the literature.
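The abstract gives no implementation details, but the core step it names, learning a patch dictionary and computing sparse codes for each patch, can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: the patch descriptors are stand-in random vectors, the atom count and sparsity level are arbitrary, and scikit-learn's DictionaryLearning (with OMP for the sparse codes) stands in for whatever learning algorithm the authors actually use.

```python
# Hypothetical sketch: learn a dictionary over mesh patches flattened into
# fixed-length descriptors, then reconstruct patches from their sparse codes.
# Shapes, names, and parameters are illustrative, not from the paper.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 48))   # 500 patch descriptors, 48-dim each

learner = DictionaryLearning(
    n_components=64,                # number of dictionary atoms (overcomplete)
    transform_algorithm="omp",      # Orthogonal Matching Pursuit for sparse codes
    transform_n_nonzero_coefs=5,    # sparsity level per patch
    max_iter=20,
    random_state=0,
)
codes = learner.fit_transform(patches)     # sparse codes, shape (500, 64)
D = learner.components_                    # learned dictionary, shape (64, 48)

reconstructed = codes @ D                  # patches rebuilt from sparse codes
print(np.linalg.norm(patches - reconstructed) / np.linalg.norm(patches))
```

In an inpainting setting, the codes of patches bordering a hole would then drive the reconstruction of the missing patches, which is the propagation idea the adaptive variant describes.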
Single-picture reconstruction and rendering of trees for plausible vegetation synthesis
State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud, or intensive user input) or provide a representation suitable only for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can use georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud in which leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. The major benefits of our approach are that it recovers the overall shape from a single tree image, requires no tree-modeling knowledge and minimal authoring effort, and yields an image-based representation that is easy to compress and thus suitable for network streaming.
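The radial distance map the abstract mentions maps directions around the crown to distances from a center point. The paper's actual construction is not given here, but a minimal 2D analogue, computing the farthest silhouette distance per angular bin around the centroid of a binary crown mask, can sketch the idea. The mask input, bin count, and max-per-bin convention are all assumptions for illustration.

```python
# Hypothetical sketch of a radial distance map for a 2D crown silhouette:
# for each angle around the crown centroid, record the distance to the
# farthest silhouette pixel in that angular bin.
import numpy as np

def radial_distance_map(mask: np.ndarray, n_angles: int = 360) -> np.ndarray:
    ys, xs = np.nonzero(mask)                    # silhouette pixel coordinates
    cy, cx = ys.mean(), xs.mean()                # crown centroid
    angles = np.arctan2(ys - cy, xs - cx)        # angle of each pixel
    dists = np.hypot(ys - cy, xs - cx)           # distance of each pixel
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    rmap = np.zeros(n_angles)
    np.maximum.at(rmap, bins, dists)             # keep farthest pixel per bin
    return rmap

# Example: a filled disc yields a roughly constant radial map.
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
print(radial_distance_map(disc, 8))
```

A full 3D version would bin directions on the sphere rather than the circle, giving the compact crown representation that the rendering stages (relief-mapping-style for distant trees, billboard clouds for close-ups) consume.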
- …