SnakeVoxFormer: Transformer-based Single Image Voxel Reconstruction with Run Length Encoding
Deep learning-based 3D object reconstruction has achieved unprecedented
results. Among these, transformer models have shown outstanding performance in many computer vision applications. We introduce SnakeVoxFormer, a novel transformer-based method for 3D object reconstruction in voxel space from a single image. The input to SnakeVoxFormer is a 2D image, and the result is a 3D voxel model. The key novelty of our approach is using
run-length encoding that traverses (like a snake) the voxel space and encodes
wide spatial differences into a 1D structure that is suitable for transformer
encoding. We then use dictionary encoding to convert the discovered RLE blocks
into tokens for the transformer. This 1D representation is a lossless 3D shape compression that uses only about 1% of the original data size. We show how different voxel traversal strategies affect encoding and reconstruction quality. We compare our
method with the state of the art for 3D voxel reconstruction from images; our method improves on state-of-the-art methods by at least 2.8% and up to 19.8%.
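As a rough illustration of the encoding pipeline the abstract describes (a minimal sketch, not the authors' code; all function names are ours), the following flattens a voxel grid in snake order, run-length encodes it, and notes where dictionary tokenization would follow:

```python
import numpy as np

def snake_order(vox):
    """Flatten a voxel grid in boustrophedon ('snake') order: alternate the
    traversal direction of rows and slices so consecutive samples stay
    spatially adjacent, which yields long constant runs."""
    out = []
    for z in range(vox.shape[2]):
        sl = vox[:, :, z][::-1, :] if z % 2 else vox[:, :, z]
        for y in range(sl.shape[0]):
            row = sl[y, ::-1] if y % 2 else sl[y, :]
            out.extend(int(v) for v in row)
    return out

def run_length_encode(seq):
    """Lossless (value, run_length) encoding of a sequence."""
    runs, prev, count = [], seq[0], 0
    for v in seq:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Toy 4x4x4 occupancy grid: a 2x2x2 solid block in one corner.
vox = np.zeros((4, 4, 4), dtype=np.uint8)
vox[:2, :2, :2] = 1
runs = run_length_encode(snake_order(vox))
print(len(runs), "runs instead of", vox.size, "raw voxels")
# A dictionary over the distinct runs would then map each run to a token id,
# forming the 1D token sequence fed to the transformer.
```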
Error-Bounded and Feature Preserving Surface Remeshing with Minimal Angle Improvement
The typical goal of surface remeshing is to find a mesh that is (1)
geometrically faithful to the original geometry, (2) as coarse as possible to
obtain a low-complexity representation and (3) free of bad elements that would
hamper the desired application. In this paper, we design an algorithm to
address all three optimization goals simultaneously. The user specifies desired
bounds on the approximation error δ, the minimal interior angle θ and the
maximum mesh complexity N (number of vertices). Since such a desired mesh might
not even exist, our optimization framework treats only the approximation error
bound δ as a hard constraint and the other two criteria as optimization
goals. More specifically, we iteratively apply carefully prioritized local operators whenever they do not violate the approximation error bound and improve the mesh. In this way, our optimization framework greedily
searches for the coarsest mesh with minimal interior angle above θ and approximation error bounded by δ. Fast runtime is enabled by a local
approximation error estimation, while implicit feature preservation is obtained
by specifically designed vertex relocation operators. Experiments show that our
approach delivers high-quality meshes with implicitly preserved features and
better balances geometric fidelity, mesh complexity and element quality than the state of the art.
Comment: 14 pages, 20 figures. Submitted to IEEE Transactions on Visualization and Computer Graphics
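The greedy driver the abstract outlines can be sketched as a priority-queue loop; the mesh and operator API below is hypothetical (our own names, not the authors' code), and only the control flow, with δ as a hard constraint and θ as the stopping goal, reflects the paper:

```python
import heapq, itertools

DELTA = 0.01   # hard bound on approximation error (units of the input mesh)
THETA = 35.0   # desired minimal interior angle, in degrees

def remesh(mesh, operators):
    """Schematic greedy remeshing loop over prioritized local operators."""
    tie = itertools.count()                     # tie-breaker for the heap
    queue = []
    for op in operators:                        # e.g. collapse, flip, relocate
        for elem in op.candidates(mesh):
            heapq.heappush(queue, (op.priority(mesh, elem), next(tie), op, elem))
    while queue and mesh.min_interior_angle() < THETA:
        _, _, op, elem = heapq.heappop(queue)
        if not op.applicable(mesh, elem):       # element invalidated earlier
            continue
        trial = op.apply(mesh.copy(), elem)
        if trial.approximation_error() <= DELTA:   # delta is never violated
            mesh = trial
            for e in op.affected(mesh, elem):      # re-seed the local region
                heapq.heappush(queue, (op.priority(mesh, e), next(tie), op, e))
    return mesh
```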
StyleDEM: a Versatile Model for Authoring Terrains
Many terrain modelling methods have been proposed over the past decades,
providing efficient and often interactive authoring tools. However, they
generally do not include any notion of style, which is a critical aspect for
designers in the entertainment industry. We introduce StyleDEM, a new generative adversarial network for terrain synthesis and authoring, together with a versatile toolbox of style-aware authoring tools. Our method starts from an
input sketch or an existing terrain. It outputs a terrain with features that
can be authored using interactive brushes and enhanced with additional tools
such as style manipulation or super-resolution. The strength of our approach resides in the versatility and interoperability of the toolbox.
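One plausible reading of such brush-based authoring, sketched below as a toy (the encoder and generator are hypothetical stand-ins, not the paper's architecture), is editing in a latent grid: blend a style code under a user-painted mask, then decode back to a heightfield:

```python
import numpy as np

def brush_style(latent, mask, style_code, strength=0.5):
    """Blend `style_code` into `latent` wherever the brush `mask` is painted.

    latent:     (H, W, C) latent grid from a hypothetical terrain encoder
    mask:       (H, W) brush weights in [0, 1] painted by the user
    style_code: (C,) latent code of the desired style
    """
    w = strength * mask[..., None]          # per-pixel blend weight
    return (1.0 - w) * latent + w * style_code

latent = np.zeros((64, 64, 8))              # stand-in encoded terrain
mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1.0
edited = brush_style(latent, mask, style_code=np.ones(8))
# A generator network would then decode `edited` back to a heightfield.
```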
QuadStack: An Efficient Representation and Direct Rendering of Layered Datasets
We introduce QuadStack, a novel algorithm for volumetric data compression and direct rendering. Our algorithm exploits the data redundancy often found in layered datasets, which are common in science and engineering fields such as geology, biology, mechanical engineering, and medicine. QuadStack first compresses the volumetric data into vertical stacks, which are then compressed into a quadtree that identifies and represents the layered structures at its internal nodes. The associated data (color, material, density, etc.) and the shape of these layer structures are decoupled and encoded independently, leading to high compression rates (4× to 54× of the original voxel model memory footprint in our experiments). We also introduce an algorithm for value retrieval from the QuadStack representation and show that access has logarithmic complexity. Because of this fast access, QuadStack is suitable for efficient data representation and direct rendering. We show that our GPU implementation performs comparably in speed to state-of-the-art algorithms (18-79 MRays/s in our implementation) while maintaining a significantly smaller memory footprint.
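To make the logarithmic access concrete, here is a toy sketch of sampling one compressed vertical stack (the names and layer encoding are ours, not the paper's data structures); QuadStack additionally merges the stacks of many columns into a quadtree:

```python
import bisect

# One column of a layered volume, compressed as a stack of layer tops:
# (top_height, material). Heights below the first top belong to the first
# material, and so on (illustrative encoding).
column = [(10, "sandstone"), (25, "shale"), (40, "limestone")]

def sample(column, z):
    """Logarithmic-time lookup of the material at height z in one column."""
    tops = [top for top, _ in column]
    i = bisect.bisect_right(tops, z)   # first layer whose top lies above z
    if i == len(column):
        raise ValueError("z is above the topmost layer")
    return column[i][1]

print(sample(column, 7))    # sandstone
print(sample(column, 30))   # limestone
```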
Environmental Objects for Authoring Procedural Scenes
We propose a novel approach for authoring large scenes with automatic enhancement of objects to create geometric decoration details such as snow cover, icicles, fallen leaves, grass tufts or even trash. We introduce environmental objects, which extend an input object geometry with a set of procedural effects that define how the object reacts to the environment, and with a set of scalar fields that define the influence of the object over the environment. The user controls the scene by modifying environmental variables, such as temperature or humidity fields. The scene definition is hierarchical: objects can be grouped, and their behaviours can be set at each level of the hierarchy. Our per-object definition allows us to optimize and accelerate the computation of the effects, which also enables us to generate large scenes with many geometric details at a very high level of detail. In our implementation, a complex urban scene of 10 000 m², represented with details of less than 1 cm, can be locally modified and entirely regenerated in a few seconds.
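A minimal sketch of the environmental-object idea as the abstract describes it, coupling procedural effects (how the object reacts) with scalar influence fields (how it affects the environment); all class and field names below are illustrative assumptions, not the paper's API:

```python
class EnvironmentalObject:
    def __init__(self, geometry, effects, influence_fields):
        self.geometry = geometry                    # base input geometry
        self.effects = effects                      # e.g. snow cover, icicles
        self.influence_fields = influence_fields    # e.g. emitted heat

    def influence(self, point):
        """This object's contribution to each environment variable at a point."""
        return {name: field(point) for name, field in self.influence_fields.items()}

    def decorate(self, env):
        """Run each procedural effect against the local environment state,
        returning decoration geometry (snow caps, icicles, fallen leaves...)."""
        return [fx(self.geometry, env) for fx in self.effects]

# Example: a street lamp that warms its surroundings, suppressing snow nearby.
lamp = EnvironmentalObject(
    geometry="lamp_mesh",
    effects=[lambda g, env: "snow_cap" if env["temperature"] < 0.0 else None],
    influence_fields={"temperature": lambda p: 4.0 / (1.0 + p[0]**2 + p[1]**2)},
)
print(lamp.influence((1.0, 0.0)))            # {'temperature': 2.0}
print(lamp.decorate({"temperature": -5.0}))  # ['snow_cap']
```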
WorldBrush: Interactive Example-based Synthesis of Procedural Virtual Worlds
We present a novel approach for the interactive synthesis and editing of virtual worlds. Our method is inspired by painting operations and uses statistical example-based synthesis to automate content synthesis and deformation. Our real-time approach takes the form of local inverse procedural modeling based on intermediate statistical models: selected regions of procedurally and manually constructed example scenes are analyzed, and their parameters are stored as distributions in a palette, similar to colors on a painter's palette. These distributions can then be interactively applied with brushes and combined in various ways, as in painting systems. Selected regions can also be moved or stretched while maintaining the consistency of their content. Our method captures both distributions of elements and structured objects, and models their interactions. Results range from the interactive editing of 2D artwork maps to the design of 3D virtual worlds, where constraints set by the terrain's slope are also taken into account.
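The analyze-then-paint workflow can be illustrated with a deliberately simplified palette entry (a per-category density only; the paper's statistical models also capture element interactions and structured objects, and all names here are ours):

```python
import math, random

def analyze(elements, area):
    """Build a palette entry: per-category element density of an example region."""
    counts = {}
    for e in elements:
        counts[e["kind"]] = counts.get(e["kind"], 0) + 1
    return {kind: n / area for kind, n in counts.items()}

def paint(entry, center, radius):
    """Sample new elements under a circular brush from the stored densities
    (independent uniform placement; interactions between elements are
    ignored in this toy version)."""
    placed = []
    for kind, density in entry.items():
        for _ in range(round(density * math.pi * radius**2)):
            r = radius * math.sqrt(random.random())
            a = random.uniform(0.0, 2.0 * math.pi)
            placed.append((kind, (center[0] + r * math.cos(a),
                                  center[1] + r * math.sin(a))))
    return placed

trees = analyze([{"kind": "tree"}] * 50, area=100.0)  # 0.5 trees per unit area
print(len(paint(trees, center=(0, 0), radius=5.0)))   # ~39 trees under the brush
```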
Large Scale Terrain Generation from Tectonic Uplift and Fluvial Erosion
At large scale, landscapes result from the combination of two major processes: tectonics, which generates the main relief through crustal uplift, and weather, which accounts for erosion. This paper presents the first method in computer graphics that combines uplift and hydraulic erosion to generate visually plausible terrains. Given a user-painted uplift map, we generate a stream graph over the entire domain embedding elevation information and stream flow. Our approach relies on the stream power equation, introduced in geology for hydraulic erosion. By combining crustal uplift and stream power erosion, we generate large realistic terrains at a low computational cost. Finally, we convert this graph into a digital elevation model by blending landform feature kernels whose parameters are derived from the information in the graph. Our method gives high-level control over the large-scale dendritic structures of the resulting river networks, watersheds, and mountain ridges.
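The stream power equation the paper builds on balances uplift against erosion, dh/dt = U - K A^m S^n, where A is drainage area and S is slope. A minimal 1D sketch of that balance (the grid, constants, and crude drainage-area proxy are our own illustrative choices, not the paper's graph-based solver):

```python
import numpy as np

n_nodes, dx, dt = 100, 1000.0, 1000.0          # 100 km profile, 1 kyr steps
U = 2e-4                                        # uplift rate (m/yr)
K, m, n = 2e-5, 0.5, 1.0                        # erodibility and exponents

h = np.zeros(n_nodes)                           # elevation, outlet at node 0
A = (n_nodes - np.arange(n_nodes)) * dx * dx    # crude drainage-area proxy

for _ in range(50_000):                         # run toward steady state
    S = np.maximum(np.diff(h, prepend=h[0]) / dx, 0.0)  # downstream slope
    h += dt * (U - K * A**m * S**n)             # uplift minus stream-power erosion
    h[0] = 0.0                                  # fixed base level

# At steady state S ~ (U / (K * A^m))^(1/n): the familiar concave river profile.
```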
Dr.Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering
Bokeh is widely used in photography to draw attention to the subject while
effectively isolating distractions in the background. Computational methods
simulate bokeh effects without relying on a physical camera lens. However, the two main challenges in digital bokeh synthesis are color bleeding and partial occlusion at object boundaries. Our
primary goal is to overcome these two major challenges using physics principles
that define bokeh formation. To achieve this, we propose a novel and accurate
filtering-based bokeh rendering equation and a physically-based occlusion-aware
bokeh renderer, dubbed Dr.Bokeh, which addresses the aforementioned challenges
during the rendering stage without the need for post-processing or data-driven
approaches. Our rendering algorithm first preprocesses the input RGBD to obtain
a layered scene representation. Dr.Bokeh then takes the layered representation
and user-defined lens parameters to render photo-realistic lens blur. By
softening non-differentiable operations, we make Dr.Bokeh differentiable such
that it can be plugged into a machine-learning framework. We perform
quantitative and qualitative evaluations on synthetic and real-world images to
validate the effectiveness of the rendering quality and the differentiability
of our method. We show that Dr.Bokeh not only outperforms state-of-the-art bokeh rendering algorithms in terms of photo-realism but also improves the depth quality obtained from depth-from-defocus.
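Dr.Bokeh's actual rendering equation and its differentiable relaxation are in the paper; the following is a toy, non-differentiable stand-in (function names ours) showing the layered blur-and-composite structure the abstract describes, where blurring alpha with the same disc PSF as the color lets a defocused foreground spill softly over the background:

```python
import numpy as np
from scipy.signal import convolve2d

def disc_kernel(radius):
    """Normalized disc point-spread function with the given pixel radius."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= radius**2).astype(np.float64)
    return k / k.sum()

def composite_layers(layers, coc_radii):
    """Back-to-front 'over' compositing of independently blurred layers.

    layers:    list of (rgb (H,W,3), alpha (H,W)) pairs, sorted far to near
    coc_radii: circle-of-confusion radius per layer
    """
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    for (rgb, alpha), radius in zip(layers, coc_radii):
        k = disc_kernel(max(radius, 0.5))
        a = convolve2d(alpha, k, mode="same")              # blurred coverage
        c = np.stack([convolve2d(rgb[..., i] * alpha, k, mode="same")
                      for i in range(3)], axis=-1)         # premultiplied color
        out = out * (1.0 - a[..., None]) + c               # 'over' operator
    return out
```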