ControlMat: A Controlled Generative Approach to Material Capture
Material reconstruction from a photograph is a key component in democratizing 3D
content creation. We propose to formulate this ill-posed problem as a
controlled synthesis one, leveraging the recent progress in generative deep
networks. We present ControlMat, a method which, given a single photograph with
uncontrolled illumination as input, conditions a diffusion model to generate
plausible, tileable, high-resolution physically-based digital materials. We
carefully analyze the behavior of diffusion models for multi-channel outputs,
adapt the sampling process to fuse multi-scale information and introduce rolled
diffusion to enable both tileability and patched diffusion for high-resolution
outputs. Our generative approach further permits exploration of a variety of
materials which could correspond to the input image, mitigating the unknown
lighting conditions. We show that our approach outperforms recent inference and
latent-space-optimization methods, and carefully validate our diffusion process
design choices. Supplemental materials and additional details are available at:
https://gvecchio.com/controlmat/
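The rolled-diffusion idea for tileability can be sketched as follows: before each denoising step, the latent is circularly shifted so that the wrap-around seam is denoised as interior content at some point during sampling. The sketch below is a minimal numpy illustration of that reading; the function names and the identity-denoiser check are assumptions for exposition, not the authors' API.

```python
import numpy as np

def rolled_denoise_step(latent, denoise_fn, rng):
    """One denoising step with a random circular (toroidal) shift.

    Rolling the latent before denoising and unrolling afterwards means every
    spatial position, including the wrap-around seam, is treated as interior
    content at some step, which encourages a tileable result.
    """
    h, w = latent.shape[-2:]
    dy, dx = int(rng.integers(0, h)), int(rng.integers(0, w))
    rolled = np.roll(latent, shift=(dy, dx), axis=(-2, -1))
    denoised = denoise_fn(rolled)
    return np.roll(denoised, shift=(-dy, -dx), axis=(-2, -1))

# Sanity check with an identity "denoiser": rolling and then unrolling
# must leave the latent untouched.
rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 64, 64))
out = rolled_denoise_step(latent, lambda x: x, rng)
```

In a real sampler, `denoise_fn` would be one step of the diffusion model; the roll offsets change every step so no seam position is privileged.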
Foundry: Hierarchical Material Design for Multi-Material Fabrication
We demonstrate a new approach for designing functional material definitions for multi-material fabrication using our system called Foundry. Foundry provides an interactive and visual process for hierarchically designing spatially-varying material properties (e.g., appearance, mechanical, optical). The resulting meta-materials exhibit structure at the micro and macro level and can surpass the qualities of traditional composites. The material definitions are created by composing a set of operators into an operator graph. Each operator performs a volume decomposition operation, remaps space, or constructs and assigns a material composition. The operators are implemented using a domain-specific language for multi-material fabrication; users can easily extend the library by writing their own operators. Foundry can be used to build operator graphs that describe complex, parameterized, resolution-independent, and reusable material definitions. We also describe how to stage the evaluation of the final material definition which, in conjunction with progressive refinement, allows for interactive material evaluation even for complex designs. We show sophisticated and functional parts designed with our system.
Supported by National Science Foundation (U.S.) grants 1138967, 1409310, and 1547088, the National Science Foundation (U.S.) Graduate Research Fellowship Program, and the Massachusetts Institute of Technology Undergraduate Research Opportunities Program.
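The operator-graph idea above can be sketched in miniature: each operator maps a point in space to a material composition, and composition of operators builds the hierarchy. Everything below (operator names, the mixture representation) is a hypothetical illustration, not Foundry's actual DSL.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# A material composition is a mixture of named base materials.
Material = Dict[str, float]          # e.g. {"rigid": 0.7, "soft": 0.3}
Point = Tuple[float, float, float]

@dataclass
class Operator:
    fn: Callable[[Point], Material]
    def __call__(self, p: Point) -> Material:
        return self.fn(p)

def constant(material: Material) -> Operator:
    # Leaf operator: assign one composition everywhere.
    return Operator(lambda p: material)

def split_z(threshold: float, below: Operator, above: Operator) -> Operator:
    # Volume decomposition: route the query to a child by z coordinate.
    return Operator(lambda p: below(p) if p[2] < threshold else above(p))

def remap_scale(s: float, child: Operator) -> Operator:
    # Space remap: evaluate the child in scaled coordinates.
    return Operator(lambda p: child((p[0] * s, p[1] * s, p[2] * s)))

# Compose a tiny graph: rigid base, soft (rescaled) top.
graph = split_z(0.5,
                below=constant({"rigid": 1.0}),
                above=remap_scale(2.0, constant({"soft": 1.0})))
```

Because the graph is evaluated per point, the resulting definition is resolution-independent in the same spirit as the paper's operator graphs.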
Procedural feature generation for volumetric terrains using voxel grammars
Terrain generation is a fundamental requirement of many computer graphics simulations, including computer games, flight simulators and environments in feature films. There has been a considerable amount of research in this domain, which ranges between fully automated and semi-automated methods. Voxel representations of 3D terrains can create rich features that are not found in other forms of terrain generation techniques, such as caves and overhangs. In this article, we introduce a semi-automated method of generating features for volumetric terrains using a rule-based procedural generation system. Features are generated by selecting subsets of a voxel grid as input symbols to a grammar, composed of user-created operators. This results in overhangs and caves generated from a set of simple rules. The feature generation runs on the CPU and the GPU is utilised to extract a robust mesh from the volumetric dataset.
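A voxel-grammar rule in this style can be sketched as follows: match a subset of the grid (here, surface voxels) and rewrite nearby voxels. This toy rule, which carves a pocket below surface voxels to create simple overhangs, is an assumed illustration of rule application, not a rule from the paper's system.

```python
import numpy as np

def apply_overhang_rule(solid: np.ndarray) -> np.ndarray:
    """Toy voxel-grammar rule on a 3-D boolean grid (z axis last, up).

    Matched symbols: surface voxels (solid with air directly above).
    Rewrite: carve the voxel two cells below to air, producing a small
    overhang/cave pocket under each surface voxel.
    """
    out = solid.copy()
    # Solid voxels whose upward neighbour is air are "surface" symbols.
    surface = solid[:, :, :-1] & ~solid[:, :, 1:]
    xs, ys, zs = np.nonzero(surface)
    for x, y, z in zip(xs, ys, zs):
        if z >= 2:
            out[x, y, z - 2] = False   # carve a pocket below the surface
    return out

# A single solid column, filled up to z = 4.
grid = np.zeros((1, 1, 6), dtype=bool)
grid[0, 0, :5] = True
carved = apply_overhang_rule(grid)
```

A full system would express many such rules as user-created operators and iterate them over selected voxel subsets.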
Visual modeling and simulation of multiscale phenomena
Many large-scale systems seen in real life, such as human crowds, fluids, and granular materials, exhibit complicated motion at many different scales, from a characteristic global behavior to important small-scale detail. Such multiscale systems are computationally expensive for traditional simulation techniques to capture over the full range of scales. In this dissertation, I present novel techniques for scalable and efficient simulation of these large, complex phenomena for visual computing applications. These techniques are based on a new approach of representing a complex system by coupling together separate models for its large-scale and fine-scale dynamics. In fluid simulation, it remains a challenge to efficiently simulate fine local detail such as foam, ripples, and turbulence without compromising the accuracy of the large-scale flow. I present two techniques for this problem that combine physically-based numerical simulation for the global flow with efficient local models for detail. For surface features, I propose the use of texture synthesis, guided by the physical characteristics of the macroscopic flow. For turbulence in the fluid motion itself, I present a technique that tracks the transfer of energy from the mean flow to the turbulent fluctuations and synthesizes these fluctuations procedurally, allowing extremely efficient visual simulation of turbulent fluids. Another large class of problems which are not easily handled by traditional approaches is the simulation of very large aggregates of discrete entities, such as dense pedestrian crowds and granular materials. I present a technique for crowd simulation that couples a discrete per-agent model of individual navigation with a novel continuum formulation for the collective motion of pedestrians. This approach allows simulation of dense crowds of a hundred thousand agents at near-real-time rates on desktop computers. 
I also present a technique for simulating granular materials, which generalizes this model and introduces a novel computational scheme for friction. This method efficiently reproduces a wide range of granular behavior and allows two-way interaction with simulated solid bodies. In all of these cases, the proposed techniques are typically an order of magnitude faster than comparable existing methods. Through these applications to a diverse set of challenging simulation problems, I demonstrate the benefits of the proposed approach, showing that it is a powerful and versatile technique for the simulation of a broad range of large and complex systems.
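The energy-transfer idea for turbulence can be sketched as a per-cell scalar update: energy is produced from the resolved mean flow and dissipated over time, and the resulting field scales procedurally synthesized fluctuations. The constants and the production term below are illustrative assumptions, not the dissertation's model.

```python
def turbulence_energy_step(mean_strain: float, k_turb: float, dt: float,
                           c_transfer: float = 0.1,
                           c_dissipate: float = 0.05) -> float:
    """Toy update for a per-cell turbulent-energy value.

    Production feeds energy from the mean flow (proportional to the squared
    strain magnitude) into the turbulence field; dissipation drains it.
    The field would then set the amplitude of procedural fluctuations
    (e.g. curl noise) added on top of the coarse simulation.
    """
    production = c_transfer * mean_strain ** 2
    dissipation = c_dissipate * k_turb
    return k_turb + dt * (production - dissipation)

decayed = turbulence_energy_step(mean_strain=0.0, k_turb=1.0, dt=0.1)
grown = turbulence_energy_step(mean_strain=2.0, k_turb=0.0, dt=0.1)
```

The appeal of this split is that the expensive solver only resolves the mean flow, while the fluctuations are synthesized procedurally at negligible cost.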
Time-Varying Textures
Essentially all computer graphics rendering assumes that the reflectance and texture of surfaces are static phenomena. Yet, there is an abundance of materials in nature whose appearance varies dramatically with time, such as cracking paint, growing grass, or ripening banana skins. In this paper, we take a significant step towards addressing this problem, investigating a new class of time-varying textures. We make three contributions. First, we describe the carefully controlled acquisition of datasets of a variety of natural processes including the growth of grass, the accumulation of snow, and the oxidation of copper. Second, we show how to adapt quilting-based methods to time-varying texture synthesis, addressing the important challenges of maintaining temporal coherence, efficient synthesis on large time-varying datasets, and reducing visual artifacts specific to time-varying textures. Finally, we show how simple procedural techniques can be used to control the evolution of the results, such as allowing for a faster growth of grass in well-lit (as opposed to shadowed) areas.
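One simple way to extend quilting to the temporal dimension is to add a term to the patch-matching cost that penalises deviation from the patch used at the same location in the previous frame. The cost below is a hedged sketch of that idea; the weighting and the specific terms are assumptions, not the paper's formulation.

```python
import numpy as np

def patch_cost(candidate: np.ndarray,
               spatial_neighbor: np.ndarray,
               prev_frame_patch: np.ndarray,
               w_time: float = 0.5) -> float:
    """Toy matching cost for time-varying quilting.

    The usual quilting term (mean squared error against the spatially
    adjacent patch in the overlap region) is combined with a temporal
    term that keeps the patch close to its predecessor in the previous
    frame, one simple way to encourage temporal coherence.
    """
    spatial = float(np.mean((candidate - spatial_neighbor) ** 2))
    temporal = float(np.mean((candidate - prev_frame_patch) ** 2))
    return spatial + w_time * temporal

patch = np.ones((8, 8))
zero_cost = patch_cost(patch, patch, patch)
```

Setting `w_time = 0` recovers plain spatial quilting, which makes the temporal term easy to ablate.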
Real-Time Rendering of Glinty Appearances using Distributed Binomial Laws on Anisotropic Grids
In this work, we render in real-time glittery materials caused by discrete
flakes on the surface. To achieve this, one has to count the number of flakes
reflecting the light towards the camera within every texel covered by a given
pixel footprint. To do so, we derive a counting method for arbitrary footprints
that, unlike previous work, outputs the correct statistics. We combine this
counting method with an anisotropic parameterization of the texture space that
reduces the number of texels falling under a pixel footprint. This allows our
method to run with stable performance, 1.5X to 5X faster than the
state-of-the-art.
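The counting idea can be sketched as follows: if a texel holds n independent flakes and each reflects towards the camera with probability p (its normal falling in the right bin for the current half-vector), the number of reflecting flakes follows a Binomial(n, p) law, so the count can be sampled per texel instead of storing flakes explicitly. The parameters below are illustrative, not the paper's values.

```python
import numpy as np

def sample_glint_count(n_flakes: int, p_reflect: float,
                       rng: np.random.Generator) -> int:
    """Sample how many flakes in a texel reflect towards the camera.

    Drawing from Binomial(n, p) gives the correct counting statistics
    without enumerating individual flakes, which is the key to making
    glint evaluation cheap enough for real-time use.
    """
    return int(rng.binomial(n_flakes, p_reflect))

rng = np.random.default_rng(1)
counts = [sample_glint_count(1000, 0.01, rng) for _ in range(10000)]
mean_count = float(np.mean(counts))   # should be close to n * p = 10
```

In a renderer this sampling would be repeated for every texel under the pixel footprint, which is why reducing the texel count with an anisotropic parameterization pays off directly.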
Real-time transition texture synthesis for terrains.
Depicting the transitions where differing material textures meet on a terrain surface presents a unique set of challenges in the field of real-time rendering. Natural landscapes are inherently irregular and composed of complex interactions between many different material types of effectively endless detail and variation. Although consumer-grade graphics hardware is becoming ever more powerful with each successive generation, terrain texturing remains a trade-off between realism and the computational resources available. Technological constraints aside, there is still the challenge of generating the texture resources to represent terrain surfaces which can often span many hundreds or even thousands of square kilometres. To produce such textures by hand is often impractical when operating on a restricted budget of time and funding. This thesis presents two novel algorithms for generating texture transitions in real time using automated processes. The first algorithm, Feature-Based Probability Blending (FBPB), automates the task of generating transitions between material textures containing salient features. As such features protrude through the terrain surface, FBPB ensures that the topography of these features is maintained at transitions in a realistic manner. The transitions themselves are generated using a probabilistic process that also dynamically adds wear and tear to introduce high-frequency detail and irregularity at the transition contour. The second algorithm, Dynamic Patch Transitions (DPT), extends FBPB by applying the probabilistic transition approach to material textures that contain no salient features. By breaking up texture space into a series of layered patches that are either rendered or discarded on a probabilistic basis, the contour of the transition is greatly increased in resolution and irregularity. 
When used in conjunction with high-frequency detail techniques, such as alpha masking, DPT is capable of producing endless, detailed, irregular transitions without the need for artistic input.
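The probabilistic-transition idea can be sketched per pixel: instead of a smooth alpha blend, each pixel picks one texture or the other by comparing a blend factor against per-pixel noise, which yields an irregular, high-frequency transition contour. The threshold rule and the sharpness parameter below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def probabilistic_transition(tex_a: np.ndarray, tex_b: np.ndarray,
                             blend: np.ndarray, noise: np.ndarray,
                             sharpness: float = 0.15) -> np.ndarray:
    """Toy per-pixel probabilistic blend between two material textures.

    `blend` runs from 0 (pure A) to 1 (pure B); `noise` is a per-pixel
    value in [0, 1]. Jittering the threshold with noise breaks the
    transition line into an irregular contour instead of a smooth ramp.
    """
    pick_b = blend + sharpness * (noise - 0.5) > 0.5
    return np.where(pick_b[..., None], tex_b, tex_a)

h, w = 16, 16
tex_a = np.zeros((h, w, 3))
tex_b = np.ones((h, w, 3))
noise = np.random.default_rng(2).random((h, w))
all_a = probabilistic_transition(tex_a, tex_b, np.zeros((h, w)), noise)
all_b = probabilistic_transition(tex_a, tex_b, np.ones((h, w)), noise)
```

Away from the transition band the result is identical to a hard mask, so the extra cost is confined to the contour region.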
Gigavoxels: ray-guided streaming for efficient and detailed voxel rendering
Figure 1: Images show volume data that consist of billions of voxels rendered with our dynamic sparse octree approach. Our algorithm achieves real-time to interactive rates on volumes exceeding the GPU memory capacities by far, thanks to an efficient streaming based on a ray-casting solution. Basically, the volume is only used at the resolution that is needed to produce the final image. Besides the gain in memory and speed, our rendering is inherently anti-aliased.
We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled to an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly based on information extracted during rendering. Our data structure exploits the fact that in CG scenes, details are often concentrated on the interface between free space and clusters of density and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, like the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), or of a fractal (theoretically infinite resolution). All examples are rendered on current generation hardware at 20-90 fps and respect the limited GPU memory budget. This is the author's version of the paper. The ultimate version has been published in the I3D 2009 conference proceedings.
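The "only use the volume at the resolution needed for the final image" principle reduces, for each ray sample, to choosing the octree/mipmap level whose voxels project to about one pixel. The formula below is a common LOD-selection choice stated under that assumption, not taken from the paper.

```python
import math

def voxel_mip_level(distance: float, pixel_angle: float,
                    voxel_size: float, max_level: int) -> int:
    """Pick the coarsest level whose voxels still cover about one pixel.

    The footprint of a pixel at a given ray distance is roughly
    distance * pixel_angle (small-angle approximation); level 0 is the
    finest resolution and each level doubles the voxel size. Requesting
    only this level is what lets streaming stay within a GPU memory
    budget far smaller than the full data set.
    """
    footprint = distance * pixel_angle
    level = math.log2(max(footprint / voxel_size, 1.0))
    return min(int(level), max_level)

near = voxel_mip_level(distance=1.0, pixel_angle=0.001,
                       voxel_size=0.001, max_level=10)
far = voxel_mip_level(distance=8.0, pixel_angle=0.001,
                      voxel_size=0.001, max_level=10)
```

In a ray-guided system, the levels actually requested during rendering are exactly the ones streamed to the GPU, so data production follows visibility for free.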
Synthesis, Editing, and Rendering of Multiscale Textures
The study of textures---images with repeated visual content---has produced a number of useful tools and algorithms for analysis, synthesis, editing, rendering, and a variety of other applications. However, the recent rapid growth in data storage and computational abilities has expanded the notion of what constitutes a texture. Modern textures can often outstrip traditional assumptions on input size by several orders of magnitude. Additionally, these multiscale textures typically contain features at not just one scale but rather across a wide range of scales, further violating existing assumptions. In order to meaningfully capture the large-scale features present in multiscale textures, we introduce a new example-based input representation, the exemplar graph. This representation allows us to efficiently define textures spanning a large--or possibly infinite--range of visual scales. We develop a hierarchical, parallelizable algorithm for performing texture synthesis from an input exemplar graph. In addition to automated generation, an increasingly important application of texture synthesis is in interactive tools for guiding texture design. This modality is especially important for multiscale textures, as they offer special perceptual challenges to artists. We examine algorithmic and engineering optimizations to enable real-time analysis and synthesis of multiscale textures, and explore potential implications for editing tools. Finally, we study the issue of display. To accurately view a large image at a distance, some filtering operation must be performed. In many cases, such as traditional color images, the filtering operations are well-known. However, other texture representations, such as normal or displacement maps, present special difficulties for filtering. We treat the former case, presenting a principled analysis and algorithms for filtering and display of large normal maps.
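An exemplar graph in this spirit can be sketched as a small data structure: nodes are exemplars at different visual scales, and edges record which region of a coarser exemplar refines into a finer one, so following edges covers an arbitrarily large range of scales. The node layout and names below are hypothetical illustrations, not the dissertation's representation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Region = Tuple[Tuple[float, float], Tuple[float, float]]  # (min, max) in [0,1]^2

@dataclass
class ExemplarNode:
    name: str
    scale: float                 # world-space extent covered by this exemplar
    # Each edge pairs a region of this exemplar with the finer node
    # that refines it (a self-edge would make the range infinite).
    children: List[Tuple[Region, "ExemplarNode"]] = field(default_factory=list)

def finest_scale(node: ExemplarNode) -> float:
    """Smallest scale reachable by following refinement edges (acyclic case)."""
    if not node.children:
        return node.scale
    return min(finest_scale(child) for _, child in node.children)

rock = ExemplarNode("rock-face", scale=10.0)
grain = ExemplarNode("rock-grain", scale=0.1)
rock.children.append((((0.2, 0.2), (0.4, 0.4)), grain))
```

Synthesis would walk this graph coarse-to-fine, filling each refined region from the finer exemplar, which maps naturally onto a hierarchical, parallelizable algorithm.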