
    Model synthesis

    Three-dimensional models are extensively used in nearly all types of computer graphics applications. The demand for 3D models is large and growing. However, despite extensive work in modeling for over four decades, model generation remains a labor-intensive and difficult process even with the best available tools. We present a new procedural modeling technique called model synthesis that is designed to generate many classes of objects. Inspired by developments in texture synthesis, model synthesis automatically generates a large model that resembles a small example model provided by the user. Every small part of the generated model is identical to a small part of the example model. By altering the example model, a wide variety of objects can be produced. We present several different model synthesis algorithms and analyze their strengths and weaknesses. Discrete model synthesis generates models built out of small building blocks or model pieces. Continuous model synthesis generates models on a set of parallel planes. We also show how to incorporate several additional user-defined constraints to control the large-scale structure of the model, to control how the objects are distributed, and to generate symmetric models. We demonstrate the generality of the approach by showing many models produced with each algorithm, including cities, landscapes, spaceships, and castles. The models contain hundreds of thousands of model pieces and are generated in only a few minutes.
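
    To make the discrete variant concrete, below is a minimal 2D sketch of a model-synthesis-style fill in Python: adjacency pairs are harvested from an example grid, and output cells are assigned one at a time while incompatible labels are pruned from their neighbors. The scanline ordering, the pairwise adjacency extraction, and the absence of contradiction handling are simplifying assumptions for illustration, not the paper's exact algorithms.

    import random

    def extract_adjacency(example):
        """Harvest which label pairs may be adjacent, per axis, from the example grid."""
        allowed = {0: set(), 1: set()}  # axis -> allowed (label, label) pairs
        h, w = len(example), len(example[0])
        for y in range(h):
            for x in range(w):
                if x + 1 < w:
                    allowed[0].add((example[y][x], example[y][x + 1]))
                if y + 1 < h:
                    allowed[1].add((example[y][x], example[y + 1][x]))
        return allowed

    def synthesize(example, out_h, out_w, seed=0):
        rng = random.Random(seed)
        allowed = extract_adjacency(example)
        labels = {c for row in example for c in row}
        domains = [[set(labels) for _ in range(out_w)] for _ in range(out_h)]

        def propagate(y, x):
            """Prune neighbor labels that have lost every compatible partner."""
            stack = [(y, x)]
            while stack:
                cy, cx = stack.pop()
                for dy, dx, axis, flip in ((0, 1, 0, False), (0, -1, 0, True),
                                           (1, 0, 1, False), (-1, 0, 1, True)):
                    ny, nx = cy + dy, cx + dx
                    if not (0 <= ny < out_h and 0 <= nx < out_w):
                        continue
                    keep = {b for b in domains[ny][nx]
                            if any(((b, a) if flip else (a, b)) in allowed[axis]
                                   for a in domains[cy][cx])}
                    if keep != domains[ny][nx]:
                        domains[ny][nx] = keep
                        stack.append((ny, nx))

        for y in range(out_h):          # scanline order; real implementations must
            for x in range(out_w):      # also handle contradictions (empty domains)
                domains[y][x] = {rng.choice(sorted(domains[y][x]))}
                propagate(y, x)
        return [[next(iter(d)) for d in row] for row in domains]

    # e.g. synthesize(["ab", "ba"], 8, 8) fills an 8x8 grid with the
    # checkerboard pattern implied by the 2x2 example.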

    ShaDDR: Real-Time Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering

    We present ShaDDR, an example-based deep generative neural network that produces a high-resolution textured 3D shape through geometry detailization and conditional texture generation applied to an input coarse voxel shape. Trained on a small set of detailed and textured exemplar shapes, our method learns to detailize the geometry via multi-resolution voxel upsampling and to generate textures on voxel surfaces via differentiable rendering against exemplar texture images from a few views. The generation is real-time, taking less than 1 second to produce a 3D model with voxel resolutions up to 512^3. The generated shape preserves the overall structure of the input coarse voxel model, while the style of the generated geometric details and textures can be manipulated through learned latent codes. In the experiments, we show that our method can generate higher-resolution shapes with plausible and improved geometric details and clean textures compared to prior works. Furthermore, we showcase the ability of our method to learn geometric details and textures from shapes reconstructed from real-world photos. In addition, we have developed an interactive modeling application to demonstrate the generalizability of our method to various user inputs and the controllability it offers, allowing users to interactively sculpt a coarse voxel shape to define the overall structure of the detailized 3D shape.
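
    As a rough illustration of the geometry branch, here is a PyTorch sketch of multi-resolution voxel upsampling conditioned on a style code; the stage count, layer widths, and the way the style code is injected are assumptions made for illustration, not ShaDDR's actual architecture.

    import torch
    import torch.nn as nn

    class VoxelUpsampler(nn.Module):
        """Refine a coarse occupancy grid, doubling the resolution per stage."""
        def __init__(self, stages=3, width=32, style_dim=8):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(
                    nn.Conv3d(1 + style_dim, width, 3, padding=1),
                    nn.LeakyReLU(0.2),
                    nn.ConvTranspose3d(width, width, 4, stride=2, padding=1),  # 2x
                    nn.LeakyReLU(0.2),
                    nn.Conv3d(width, 1, 3, padding=1),
                )
                for _ in range(stages))

        def forward(self, vox, style):
            for block in self.blocks:
                # Broadcast the style code over the current grid and concatenate.
                s = style.view(style.size(0), -1, 1, 1, 1).expand(-1, -1, *vox.shape[2:])
                vox = torch.sigmoid(block(torch.cat([vox, s], dim=1)))
            return vox  # resolution grew by a factor of 2**stages

    # e.g. a 64^3 coarse shape becomes 512^3 with stages=3:
    # out = VoxelUpsampler()(torch.rand(1, 1, 64, 64, 64), torch.randn(1, 8))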

    Learning to Generate 3D Shapes from a Single Example

    Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate a deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid the large memory and computational cost of operating on the 3D volume, we build our generator atop the tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes, possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales while retaining the global structure of the reference shape. Through extensive evaluation, both qualitative and quantitative, we demonstrate that our model can generate 3D shapes of various types. (SIGGRAPH Asia 2022; 19 pages including a 6-page appendix, 17 figures. Project page: http://www.cs.columbia.edu/cg/SingleShapeGen)
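
    The tri-plane hybrid representation mentioned above can be sketched compactly: a 3D point gathers features bilinearly from three axis-aligned 2D feature planes, so storage and computation stay two-dimensional. The plane resolution, feature width, and occupancy decoder in this PyTorch sketch are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TriPlaneField(nn.Module):
        def __init__(self, res=64, feat=16):
            super().__init__()
            # Three learnable feature planes: XY, XZ, and YZ.
            self.planes = nn.Parameter(torch.randn(3, feat, res, res) * 0.01)
            self.decoder = nn.Sequential(nn.Linear(feat, 32), nn.ReLU(),
                                         nn.Linear(32, 1))  # occupancy logit

        def forward(self, pts):
            """pts: (N, 3) in [-1, 1]^3 -> (N, 1) occupancy logits."""
            projections = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]
            feats = 0
            for plane, uv in zip(self.planes, projections):
                grid = uv.view(1, -1, 1, 2)  # bilinear lookup into one plane
                sampled = F.grid_sample(plane[None], grid, align_corners=True)
                feats = feats + sampled[0, :, :, 0].t()  # (N, feat)
            return self.decoder(feats)

    # occ = TriPlaneField()(torch.rand(1024, 3) * 2 - 1)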

    Volumetric Procedural Models for Shape Representation

    This article describes a volumetric approach for procedural shape modeling and a new Procedural Shape Modeling Language (PSML) that facilitates the specification of these models. PSML provides programmers with the ability to describe shapes in terms of their 3D elements, where each element may be a semantic group of 3D objects, e.g., a brick wall, or an indivisible object, e.g., an individual brick. Modeling shapes in this manner facilitates the creation of models that more closely approximate the organization and structure of their real-world counterparts. As such, users may query these models for volumetric information such as the number, position, orientation and volume of 3D elements, which cannot be provided using surface-based model-building techniques. PSML also provides a number of new language-specific capabilities that allow for a rich variety of context-sensitive behaviors and post-processing functions. These capabilities include an object-oriented approach for model design, methods for querying the model for component-based information, and the ability to access model elements and components to perform Boolean operations on the model parts. PSML is open-source and includes freely available tutorial videos, demonstration code and an integrated development environment to support writing PSML programs.
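
    To illustrate the kind of volumetric, component-based query PSML makes possible (in plain Python rather than PSML syntax, which is not reproduced here), consider a tree of semantic elements queried for counts and total volume:

    from dataclasses import dataclass, field

    @dataclass
    class Element:
        name: str                      # semantic label, e.g. "wall" or "brick"
        volume: float = 0.0            # volume of an indivisible element (arbitrary units)
        children: list = field(default_factory=list)

        def count(self, label):
            """Number of elements carrying a semantic label in this subtree."""
            return (self.name == label) + sum(c.count(label) for c in self.children)

        def total_volume(self):
            return self.volume + sum(c.total_volume() for c in self.children)

    wall = Element("wall", children=[Element("brick", volume=0.125)
                                     for _ in range(480)])
    print(wall.count("brick"), wall.total_volume())  # 480 60.0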

    Procedural facade variations from a single layout


    The Semantic Building Modeler - A System for the Procedural Generation of 3D Building Models

    Computer-generated 3D models of buildings, cities, and whole landscapes are steadily gaining importance across different fields of application. Beyond obvious domains such as computer games or movies, there are many other areas, e.g., reconstructions of historic cities both for education and for further research. The most widely used method for producing city models is manual creation: a 3D artist uses modeling software to design every single component by hand. Especially for city models consisting of hundreds or thousands of buildings, this is a very time-consuming and thus expensive method. Procedural modeling offers an alternative to this manual approach by using a computer to generate models automatically. The history of procedural modeling algorithms goes back to the 1980s, when the first implementations of automatic texture synthesis were developed and published by Ken Perlin. Other important applications are the generation of plants based on formalisms such as L-systems, proposed by Aristid Lindenmayer, and particle systems, first proposed by William Reeves and widely used in computer graphics. Research into the applicability of these formalisms and techniques led to systems dedicated to the automatic computation of building and city models. Such systems are often divided into rule-based and procedural systems: rule-based systems use formalisms such as text-replacement systems, whereas procedural systems implement every step of the construction process in program code. The Semantic Building Modeler is a procedural system configured through user-provided XML parameters. The semantic meaning of these parameters is fixed through a tight coupling with their usage in the program code; in this respect, the semantics of the Semantic Building Modeler differ from other systems on today's market. This also eases entry for novice users gaining their first experience with procedural modeling. On the algorithmic side, the system proposes two new algorithms for the automatic creation and variation of building footprints, which enable the software to automatically create varied building structures. Additionally, the prototype implementation can be seen as an extensible framework: it offers a wide range of algorithms and methods that can be used for future extensions of the current system. The prototype also contains an implementation of the weighted straight skeleton algorithm, techniques for the distributed storage of configuration fragments, the procedural construction of building components such as cornices, and much more. The prototypical realization of the developed algorithms is a proof-of-concept implementation. It demonstrates that semantically based parameters and the procedural creation of complex, visually appealing geometry can go hand in hand. This opens the powerful algorithmic construction of building and city models to a large group of users who have experience neither in programming nor in the manual design of 3D models.
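
    As a toy illustration of footprint variation (not a reproduction of the thesis's two algorithms), one can derive varied orthogonal footprints from a rectangle by cutting a random axis-aligned notch from one corner:

    import random

    def notched_footprint(w, d, rng):
        """A w x d rectangle with an L-shaped notch cut from one corner,
        returned as a counterclockwise list of (x, y) vertices."""
        cw = rng.uniform(0.1, 0.4) * w   # notch width
        cd = rng.uniform(0.1, 0.4) * d   # notch depth
        return [(0, 0), (w, 0), (w, d - cd), (w - cw, d - cd), (w - cw, d), (0, d)]

    print(notched_footprint(10.0, 6.0, random.Random(7)))

    Repeating such cuts, and randomizing which corner is modified, already yields a family of plausible, varied footprints to feed into the rest of a procedural building pipeline.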

    Emergent nonlocal combinatorial design rules for multimodal metamaterials

    Combinatorial mechanical metamaterials feature spatially textured soft modes that yield exotic and useful mechanical properties. While a single soft mode can often be rationally designed by following a set of tiling rules for the building blocks of the metamaterial, it is an open question what design rules are required to realize multiple soft modes. Multimodal metamaterials would allow for advanced mechanical functionalities that can be selected on the fly. Here we introduce a transfer-matrix-like framework to design multiple soft modes in combinatorial metamaterials composed of aperiodic tilings of building blocks. We use this framework to derive rules for multimodal designs for a specific family of building blocks. We show that such designs require a large number of degeneracies between constraints, and we find precise rules on the real-space configuration that allow such degeneracies. These rules are significantly more complex than the simple tiling rules that emerge for single-mode metamaterials. For the specific example studied here, they can be expressed as local rules for tiles composed of pairs of building blocks, in combination with a nonlocal rule in the form of a global constraint on the type of tiles that are allowed to appear together anywhere in the configuration. This nonlocal rule is exclusive to multimodal metamaterials and exemplifies the complexity of the rational design of multimodal metamaterials. Our framework is a first step towards a systematic design strategy for multimodal metamaterials with spatially textured soft modes.
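
    The counting argument at the heart of this abstract can be illustrated numerically: soft modes span the null space of a linear constraint matrix, so degenerate (linearly dependent) constraints raise the mode count. The NumPy sketch below uses a toy matrix, not the paper's transfer-matrix construction.

    import numpy as np

    def count_soft_modes(C, tol=1e-9):
        """Nullity of the constraint matrix C (with C @ u = 0 encoding the
        building-block compatibility constraints on displacements u)."""
        s = np.linalg.svd(C, compute_uv=False)
        rank = int(np.sum(s > tol * s.max()))
        return C.shape[1] - rank

    # Two constraints on four degrees of freedom that are degenerate
    # (linearly dependent), leaving 3 soft modes instead of 2.
    C = np.array([[1.0, -1.0, 0.0, 0.0],
                  [2.0, -2.0, 0.0, 0.0]])
    print(count_soft_modes(C))  # 3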

    Urban space simulation based on wave function collapse and convolutional neural network

    In this paper, we propose an urban-space synthesis pipeline that leverages Wave Function Collapse (WFC) and Convolutional Neural Networks (CNNs) to teach the computer how to design urban space. First, we establish an urban design database. Then, urban road networks, urban block spatial forms, and urban building function layouts are generated by WFC and CNNs and evaluated by designers afterwards. Finally, the 3D models are generated. We demonstrate the feasibility of our pipeline through a case study of the North Extension of the Central Green Axis in Wenzhou. This pipeline improves the efficiency of urban design and provides new ways of thinking for architecture and urban design.
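
    A compressed PyTorch sketch of the generate-then-evaluate loop follows; the random-layout stub and the untrained scorer are placeholders for the paper's WFC runs and for CNNs trained on the urban design database.

    import torch
    import torch.nn as nn

    scorer = nn.Sequential(                    # toy CNN over a one-hot tile map
        nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def propose_layout(h=32, w=32, tiles=4):
        """Stand-in for a WFC run: a random one-hot (tiles, h, w) tile map."""
        idx = torch.randint(tiles, (h, w))
        return nn.functional.one_hot(idx, tiles).permute(2, 0, 1).float()

    # Generate candidate layouts and keep the highest-scoring one for review.
    candidates = torch.stack([propose_layout() for _ in range(16)])
    with torch.no_grad():
        scores = scorer(candidates).squeeze(1)
    best = candidates[scores.argmax()]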

    Embodied Interactions for Spatial Design Ideation: Symbolic, Geometric, and Tangible Approaches

    Computer interfaces are evolving from mere aids for number crunching into active partners in creative processes such as art and design. This is, to a great extent, the result of the mass availability of new interaction technology such as depth sensing, sensor integration in mobile devices, and increasing computational power. We are now witnessing the emergence of a maker culture that can elevate art and design beyond the purview of enterprises and professionals such as trained engineers and artists. Materializing this transformation is not trivial; everyone has ideas, but only a select few can bring them to reality. The challenge lies in recognizing human actions and interpreting them as design intent.