Image-based tree variations
The automatic generation of realistic vegetation that closely reproduces the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still-image generation in areas such as architectural rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes, and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution for adding vegetation variety to outdoor scenes.
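The contour-synthesis step described above can be illustrated with a minimal sketch. The abstract does not specify how contours are put into correspondence, so this example makes the simplifying assumption that each closed contour is resampled to a fixed number of points by arc length before blending; the function names (`resample_contour`, `blend_contours`) are hypothetical, not from the paper.

```python
import numpy as np

def resample_contour(points, n=256):
    """Resample a closed 2D contour to n points spaced uniformly by arc length."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])                 # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    u = np.linspace(0.0, t[-1], n, endpoint=False)
    x = np.interp(u, t, closed[:, 0])
    y = np.interp(u, t, closed[:, 1])
    return np.stack([x, y], axis=1)

def blend_contours(contours, weights):
    """Convex combination of exemplar contours (weights >= 0, summing to 1)."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    resampled = np.stack([resample_contour(c) for c in contours])
    return np.einsum('k,kij->ij', w, resampled)        # weighted sum over exemplars
```

Blending a square outline with a diamond at equal weights, for instance, yields an intermediate octagon-like silhouette; varying the weights traces a family of new tree outlines between the exemplars.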
Modeling and generating moving trees from video
We present a probabilistic approach for the automatic production of tree models with convincing 3D appearance and motion. The only input is a video of a moving tree, which provides an initial dynamic tree model that is used to generate new individual trees of the same type. Our approach combines global and local constraints to construct a dynamic 3D tree model from a 2D skeleton. Our modeling takes into account factors such as the shape of branches, the overall shape of the tree, and physically plausible motion. Furthermore, we provide a generative model that creates multiple trees in 3D, given a single example model. This means that users no longer have to model each tree individually or specify rules to make new trees. Results for different species are presented and compared to both the reference input data and state-of-the-art alternatives.
Models of shrub shoots or algae in architecture. Or how to replicate a plant through Diffusion-Limited Aggregation (DLA)
This article discusses the development of a design method for branched structures with seaweed-like or shrub-like forms, based on diffusion-limited aggregation (DLA) to define their geometry. DLA has been used to reproduce convincing or plausible growth rules learned from programmable simulation environments such as NetLogo (Wilensky 1999).
In particular, the tools that reproduce the simulation learned from NetLogo are the Grasshopper software to generate the geometry, the Exoskeleton plug-in to obtain enveloping surfaces around these wireframe structures, and the Weaverbird plug-in to smooth transitions between mesh faces. This last tool smooths the mesh through iterations that may or may not increase the number of faces, which helps illuminate some theories about smooth transitions at the bifurcations of natural structures (Mattheck 1990). The article also reflects on how physical-kinetic models based on mechanisms inspired by Artificial Intelligence help share methods of analysis with other disciplines such as cybernetics, fluid dynamics, and the social and environmental sciences. Why is this possible? Because of a rigour of language that consistently refers to populations of individuals, life cycles, multi-variable systems, reciprocity rules, and pacts with neighbouring particles.
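The DLA growth rule at the heart of this method can be sketched in a few lines. This is a generic on-lattice DLA toy, not the authors' Grasshopper implementation: a seed particle sticks at the centre, and random walkers attach when they touch the cluster. All names and parameters (`dla_grid`, `size`, `particles`) are illustrative assumptions.

```python
import random

def dla_grid(size=61, particles=200, seed=1):
    """Minimal on-lattice diffusion-limited aggregation (DLA).

    A seed cell sticks at the centre; each walker spawns at a random
    free cell and random-walks (with periodic boundaries, for simplicity)
    until it is adjacent to the cluster, where it sticks.
    Returns the set of aggregated (x, y) cells.
    """
    rng = random.Random(seed)
    c = size // 2
    cluster = {(c, c)}                               # the fixed seed particle
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(particles):
        x, y = rng.randrange(size), rng.randrange(size)
        while (x, y) in cluster:                     # never spawn on the cluster
            x, y = rng.randrange(size), rng.randrange(size)
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))                  # walker sticks to a neighbour
                break
            dx, dy = rng.choice(moves)
            x = (x + dx) % size                      # wrap-around random walk
            y = (y + dy) % size
    return cluster
```

Plotting the returned cells reveals the characteristic branched, shrub-like morphology; in the article's pipeline, such an aggregate would then be thickened with Exoskeleton and smoothed with Weaverbird.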
TreeSketchNet: From Sketch To 3D Tree Parameters Generation
3D modeling of non-linear objects from stylized sketches is a challenge even for experts in Computer Graphics (CG). Extrapolating object parameters from a stylized sketch is a complex and cumbersome task. In the present study, we propose a broker system that mediates between the modeler and the 3D modelling software and can transform a stylized sketch of a tree into a complete 3D model. The input sketches do not need to be accurate or detailed; they only need to represent a rudimentary outline of the tree that the modeler wishes to 3D-model. Our approach is based on a well-defined Deep Neural Network (DNN) architecture, which we call TreeSketchNet (TSN), that uses convolutions to generate Weber and Penn parameters that the modelling software can interpret to generate a 3D model of a tree from a simple sketch. The training dataset consists of Synthetically-Generated (SG) sketches associated with Weber-Penn parameters produced by a dedicated Blender modelling add-on. The accuracy of the proposed method is demonstrated by testing the TSN with both synthetic and hand-made sketches. Finally, we provide a qualitative analysis of our results by evaluating the coherence of the predicted parameters with several distinguishing features.
Interactive design of botanical trees using freehand sketches and example-based editing
We present a system for quickly and easily designing three-dimensional (3D) models of botanical trees using freehand sketches and additional example-based editing operations. The system generates 3D geometry from a two-dimensional (2D) sketch under the assumption that trees spread their branches so that the distances between the branches are as large as possible. The user can apply additional gesture-based editing operations such as adding, cutting, and erasing branches. Our system also supports example-based editing modes in which many branches and leaves are generated using a manually designed tree as an example. User experience demonstrates that our interface lets novices design a variety of reasonably natural-looking trees interactively and quickly.
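The branch-spreading assumption above can be illustrated with a toy version of the idea: when lifting a 2D sketch to 3D, assign each branch an azimuth angle greedily so that it is as far as possible from the azimuths already taken. This is not the paper's algorithm, just a minimal sketch of the maximize-spacing principle; the function name `assign_azimuths` and the candidate discretization are assumptions.

```python
import math

def assign_azimuths(n_branches, candidates=360):
    """Greedily pick azimuth angles so branches are maximally spread apart.

    Each new branch takes the candidate angle whose minimum circular
    distance to all previously placed branches is largest.
    """
    def circ_dist(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    angles = [0.0]                                   # first branch fixes the frame
    cand = [2 * math.pi * k / candidates for k in range(candidates)]
    for _ in range(n_branches - 1):
        best = max(cand, key=lambda a: min(circ_dist(a, b) for b in angles))
        angles.append(best)
    return angles
```

For four branches this yields azimuths at 0, 180, 90, and 270 degrees, i.e. the branches fan out to fill the available angular space, mirroring the paper's assumption that inter-branch distances are as large as possible.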
Integration of sketch-based ideation and 3D modeling with CAD systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis is concerned with the study of how sketch-based systems can be improved to enhance the idea-generation process in the conceptual design stage. It is also concerned with achieving integration between sketch-based systems and CAD systems to complete the digitization of the design process, since the sketching phase is still not integrated with the other phases owing to its different nature and its own incomplete digitization. Previous studies identified three main related issues: the sketching process, sketch-based modeling, and the integration between the digitized design phases. The thesis is motivated by the desire to improve sketch-based modeling to support the idea-generation process; unlike previous studies that focused only on the technical or drawing part of sketching, it concentrates on the mental part of the sketching process, which plays a key role in developing ideas in design. Another motivation is to achieve integration between sketch-based systems and CAD systems so that 3D models produced by sketching can be edited in the detailed design stage. Two main contributions are addressed in this thesis. The first is a new approach to designing sketch-based systems that better supports idea generation by separating thinking and idea development from the 3D modeling process; this separation allows designers to think freely and concentrate on their ideas rather than on 3D modeling. The second is the integration of gesture-based systems with CAD systems by using an IGES file to exchange data between systems, together with a new method of organizing data within the file in an order that makes it better understood by the feature recognition embedded in commercial CAD systems. This study is funded by the Ministry of Higher Education of Egypt.