Parametric Procedural Models for 3D Object Retrieval, Classification and Parameterization
The number of 3D objects has grown over the last decades, and we can expect it to grow much further in the future. 3D objects are also becoming more and more accessible to non-expert users. The growing amount of available 3D data is welcome for everyone working with this type of data, as the creation and acquisition of many 3D objects is still costly. However, the vast majority of available 3D objects exist only as pure polygon meshes. We arguably cannot assume that meta-data and additional semantics are delivered together with 3D objects that stem from non-expert users or from automatic 3D scans of real objects. For this reason, content-based retrieval and classification techniques for 3D objects have been developed.
Many systems address the completely unsupervised case. However, previous work has shown that the performance of these tasks can be increased considerably by using any kind of prior knowledge. In this thesis I use procedural models as prior knowledge. Procedural models describe the construction process of a 3D object instead of explicitly describing the components of its surface. These models can include parameters in the construction process to generate variations of the resulting 3D object. Procedural representations are present in many domains, as these implicit representations are far superior to explicit representations in terms of content generation, flexibility and reusability. Therefore, using a procedural representation always has the potential to outperform other approaches in many respects. The usage of procedural models in 3D object retrieval and classification is not well researched, as this powerful representation can be arbitrarily complex to create and handle. In the 3D object domain, procedural models are mostly used for highly regularized structures such as buildings and trees.
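To make the idea concrete, the following minimal sketch (not taken from the thesis; the table model and its parameters are purely illustrative) shows a parametric procedural model in Python: the procedure describes the construction process from simple boxes, and changing its parameters yields variations of the same object type without touching the construction code.

import numpy as np

def box(center, size):
    """Return (vertices, faces) of an axis-aligned box as a triangle mesh."""
    cx, cy, cz = center
    sx, sy, sz = (s / 2.0 for s in size)
    v = np.array([[cx + dx * sx, cy + dy * sy, cz + dz * sz]
                  for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)])
    f = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],   # x faces
                  [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],   # y faces
                  [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])  # z faces
    return v, f

def procedural_table(width=1.0, depth=0.6, height=0.75,
                     top_thickness=0.04, leg_thickness=0.05):
    """Construction process of a table; the parameters define its variations."""
    parts = [box((0, 0, height - top_thickness / 2),
                 (width, depth, top_thickness))]            # table top
    for sx in (-1, 1):                                      # four legs
        for sy in (-1, 1):
            x = sx * (width / 2 - leg_thickness / 2)
            y = sy * (depth / 2 - leg_thickness / 2)
            parts.append(box((x, y, (height - top_thickness) / 2),
                             (leg_thickness, leg_thickness,
                              height - top_thickness)))
    # Merge the parts into one mesh (offset face indices per part).
    verts = np.vstack([v for v, _ in parts])
    offsets = np.cumsum([0] + [len(v) for v, _ in parts[:-1]])
    faces = np.vstack([f + o for (_, f), o in zip(parts, offsets)])
    return verts, faces

verts, faces = procedural_table(width=1.4, height=0.9)  # a wider, taller variant
print(verts.shape, faces.shape)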
However, procedural models can substantially improve 3D object retrieval and classification, as this representation offers a persistent, reusable and complete description of a type of object. This description can be used for queries and class definitions without any additional data. Furthermore, the initial classification can be improved further by using a procedural model: a procedural model makes it possible to completely parameterize an unknown object and to identify further characteristics of different class members. The only drawback is that the manual design and creation of specialized procedural models is itself very costly. In this thesis I concentrate on the generalization and automation of procedural models for application in 3D object retrieval and 3D object classification.
For the generalization and automation of procedural models I propose to offer different levels of interaction to a user, to satisfy the possible needs for control and automation. This thesis presents new approaches for different levels of automation: the automatic generation of procedural models from a single exemplary 3D object, the semi-automatic creation of a procedural model with a sketch-based modeling tool, and the manual definition of a procedural model with a restricted variation space. The second important step is the insertion of parameters into the procedural model to define the variations of the resulting 3D object. For this step I also propose several possibilities for the optimal level of control and automation: an automatic parameter detection technique, a semi-automatic deformation-based insertion, and an interface for manually inserting parameters by choosing one of the offered insertion principles. It is also possible to insert parameters into the procedures by hand if the user needs full control at the lowest level.
To enable the use of procedural models directly in 3D object retrieval and classification, I propose descriptor-based and deep-learning-based approaches. Descriptors measure the difference between 3D objects. By using descriptors as the comparison mechanism, we can define the distance between procedural models and other objects and order them by similarity. The procedural models are sampled and compared to produce an optimal object retrieval list. We can also use procedural models directly as the data basis for retraining a convolutional neural network. By training on a set of procedural models, we can directly classify new, unknown objects without any further large training database. Additionally, I propose a new multi-layered parameter estimation approach that uses three different comparison measures to parameterize an unknown object. Hence, an unknown object is not only classified with a procedural model, but the approach is also able to gather new information about the characteristics of the object by using the procedural model to parameterize the unknown object.
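A rough sketch of the descriptor-based comparison is given below. The D2 shape distribution used as the descriptor and the procedural_model interface (a callable returning sampled surface points) are assumptions made for illustration, not necessarily the measures used in the thesis.

import numpy as np

def d2_descriptor(points, bins=32, n_pairs=20000, rng=np.random.default_rng(0)):
    """Histogram of distances between random surface-point pairs, normalized."""
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / (d.max() + 1e-9), bins=bins, range=(0, 1))
    return hist / hist.sum()

def model_distance(procedural_model, parameter_samples, query_points):
    """Distance of a query object to the closest sampled variant of the model."""
    q = d2_descriptor(query_points)
    dists = []
    for params in parameter_samples:
        points = procedural_model(**params)   # assumed to return surface samples
        dists.append(np.linalg.norm(d2_descriptor(points) - q, ord=1))
    return min(dists)

# Ranking several query objects against one procedural model:
# scores = {name: model_distance(table_model, samples, pts)
#           for name, pts in queries.items()}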
As a result, the combination of procedural models with the tasks of 3D object retrieval and classification leads to a meta-concept of a holistically seamless system for defining, generating, comparing, identifying, retrieving, recombining, editing and reusing 3D objects.
Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications
We present an overview and evaluation of a new, systematic approach for the generation of highly realistic, annotated synthetic data for training deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach that enables high variability coupled with physically accurate image synthesis, a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural network's performance and that even modest implementation efforts produce state-of-the-art results.

Comment: The project web page at http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the paper with high-resolution images as well as additional material.
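As a hedged illustration of the training protocol described above (the network, label set and random stand-in data below are placeholders, not the paper's actual setup), pre-training a segmentation model on synthetic data followed by fine-tuning on organic data could look roughly like this in PyTorch:

import torch
from torch.utils.data import DataLoader, TensorDataset
import torchvision

num_classes = 19  # e.g. a Cityscapes-like label set (assumption)

def dummy_loader(n_images):
    # Stand-in data: random tensors in place of rendered and captured images.
    images = torch.randn(n_images, 3, 128, 128)
    labels = torch.randint(0, num_classes, (n_images, 128, 128))
    return DataLoader(TensorDataset(images, labels), batch_size=4, shuffle=True)

def train(model, loader, epochs, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            logits = model(images)["out"]      # (N, num_classes, H, W)
            loss = loss_fn(logits, labels)
            loss.backward()
            opt.step()

model = torchvision.models.segmentation.deeplabv3_resnet50(num_classes=num_classes)

train(model, dummy_loader(32), epochs=1, lr=1e-2)  # "synthetic" pre-training stage
train(model, dummy_loader(8), epochs=1, lr=1e-3)   # "organic" fine-tuning stage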
Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection
Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crop and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which effectively penalizes the usage of data-driven techniques. In this paper, we address this problem by proposing a novel and effective approach that aims to dramatically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters, and exploiting a few real-world textures, it is possible to render a large number of realistic views of an artificial agricultural scenario with no effort. The generated data can be directly used to train the model or to supplement real-world images. We validate the proposed methodology by using a modern deep learning based image segmentation architecture as a testbed. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach.

Comment: To appear in IEEE/RSJ IROS 201
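The randomization step could be sketched roughly as follows; the parameter names and the render_scene callback are hypothetical and only illustrate the idea of sampling the key scene features while keeping the per-pixel labels that the generator knows by construction.

import random

CROP_SPECIES = ["sugar_beet"]                     # illustrative choices only
WEED_SPECIES = ["capsella", "galium"]
SOILS = ["dry_clay", "wet_loam", "gravel"]

def sample_scene_parameters(rng):
    """Randomize the key features of the target environment."""
    return {
        "crop": rng.choice(CROP_SPECIES),
        "weeds": rng.sample(WEED_SPECIES, k=rng.randint(0, len(WEED_SPECIES))),
        "plant_count": rng.randint(5, 40),
        "soil_texture": rng.choice(SOILS),
        "sun_elevation_deg": rng.uniform(15, 85),
        "camera_height_m": rng.uniform(0.6, 1.2),
    }

def generate_dataset(n_images, render_scene, seed=0):
    """render_scene(params) -> (rgb_image, label_mask); the labels are exact
    because the generator placed every plant itself."""
    rng = random.Random(seed)
    return [render_scene(sample_scene_parameters(rng)) for _ in range(n_images)]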
Procedural Historic Building Information Modelling (HBIM) For Recording and Documenting European Classical Architecture
Procedural Historic Building Information Modelling (HBIM) is a new approach for modelling historic buildings which develops full building information models from remotely sensed data. HBIM consists of a novel library of reusable parametric objects, based on historic architectural data, and a system for mapping these library objects to survey data. Using concepts from procedural modelling, a new set of rules and algorithms has been developed to automatically combine HBIM library objects and generate different building arrangements by altering parameters. This is a semi-automatic process in which the required building structure and objects are first generated automatically and then refined to match survey data.
The encoding of architectural rules and proportions into procedural modelling rules helps to reduce the amount of further manual editing that is required. The ability to transfer survey data such as building footprints or cut-sections directly into a procedural modelling rule also greatly reduces the amount of further editing required. These capabilities of procedural modelling enable a more automated and efficient overall workflow for reconstructing BIM geometry from point cloud data. This document outlines the research carried out to evaluate the suitability of a procedural modelling approach for improving the process of reconstructing building geometry from point clouds. To test this hypothesis, three procedural modelling prototypes were designed and implemented for BIM software. Quantitative accuracy testing and qualitative end-user scenario testing methods were used to evaluate the research hypothesis. The results obtained indicate that procedural modelling has the potential to achieve more accurate, automated and easier generation of BIM geometry from point clouds.
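As a loose illustration of combining reusable parametric library objects with a procedural rule (the column object, the proportion ratio and the spacing rule are simplified assumptions, not the HBIM prototypes themselves), a rule might place a colonnade along a surveyed facade width:

from dataclasses import dataclass

@dataclass
class DoricColumn:                  # one reusable parametric library object
    height: float
    base_diameter: float
    position_x: float

def colonnade(facade_width, column_height,
              height_to_diameter=8.0, intercolumniation=2.25):
    """Spacing expressed as a multiple of the column diameter; both ratios
    are illustrative stand-ins for encoded architectural proportions."""
    diameter = column_height / height_to_diameter
    spacing = intercolumniation * diameter
    n = max(2, int(facade_width // spacing) + 1)
    xs = [i * facade_width / (n - 1) for i in range(n)]
    return [DoricColumn(column_height, diameter, x) for x in xs]

# Fit to survey data: facade width and column height measured from a point cloud.
columns = colonnade(facade_width=18.4, column_height=6.1)
print(len(columns), columns[0])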
Procedural function-based modelling of volumetric microstructures
We propose a new approach to modelling heterogeneous objects containing internal volumetric structures whose details are orders of magnitude smaller than the overall size of the object. The proposed function-based procedural representation provides compact, precise, and arbitrarily parameterised models of coherent microstructures, which can undergo blending, deformations, and other geometric operations, and can be directly rendered and fabricated without generating any auxiliary representations (such as polygonal meshes and voxel arrays). In particular, modelling of regular lattices and cellular microstructures as well as irregular porous media is discussed and illustrated. We also present a method to estimate parameters of a given model by fitting it to microstructure data obtained with magnetic resonance imaging and other measurements of natural and artificial objects. Examples of rendering and digital fabrication of microstructure models are presented.
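As a small illustration of a function-based microstructure (the gyroid shell is a common textbook example of a procedurally defined lattice, not necessarily one of the paper's models), an implicit function with cell size and wall thickness as parameters can be evaluated at any resolution without an intermediate mesh or voxel grid:

import numpy as np

def gyroid_shell(points, cell_size=1.0, thickness=0.15):
    """Negative inside the lattice walls; points is an (..., 3) array in model units."""
    k = 2.0 * np.pi / cell_size
    x, y, z = (k * points[..., i] for i in range(3))
    g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    return np.abs(g) - thickness

# Evaluate on a coarse grid purely for inspection; the model itself has no grid.
axis = np.linspace(0.0, 2.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
inside = gyroid_shell(grid, cell_size=0.5, thickness=0.2) <= 0.0
print("solid volume fraction:", inside.mean())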