
    User-adaptive sketch-based 3D CAD model retrieval

    3D CAD models are an important digital resource in the manufacturing industry. 3D CAD model retrieval has become a key technology in product lifecycle management, enabling the reuse of existing design data. In this paper, we propose a new method to retrieve 3D CAD models based on 2D pen-based sketch inputs. Sketching is a common and convenient method for communicating design intent during early stages of product design, e.g., conceptual design. However, converting sketched information into precise 3D engineering models is cumbersome, and much of this effort can be avoided by reusing existing data. To this end, we present a user-adaptive sketch-based retrieval method. The contributions of this work are twofold. Firstly, we propose a statistical measure for CAD model retrieval: the measure is based on sketch similarity and accounts for users’ drawing habits. Secondly, for 3D CAD models in the database, we propose a sketch generation pipeline that represents each 3D CAD model by a small yet sufficient set of sketches that are perceptually similar to human drawings. User studies and experiments that demonstrate the effectiveness of the proposed method in the design process are presented.
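
    As a rough illustration of the retrieval idea (not the paper's actual statistical measure), ranking could combine a best-match similarity between the query sketch and each model's pre-generated view sketches with a per-user prior; the function names, the descriptor representation, and the prior below are all assumptions of this sketch:

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def rank_models(query_descriptor, model_sketches, user_priors=None):
        # query_descriptor: feature vector of the user's input sketch (hypothetical).
        # model_sketches:   dict mapping model id -> list of descriptors of the
        #                   sketches generated for that 3D CAD model.
        # user_priors:      optional dict mapping model id -> weight standing in
        #                   for a learned drawing-habit prior.
        scores = {}
        for model_id, descriptors in model_sketches.items():
            best = max(cosine_similarity(query_descriptor, d) for d in descriptors)
            prior = user_priors.get(model_id, 1.0) if user_priors else 1.0
            scores[model_id] = best * prior
        # Highest-scoring models first.
        return sorted(scores, key=scores.get, reverse=True)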

    Beam-colored Sketch and Image-based 3D Continuous Wireframe Reconstruction with different Materials and Cross-Sections

    The automated reverse engineering of wireframes is a common task in topology optimization, fast concept design, bionics, and point cloud reconstruction. This article deals with the skeleton-based reconstruction of sketches in 2D images. The result is a flexible, at least C¹-continuous shape description.
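
    To make the continuity claim concrete: a generic way to obtain an at least C¹-continuous curve from 2D skeleton samples is a chord-length-parameterized cubic spline, which is in fact C²-continuous. This is a textbook construction, not the authors' reconstruction pipeline:

    import numpy as np
    from scipy.interpolate import CubicSpline

    def fit_wire_curve(skeleton_points):
        # Chord-length parameterization keeps the fit well behaved when the
        # skeleton samples extracted from the sketch are unevenly spaced.
        pts = np.asarray(skeleton_points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        # A cubic spline is C2-continuous, hence "at least C1".
        return CubicSpline(t, pts)

    curve = fit_wire_curve([(0, 0), (1, 2), (3, 3), (5, 2)])
    samples = curve(np.linspace(0.0, curve.x[-1], 100))  # 100 points on the curve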

    3D object retrieval and segmentation: various approaches including 2D Poisson histograms and 3D electrical charge distributions

    Nowadays 3D models play an important role in many applications: viz. games, cultural heritage, medical imaging, etc. Due to the fast growth in the number of available 3D models, understanding, searching, and retrieving such models have become interesting fields within computer vision. In order to search and retrieve 3D models, we present two different approaches. The first is based on solving the Poisson equation over 2D silhouettes of the models. This method uses 60 different silhouettes, which are automatically extracted from different view angles. Solving the Poisson equation for each silhouette assigns a number to each pixel as its signature. Accumulating these signatures generates a final histogram-based descriptor for each silhouette, which we call a SilPH (Silhouette Poisson Histogram). For the second approach, we propose two new robust shape descriptors based on the distribution of charge density on the surface of a 3D model. The Finite Element Method is used to calculate the charge density on each triangular face of each model as a local feature. We then utilize the Bag-of-Features and concentric sphere frameworks to perform global matching using these local features. In addition to examining the retrieval accuracy of the descriptors in comparison to state-of-the-art approaches, the retrieval speeds as well as robustness to noise and deformation on different datasets are investigated. Furthermore, to understand new complex models, we have also utilized the distribution of electrical charge to propose a system that decomposes models into meaningful parts. Our robust, efficient and fully-automatic segmentation approach is able to specify the segments attached to the main part of a model as well as to locate the boundary parts of the segments. The segmentation ability of the proposed system is examined on standard datasets, and its timing and accuracy are compared with existing state-of-the-art approaches.
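
    A minimal sketch of the SilPH idea under stated assumptions: the Jacobi iteration, the unit grid spacing, and the bin count below are choices of this illustration, not details taken from the paper:

    import numpy as np

    def silhouette_poisson_histogram(mask, bins=16, iters=500):
        # mask: 2D binary array, True inside the silhouette.
        u = np.zeros(mask.shape, dtype=float)
        inside = mask.astype(bool)
        for _ in range(iters):
            # Jacobi update for Laplacian(u) = -1 on a unit grid:
            # u <- (sum of 4 neighbours + 1) / 4 inside the silhouette.
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1) + 1.0)
            u = np.where(inside, avg, 0.0)  # Dirichlet condition: u = 0 outside
        hist, _ = np.histogram(u[inside], bins=bins)
        return hist / max(hist.sum(), 1)  # normalized per-silhouette signature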

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

    Parametric Procedural Models for 3D Object Retrieval, Classification and Parameterization

    The amount of available 3D objects has grown over the last decades, and we can expect it to grow much further in the future. 3D objects are also becoming more and more accessible to non-expert users. The growing amount of available 3D data is welcome for everyone working with this type of data, as the creation and acquisition of many 3D objects is still costly. However, the vast majority of available 3D objects exist only as pure polygon meshes: we arguably cannot assume that metadata and additional semantics are delivered together with 3D objects that stem from non-experts or from automatic 3D scans of real objects. For this reason, content-based retrieval and classification techniques for 3D objects have been developed. Many systems address the completely unsupervised case, but previous work has shown that the performance of these tasks can be increased substantially by using any kind of prior knowledge.

    In this thesis I use procedural models as prior knowledge. Procedural models describe the construction process of a 3D object instead of explicitly describing the components of its surface, and they can expose parameters of that construction process to generate variations of the resulting object. Procedural representations are present in many domains, as these implicit representations are vastly superior to explicit ones in terms of content generation, flexibility and reusability; using a procedural representation therefore always has the potential to outclass other approaches in many respects. Nevertheless, the usage of procedural models in 3D object retrieval and classification is not well researched, as this powerful representation can be arbitrarily complex to create and handle, and in the 3D object domain procedural models are mostly used for highly regularized structures such as buildings and trees. Procedural models can nonetheless greatly improve 3D object retrieval and classification, since this representation offers a persistent and reusable full description of a type of object that can be used for queries and class definitions without any additional data. Furthermore, the initial classification can be refined with a procedural model: it allows an unknown object to be completely parameterized and further characteristics of different class members to be identified. The only drawback is that the manual design and creation of specialized procedural models is itself very costly.

    In this thesis I concentrate on the generalization and automation of procedural models for 3D object retrieval and classification. To satisfy the varying needs for control and automation, I offer the user different levels of interaction and present new approaches for each level: the automatic generation of a procedural model from a single exemplary 3D object, the semi-automatic creation of a procedural model with a sketch-based modelling tool, and the manual definition of a procedural model with a restricted variation space. The second important step is the insertion of parameters into the procedural model, which defines the variations of the resulting 3D object. For this step I likewise propose several levels of control and automation: an automatic parameter detection technique, a semi-automatic deformation-based insertion, and an interface for manually inserting parameters by choosing one of the offered insertion principles; it is also possible to insert parameters into the procedures by hand if the user needs full control at the lowest level.

    To enable the direct use of procedural models in 3D object retrieval and classification, I propose descriptor-based and deep-learning-based approaches. Descriptors measure the difference between 3D objects; by using a descriptor as the comparison algorithm, we can define the distance between procedural models and other objects and order them by similarity, sampling and comparing the procedural models to obtain an optimal retrieval list. We can also use procedural models directly as the training data for retraining a convolutional neural network: a network trained on a set of procedural models can classify new unknown objects without any further large training database. Additionally, I propose a new multi-layered parameter estimation approach that uses three different comparison measures to parameterize an unknown object. Hence, an unknown object is not only classified by means of a procedural model; the approach also gathers new information about the object's characteristics by using the procedural model to parameterize it. As a result, the combination of procedural models with 3D object retrieval and classification leads to the meta-concept of a holistically seamless system for defining, generating, comparing, identifying, retrieving, recombining, editing and reusing 3D objects.
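
    A toy illustration of the central mechanism, hypothetical throughout: a procedural generator whose parameters are searched so that its output matches an unknown object under a simple descriptor distance. The thesis uses far richer generators and a multi-layered estimation with three comparison measures rather than this brute-force search:

    import numpy as np

    def procedural_table(leg_height, top_width, top_depth, n=500):
        # Toy procedural model: a parameterized point cloud whose shape is
        # fully determined by three construction parameters. A real procedural
        # model would encode the entire construction process.
        rng = np.random.default_rng(0)
        return rng.uniform([-top_width / 2, -top_depth / 2, leg_height],
                           [top_width / 2, top_depth / 2, leg_height + 0.05],
                           (n, 3))

    def descriptor(points, bins=8):
        # Stand-in shape descriptor: histogram of distances from the centroid.
        d = np.linalg.norm(points - points.mean(axis=0), axis=1)
        hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
        return hist / hist.sum()

    def parameterize(target_points, candidate_params):
        # Search for the parameters that best explain an unknown object
        # under the descriptor distance.
        target = descriptor(target_points)
        return min(candidate_params,
                   key=lambda p: np.abs(descriptor(procedural_table(*p)) - target).sum())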

    Efficient sketch-based 3D character modelling

    Sketch-based modelling (SBM) has undergone substantial research over the past two decades. In the early days, researchers aimed at developing techniques for modelling architectural and mechanical models through sketching. With the advancement of technology used in designing visual effects for film, TV and games, the demand for highly realistic 3D character models has skyrocketed. To allow artists to create 3D character models quickly, researchers have proposed several techniques for efficient character modelling from sketched feature curves. Moreover, several research groups have developed 3D shape databases to retrieve 3D models from sketched inputs. Unfortunately, the current state of the art in sketch-based organic modelling (3D character modelling) has many gaps and limitations. To bridge these gaps and improve current sketch-based modelling techniques, this research aims to develop an approach allowing direct and interactive modelling of 3D characters from sketched feature curves, and also to make use of 3D shape databases to guide artists in creating their desired models. The research involved finding a fusion of 3D shape retrieval, shape manipulation, and shape reconstruction and generation techniques, backed by an extensive literature review, experimentation and results. The outcome of this research is a novel and improved technique for sketch-based modelling and a software interface that allows the artist to quickly and easily create realistic 3D character models with comparatively little effort and learning. The proposed work provides tools to draw 3D shape primitives and manipulate them using simple gestures, which leads to a better modelling experience than existing state-of-the-art SBM systems.

    Artistic Content Representation and Modelling based on Visual Style Features

    This thesis aims to understand visual style in the context of computer science, using traditionally intangible artistic properties to enhance existing content-manipulation algorithms and to develop new content-creation methods. The developed algorithms can be used to apply extracted properties to other drawings automatically; transfer a selected style; categorise images based upon perceived style; build 3D models using style features from concept artwork; and perform other style-based actions that change our perception of an object without changing our ability to recognise it. The research in this thesis aims to provide the style-manipulation abilities that are missing from modern digital art creation pipelines.

    Sketching-based Skeleton Extraction

    Articulated character animation can be performed by manually creating and rigging a skeleton into an unfolded 3D mesh model. Such tasks are not trivial: they require a substantial amount of training and practice. Although methods have been proposed for the automatic extraction of skeleton structures, they cannot guarantee that the resulting skeleton produces animations that follow the user's manipulation. We present a sketching-based skeleton extraction method that creates a user-desired skeleton structure for 3D model animation. The method takes user sketching as input and, based on the mesh segmentation of a 3D mesh model, generates a skeleton for articulated character animation.

    In our system, we assume that a user sketches bones properly by roughly following the structure of the mesh model, sketching independently on different regions of the model to create separate bones. Each sketched stroke is projected onto the mesh model so that it becomes the medial axis of its corresponding mesh region from the current viewpoint; we call this projected stroke a “sketched bone”. After pre-processing the sketched bones, we cluster them into groups. This step is critical because sketching can be done from any orientation of the mesh model: to specify the topology of different mesh parts, a user may sketch from several orientations, which can produce duplicate strokes for the same mesh part. The clustering process, based on three criteria (orientation, overlapping and locality), merges similar sketched bones into a single bone, which we call a “reference bone”.

    Given the reference bones as input, we adopt a mesh segmentation process to assist skeleton extraction. Specifically, we use the reference bones and seed triangles to segment the input mesh model into meaningful segments with a multiple-region growing mechanism; the seed triangles, collected from the reference bones, serve as the initial seeds, and we have designed a new segmentation metric [1] to form a better segmentation criterion. We then compute Level Set Diagrams (LSDs) on each mesh part to extract bones and joints. To construct the final skeleton, we connect the bones extracted from all mesh parts into a single structure in three major steps: optimizing and smoothing bones, generating joints, and forming the skeleton structure.

    After constructing the skeleton, we propose a new method that combines the Linear Blend Skinning (LBS) technique with Laplacian mesh deformation to perform skeleton-driven animation. Traditional LBS techniques may suffer from self-intersections in regions around segmentation boundaries; Laplacian mesh deformation preserves local surface details, which eliminates this problem. We therefore use the LBS result as the positional constraint of a Laplacian mesh deformation, maintaining the surface details in segmentation boundary regions.

    This thesis outlines a novel approach to construct a 3D skeleton model interactively, which can also be used in 3D animation and 3D model matching. The work is motivated by the observation that most existing automatic skeleton extraction methods lack well-positioned joint specification, while manually created skeletons require too much professional training to produce a good structure. We propose a novel approach that creates a 3D model skeleton from user sketching and specifies an articulated skeleton with joints. The experimental results show that our method produces better skeletons in terms of joint positions and topological structure.
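
    The skinning step rests on the classical linear blend skinning formula, v' = Σᵢ wᵢ Tᵢ v. A generic textbook implementation is sketched below (this is not the thesis code, and the Laplacian detail-preservation step added on top is omitted):

    import numpy as np

    def linear_blend_skinning(vertices, weights, bone_transforms):
        # Classical LBS: each deformed vertex is the weight-blended sum of
        # its rest position transformed by every bone, v' = sum_i w_i T_i v.
        #   vertices        : (V, 3) rest-pose positions
        #   weights         : (V, B) skinning weights, rows summing to 1
        #   bone_transforms : (B, 4, 4) homogeneous bone transforms
        # The thesis uses this result only as a positional constraint for a
        # subsequent Laplacian mesh deformation, not reproduced here.
        V = np.hstack([vertices, np.ones((len(vertices), 1))])   # (V, 4) homogeneous
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, V)  # (B, V, 4)
        blended = np.einsum('vb,bvi->vi', weights, per_bone)     # (V, 4)
        return blended[:, :3]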