
    Exploring local regularities for 3D object recognition

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method: localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness, measured in terms of form and size distortions, it is found that combining two local regularities, L-MSDA and L-MSDSM, produces better performance. In addition, the best weightings for combining them are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined use of L-MSDA and L-MSDSM with these weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
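    The weighted combination described above can be sketched as a simple objective function. This is a minimal illustration, not the paper's implementation: the exact definitions of the two regularity terms are not given here, so standard deviations over raw angle and segment-length lists stand in for them, and the function names are hypothetical.

    ```python
    import math

    def std(values):
        # Population standard deviation of a list of numbers.
        mean = sum(values) / len(values)
        return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

    def combined_regularity(angles, segment_lengths, w_msda=0.9, w_msdsm=0.1):
        # Weighted sum of two simplicity terms: the L-MSDA-style term
        # penalizes spread among face angles, the L-MSDSM-style term
        # penalizes spread among edge-segment magnitudes. Lower values
        # mean a "simpler" (more regular) reconstruction candidate.
        return w_msda * std(angles) + w_msdsm * std(segment_lengths)
    ```

    With the paper's 90%/10% weighting, a candidate whose angles and segment lengths are perfectly uniform scores zero, and any deviation in either quantity raises the objective.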

    3D object reconstruction from 2D and 3D line drawings.

    Chen, Yu. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 78-85). Abstracts in English and Chinese.
    Chapter 1 --- Introduction and Related Work --- p.1
    Chapter 1.1 --- Reconstruction from 2D Line Drawings and the Applications --- p.2
    Chapter 1.2 --- Previous Work on 3D Reconstruction from Single 2D Line Drawings --- p.4
    Chapter 1.3 --- Other Related Work on Interpretation of 2D Line Drawings --- p.5
    Chapter 1.3.1 --- Line Labeling and Superstrictness Problem --- p.6
    Chapter 1.3.2 --- CAD Reconstruction --- p.6
    Chapter 1.3.3 --- Modeling from Images --- p.6
    Chapter 1.3.4 --- Identifying Faces in the Line Drawings --- p.7
    Chapter 1.4 --- 3D Modeling Systems --- p.8
    Chapter 1.5 --- Research Problems and Our Contributions --- p.10
    Chapter 1.5.1 --- Recovering Complex Manifold Objects from Line Drawings --- p.10
    Chapter 1.5.2 --- The Vision-based Sketching System --- p.11
    Chapter 2 --- Reconstruction from Complex Line Drawings --- p.13
    Chapter 2.1 --- Introduction --- p.13
    Chapter 2.2 --- Assumptions and Terminology --- p.15
    Chapter 2.3 --- Separation of a Line Drawing --- p.17
    Chapter 2.3.1 --- Classification of Internal Faces --- p.18
    Chapter 2.3.2 --- Separating a Line Drawing along Internal Faces of Type 1 --- p.19
    Chapter 2.3.3 --- Detecting Internal Faces of Type 2 --- p.20
    Chapter 2.3.4 --- Separating a Line Drawing along Internal Faces of Type 2 --- p.28
    Chapter 2.4 --- 3D Reconstruction --- p.44
    Chapter 2.4.1 --- 3D Reconstruction from a Line Drawing --- p.44
    Chapter 2.4.2 --- Merging 3D Manifolds --- p.45
    Chapter 2.4.3 --- The Complete 3D Reconstruction Algorithm --- p.47
    Chapter 2.5 --- Experimental Results --- p.47
    Chapter 2.6 --- Summary --- p.52
    Chapter 3 --- A Vision-Based Sketching System for 3D Object Design --- p.54
    Chapter 3.1 --- Introduction --- p.54
    Chapter 3.2 --- The Sketching System --- p.55
    Chapter 3.3 --- 3D Geometry of the System --- p.56
    Chapter 3.3.1 --- Locating the Wand --- p.57
    Chapter 3.3.2 --- Calibration --- p.59
    Chapter 3.3.3 --- Working Space --- p.60
    Chapter 3.4 --- Wireframe Input and Object Editing --- p.62
    Chapter 3.5 --- Surface Generation --- p.63
    Chapter 3.5.1 --- Face Identification --- p.64
    Chapter 3.5.2 --- Planar Surface Generation --- p.65
    Chapter 3.5.3 --- Smooth Curved Surface Generation --- p.67
    Chapter 3.6 --- Experiments --- p.70
    Chapter 3.7 --- Summary --- p.72
    Chapter 4 --- Conclusion and Future Work --- p.74
    Chapter 4.1 --- Conclusion --- p.74
    Chapter 4.2 --- Future Work --- p.75
    Chapter 4.2.1 --- Learning-Based Line Drawing Reconstruction --- p.75
    Chapter 4.2.2 --- New Query Interface for 3D Object Retrieval --- p.75
    Chapter 4.2.3 --- Curved Object Reconstruction --- p.76
    Chapter 4.2.4 --- Improving the 3D Sketch System --- p.77
    Chapter 4.2.5 --- Other Directions --- p.77
    Bibliography --- p.7

    3D reconstruction of curved objects from single 2D line drawings.

    Wang, Yingze. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 42-47). Abstract also in Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Related Work --- p.5
    Chapter 2.1 --- Line labeling and realization problem --- p.5
    Chapter 2.2 --- 3D reconstruction from multiple views --- p.6
    Chapter 2.3 --- 3D reconstruction from single line drawings --- p.7
    Chapter 2.3.1 --- Face identification from the line drawings --- p.7
    Chapter 2.3.2 --- 3D geometry reconstruction --- p.9
    Chapter 2.4 --- Our research topic and contributions --- p.13
    Chapter 3 --- Reconstruction of Curved Manifold Objects --- p.14
    Chapter 3.1 --- Assumptions and terminology --- p.14
    Chapter 3.2 --- Reconstruction of curved manifold objects --- p.17
    Chapter 3.2.1 --- Distinguishing between curved and planar faces --- p.17
    Chapter 3.2.2 --- Transformation of Line Drawings --- p.20
    Chapter 3.2.3 --- Regularities --- p.23
    Chapter 3.2.4 --- 3D Wireframe Reconstruction --- p.26
    Chapter 3.2.5 --- Generating Curved Faces --- p.28
    Chapter 3.2.6 --- The Complete 3D Reconstruction Algorithm --- p.33
    Chapter 4 --- Experiments --- p.35
    Chapter 5 --- Conclusions and Future Work --- p.40
    Chapter 5.1 --- Conclusions --- p.40
    Chapter 5.2 --- Future work --- p.40
    Bibliography --- p.4

    Parameter optimization and learning for 3D object reconstruction from line drawings.

    Du, Hao. Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (p. 61). Abstracts in English and Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- 3D Reconstruction from 2D Line Drawings and its Applications --- p.1
    Chapter 1.2 --- Algorithmic Development of 3D Reconstruction from 2D Line Drawings --- p.3
    Chapter 1.2.1 --- Line Labeling and Realization Problem --- p.4
    Chapter 1.2.2 --- 3D Reconstruction from Multiple Line Drawings --- p.5
    Chapter 1.2.3 --- 3D Reconstruction from a Single Line Drawing --- p.6
    Chapter 1.3 --- Research Problems and Our Contributions --- p.12
    Chapter 2 --- Adaptive Parameter Setting --- p.15
    Chapter 2.1 --- Regularities in Optimization-Based 3D Reconstruction --- p.15
    Chapter 2.1.1 --- Face Planarity --- p.18
    Chapter 2.1.2 --- Line Parallelism --- p.19
    Chapter 2.1.3 --- Line Verticality --- p.19
    Chapter 2.1.4 --- Isometry --- p.19
    Chapter 2.1.5 --- Corner Orthogonality --- p.20
    Chapter 2.1.6 --- Skewed Facial Orthogonality --- p.21
    Chapter 2.1.7 --- Skewed Facial Symmetry --- p.22
    Chapter 2.1.8 --- Line Orthogonality --- p.24
    Chapter 2.1.9 --- Minimum Standard Deviation of Angles --- p.24
    Chapter 2.1.10 --- Face Perpendicularity --- p.24
    Chapter 2.1.11 --- Line Collinearity --- p.25
    Chapter 2.1.12 --- Whole Symmetry --- p.25
    Chapter 2.2 --- Adaptive Parameter Setting in the Objective Function --- p.26
    Chapter 2.2.1 --- Hill-Climbing Optimization Technique --- p.28
    Chapter 2.2.2 --- Adaptive Weight Setting and its Explanations --- p.29
    Chapter 3 --- Parameter Learning --- p.33
    Chapter 3.1 --- Construction of A Large 3D Object Database --- p.33
    Chapter 3.2 --- Training Dataset Generation --- p.34
    Chapter 3.3 --- Parameter Learning Framework --- p.37
    Chapter 3.3.1 --- Evolutionary Algorithms --- p.38
    Chapter 3.3.2 --- Reconstruction Error Calculation --- p.39
    Chapter 3.3.3 --- Parameter Learning Algorithm --- p.41
    Chapter 4 --- Experimental Results --- p.45
    Chapter 4.1 --- Adaptive Parameter Setting --- p.45
    Chapter 4.1.1 --- Use Manually-Set Weights --- p.45
    Chapter 4.1.2 --- Learn the Best Weights with Different Strategies --- p.48
    Chapter 4.2 --- Evolutionary-Algorithm-Based Parameter Learning --- p.49
    Chapter 5 --- Conclusions and Future Work --- p.53
    Bibliography --- p.5
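    The thesis above tunes regularity weights with hill climbing (Chapter 2.2.1) and evolutionary search (Chapter 3.3.1). A generic hill-climbing loop over a weight vector can be sketched as follows; the objective, step size, and iteration count here are illustrative placeholders, not the thesis's actual settings.

    ```python
    import random

    def hill_climb(objective, weights, step=0.05, iters=200, seed=0):
        # Perturb one weight at a time; keep the perturbation only if
        # the objective (e.g. a reconstruction error) decreases.
        rng = random.Random(seed)
        best = list(weights)
        best_val = objective(best)
        for _ in range(iters):
            cand = list(best)
            i = rng.randrange(len(cand))
            cand[i] = max(0.0, cand[i] + rng.uniform(-step, step))
            val = objective(cand)
            if val < best_val:
                best, best_val = cand, val
        return best, best_val
    ```

    Because only improving moves are accepted, the returned objective value can never be worse than the starting point; evolutionary algorithms extend this idea with a population of candidate weight vectors.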

    Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    Background: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to make the maps easy to understand and trace. The grid layout is effective in drawing compact, orderly, balanced network maps with space for node labels, but existing grid layout algorithms often incur a high computational cost because they must satisfy complicated positional constraints throughout the entire optimization process. Results: We propose a hybrid grid layout algorithm that consists of a fast, non-grid layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the preprocessed nodes onto square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, the numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with existing grid layouts. Conclusions: Use of an approximate pattern matching algorithm quickly redistributes nodes laid out by fast, non-grid algorithms onto square grid points while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. The application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html
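    The core idea of snapping a pre-laid-out map onto grid points can be illustrated with a much simpler greedy stand-in for the paper's approximate pattern matching step. This sketch, with hypothetical names, just assigns each node to the nearest still-free grid point; the actual algorithm matches the whole node pattern approximately rather than greedily.

    ```python
    def snap_to_grid(positions, grid_size):
        # positions: {node: (x, y)} from a fast non-grid (e.g. force-directed)
        # preprocessor, with coordinates roughly in [0, grid_size).
        # Greedily place each node on the nearest unoccupied grid point.
        free = {(x, y) for x in range(grid_size) for y in range(grid_size)}
        placed = {}
        for node, (px, py) in positions.items():
            gx, gy = min(free, key=lambda g: (g[0] - px) ** 2 + (g[1] - py) ** 2)
            placed[node] = (gx, gy)
            free.remove((gx, gy))
        return placed
    ```

    Each node ends up on a distinct grid point, which is the constraint that makes naive grid optimization expensive in the first place.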

    TM-NET: Deep Generative Networks for Textured Meshes

    We introduce TM-NET, a novel deep generative model for synthesizing textured meshes in a part-aware manner. Once trained, the network can generate novel textured meshes from scratch or predict textures for a given 3D mesh, without image guidance. Plausible and diverse textures can be generated for the same mesh part, while texture compatibility between parts in the same shape is achieved via conditional generation. Specifically, our method produces texture maps for individual shape parts, each as a deformable box, leading to a natural UV map with minimal distortion. The network separately embeds part geometry (via a PartVAE) and part texture (via a TextureVAE) into their respective latent spaces, so as to facilitate learning texture probability distributions conditioned on geometry. We introduce a conditional autoregressive model for texture generation, which can be conditioned on both part geometry and textures already generated for other parts to achieve texture compatibility. To produce high-frequency texture details, our TextureVAE operates in a high-dimensional latent space via dictionary-based vector quantization. We also exploit transparencies in the texture as an effective means to model complex shape structures, including topological details. Extensive experiments demonstrate the plausibility, quality, and diversity of the textures and geometries generated by our network, while avoiding inconsistency issues that are common to novel view synthesis methods.
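    The dictionary-based vector quantization mentioned above reduces, at its core, to a nearest-codebook-entry lookup. This minimal sketch shows only that lookup; TM-NET's actual VQ layer, codebook training, and latent dimensions are not described here and the function name is an assumption.

    ```python
    def quantize(vector, codebook):
        # Replace a continuous latent vector by the index of its nearest
        # codebook (dictionary) entry, measured by squared Euclidean
        # distance -- the basic operation of a VQ layer.
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))
    ```

    Downstream, an autoregressive model then predicts sequences of these discrete indices instead of raw continuous latents.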

    New Techniques for the Modeling, Processing and Visualization of Surfaces and Volumes

    With the advent of powerful 3D acquisition technology, there is a growing demand for the modeling, processing, and visualization of surfaces and volumes. The proposed methods must be efficient and robust, and they must be able to extract the essential structure of the data and to easily and quickly convey the most significant information to a human observer. Independent of the specific nature of the data, the following fundamental problems can be identified: shape reconstruction from discrete samples, data analysis, and data compression. This thesis presents several novel solutions to these problems for surfaces (Part I) and volumes (Part II). For surfaces, we adopt the well-known triangle mesh representation and develop new algorithms for discrete curvature estimation, detection of feature lines, and line-art rendering (Chapter 3), for connectivity encoding (Chapter 4), and for topology-preserving compression of 2D vector fields (Chapter 5). For volumes, which are often given as discrete samples, we base our approach for reconstruction and visualization on the use of new trivariate spline spaces on a certain tetrahedral partition. We study the properties of the new spline spaces (Chapter 7) and present efficient algorithms for reconstruction and visualization by iso-surface rendering for both regularly (Chapter 8) and irregularly (Chapter 9) distributed data samples.
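    For the discrete curvature estimation mentioned above, one classic estimator on triangle meshes is the angle deficit: 2π minus the sum of the triangle angles incident at a vertex. The thesis's exact scheme may differ; this is a standard textbook version, sketched here for a vertex with an ordered one-ring of neighbours.

    ```python
    import math

    def angle_deficit(vertex, ring):
        # Discrete Gaussian curvature estimate at an interior vertex:
        # 2*pi minus the sum of angles between consecutive edges to the
        # ordered one-ring neighbours. Zero for a locally flat vertex,
        # positive at convex corners, negative at saddles.
        total = 0.0
        for a, b in zip(ring, ring[1:] + ring[:1]):
            u = [a[i] - vertex[i] for i in range(3)]
            v = [b[i] - vertex[i] for i in range(3)]
            dot = sum(x * y for x, y in zip(u, v))
            nu = math.sqrt(sum(x * x for x in u))
            nv = math.sqrt(sum(x * x for x in v))
            total += math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
        return 2 * math.pi - total
    ```

    For example, the apex of an octahedron accumulates four 60-degree angles, giving a deficit of 2π/3, while a flat cross of neighbours gives exactly zero.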

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this large body of work and makes explicit the relation between each algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective. (Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044)

    High Relief from Brush Painting

    Relief is an art form partway between 3D sculpture and 2D painting. We present a novel approach for generating a texture-mapped high-relief model from a single brush painting. Our aim is to extract the brushstrokes from a painting and generate corresponding individual relief proxies, rather than recovering an exact depth map from the painting, which is a difficult computer vision problem requiring assumptions that are rarely satisfied. The relief proxies of the brushstrokes are then combined to form a 2.5D high-relief model. To extract brushstrokes from 2D paintings, we apply layer decomposition and stroke segmentation under boundary constraints. The segmented brushstrokes preserve the style of the input painting. Through inflation and a displacement map for each brushstroke, the features of the brushstrokes are preserved in the resulting high-relief model of the painting. We demonstrate that our approach is able to produce convincing high reliefs from a variety of paintings (with humans, animals, flowers, etc.). As a secondary application, we show how our brushstroke extraction algorithm can be used for image editing. Consequently, our brushstroke extraction algorithm is specifically geared towards paintings in which each brushstroke is drawn very purposefully, such as Chinese paintings, Rosemaling paintings, etc.
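    The inflation step mentioned above can be illustrated with a toy height-field construction: a binary brushstroke mask is "inflated" so that height grows with distance from the stroke boundary, giving a rounded cross-section. This is only a sketch under simplifying assumptions (4-connected grid distance, a mask surrounded by background pixels); the paper additionally layers a per-stroke displacement map on top.

    ```python
    from collections import deque

    def inflate(mask):
        # mask: 2D list of 0/1 values; 1 marks the brushstroke.
        # Multi-source BFS from all background pixels gives each stroke
        # pixel its grid distance to the boundary; the square root of
        # that distance yields a rounded height profile.
        h, w = len(mask), len(mask[0])
        dist = [[None] * w for _ in range(h)]
        q = deque()
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:
                    dist[y][x] = 0
                    q.append((x, y))
        while q:
            x, y = q.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                    dist[ny][nx] = dist[y][x] + 1
                    q.append((nx, ny))
        return [[d ** 0.5 for d in row] for row in dist]
    ```

    Background stays at height zero, stroke edges rise gently, and the stroke's spine reaches the greatest height, which is the qualitative behaviour a relief proxy needs.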