1,172 research outputs found

    Surface-bounded growth modeling applied to human mandibles.


    Robust and flexible multi-scale medial axis computation

    The principle of the multi-scale medial axis (MMA) is important in that any object is detected at a blurring scale proportional to its size. Thus it provides a sound balance between noise removal and detail preservation. The robustness of the MMA is reflected in many existing applications in object segmentation, recognition, description and registration. This thesis aims to improve the computational aspects of the MMA. The MMA is obtained by computing ridges in a “medialness” scale-space derived from an image. In computing the medialness scale-space, we propose an edge-free medialness algorithm, the Concordance-based Medial Axis Transform (CMAT). It depends not only on the symmetry of the positions of boundaries, but also on the symmetry of the intensity contrasts at boundaries. Therefore it excludes spurious MMA branches arising from isolated boundaries. In addition, the localisation accuracy for the position and width of an object, as well as the robustness under noisy conditions, is preserved in the CMAT. In computing ridges in the medialness scale-space, we propose the sliding window algorithm for extracting locally optimal scale ridges. It is simple and efficient in that it readily separates the scale dimension from the search space while avoiding the difficult task of constructing surfaces of connected maxima. It can extract a complete set of MMA branches for interfering objects in scale-space, e.g. embedded or adjacent objects. These algorithms are evaluated with a quantitative study of their performance on 1-D signals and qualitative testing on 2-D images.
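
    The following is a minimal sketch, in Python with NumPy/SciPy, of the locally-optimal-scale ridge idea. The medialness here is a generic scale-normalized second-derivative response standing in for the thesis's CMAT, and the window-based ridge test only illustrates treating the scale dimension separately from the spatial search; it is not the thesis's actual algorithm.

        # Sketch: extract locally optimal scale ridges from a 1-D "medialness"
        # scale-space (assumed stand-in response, not the CMAT).
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def medialness_scale_space(signal, sigmas):
            """Stack of scale-normalized responses, one row per scale."""
            rows = []
            for s in sigmas:
                # -s^2 * d2/dx2 of the smoothed signal: strong inside bright
                # structures whose width matches the scale s.
                rows.append(-s**2 * gaussian_filter1d(signal, s, order=2))
            return np.vstack(rows)

        def locally_optimal_scale_ridges(space, window=1):
            """(scale_index, position) pairs where the response is a maximum
            along scale and within a small spatial window."""
            n_scales, n_pos = space.shape
            ridges = []
            for x in range(window, n_pos - window):
                col = space[:, x]
                k = int(np.argmax(col))                  # best scale at x
                if 0 < k < n_scales - 1:
                    neighborhood = space[k, x - window:x + window + 1]
                    if space[k, x] >= neighborhood.max():  # spatial maximum too
                        ridges.append((k, x))
            return ridges

        if __name__ == "__main__":
            x = np.linspace(0, 1, 400)
            # Two bright bars of different widths plus noise.
            signal = ((np.abs(x - 0.3) < 0.03) | (np.abs(x - 0.7) < 0.08)).astype(float)
            signal += 0.05 * np.random.default_rng(0).standard_normal(x.size)
            sigmas = np.linspace(1, 40, 40)
            space = medialness_scale_space(signal, sigmas)
            for k, pos in locally_optimal_scale_ridges(space):
                print(f"ridge point at x={x[pos]:.2f}, scale sigma={sigmas[k]:.1f}")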

    Automatic High-Fidelity 3D Road Network Modeling

    Many computer applications such as racing games and driving simulations frequently make use of high-fidelity 3D road network models for a variety of purposes. However, there are very few existing methods for automatically generating realistic 3D road networks, especially for roads in the real world. On the other hand, vast amounts of road network GIS data have been collected in the past and are used by a wide range of applications, such as navigation and evaluation. A method that can automatically produce high-fidelity 3D road network models from 2D real road GIS data would significantly reduce both the labor and the time needed to generate these models, and greatly benefit numerous applications involving road networks. Based on a set of selected civil engineering rules for road design, this dissertation research addresses the problem with a novel approach that transforms existing road GIS data containing only 2D road centerline information into 3D road network models. The proposed method consists of several components, mainly road GIS data preprocessing, 3D centerline modeling and 3D geometry modeling. During road data preprocessing, the topology of the road network is extracted from the raw road data as a graph composed of road nodes and road links; road link information is simplified and classified. In the 3D centerline modeling part, the missing height information of the road centerline is inferred from the 2D road GIS data, intersections are extracted from road nodes, and the whole road network is represented as road intersections and road segments in parametric form. Finally, the 3D road centerline models are converted into various 3D road geometry models consisting of triangles and textures in the 3D geometry modeling phase. With this approach, basic road elements such as road segments, road intersections and traffic interchanges are generated automatically to compose sophisticated road networks. Results show that this approach provides a rapid and efficient 3D road modeling method for applications with stringent requirements on high-fidelity road models.
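
    Below is a minimal sketch of the kind of preprocessing step described above: extracting a graph of road nodes and road links from 2D centerline data. The input format (plain polylines) and the endpoint-snapping tolerance are assumptions for illustration, not the dissertation's actual GIS pipeline.

        # Sketch: build a road graph (nodes + links) from 2D centerline polylines.
        from collections import defaultdict

        def build_road_graph(polylines, tol=1.0):
            """Snap polyline endpoints within `tol` and return (nodes, links).

            nodes: {node_id: (x, y, degree)}
            links: list of (start_node_id, end_node_id, polyline_index)
            """
            def key(p):
                # Snap coordinates onto a grid of size `tol` so nearby endpoints merge.
                return (round(p[0] / tol), round(p[1] / tol))

            node_ids, coords, links = {}, {}, []
            degree = defaultdict(int)

            for i, line in enumerate(polylines):
                endpoints = []
                for p in (line[0], line[-1]):
                    k = key(p)
                    if k not in node_ids:
                        node_ids[k] = len(node_ids)
                        coords[node_ids[k]] = p
                    degree[node_ids[k]] += 1
                    endpoints.append(node_ids[k])
                links.append((endpoints[0], endpoints[1], i))

            nodes = {nid: (coords[nid][0], coords[nid][1], degree[nid]) for nid in coords}
            return nodes, links

        if __name__ == "__main__":
            # Three toy road links meeting at a T-intersection at (10, 0).
            roads = [
                [(0, 0), (5, 0), (10, 0)],
                [(10, 0), (15, 0), (20, 0)],
                [(10, 0), (10, 5), (10, 10)],
            ]
            nodes, links = build_road_graph(roads)
            for nid, (x, y, deg) in nodes.items():
                kind = "intersection" if deg >= 3 else "end point"
                print(f"node {nid} at ({x}, {y}): degree {deg} ({kind})")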

    Recognition of feature curves on 3D shapes using an algebraic approach to Hough transforms

    Feature curves are widely adopted to highlight shape features, such as sharp lines, or to divide surfaces into meaningful segments, like convex or concave regions. Extracting these curves is not sufficient to convey prominent and meaningful information about a shape: we first have to separate the curves belonging to features from those caused by noise, and then select the lines that describe non-trivial portions of a surface. The automatic detection of such features is crucial for the identification and/or annotation of relevant parts of a given shape. The Hough transform (HT) is a feature extraction technique widely used in image analysis, computer vision and digital image processing, whereas for 3D shapes the extraction of salient feature curves is still an open problem. Thanks to concepts from algebraic geometry, the HT has recently been extended to a vast class of algebraic curves, thus proving to be a competitive tool for yielding an explicit representation of the equations of the various feature lines. In this paper, we apply this extension of the HT for the first time to 3D shapes in order to identify and localize semantic features like patterns, decorations or anatomical details on 3D objects (both complete and fragments), even when the features are partially damaged or incomplete. The method recognizes various, possibly compound, features and selects the most suitable feature profiles among families of algebraic curves.
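
    As a rough illustration of the voting principle behind the HT, the sketch below fits one simple algebraic family (circles) to noisy 2D samples; the paper's algebraic-geometry extension to richer families of curves on 3D shapes is not reproduced here, and the parameter grids are illustrative choices.

        # Sketch: Hough voting for circles (x - a)^2 + (y - b)^2 = r^2.
        import numpy as np

        def hough_circles(points, centers_x, centers_y, radii):
            """Accumulate votes for circle parameters (a, b, r) from 2D points."""
            acc = np.zeros((len(centers_x), len(centers_y), len(radii)), dtype=int)
            for x, y in points:
                for i, a in enumerate(centers_x):
                    for j, b in enumerate(centers_y):
                        r = np.hypot(x - a, y - b)
                        k = int(np.argmin(np.abs(radii - r)))
                        # Vote only if the point lies close to some circle in the family.
                        if abs(radii[k] - r) < 0.5 * (radii[1] - radii[0]):
                            acc[i, j, k] += 1
            return acc

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            # Noisy samples from a circle of radius 3 centered at (2, -1).
            t = rng.uniform(0, 2 * np.pi, 200)
            pts = np.c_[2 + 3 * np.cos(t), -1 + 3 * np.sin(t)]
            pts += 0.05 * rng.standard_normal(pts.shape)

            ax = np.linspace(-5, 5, 21)
            ay = np.linspace(-5, 5, 21)
            ar = np.linspace(1, 5, 17)
            acc = hough_circles(pts, ax, ay, ar)
            i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
            print(f"best circle: center=({ax[i]:.1f}, {ay[j]:.1f}), radius={ar[k]:.2f}")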

    Statistical Computing on Non-Linear Spaces for Computational Anatomy

    Computational anatomy is an emerging discipline that aims at analyzing and modeling the individual anatomy of organs and their biological variability across a population. However, understanding and modeling the shape of organs is made difficult by the absence of physical models for comparing different subjects, the complexity of shapes, and the high number of degrees of freedom involved. Moreover, the geometric nature of the anatomical features usually extracted raises the need for statistics on objects like curves, surfaces and deformations that do not belong to standard Euclidean spaces. We explain in this chapter how a Riemannian structure can provide a powerful framework to build generic statistical computing tools. We show that a few computational tools derived from each Riemannian metric can be used in practice as the basic atoms to build more complex generic algorithms such as interpolation, filtering and anisotropic diffusion on fields of geometric features. This computational framework is illustrated with the analysis of the shape of the scoliotic spine and the modeling of brain variability from sulcal lines, where the results suggest new anatomical findings.
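
    A minimal sketch of the kind of basic atom referred to above is given below: a Fréchet (Karcher) mean computed on the unit sphere via the Riemannian exp and log maps. The choice of manifold and the fixed-point iteration are illustrative assumptions, standing in for the non-linear feature spaces (curves, surfaces, deformations) discussed in the chapter.

        # Sketch: Fréchet mean on the unit sphere S^2 via exp/log maps.
        import numpy as np

        def sphere_log(p, q):
            """Log map at p: tangent vector pointing from p toward q on S^2."""
            d = q - np.dot(p, q) * p          # project q onto the tangent plane at p
            norm = np.linalg.norm(d)
            if norm < 1e-12:
                return np.zeros(3)
            theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
            return theta * d / norm

        def sphere_exp(p, v):
            """Exp map at p: follow the geodesic with initial velocity v."""
            theta = np.linalg.norm(v)
            if theta < 1e-12:
                return p
            return np.cos(theta) * p + np.sin(theta) * v / theta

        def frechet_mean(points, iters=50):
            """Fixed-point iteration: step along the mean of the log maps."""
            mean = points[0] / np.linalg.norm(points[0])
            for _ in range(iters):
                v = np.mean([sphere_log(mean, q) for q in points], axis=0)
                mean = sphere_exp(mean, v)
            return mean

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Points scattered around the north pole of S^2.
            samples = rng.normal([0, 0, 1], 0.1, size=(30, 3))
            samples /= np.linalg.norm(samples, axis=1, keepdims=True)
            print("Frechet mean:", np.round(frechet_mean(samples), 3))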

    Analysis and Manipulation of Repetitive Structures of Varying Shape

    Self-similarity and repetitions are ubiquitous in man-made and natural objects. Such structural regularities often relate to form, function, aesthetics, and design considerations. Discovering structural redundancies along with their dominant variations from 3D geometry not only allows us to better understand the underlying objects, but is also beneficial for several geometry processing tasks including compact representation, shape completion, and intuitive shape manipulation. To identify these repetitions, we present a novel detection algorithm based on analyzing a graph of surface features. We combine general feature detection schemes with a RANSAC-based randomized subgraph searching algorithm in order to reliably detect recurring patterns of locally unique structures. A subsequent segmentation step based on simultaneous region growing is applied to verify that the actual data supports the patterns detected in the feature graphs. We introduce our graph-based detection algorithm on the example of rigid repetitive structure detection, and then extend the approach to allow more general deformations between the detected parts. We introduce subspace symmetries, whereby we characterize similarity by requiring the set of repeating structures to form a low-dimensional shape space, and we discover these structures by detecting linearly correlated correspondences among graphs of invariant features. The found symmetries, along with the modeled variations, are useful for a variety of applications including non-local and non-rigid denoising. Employing subspace symmetries for shape editing, we introduce a morphable part model for smart shape manipulation. The input geometry is converted into an assembly of deformable parts with appropriate boundary conditions. Our method uses self-similarities from a single model or corresponding parts of shape collections as training input and also allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and the continuous learned variations while ensuring appropriate boundary conditions across part boundaries. We obtain an interactive and intuitive shape deformation framework that produces realistic deformations on classes of objects that are difficult to edit using repetition-unaware deformation techniques.
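
    The sketch below illustrates the subspace-symmetry idea in isolation: given repeating parts sampled with corresponding points, it estimates the dimension of the shape space they span via PCA. The feature graphs, RANSAC subgraph search and region growing of the actual method are assumed to have already produced the parts and correspondences; the toy data and the energy threshold are illustrative.

        # Sketch: test whether repeating parts span a low-dimensional shape space.
        import numpy as np

        def shape_space_dimension(parts, energy=0.99):
            """Number of principal modes needed to explain `energy` of the
            variation among the flattened part geometries."""
            X = np.stack([p.ravel() for p in parts])      # one row per part
            X = X - X.mean(axis=0, keepdims=True)         # center the shape space
            s = np.linalg.svd(X, compute_uv=False)        # singular values
            var = s**2 / np.sum(s**2)
            return int(np.searchsorted(np.cumsum(var), energy) + 1)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            # Toy "parts": the same 2D template bent by a single scalar parameter,
            # so the family should span a roughly one-dimensional shape space.
            t = np.linspace(0, 1, 50)
            template = np.c_[t, np.zeros_like(t)]
            parts = []
            for alpha in rng.uniform(-0.5, 0.5, 12):
                bent = template.copy()
                bent[:, 1] = alpha * t**2                  # one mode of variation
                parts.append(bent + 0.001 * rng.standard_normal(bent.shape))
            print("shape-space dimension:", shape_space_dimension(parts))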

    Machine learning methods for 3D object classification and segmentation

    Field of study: Computer science. Thesis supervisor: Dr. Ye Duan. July 2018. Object understanding is a fundamental problem in computer vision, and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not been fully explored yet. In this dissertation, I work on two approaches, both of which advance the state of the art in 3D classification and segmentation. The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, our MVRNN treats the multi-view images as a temporal sequence, correlating their features to generate coherent segmentations across views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset. The second approach, called PointGrid, is a hybrid method that combines points with a regular grid structure. 3D points retain fine details but are irregular, which is challenging for deep learning methods; a volumetric grid is simple and regular, but does not scale well with data resolution. Our PointGrid allows the fine details to be consumed by normal convolutions on a coarser-resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets for 3D classification and object part segmentation.
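
    Below is a minimal sketch of the hybrid point/grid representation idea behind PointGrid: points are scattered into a coarse regular grid with a fixed number kept per cell (subsampled or zero-padded), yielding a dense tensor that ordinary 3D convolutions could consume. The grid size, the per-cell count and the normalization are illustrative choices, not the paper's exact setup.

        # Sketch: quantize a point cloud into a coarse grid with K points per cell.
        import numpy as np

        def point_grid(points, n=16, k=4, seed=0):
            """Return an (n, n, n, k, 3) tensor of per-cell local point offsets."""
            rng = np.random.default_rng(seed)
            # Normalize the cloud into the unit cube [0, 1).
            pts = points - points.min(axis=0)
            pts = pts / (pts.max() + 1e-9)
            cells = np.minimum((pts * n).astype(int), n - 1)

            grid = np.zeros((n, n, n, k, 3), dtype=np.float32)
            for cell in np.unique(cells, axis=0):
                mask = np.all(cells == cell, axis=1)
                local = pts[mask] * n - cell          # offsets inside the cell, in [0, 1)
                if len(local) > k:                    # too many points: subsample
                    local = local[rng.choice(len(local), k, replace=False)]
                grid[cell[0], cell[1], cell[2], :len(local)] = local
            return grid

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            cloud = rng.uniform(size=(2048, 3))       # stand-in for a 3D model
            g = point_grid(cloud)
            occupied = np.count_nonzero(np.any(g != 0, axis=(3, 4)))
            print("grid shape:", g.shape, "- occupied cells:", occupied)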