
    Fast and robust curve skeletonization for real-world elongated objects

    We consider the problem of extracting curve skeletons of three-dimensional, elongated objects given a noisy surface, which has applications in agricultural contexts such as extracting the branching structure of plants. We describe an efficient and robust method based on breadth-first search that can determine curve skeletons in these contexts. Our approach is capable of automatically detecting junction points as well as spurious segments and loops, and it requires only one user-adjustable parameter. The run time of our method ranges from hundreds of milliseconds to less than four seconds on large, challenging datasets, which makes it appropriate for situations where real-time decision making is needed. Experiments on synthetic models as well as on data from real-world objects, some of which were collected in challenging field conditions, show that our approach compares favorably to classical thinning algorithms as well as to recent contributions to the field. Comment: 47 pages; IEEE WACV 2018, main paper and supplementary material.
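
    The abstract above describes the breadth-first-search approach only at a high level. As a rough illustration of how BFS over a surface or voxel graph can yield a curve skeleton, the sketch below bins vertices by BFS depth and chains the centroids of the bins; the graph input, the bin size, and the centroid chaining are assumptions made for this example, not the authors' algorithm, and junction, spur, and loop handling are omitted.

```python
# Illustrative BFS-based curve-skeleton sketch (assumed inputs, not the paper's method).
from collections import deque, defaultdict

def bfs_depths(adjacency, root):
    """Hop distance from `root` to every reachable vertex."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

def curve_skeleton(points, adjacency, root, bin_size=3):
    """Group vertices by BFS depth and chain the centroids of the groups.

    `points` maps vertex id -> (x, y, z); `adjacency` maps vertex id -> iterable
    of neighbour ids. Junctions would appear as depth bins that split into
    several connected components; that refinement is omitted here.
    """
    depth = bfs_depths(adjacency, root)
    bins = defaultdict(list)
    for v, d in depth.items():
        bins[d // bin_size].append(v)
    centroids = []
    for b in sorted(bins):
        members = bins[b]
        n = len(members)
        centroids.append(tuple(sum(points[v][k] for v in members) / n for k in range(3)))
    # Consecutive centroids form the skeleton polyline.
    return list(zip(centroids[:-1], centroids[1:]))
```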

    Co-skeletons: Consistent curve skeletons for shape families

    We present co-skeletons, a new method that computes consistent curve skeletons for 3D shapes from a given family. We compute co-skeletons in terms of sampling density and semantic relevance, while preserving the desired characteristics of traditional, per-shape curve skeletonization approaches. We take the curve skeletons extracted by traditional approaches for all shapes of a family as input, and compute semantic correlation information of individual skeleton branches to guide an edge-pruning process via skeleton-based descriptors, clustering, and a voting algorithm. Our approach achieves more concise and family-consistent skeletons than traditional per-shape methods. We show the utility of our method by using co-skeletons for shape segmentation and shape blending on real-world data.
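
    The pruning step above is described only in terms of descriptors, clustering, and voting. The sketch below shows one possible reading of that pipeline: branches from all shapes are greedily clustered by descriptor distance, and a branch survives only if its cluster is supported by enough shapes of the family. The greedy clustering, the distance threshold, and the majority vote are illustrative assumptions, not the authors' actual procedure.

```python
# Illustrative cluster-and-vote pruning of skeleton branches (assumed pipeline).
import math

def greedy_cluster(branches, radius):
    """branches: list of (shape_id, branch_id, descriptor) tuples."""
    clusters = []  # each cluster: {"center": descriptor, "members": [(shape_id, branch_id)]}
    for shape_id, branch_id, desc in branches:
        best = None
        for c in clusters:
            d = math.dist(desc, c["center"])
            if d <= radius and (best is None or d < best[0]):
                best = (d, c)
        if best is None:
            clusters.append({"center": list(desc), "members": [(shape_id, branch_id)]})
        else:
            best[1]["members"].append((shape_id, branch_id))
    return clusters

def consistent_branches(branches, num_shapes, radius=0.2, support=0.5):
    """Return the set of (shape_id, branch_id) pairs that survive the vote."""
    kept = set()
    for cluster in greedy_cluster(branches, radius):
        shapes_voting = {shape_id for shape_id, _ in cluster["members"]}
        if len(shapes_voting) / num_shapes >= support:
            kept.update(cluster["members"])
    return kept
```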

    A novel procedure for medial axis reconstruction of vessels from Medical Imaging segmentation

    A procedure for reconstructing the medial axis of vessels from diagnostic image segmentation is presented. It addresses the widespread stepped-shape artifact produced by the most common discrete image skeletonization tools by correcting the spatial coordinates of each point of the axis obtained from such an algorithm. The procedure is applied to the central axis of the cerebral vascular tree, reconstructed from diagnostic images, using the local intensity values of adjacent voxels: the percentage degree of adherence to a specific anatomical tissue acts as an attraction pole that identifies the spatial center on which to place each skeleton point within the investigated anatomical structure. Results are reported in terms of the number of vessels identified with respect to the original reference model. The procedure corrects the local coordinates of the central points with high accuracy, which permits precise dimensional measurement of the anatomy under examination. A central axis that is effectively centered in the region under examination is a fundamental starting point for deducing, with a high margin of accuracy, key geometric and dimensional information that favours the recognition of shape alterations ascribable to clinical pathologies.
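
    The attraction-pole idea above can be read as a weighted recentring of each skeleton point inside its voxel neighbourhood. The sketch below implements that reading with NumPy: every point is moved to the adherence-weighted centroid of the surrounding voxels. The neighbourhood radius and the plain weighted mean are assumptions made for illustration, not the exact correction used in the paper.

```python
# Illustrative intensity-weighted recentring of discrete skeleton points.
import numpy as np

def recenter_skeleton(points, adherence, radius=2):
    """points: (N, 3) integer voxel coordinates of a discrete skeleton.
    adherence: 3-D array of per-voxel adherence to the target tissue in [0, 1].
    Returns (N, 3) float coordinates after recentring."""
    corrected = []
    for p in np.asarray(points):
        lo = np.maximum(p - radius, 0)
        hi = np.minimum(p + radius + 1, adherence.shape)
        ii, jj, kk = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        coords = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
        weights = adherence[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].reshape(-1)
        if weights.sum() > 0:
            # Adherence acts as the "attraction pole": the weighted centroid.
            corrected.append((coords * weights[:, None]).sum(axis=0) / weights.sum())
        else:
            corrected.append(p.astype(float))
    return np.array(corrected)
```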

    Discrete Scale Axis Representations for 3D Geometry

    This paper addresses the fundamental problem of computing stable medial representations of 3D shapes. We propose a spatially adaptive classification of geometric features that yields a robust algorithm for generating medial representations at different levels of abstraction. The recently introduced continuous scale axis transform serves as the mathematical foundation of our algorithm. We show how geometric and topological properties of the continuous setting carry over to discrete shape representations. Our method combines scaling operations of medial balls for geometric simplification with filtrations of the medial axis and provably good conversion steps to and from unions of balls, enabling efficient processing of a wide variety of shape representations including polygon meshes, 3D images, implicit surfaces, and point clouds. We demonstrate the robustness and versatility of our algorithm with an extensive validation on hundreds of shapes, including complex geometries consisting of millions of triangles.
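
    One ingredient mentioned above, scaling medial balls for geometric simplification, can be illustrated in isolation: inflate every ball by a factor s and drop any ball whose inflated version lies strictly inside another inflated ball. The sketch below shows only this containment filter on a plain list of balls; the medial-axis filtration and the conversions to and from a union of balls are not reproduced here.

```python
# Illustrative ball-scaling filter in the spirit of the scale axis transform.
import math

def scale_axis_filter(balls, s=1.1):
    """balls: list of ((x, y, z), r) tuples. Returns the balls kept after scaling."""
    scaled = [(c, r * s) for c, r in balls]
    kept = []
    for i, (ci, ri) in enumerate(scaled):
        # Ball i is strictly inside ball j iff dist(ci, cj) + ri < rj.
        contained = any(
            j != i and math.dist(ci, cj) + ri < rj
            for j, (cj, rj) in enumerate(scaled)
        )
        if not contained:
            kept.append(balls[i])
    return kept
```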

    A Combined Skeleton Model

    Skeleton representations are a fundamental way of representing a variety of solid models. They are particularly important for representing certain biological models and are often key to visualizing such data. Several methods exist for extracting skeletal models from 3D data sets. Unfortunately, there is usually not a single correct definition of what makes a good skeleton, and different methods will produce different skeletal models from a given input. Furthermore, for many scanned data sets, there is also inherent noise and loss of data in the scanning process that can reduce the ability to identify a skeleton. In this document, I propose a method for combining multiple algorithms' skeleton results into a single composite skeletal model. This model leverages various aspects of the geometric and topological information contained in the different input skeletal models to form a single result that may limit the error introduced by particular inputs by means of a confidence function. Using such an uncertainty-based model, one can better understand, refine, and de-noise/simplify the skeletal structure. The following pages describe methods for forming this composite model as well as examples of applying it to some real-world data sets.
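
    As a rough illustration of combining several skeleton estimates under a confidence function, the sketch below scores each point by the fraction of input skeletons that have a point within a distance eps of it and keeps only the high-confidence points. The agreement-based confidence, the eps radius, and the threshold are assumptions made for this example, not the composite model proposed in the document.

```python
# Illustrative confidence-weighted combination of several skeleton estimates.
import math

def composite_skeleton(skeletons, eps=1.0, min_confidence=0.5):
    """skeletons: list of point lists [(x, y, z), ...], one list per algorithm.
    Returns a list of (point, confidence) pairs forming the composite."""
    composite = []
    for i, skel in enumerate(skeletons):
        for p in skel:
            agreeing = sum(
                1 for j, other in enumerate(skeletons)
                if j != i and any(math.dist(p, q) <= eps for q in other)
            )
            confidence = (agreeing + 1) / len(skeletons)  # count the source skeleton too
            if confidence >= min_confidence:
                composite.append((p, confidence))
    return composite
```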

    Entropy-based particle correspondence for shape populations

    Statistical shape analysis of anatomical structures plays an important role in many medical image analysis applications, such as understanding the structural changes in anatomy at various stages of growth or disease. Establishing accurate correspondence across object populations is essential for such statistical shape analysis studies.

    Skeletal representations of orthogonal shapes

    Skeletal representations are important shape descriptors which encode topological and geometrical properties of shapes and reduce their dimension. Skeletons are used in several fields of science and attract the attention of many researchers. In the biocad field, the analysis of structural properties such as the porosity of biomaterials requires the prior computation of a skeleton. As three-dimensional images become larger, efficient and robust algorithms that extract simple skeletal structures are required. The most popular and prominent skeletal representation is the medial axis, defined as the set of shape points that have at least two closest points on the shape boundary. Unfortunately, the medial axis is highly sensitive to noise and perturbations of the shape boundary: a small change of the shape boundary may cause a considerable change of its medial axis. Moreover, the exact computation of the medial axis is only possible for a few classes of shapes. For example, the medial axis of polyhedra is composed of non-planar surfaces, and its accurate and robust computation is difficult. These problems led to the emergence of approximate medial axis representations. There exist two main approximation strategies: the shape is approximated with another shape class, or the Euclidean metric is approximated with another metric. The main contribution of this thesis is the combination of a specific shape and metric simplification. The input shape is approximated with an orthogonal shape, that is, a polygon or polyhedron enclosed by axis-aligned edges or faces, respectively. In the same vein, the Euclidean metric is replaced by the L-infinity or Chebyshev metric. Despite the simpler structure of orthogonal shapes, there are few works on skeletal representations applied to them; much of the effort has been devoted to binary images and volumes, which are a subset of orthogonal shapes. Two new skeletal representations based on this paradigm are introduced: the cube skeleton and the scale cube skeleton. The cube skeleton is shown to be composed of straight line segments or planar faces and to be homotopy equivalent to the input shape. The scale cube skeleton builds upon the cube skeleton and introduces a family of skeletons that are more stable under shape noise and perturbations. In addition, the algorithms necessary to compute the cube skeleton of polygons and polyhedra and the scale cube skeleton of polygons are presented. Several experimental results confirm the efficiency, robustness, and practical use of all the presented methods.
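
    The L-infinity metric mentioned above coincides with the chessboard distance on a pixel grid. As a small, concrete illustration of that connection (and not of the cube skeleton construction itself), the sketch below computes the chessboard distance transform of a binary image with SciPy and keeps its local maxima as skeleton candidates.

```python
# Illustrative chessboard (L-infinity) distance transform with local maxima.
import numpy as np
from scipy import ndimage

def chebyshev_skeleton_candidates(mask):
    """mask: 2-D boolean array, True inside the shape."""
    # The chessboard metric equals the L-infinity metric on the pixel grid.
    dist = ndimage.distance_transform_cdt(mask, metric='chessboard')
    # Keep pixels whose distance value is not exceeded by any 8-neighbour.
    local_max = ndimage.maximum_filter(dist, size=3) == dist
    return local_max & mask

# Example: an axis-aligned rectangle; candidates concentrate along its middle row.
rect = np.zeros((7, 11), dtype=bool)
rect[1:6, 1:10] = True
print(chebyshev_skeleton_candidates(rect).astype(int))
```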

    Groupwise shape correspondence with local features

    Statistical shape analysis of anatomical structures plays an important role in many medical image analysis applications, such as understanding the structural changes in anatomy at various stages of growth or disease. Establishing accurate correspondence across object populations is essential for such statistical shape analysis studies. However, anatomical correspondence is rarely a direct result of spatial proximity of sample points but rather depends on many other features such as local curvature, position with respect to blood vessels, or connectivity to other parts of the anatomy. This dissertation presents a novel method for computing point-based correspondence among populations of surfaces by combining the spatial location of the sample points with non-spatial local features. A framework for optimizing correspondence using arbitrary local features is developed. The performance of the correspondence algorithm is objectively assessed using a set of evaluation metrics. The main focus of this research is on correspondence across human cortical surfaces. Statistical analysis of cortical thickness, which is key to many neurological research problems, is the driving problem. I show that incorporating geometric (sulcal depth) and non-geometric (DTI connectivity) knowledge about the cortex significantly improves cortical correspondence compared to existing techniques. Furthermore, I present a framework that is the first to allow white matter fiber connectivity to be used for improving cortical correspondence.
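
    The key idea above, combining spatial location with non-spatial local features, can be illustrated with a far simpler matching scheme than the optimization framework the dissertation develops: concatenate each point's position with a weighted feature vector and match by nearest neighbour in the augmented space. The weight alpha, the concatenation, and the pairwise nearest-neighbour matching below are all assumptions made for illustration.

```python
# Illustrative matching in a position-plus-features augmented space.
import numpy as np

def augmented(points, features, alpha):
    """Stack 3-D positions with weighted local feature vectors."""
    return np.hstack([np.asarray(points, float), alpha * np.asarray(features, float)])

def match(points_a, feats_a, points_b, feats_b, alpha=1.0):
    """For each sample point of surface A, the index of its match on surface B."""
    a = augmented(points_a, feats_a, alpha)
    b = augmented(points_b, feats_b, alpha)
    # Pairwise squared distances in the augmented space.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```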