Semiautomated Skeletonization of the Pulmonary Arterial Tree in Micro-CT Images
We present a simple and robust approach that uses planar images at different angular rotations, combined with unfiltered back-projection, to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel, and the computer replaces the first point with this newly calculated point. A second back-projection is calculated around the original point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this second reconstruction is determined, and the computer replaces the second point with its XYZ coordinates. Following a vector based on a moving average of previously determined three-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full width at half maximum (FWHM) algorithm. All subsequent vessels are processed the same way, except that at each point the distances between the current point and all previously determined points along other vessels are computed; if a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
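The diameter-estimation step above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a 1D intensity profile sampled across the vessel, in which the vessel appears dark (x-ray absorption), and measures the full width at half the dip depth:

```python
import numpy as np

def fwhm_diameter(profile, pixel_size=1.0):
    """Estimate vessel width from a 1D intensity profile using a
    full-width-at-half-maximum criterion. The vessel appears dark in
    an unfiltered back-projection, so the profile is inverted first."""
    p = np.asarray(profile, dtype=float)
    inv = p.max() - p                 # turn the dark dip into a bright peak
    half = 0.5 * inv.max()            # half-maximum threshold
    above = np.where(inv >= half)[0]  # samples inside the half-max band
    if above.size == 0:
        return 0.0
    return (above[-1] - above[0]) * pixel_size

# Hypothetical profile: bright background with a dark vessel dip
profile = [10, 10, 9, 7, 4, 2, 4, 7, 9, 10, 10]
print(fwhm_diameter(profile))  # → 2.0
```

The width is measured between the first and last samples above the half-maximum, then scaled by the (assumed) pixel size.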
Extracting 3D parametric curves from 2D images of helical objects
Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular enabling the extraction of metrics such as tortuosity, frequency, and pitch. We present a method that straightens the image object and derives a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and the curve is validated against both synthetic and real-world data. The results show that the extracted 3D curve comes within a small Hausdorff distance of the ground truth and has near-identical tortuosity for helical objects with a circular profile. Parameter insensitivity and robustness against high levels of image noise are demonstrated thoroughly and quantitatively.
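Of the metrics mentioned, tortuosity is the simplest to make concrete. As a minimal sketch (not the paper's method), the common arc-chord definition divides a polyline's arc length by the straight-line distance between its endpoints:

```python
import numpy as np

def tortuosity(points):
    """Arc-chord tortuosity of a 3D polyline: total arc length divided
    by the straight-line (chord) distance between its endpoints."""
    pts = np.asarray(points, dtype=float)
    segments = np.diff(pts, axis=0)               # consecutive displacement vectors
    arc = np.linalg.norm(segments, axis=1).sum()  # sum of segment lengths
    chord = np.linalg.norm(pts[-1] - pts[0])      # endpoint-to-endpoint distance
    return arc / chord

# A straight line has tortuosity exactly 1; a helix is strictly greater.
t = np.linspace(0, 4 * np.pi, 200)
helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # two turns, shallow pitch
print(tortuosity(helix))  # prints a value greater than 1
```

Higher values indicate a more winding curve; for a sampled helix the result depends on radius, pitch, and sampling density.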
Co-skeletons: Consistent curve skeletons for shape families
We present co-skeletons, a new method that computes consistent curve skeletons for 3D shapes from a given family. We compute co-skeletons in terms of sampling density and semantic relevance, while preserving the desired characteristics of traditional per-shape curve-skeletonization approaches. We take the curve skeletons extracted by traditional approaches for all shapes in a family as input, and compute semantic correlation information for individual skeleton branches to guide an edge-pruning process via skeleton-based descriptors, clustering, and a voting algorithm. Our approach achieves more concise and family-consistent skeletons than traditional per-shape methods. We show the utility of our method by using co-skeletons for shape segmentation and shape blending on real-world data.
A skeletonization algorithm for gradient-based optimization
The skeleton of a digital image is a compact representation of its topology,
geometry, and scale. It has utility in many computer vision applications, such
as image description, segmentation, and registration. However, skeletonization
has only seen limited use in contemporary deep learning solutions. Most
existing skeletonization algorithms are not differentiable, making it
impossible to integrate them with gradient-based optimization. Compatible
algorithms based on morphological operations and neural networks have been
proposed, but their results often deviate from the geometry and topology of the
true medial axis. This work introduces the first three-dimensional
skeletonization algorithm that is both compatible with gradient-based
optimization and preserves an object's topology. Our method is exclusively
based on matrix additions and multiplications, convolutional operations, basic
non-linear functions, and sampling from a uniform probability distribution,
allowing it to be easily implemented in any major deep learning library. In
benchmarking experiments, we prove the advantages of our skeletonization
algorithm compared to non-differentiable, morphological, and
neural-network-based baselines. Finally, we demonstrate the utility of our
algorithm by integrating it with two medical image processing applications that
use gradient-based optimization: deep-learning-based blood vessel segmentation,
and multimodal registration of the mandible in computed tomography and magnetic
resonance images.
Comment: Accepted at ICCV 202
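The abstract's building blocks (convolutions, basic non-linearities) echo classical morphological skeletonization. As a minimal sketch, not the paper's algorithm, Lantuéjoul's formula accumulates the residues of iterated erosions and openings; differentiable variants replace the hard min/max pooling used here with smooth approximations so gradients can flow:

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def morphological_skeleton(img, iters=10):
    """Lantuejoul-style skeleton: at each erosion level, keep the part
    of the eroded image removed by a morphological opening. With hard
    min/max (as here) this is the classical skeleton; differentiable
    versions substitute soft erosion/dilation."""
    size = (3, 3)  # 3x3 structuring element
    # Residue at level 0: image minus its opening.
    skel = np.maximum(img - grey_dilation(grey_erosion(img, size), size), 0)
    eroded = img
    for _ in range(iters):
        eroded = grey_erosion(eroded, size)
        opened = grey_dilation(grey_erosion(eroded, size), size)
        skel = np.maximum(skel, np.maximum(eroded - opened, 0))
    return skel

# A thick horizontal bar reduces to its one-pixel centerline.
img = np.zeros((7, 7))
img[2:5, :] = 1.0
skel = morphological_skeleton(img, iters=5)
```

Note this hard-pooling version does not guarantee topology preservation, which is precisely the gap the paper's method addresses.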
Generating Second Order (Co)homological Information within AT-Model Context
In this paper we design a new family of relations between
(co)homology classes, working with coefficients in a field and starting
from an AT-model (Algebraic Topological Model) AT(C) of a finite cell
complex C. These relations are induced by elementary relations of type
“to be in the (co)boundary of” between cells. This high-order connectivity
information is embedded into a graph-based representation model,
called the Second Order AT-Region-Incidence Graph (or AT-RIG) of C. This
graph, having as nodes the different homology classes of C, is, in turn,
computed from two generalized abstract cell complexes, called primal
and dual AT-segmentations of C. The respective cells of these two complexes
are connected regions (sets of cells) of the original cell complex C,
which are specified by the integral operator of AT(C). In this work in
progress, we successfully use this model (a) in experiments for discriminating
topologically different 3D digital objects, having the same Euler
characteristic and (b) in designing a parallel algorithm for computing
potentially significant (co)homological information of 3D digital objects.
Ministerio de Economía y Competitividad MTM2016-81030-P
Ministerio de Economía y Competitividad TEC2012-37868-C04-0
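Point (a) above hinges on the fact that the Euler characteristic alone cannot separate all topologies. As an illustration not taken from the paper, the alternating sum over cell counts gives the same value for a torus and a Klein bottle, even though their homology differs:

```python
def euler_characteristic(cell_counts):
    """Euler characteristic of a finite cell complex from its cell
    counts per dimension: chi = n0 - n1 + n2 - n3 + ..."""
    return sum((-1) ** k * n for k, n in enumerate(cell_counts))

# Minimal CW structures: (vertices, edges, faces).
sphere = (1, 0, 1)        # one 0-cell, one 2-cell
torus = (1, 2, 1)         # one vertex, two edges, one face
klein_bottle = (1, 2, 1)  # same cell counts, different attaching maps

print(euler_characteristic(sphere))        # → 2
print(euler_characteristic(torus))         # → 0
print(euler_characteristic(klein_bottle))  # → 0
```

The torus and Klein bottle share χ = 0 yet have different first homology groups, which is the kind of finer (co)homological information the AT-RIG model is designed to capture.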