110 research outputs found
2D parallel thinning and shrinking based on sufficient conditions for topology preservation
Thinning and shrinking algorithms extract medial lines and topological kernels, respectively, from digital binary objects in a topology-preserving way. These topological algorithms are composed of reduction operations: object points that satisfy some topological and geometrical constraints are removed until stability is reached. In this work we present some new sufficient conditions for topology-preserving parallel reductions and fifty-four new 2D parallel thinning and shrinking algorithms that are based on our conditions. The proposed thinning algorithms use five characterizations of endpoints.
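The abstract does not reproduce the proposed conditions or algorithms; as a generic, minimal illustration of what a reduction-based 2D parallel thinning scheme looks like, here is the classic two-subiteration Zhang–Suen algorithm (a well-known member of the same family, not one of the fifty-four algorithms above):

```python
import numpy as np

def _neighbours(img, y, x):
    # P2..P9: clockwise ring of neighbours starting at the north pixel
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def zhang_suen(image):
    """Two-subiteration parallel thinning (Zhang & Suen, 1984)."""
    img = np.pad(image.astype(np.uint8), 1)  # zero border guard
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y, x in zip(*np.nonzero(img)):
                P = _neighbours(img, y, x)
                B = sum(P)  # number of object neighbours
                # A: 0->1 transitions in the cyclic sequence P2..P9
                A = sum(P[i] == 0 and P[(i + 1) % 8] == 1 for i in range(8))
                if step == 0:
                    cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                else:
                    cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                if 2 <= B <= 6 and A == 1 and cond:
                    to_delete.append((y, x))
            for y, x in to_delete:  # one parallel reduction: remove together
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1].astype(bool)
```

The deletion conditions are evaluated on the unmodified image and the marked points are removed together, which is exactly the "parallel reduction" structure the abstract refers to; running it on a filled bar reduces it to a one-pixel-wide line.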
A skeletonization algorithm for gradient-based optimization
The skeleton of a digital image is a compact representation of its topology,
geometry, and scale. It has utility in many computer vision applications, such
as image description, segmentation, and registration. However, skeletonization
has only seen limited use in contemporary deep learning solutions. Most
existing skeletonization algorithms are not differentiable, making it
impossible to integrate them with gradient-based optimization. Compatible
algorithms based on morphological operations and neural networks have been
proposed, but their results often deviate from the geometry and topology of the
true medial axis. This work introduces the first three-dimensional
skeletonization algorithm that is compatible with gradient-based
optimization while preserving an object's topology. Our method is exclusively
based on matrix additions and multiplications, convolutional operations, basic
non-linear functions, and sampling from a uniform probability distribution,
allowing it to be easily implemented in any major deep learning library. In
benchmarking experiments, we demonstrate the advantages of our skeletonization
algorithm over non-differentiable, morphological, and
neural-network-based baselines. Finally, we demonstrate the utility of our
algorithm by integrating it with two medical image processing applications that
use gradient-based optimization: deep-learning-based blood vessel segmentation,
and multimodal registration of the mandible in computed tomography and magnetic
resonance images.
Comment: Accepted at ICCV 202
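The paper's own construction is not reproduced in the abstract; as a hedged sketch of how the morphological, pooling-based soft skeletons it benchmarks against work (a clDice-style iterative soft skeleton, not the paper's method), consider the following. The `soft_erode`/`soft_dilate` names are illustrative, and in a deep learning library the hard min/max pooling would be replaced by differentiable soft pooling:

```python
import numpy as np

def _pool(x, op):
    # 3x3 min/max pooling, the non-smooth stand-in for soft pooling
    p = np.pad(x, 1, mode="edge")
    shifts = [p[i:i + x.shape[0], j:j + x.shape[1]]
              for i in range(3) for j in range(3)]
    return op(np.stack(shifts), axis=0)

def soft_erode(x):
    return _pool(x, np.min)

def soft_dilate(x):
    return _pool(x, np.max)

def soft_open(x):
    return soft_dilate(soft_erode(x))

def soft_skeleton(x, iterations=10):
    """Iterative morphological soft skeleton on a [0, 1] probability map.
    Each round peels one erosion layer and keeps what the opening removes."""
    skel = np.maximum(x - soft_open(x), 0.0)   # ReLU(x - opening(x))
    for _ in range(iterations):
        x = soft_erode(x)
        delta = np.maximum(x - soft_open(x), 0.0)
        skel = skel + delta * (1.0 - skel)     # accumulate, capped at 1
    return skel
```

Because every operation is a pooling, subtraction, or ReLU, gradients flow through the whole pipeline; the abstract's criticism is that the result of this family of methods can deviate from the true medial axis.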
Topologically robust CAD model generation for structural optimisation
Computer-aided design (CAD) models play a crucial role in the design,
manufacturing and maintenance of products. Therefore, the mesh-based finite
element descriptions common in structural optimisation must be first translated
into CAD models. Currently, this can at best be performed semi-manually. We
propose a fully automated and topologically accurate approach to synthesise a
structurally-sound parametric CAD model from topology optimised finite element
models. Our solution is to first convert the topology optimised structure into
a spatial frame structure and then to regenerate it in a CAD system using
standard constructive solid geometry (CSG) operations. The obtained parametric
CAD models are compact, that is, have as few as possible geometric parameters,
which makes them ideal for editing and further processing within a CAD system.
The critical task of converting the topology optimised structure into an
optimal spatial frame structure is accomplished in several steps. We first
generate from the topology optimised voxel model a one-voxel-wide voxel chain
model using a topology-preserving skeletonisation algorithm from digital
topology. The weighted undirected graph defined by the voxel chain model yields
a spatial frame structure after processing it with standard graph algorithms.
Subsequently, we optimise the cross-sections and layout of the frame members to
recover its optimality, which may have been compromised during the conversion
process. At last, we generate the obtained frame structure in a CAD system by
repeatedly combining primitive solids, like cylinders and spheres, using
boolean operations. The resulting solid model is a boundary representation
(B-Rep) consisting of trimmed non-uniform rational B-spline (NURBS) curves and
surfaces.
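The step from the one-voxel-wide chain model to a weighted undirected graph can be sketched as follows, assuming the skeleton voxels are given as a boolean volume and 26-connectivity defines adjacency (classifying endpoints and junctions by node degree is a standard digital-topology convention, not necessarily the paper's exact procedure):

```python
import numpy as np
from itertools import product

def skeleton_to_graph(voxels):
    """Adjacency graph of a one-voxel-wide skeleton under 26-connectivity.
    Degree-1 nodes are endpoints; nodes of degree >= 3 are junctions."""
    vset = {tuple(int(c) for c in p) for p in np.argwhere(voxels)}
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    adj = {v: [] for v in vset}
    for v in vset:
        for o in offsets:
            w = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if w in vset:
                adj[v].append(w)
    endpoints = [v for v, nbrs in adj.items() if len(nbrs) == 1]
    junctions = [v for v, nbrs in adj.items() if len(nbrs) >= 3]
    return adj, endpoints, junctions
```

The frame members of the spatial frame structure then correspond to the voxel paths running between endpoint and junction nodes, which standard graph traversals can extract.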
Sketching-based Skeleton Extraction
Articulated character animation can be performed by manually creating a skeleton and rigging it into a 3D mesh model. Such tasks are not trivial: they require a substantial amount of training and practice. Although methods have been proposed for automatic extraction of skeleton structures, they do not guarantee that the resulting skeleton produces the animations a user intends. We present a sketching-based skeleton extraction method that creates a user-desired skeleton structure for 3D model animation. The method takes user sketches as input and, based on the mesh segmentation of a 3D mesh model, generates a skeleton for articulated character animation.
In our system, we assume that a user sketches bones by roughly following the structure of the mesh model, sketching independently on different regions to create separate bones. Each sketched stroke is projected onto the mesh model so that it becomes the medial axis of its corresponding mesh region from the current viewing perspective; we call this projected stroke a “sketched bone”. After pre-processing the sketched bones, we cluster them into groups. This step is critical because a user may sketch from any orientation of the mesh model: to specify the topology of different mesh parts, strokes can be drawn from several orientations, so the same mesh part may receive duplicate strokes. The clustering process merges similar sketched bones into a single bone, which we call a “reference bone”, based on three criteria: orientation, overlap, and locality.
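The thesis does not give the thresholds behind the three clustering criteria; the following is a hypothetical sketch in which bones are straight (start, end) segments and two bones merge when their directions are nearly parallel (orientation) and their midpoints are close (locality). `max_angle_deg` and `max_dist` are illustrative parameters, not values from the thesis:

```python
import numpy as np

def merge_bones(bones, max_angle_deg=15.0, max_dist=0.5):
    """Greedily cluster sketched bones into reference bones.
    A bone is a (start, end) pair; it joins an existing cluster when its
    direction is nearly parallel to the cluster's (orientation) and its
    midpoint lies close to the cluster's (locality)."""
    clusters = []
    for bone in bones:
        s, e = (np.asarray(p, float) for p in bone)
        d = (e - s) / np.linalg.norm(e - s)
        m = (s + e) / 2.0
        for c in clusters:
            cos = np.clip(abs(d @ c["dir"]), 0.0, 1.0)  # strokes are undirected
            if (np.degrees(np.arccos(cos)) <= max_angle_deg
                    and np.linalg.norm(m - c["mid"]) <= max_dist):
                c["members"].append((s, e))
                break
        else:
            clusters.append({"dir": d, "mid": m, "members": [(s, e)]})
    # one reference bone per cluster: average of the member endpoints
    return [np.mean(c["members"], axis=0) for c in clusters]
```

Duplicate strokes drawn for the same mesh part from different viewpoints land in one cluster and collapse to a single reference bone, while strokes on other parts stay separate.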
Given the reference bones as the input, we adopt a mesh segmentation process to assist our skeleton extraction method. To be specific, we apply the reference bones and the seed triangles to segment the input mesh model into meaningful segments using a multiple-region growing mechanism. The seed triangles, which are collected from the reference bones, are used as the initial seeds in the mesh segmentation process. We have designed a new segmentation metric [1] to form a better segmentation criterion. Then we compute the Level Set Diagrams (LSDs) on each mesh part to extract bones and joints. To construct the final skeleton, we connect bones extracted from all mesh parts together into a single structure.
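The multiple-region growing mechanism can be sketched as a multi-source traversal over the triangle-adjacency graph, with the seed triangles as starting points. This minimal version claims faces in plain FIFO order, whereas the thesis orders growth with its segmentation metric [1]:

```python
from collections import deque

def grow_regions(adjacency, seeds):
    """Multi-source region growing over a triangle-adjacency graph.
    adjacency: {face_id: [neighbour face_ids]}; seeds: {face_id: label}.
    All seeds expand simultaneously; an unlabelled neighbour is claimed
    by the first region that reaches it."""
    labels = dict(seeds)
    frontier = deque(seeds)
    while frontier:
        face = frontier.popleft()
        for nbr in adjacency[face]:
            if nbr not in labels:
                labels[nbr] = labels[face]
                frontier.append(nbr)
    return labels
```

Replacing the FIFO queue with a priority queue keyed on a segmentation metric would recover metric-driven growth while keeping the same overall structure.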
There are three major steps involved: optimizing and smoothing bones, generating joints, and forming the skeleton structure. After constructing the skeleton model, we propose a new method that combines the Linear Blend Skinning (LBS) technique with Laplacian mesh deformation to perform skeleton-driven animation. Traditional LBS techniques may suffer from self-intersections in regions around segmentation boundaries. Laplacian mesh deformation preserves local surface details, which eliminates this problem: we use the LBS result as the positional constraint of a Laplacian mesh deformation, thereby maintaining surface details in segmentation boundary regions.
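The LBS-plus-Laplacian combination described above can be sketched as a least-squares Laplacian deformation in which the LBS output supplies soft positional constraints. This small NumPy version uses a uniform graph Laplacian and an illustrative constraint weight `w`; a production system would use cotangent weights and a sparse solver:

```python
import numpy as np

def laplacian_deform(V, edges, constraints, w=100.0):
    """Laplacian deformation: preserve the differential coordinates
    delta = L @ V while softly pinning constrained vertices to target
    positions (the targets stand in for an LBS result). Solved as one
    stacked least-squares system [L; w*C] V' = [delta; w*targets]."""
    n, dim = V.shape
    L = np.zeros((n, n))
    for i, j in edges:                  # uniform graph Laplacian
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    delta = L @ V                       # detail (differential) coordinates
    C = np.zeros((len(constraints), n))
    targets = np.zeros((len(constraints), dim))
    for r, (idx, pos) in enumerate(constraints):
        C[r, idx] = 1.0
        targets[r] = pos
    A = np.vstack([L, w * C])
    b = np.vstack([delta, w * targets])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Because the constraints are soft, the solution trades off matching the skinned positions against preserving the surface's differential coordinates, which is what suppresses self-intersections near segmentation boundaries.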
This thesis outlines a novel approach for constructing a 3D skeleton model interactively, which can also be used in 3D animation and 3D model matching. The work is motivated by the observation that most existing automatic skeleton extraction methods lack well-positioned joint specification, while manually creating a good skeleton structure requires extensive professional training. We present a novel approach that creates a 3D model skeleton from user sketches specifying an articulated skeleton with joints. Experimental results show that our method produces better skeletons in terms of joint positions and topological structure.