Doctor of Philosophy dissertation
The medial axis of an object is a shape descriptor that intuitively presents the morphology or structure of the object as well as intrinsic geometric properties of the object's shape. These properties have made the medial axis a vital ingredient for shape analysis applications, and its computation is therefore a fundamental problem in computational geometry. This dissertation presents new methods for accurately computing the 2D medial axis of planar objects bounded by B-spline curves, and the 3D medial axis of objects bounded by B-spline surfaces. The proposed methods for the 3D case are the first techniques that automatically compute the complete medial axis along with its topological structure directly from smooth boundary representations. Our approach is based on the eikonal (grassfire) flow, where the boundary is offset along the inward normal direction. As the boundary deforms, different regions start intersecting with each other to create the medial axis. In the generic situation, the (self-)intersection set is born at certain creation-type transition points, then grows and undergoes intermediate transitions at special isolated points, and finally ends at annihilation-type transition points. The intersection set evolves smoothly between transition points. Our approach first computes and classifies all types of transition points. The medial axis is then computed as a time trace of the evolving intersection set of the boundary using theoretically derived evolution vector fields. This dynamic approach enables accurate tracking of elements of the medial axis as they evolve, and thus also enables computation of the topological structure of the solution. Accurate computation of the geometry and topology of 3D medial axes enables a new graph-theoretic method for shape analysis of objects represented with B-spline surfaces. Structural components are computed via the cycle basis of the graph representing the 1-complex of a 3D medial axis.
This enables medial axis based surface segmentation, and structure based surface region selection and modification. We also present a new approach for structural analysis of 3D objects based on scalar functions defined on their surfaces. This approach is enabled by accurate computation of geometry and structure of 2D medial axes of level sets of the scalar functions. Edge curves of the 3D medial axis correspond to a subset of ridges on the bounding surfaces. Ridges are extremal curves of principal curvatures on a surface indicating salient intrinsic features of its shape, and hence are of particular interest as tools for shape analysis. This dissertation presents a new algorithm for accurately extracting all ridges directly from B-spline surfaces. The proposed technique is also extended to accurately extract ridges from isosurfaces of volumetric data using smooth implicit B-spline representations. Accurate ridge curves enable new higher-order methods for surface analysis. We present a new definition of salient regions in order to capture geometrically significant surface regions in the neighborhood of ridges as well as to identify salient segments of ridges
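The grassfire idea above can be illustrated on a discrete grid: offsetting the boundary inward at unit speed is equivalent to growing a distance field, and the medial axis appears along ridges of that field. The sketch below is a brute-force pixel approximation for intuition only; it is not the dissertation's method of tracking evolving intersection sets on B-spline boundaries, and the helper names are ours.

```python
# Discrete grassfire sketch: approximate the medial axis of a binary shape
# as the ridge of its distance transform (illustration only).
import numpy as np

def distance_transform(mask):
    """Euclidean distance from each foreground pixel to the nearest background pixel."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    d = np.zeros(mask.shape)
    for i, j in inside:
        d[i, j] = np.sqrt(((outside - (i, j)) ** 2).sum(axis=1)).min()
    return d

def ridge_pixels(d):
    """Foreground pixels whose distance value dominates all 4 neighbors."""
    ridge = np.zeros(d.shape, dtype=bool)
    for i in range(1, d.shape[0] - 1):
        for j in range(1, d.shape[1] - 1):
            if d[i, j] > 0:
                ridge[i, j] = all(d[i, j] >= v for v in
                                  (d[i-1, j], d[i+1, j], d[i, j-1], d[i, j+1]))
    return ridge

# Medial axis of a 9x21-pixel rectangle: the ridge contains its center line.
shape = np.zeros((13, 25), dtype=bool)
shape[2:11, 2:23] = True
axis = ridge_pixels(distance_transform(shape))
```

With the rectangle above, the center row of the shape is flagged as ridge while pixels near the long edges are not, matching the familiar picture of a rectangle's medial axis.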
Morphological operations in image processing and analysis
Morphological operations applied in image processing and analysis are becoming increasingly important in today's technology. Morphological operations, which are based on set theory, can extract object features using suitably shaped structuring elements. Morphological filters are combinations of morphological operations that transform an image into a quantitative description of its geometrical structure, based on structuring elements. Important applications of morphological operations are shape description, shape recognition, nonlinear filtering, industrial parts inspection, and medical image processing.
In this dissertation, basic morphological operations are reviewed, and algorithms and theorems are presented for solving problems in distance transformation, skeletonization, recognition, and nonlinear filtering. A skeletonization algorithm using the maxima-tracking method is introduced to generate a connected skeleton. A modified algorithm is proposed to eliminate non-significant short branches. Back propagation morphology is introduced to reach the roots of morphological filters in only two scans. The definitions and properties of back propagation morphology are discussed. The two-scan distance transformation is proposed to illustrate the advantage of this new definition.
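The dissertation defines its two-scan transformation through back propagation morphology; for intuition, the classic two-pass (forward/backward) distance transform, which likewise reaches exact distances in exactly two raster scans, can be sketched as follows. The function name and the choice of the Chebyshev metric are ours.

```python
# Two-pass distance transform in the Chebyshev (chessboard) metric:
# a forward raster scan propagates distances from top-left neighbors,
# a backward scan from bottom-right neighbors. Two scans give exact values.
import numpy as np

def two_scan_chebyshev_dt(mask):
    """Distance from each True (foreground) pixel to the nearest False pixel."""
    INF = 10 ** 9
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)
    # Forward scan.
    for i in range(h):
        for j in range(w):
            if d[i, j] == 0:
                continue
            for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i, j] = min(d[i, j], d[ni, nj] + 1)
    # Backward scan.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            for di, dj in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i, j] = min(d[i, j], d[ni, nj] + 1)
    return d
```

The same two-scan structure carries over to other chamfer metrics by changing the neighbor weights.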
The G-spectrum (geometric spectrum), which is based upon the cardinality of a set of non-overlapping segments in an image obtained using morphological operations, is shown to be a useful tool not only for shape description but also for shape recognition. The G-spectrum is proven to be translation-, rotation-, and scaling-invariant. Shape likeliness based on the G-spectrum is defined as a measure for shape recognition. Experimental results are also illustrated.
Soft morphological operations, which are found to be less sensitive to additive noise and to small variations, are combinations of order-statistic and morphological operations. Soft morphological operations commute with thresholding and obey threshold superposition. This threshold decomposition property allows gray-scale signals to be decomposed into binary signals, which can be processed in parallel using only logic gates; the binary results can then be combined to produce the equivalent output. Thus the implementation and analysis of function-processing soft morphological operations can be done by focusing only on the case of sets. Sets are not only much easier to deal with, because their definitions involve only counting points instead of sorting numbers, but also allow logic-gate implementation and a parallel pipelined architecture leading to real-time implementation. In general, soft opening and closing are not idempotent operations, but under some constraints they can be idempotent, and the proof is given. The idempotence property shows how to choose the structuring element sets and the value of the index such that soft morphological filters reach the root signals without iterations. Finally, a summary and future research directions of this dissertation are provided
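As a toy illustration of the noise insensitivity of order-statistic (soft) morphology, a flat soft dilation can replace the maximum by the k-th largest value in the window: with k = 1 it reduces to ordinary dilation, while k = 2 already rejects isolated impulses. This simplified sketch (our naming) omits the core/boundary split of full soft morphology.

```python
# Order-statistic ("soft") dilation of a 1-D signal with a flat window.
# k = 1 gives the ordinary dilation (max); larger k suppresses impulses.
import numpy as np

def soft_dilate(signal, width, k):
    """Return the k-th largest value in each sliding window of the given width."""
    half = width // 2
    padded = np.pad(signal, half, mode='edge')
    out = np.empty_like(signal)
    for i in range(len(signal)):
        window = np.sort(padded[i:i + width])[::-1]  # descending order
        out[i] = window[k - 1]                       # k-th largest value
    return out
```

On the impulse signal [0, 0, 9, 0, 0] with a width-3 window, k = 1 spreads the spike as ordinary dilation would, whereas k = 2 removes it entirely.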
Doctor of Philosophy dissertation
One of the fundamental building blocks of many computational sciences is the construction and use of a discretized, geometric representation of a problem domain, often referred to as a mesh. Such a discretization enables an otherwise complex domain to be represented simply, and computation to be performed over that domain with a finite number of basis elements. As mesh generation techniques have become more sophisticated over the years, focus has largely shifted to quality mesh generation techniques that guarantee or empirically generate numerically well-behaved elements. In this dissertation, the two complementary meshing subproblems of vertex placement and element creation are analyzed, both separately and together. First, a dynamic particle system achieves adaptivity over domains by inferring feature size through a new information passing algorithm. Second, a new tetrahedral algorithm is constructed that carefully combines lattice-based stenciling and mesh warping to produce guaranteed quality meshes on multimaterial volumetric domains. Finally, the ideas of lattice cleaving and dynamic particle systems are merged into a unified framework for producing guaranteed quality, unstructured and adaptive meshing of multimaterial volumetric domains
Feature-based hybrid inspection planning for complex mechanical parts
Globalization and emerging new powers in the manufacturing world are among the many challenges major manufacturing enterprises are facing. This has resulted in more alternatives for satisfying customers' growing needs regarding products' aesthetic and functional requirements. The complexity of part designs and the engineering specifications needed to satisfy such needs often requires better use of advanced, more accurate tools to achieve good quality. Inspection is a crucial manufacturing function that should be further improved to cope with such challenges. Intelligent planning for the inspection of parts with complex geometric shapes and free-form surfaces using contact or non-contact devices is still a major challenge. Research in segmentation and localization techniques should also enable inspection systems to utilize modern measurement technologies capable of collecting a huge number of measured points.
Advanced digitization tools can be classified as contact or non-contact sensors. The purpose of this thesis is to develop a hybrid inspection planning system that benefits from the advantages of both techniques. Moreover, minimizing the deviation of the measured part from the original CAD model is not the only characteristic that should be considered when implementing the localization process in order to accept or reject the part; geometric tolerances must also be considered. A segmentation technique that deals directly with the individual points is a necessary step in the developed inspection system, where the output is the actual measured points, not a tessellated model as commonly produced by current segmentation tools.
The contribution of this work is threefold. First, a knowledge-based system was developed for selecting the most suitable sensor, using an inspection-specific feature taxonomy in the form of a 3D matrix in which each cell includes the corresponding knowledge rules and generates inspection tasks. A Traveling Salesperson Problem (TSP) formulation has been applied to sequence these hybrid inspection tasks. Second, a novel region-based segmentation algorithm was developed that deals directly with the measured point cloud and generates sub-point clouds, each of which represents a feature to be inspected and includes the original measured points. Finally, a new tolerance-based localization algorithm was developed to verify the functional requirements; it was applied and tested using form tolerance specifications.
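The abstract does not specify which TSP solver was used for sequencing; as a minimal sketch of the idea, a nearest-neighbour heuristic over inspection point locations might look like this (function and variable names are illustrative, not from the thesis).

```python
# Nearest-neighbour TSP heuristic: a simple way to order inspection tasks
# so the probe travels a short (not necessarily optimal) path.
import math

def sequence_tasks(points, start=0):
    """Greedy tour over task locations, beginning at index `start`."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Real inspection sequencing would also weigh sensor changes and part reorientation costs, not just travel distance.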
This research enhances existing inspection planning systems for complex mechanical parts with a hybrid inspection planning model. The main benefit of the developed segmentation and tolerance-based localization algorithms is the improvement of inspection decisions, so that good parts are not rejected because of misleading results from currently available localization techniques. The better and more accurate inspection decisions achieved will lead to less scrap, which, in turn, will reduce product cost and improve the company's potential in the market
Computer image processing with application to chemical engineering
A literature survey covers a wide range of picture processing topics, from the general problem of manipulating digitised images to the specific task of analysing the shape of objects within an image field. There follows a discussion and development of theory relating to this latter task. A number of shape analysis techniques are inapplicable or computationally untenable when applied to objects containing concavities. A method is proposed and implemented whereby any object may be divided into convex components, the algebraic sum of which constitutes the original. These components may be related by a tree structure.
It is observed that properties based on integral measurements, e.g. area, are less susceptible to quantisation errors than those based on linear and derivative measurements such as diameters and slopes. A set of moments invariant with respect to size, position and orientation is derived and applied to the study of the above convex components. An outline of possible further developments is given
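The moment invariants mentioned above can be illustrated with the simplest second-order example: normalising central moments by area makes the quantity eta20 + eta02 (the first Hu invariant) independent of position, scale and orientation. The sketch below (our naming) computes it for a binary image.

```python
# First Hu moment invariant of a binary image: phi1 = eta20 + eta02,
# where eta_pq are area-normalised central moments. phi1 is unchanged by
# translation, uniform scaling and rotation of the shape.
import numpy as np

def hu_phi1(image):
    """Compute eta20 + eta02 for the nonzero pixels of `image`."""
    ys, xs = np.nonzero(image)
    m00 = len(xs)                        # area (zeroth moment)
    xbar, ybar = xs.mean(), ys.mean()    # centroid
    mu20 = ((xs - xbar) ** 2).sum()      # central moments of order 2
    mu02 = ((ys - ybar) ** 2).sum()
    # normalisation exponent for p + q = 2 is (p + q)/2 + 1 = 2
    return (mu20 + mu02) / m00 ** 2
```

Translating a shape, or rotating a bar by 90 degrees, leaves phi1 unchanged, which is easy to verify on small test images.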
Adaptive approximation of signed distance fields through piecewise continuous interpolation
In this paper, we present an adaptive structure to represent a signed distance field through trilinear or tricubic interpolation of values and derivatives that allows for fast querying of the field. We also provide a method to decide when to subdivide a node to achieve a given threshold error. Both the numerical error control and the values needed to build the interpolants require evaluation of the input field; still, both are designed to minimize the total number of evaluations. C0 continuity is guaranteed for both the trilinear and the tricubic version of the algorithm. Furthermore, we describe how to preserve C1 continuity between nodes of different levels when using a tricubic interpolant, and provide a proof that this property is maintained. Finally, we illustrate the usage of our approach in several applications, including direct rendering using sphere marching.
This work has been partially funded by Ministeri de Ciència i Innovació (MICIN), Agencia Estatal de Investigación (AEI) and the Fons Europeu de Desenvolupament Regional (FEDER) (project PID2021-122136OB-C21 funded by MCIN/AEI/10.13039/501100011033/FEDER, UE). The first author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of his FPI-UPC predoctoral grant.
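To make a subdivision criterion of this kind concrete, the following sketch (our own simplification, not the paper's exact error bound) samples a signed distance field at a cell's eight corners and subdivides when the trilinear interpolant at the cell centre deviates from the true field by more than a tolerance.

```python
# Trilinear interpolation of corner samples of a signed distance field,
# plus a toy error test deciding whether a cell needs subdivision.
import math

def sphere_sdf(x, y, z):
    """Signed distance to a unit sphere centered at the origin."""
    return math.sqrt(x * x + y * y + z * z) - 1.0

def trilinear(c, x, y, z):
    """Interpolate the 2x2x2 corner values c[i][j][k] at (x, y, z) in [0,1]^3."""
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    return (c00 * (1 - y) + c10 * y) * (1 - z) + (c01 * (1 - y) + c11 * y) * z

def needs_subdivision(sdf, lo, hi, tol):
    """Subdivide when the trilinear estimate at the cell centre is off by > tol."""
    c = [[[sdf(lo[0] + i * (hi[0] - lo[0]),
               lo[1] + j * (hi[1] - lo[1]),
               lo[2] + k * (hi[2] - lo[2]))
           for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
    mid = [(a + b) / 2 for a, b in zip(lo, hi)]
    return abs(trilinear(c, 0.5, 0.5, 0.5) - sdf(*mid)) > tol
```

A cell straddling the sphere's centre, where the field is strongly curved, triggers subdivision, while a small cell away from the origin, where the field is nearly linear, does not.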
MR Shuffling: Accelerated Single-Scan Multi-Contrast Magnetic Resonance Imaging
Magnetic resonance imaging (MRI) is an attractive medical imaging modality as it is non-invasive and does not involve ionizing radiation. Routine clinical MRI exams obtain MR images corresponding to different soft tissue contrasts by performing multiple scans. When two-dimensional (2D) imaging is used, these scans are often repeated in other scanning planes. As a result, the number of scans comprising an MRI exam leads to prohibitively long exam times compared to other medical imaging modalities such as computed tomography. Many approaches have been designed to accelerate the MRI acquisition while maintaining diagnostic quality.
One approach is to collect multiple measurements while the MRI signal is evolving due to relaxation. This enables a reduction in scan time, as fewer acquisition windows are needed to collect the same number of measurements. However, when the temporal aspect of the acquisition is left unmodeled, artifacts are likely to appear in the reconstruction. Most often, these artifacts manifest as image blurring. The effect depends on the acquisition parameters as well as the tissue relaxation itself, resulting in spatially varying blurring. The severity of the artifacts is directly related to the level of acceleration, and thus presents a tradeoff with scan time. The effect is amplified when imaging in three dimensions, severely limiting scan efficiency. Volumetric variants would be used if not for the blurring, as they are able to reconstruct images at isotropic resolution and support multi-planar reformatting.
Another established acceleration technique, called parallel imaging, takes advantage of spatially sensitive receive coil arrays to collect multiple MRI measurements in parallel. Thus, the acquisition is shortened, and the reconstruction uses the spatial sensitivity information to recover the image.
More recently, methods have been developed that leverage image structure such as sparsity and low rank to reduce the required number of samples for a well-posed reconstruction. Compressed sensing and its low-rank extensions use these concepts to acquire incoherent measurements below the Nyquist rate. These techniques are especially suited to MRI, as incoherent measurements can be easily achieved through pseudo-random under-sampling. As the mechanisms behind parallel imaging and compressed sensing are fundamentally different, they can be combined to achieve even higher acceleration.
This dissertation proposes accelerated MRI acquisition and reconstruction techniques that account for the temporal dynamics of the MR signal. The methods build on parallel imaging and compressed sensing to reduce scan time and flexibly model the temporal relaxation behavior. By randomly shuffling the sampling in the acquisition stage and imposing low-rank constraints in the reconstruction stage, intrinsic physical parameters are modeled and their dynamics are recovered as multiple images of varying tissue contrast. Additionally, blurring artifacts are significantly reduced, as the temporal dynamics are accounted for in the reconstruction.
This dissertation first introduces T2 Shuffling, a volumetric technique that reduces blurring and reconstructs multiple T2-weighted image contrasts from a single acquisition. The method is integrated into a clinical hospital environment and evaluated on patients. Next, this dissertation develops a fast and distributed reconstruction for T2 Shuffling that achieves clinically relevant processing latency. Clinical validation results are shown comparing T2 Shuffling as a single-sequence alternative to conventional pediatric knee MRI. Based on these compelling results, a fast targeted knee MRI using T2 Shuffling is implemented, enabling same-day access to MRI at one-third the cost of the conventional exam.
To date, over 2,400 T2 Shuffling patient scans have been performed.
Continuing the theme of accelerated multi-contrast imaging, this dissertation extends the temporal signal model with T1-T2 Shuffling. Building on T2 Shuffling, the new method additionally samples multiple points along the saturation recovery curve by varying the repetition time durations during the scan. Since the signal dynamics are governed by both T1 recovery and T2 relaxation, the reconstruction captures information about both intrinsic tissue parameters. As a result, multiple target synthetic contrast images are reconstructed, all from a single scan. Approaches for selecting the sequence parameters are provided, and the method is evaluated on in vivo brain imaging of a volunteer.
Altogether, these methods comprise the theme of MR Shuffling, and may open new pathways toward fast clinical MRI
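The low-rank constraints imposed during reconstruction are typically enforced with a proximal step; a standard building block for this (though not necessarily the exact operator used in T2 Shuffling) is singular-value soft-thresholding, the proximal operator of the nuclear norm, sketched below.

```python
# Singular-value soft-thresholding (SVT): shrink each singular value by tau
# and discard those that fall to zero, yielding a lower-rank matrix. This is
# the proximal operator of the nuclear norm used in low-rank reconstruction.
import numpy as np

def svt(matrix, tau):
    """Return argmin_X 0.5 * ||X - matrix||_F^2 + tau * ||X||_* (closed form)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (u * s) @ vt
```

Applied to a matrix with singular values (3, 2, 0.5) and tau = 1, SVT returns a rank-2 matrix with singular values (2, 1), illustrating how small temporal components are suppressed while dominant signal dynamics survive.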
Regular Grids: An Irregular Approach to the 3D Modelling Pipeline
The 3D modelling pipeline covers the process by which a physical object is scanned to create a set of points that lay on its surface. These data are then cleaned to remove outliers or noise, and the points are reconstructed into a digital representation of the original object.
The aim of this thesis is to present novel grid-based methods and provide several case studies of areas in the 3D modelling pipeline in which they may be effectively put to use.
The first contribution is a demonstration of how using a grid allows a significant reduction in the memory required to perform the reconstruction. The second is the detection of surface features (ridges, peaks, troughs, etc.) during the surface reconstruction process.
The third contribution is the alignment of two meshes with zero prior knowledge. This is particularly suited to aligning two related, but not identical, models. The final contribution is the comparison of two similar meshes, with support for both qualitative and quantitative outputs
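One way a regular grid can cut memory, as in the first contribution above, is to store only occupied cells: hashing scanned points into sparse cell buckets makes storage scale with the occupied part of the domain rather than its bounding box. The sketch below uses illustrative names and is not the thesis's specific data structure.

```python
# Sparse uniform grid over a point cloud: only occupied cells consume memory,
# and the cell key doubles as a spatial index for neighbour queries.
from collections import defaultdict

def build_grid(points, cell):
    """Hash each point into the grid cell of side length `cell` that contains it."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer cell coordinates
        grid[key].append(p)
    return grid
```

Neighbour lookups then touch at most the 27 cells around a query point instead of the whole cloud, which is what makes grid-based reconstruction and comparison tractable on large scans.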