Data Fusion of Surface Meshes and Volumetric Representations
The term Data Fusion refers to integrating knowledge from at least two independent sources of information such that the result is more than merely the sum of all inputs. In our project, the knowledge about a given specimen comprises its acquisitions from optical 3D scans and Computed Tomography, with a special focus on limited-angle artifacts. In industrial quality inspection, these imaging techniques are commonly used for non-destructive testing. Additional sources of information are digital descriptions for manufacturing and tactile measurements of the specimen. Hence, we have several representations of the object as a whole, each with certain shortcomings and unique insights. We strive to combine their strengths and compensate for their weaknesses in order to create an enhanced representation of the acquired object. To achieve this, the first task is to identify correspondences between the representations. We extract a subset with prominent exterior features from each input, because all acquisitions include these features. To this end, regional queries from random seeds on an enclosing hull are employed. Subsequently, the relative orientation of the original data sets is calculated from these subsets, as they comprise the - potentially defective - areas of overlap. We consider global features such as principal components and barycenters for the alignment, since in this specific case classical point-to-point comparisons are prone to error. Our alignment scheme outperforms traditional approaches and can even be enhanced by considering limited-angle artifacts in the reconstruction process of Computed Tomography. An analysis of local gradients in the resulting volumetric representation allows us to distinguish between reliable observations and defects. Lastly, tactile measurements are extremely accurate but lack a suitable 3D representation, so we also present an approach for converting them into a 3D surface that suits our workflow.
As a result, the respective inputs are now aligned with each other, indicate the quality of the information they contain, and are in a compatible format to be combined in a subsequent step. The data fusion result permits more accurate metrological tasks and increases the precision of detecting flaws in production or indications of wear. The final step of combining the data sets is briefly presented here along with the resulting augmented representation, but its full details are the subject of another PhD thesis within our joint project.
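The correspondence-free alignment idea sketched in this abstract (barycenters plus principal axes instead of point-to-point matching) can be illustrated with a small hypothetical example; this is our own sketch, not the project's implementation, and the sign-disambiguation trick via skewness is a generic assumption:

```python
import numpy as np

def _fix_signs(X, V):
    """Resolve the per-axis sign ambiguity of principal axes by
    requiring positive skewness of the projections (a common trick;
    it assumes the projections are not perfectly symmetric)."""
    proj = X @ V.T
    s = np.sign((proj ** 3).sum(axis=0))
    s[s == 0] = 1.0
    return V * s[:, None]

def align_by_global_features(source, target):
    """Rigid alignment from global features only (barycenters and
    principal axes), avoiding point-to-point correspondences.
    Returns (R, t) such that target ~= source @ R.T + t."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    _, _, Vs = np.linalg.svd(source - mu_s, full_matrices=False)
    _, _, Vt = np.linalg.svd(target - mu_t, full_matrices=False)
    Vs = _fix_signs(source - mu_s, Vs)
    Vt = _fix_signs(target - mu_t, Vt)
    R = Vt.T @ Vs
    if np.linalg.det(R) < 0:          # guard: enforce a proper rotation
        Vs[-1] *= -1.0
        R = Vt.T @ Vs
    return R, mu_t - R @ mu_s

# Toy usage: recover a known rotation and translation of a point cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3)) * np.array([3.0, 2.0, 1.0])
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = align_by_global_features(pts, moved)
err = np.abs(pts @ R.T + t - moved).max()
```

Because only aggregate statistics enter, the scheme tolerates locally defective overlap regions, which is the motivation the abstract gives for avoiding classical point-to-point comparisons.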
Diffusion in multi-dimensional solids using Forman's combinatorial differential forms
The formulation of combinatorial differential forms, proposed by Forman for
analysis of topological properties of discrete complexes, is extended by
defining the operators required for analysis of physical processes dependent on
scalar variables. The resulting description is intrinsic, different from the
approach known as Discrete Exterior Calculus, because it does not assume the
existence of smooth vector fields and forms extrinsic to the discrete complex.
In addition, the proposed formulation provides a significant new modelling
capability: physical processes may be set to operate differently on cells with
different dimensions within a complex. An application of the new method to the
heat/diffusion equation is presented to demonstrate how it captures the effect
of changing properties of microstructural elements on the macroscopic behavior.
The proposed method is applicable to a range of physical problems, including
heat, mass and charge diffusion, and flow through porous media.
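The modelling capability described here (diffusion coefficients attached to individual discrete elements) can be illustrated with a minimal sketch; this is a plain explicit finite-difference scheme of our own, not Forman's combinatorial formulation, and the grid and coefficient values are invented:

```python
import numpy as np

# Explicit heat diffusion on a 1D chain of cells, where each edge
# carries its own conductivity -- mimicking a process that acts
# differently on different microstructural elements.
n = 50
u = np.zeros(n)
u[n // 2] = 1.0                  # initial heat spike
k = np.ones(n - 1)               # per-edge conductivities
k[n // 3: 2 * n // 3] = 0.1      # a poorly conducting inclusion
dt = 0.2                         # stable: dt * (k_left + k_right) <= 1

total0 = u.sum()
for _ in range(200):
    flux = k * np.diff(u)        # Fick's law, evaluated per edge
    u[:-1] += dt * flux          # explicit update with
    u[1:]  -= dt * flux          # zero-flux boundaries
```

The update conserves total heat exactly and the low-conductivity inclusion visibly slows the spread, which is the kind of microstructure-to-macroscale effect the abstract refers to.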
Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT
We introduce a new geometric approach for the homogenization and inverse
homogenization of the divergence form elliptic operator with rough conductivity
coefficients in dimension two. We show that conductivity
coefficients are in one-to-one correspondence with divergence-free matrices and
convex functions over the domain. Although homogenization is a
non-linear and non-injective operator when applied directly to conductivity
coefficients, homogenization becomes a linear interpolation operator over
triangulations of the domain when re-expressed using convex functions, and is a
volume averaging operator when re-expressed with divergence-free matrices.
Using optimal weighted Delaunay triangulations for linearly interpolating
convex functions, we obtain an optimally robust homogenization algorithm for
arbitrary rough coefficients. Next, we consider inverse homogenization and show
how to decompose it into a linear ill-posed problem and a well-posed non-linear
problem. We apply this new geometric approach to Electrical Impedance
Tomography (EIT). It is known that the EIT problem admits at most one isotropic
solution. If an isotropic solution exists, we show how to compute it from any
conductivity having the same boundary Dirichlet-to-Neumann map. It is known
that the EIT problem admits a unique (stable with respect to G-convergence)
solution in the space of divergence-free matrices. As such we suggest that the
space of convex functions is the natural space in which to parameterize
solutions of the EIT problem.
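The setting the abstract describes can be written compactly in standard notation; this is a generic sketch (the symbols $\Omega$, $\sigma$, $s$, $\mathcal{T}_h$ are ours, not necessarily the paper's):

```latex
% Divergence-form elliptic problem with rough conductivity \sigma:
-\,\operatorname{div}\!\big(\sigma(x)\,\nabla u(x)\big) = f(x),
  \qquad x \in \Omega \subset \mathbb{R}^{2}.
% In two dimensions, \sigma is put in correspondence with a convex
% potential s on \Omega; homogenization, non-linear in \sigma itself,
% then acts as plain piecewise-linear interpolation of s over a
% (weighted Delaunay) triangulation \mathcal{T}_h of \Omega:
\sigma \;\longmapsto\; s, \qquad
\overline{s} = I_{\mathcal{T}_h}\, s
\quad \text{(piecewise-linear interpolant on } \mathcal{T}_h\text{)}.
```

The point of the reparameterization is visible here: interpolation is linear in $s$, so the otherwise non-linear, non-injective homogenization map becomes tractable.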
Courbure discrète : théorie et applications (Discrete curvature: theory and applications)
The present volume contains the proceedings of the 2013 Meeting on discrete curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding a common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony of this work.
Discrete Differential Geometry
This is the collection of extended abstracts for the 26 lectures and the open problem session at the fourth Oberwolfach workshop on Discrete Differential Geometry.
Nonstandard Finite Element Methods
[no abstract available]
Comparing Boolean Operation Methods on 3D Solids
Geometric engines are developed to answer geometric queries such as: what is the
volume of a shape? Developing, testing and maintaining a geometric engine that can
be used generically to answer arbitrary geometric queries is a tedious and
time-consuming task, and thousands of work hours are spent towards this purpose.
A very important element of such geometric engines is the set of Boolean operations
on 3D objects. Boolean operations can be used to develop powerful tools for
CAD/CAM applications, with which end users can save thousands of work hours during
modeling. While robust Boolean operations on 3D objects are difficult to implement,
once available, many geometric queries can be reduced to a collection of Boolean
operations. This reduction saves thousands of hours for the developers of such
CAD/CAM applications.
The goal of this thesis is to compare the Boolean implementation of Tekla Structures
with the Boolean implementation of CGAL and a recently introduced method, EMBER.
Using the results of this thesis, previously unidentified vulnerabilities in Tekla
Structures' Boolean implementation can be identified and addressed.
Quantitative results showed that Tekla Structures' Boolean implementation, while
fast, was less robust than CGAL and EMBER for the union and difference operations,
while doing remarkably well in the intersection operations.
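To illustrate what the three operations under comparison compute, here is a hypothetical sketch on a voxel grid, where Boolean operations on solids reduce to element-wise logic (mesh-based engines such as those compared in the thesis operate exactly on boundary representations instead; the shapes and grid size are invented):

```python
import numpy as np

# Two solids as occupancy (voxel) arrays on a common grid.
n = 64
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

sphere = x**2 + y**2 + z**2 <= 0.8**2
box = (np.abs(x) <= 0.5) & (np.abs(y) <= 0.5) & (np.abs(z) <= 0.5)

# The three Boolean operations become bitwise set operations.
union = sphere | box
intersection = sphere & box
difference = sphere & ~box       # sphere minus box
```

A useful sanity check on any Boolean implementation is inclusion-exclusion: the volume of the union must equal the sum of the input volumes minus the volume of the intersection, which holds exactly voxel-wise here.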
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms are developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
To facilitate study of the neonatal cortex a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
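The core of the EM approach mentioned above can be illustrated with a minimal two-class Gaussian intensity mixture; this is a generic textbook EM sketch on synthetic data, not the thesis's method (which additionally corrects mislabelled partial volume voxels):

```python
import numpy as np

# Synthetic 1D "voxel intensities" from two tissue classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500),   # class A intensities
                    rng.normal(5.0, 1.0, 500)])  # class B intensities

# Initial guesses for means, standard deviations, and class weights.
mu = np.array([-1.0, 6.0])
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each class for each voxel.
    pdf = (np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
           / (sigma * np.sqrt(2.0 * np.pi)))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

In the neonatal setting the abstract describes, the reversed grey/white contrast does not break this machinery; the difficulty it highlights is the soft class membership of partial volume voxels, which the thesis handles with an explicit correction step.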
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Advances in Discrete Differential Geometry
Differential Geometry