55 research outputs found

    Theory and applications of bijective barycentric mappings

    Barycentric coordinates provide a convenient way to represent a point inside a triangle as a convex combination of the triangle's vertices, and to linearly interpolate data given at these vertices. Due to their favourable properties, they are commonly applied in geometric modelling, finite element methods, computer graphics, and many other fields. In some of these applications it is desirable to extend the concept of barycentric coordinates from triangles to polygons, and several variants of such generalized barycentric coordinates have been proposed in recent years. An important application of barycentric coordinates is barycentric mappings, which allow one to naturally warp a source polygon to a corresponding target polygon, or more generally, to create mappings between closed curves or polyhedra. The principal practical application is image warping, which takes as input a control polygon drawn around an image and smoothly warps the image by moving the polygon vertices. A key requirement in image warping is to avoid fold-overs in the resulting image. The problem of fold-overs is a manifestation of a larger problem: the lack of bijectivity of the barycentric mapping. Unfortunately, bijectivity of such barycentric mappings can only be guaranteed for the special case of warping between convex polygons, or by triangulating the domain and hence sacrificing smoothness. In fact, for any barycentric coordinates, it is always possible to construct a pair of polygons such that the barycentric mapping is not bijective. In the first part of this thesis we present three methods for achieving bijective mappings. The first method is based on the intuition that, if two polygons are sufficiently close, then the mapping is close to the identity and hence bijective.
This suggests splitting the mapping into several intermediate mappings and creating a composite barycentric mapping, which is guaranteed to be bijective between arbitrary polygons, polyhedra, or closed planar curves. We provide theoretical bounds on the bijectivity of the composite mapping related to the norm of the gradient of the coordinates. Since the bound depends on the gradient, it exists only if the gradient of the coordinates is bounded. We focus on mean value coordinates and analyse the behaviour of their directional derivatives and gradient at the vertices of a polygon. The composition of barycentric mappings for closed planar curves leads to the problem of blending between two planar curves. We propose solving it by linearly interpolating the signed curvature and then reconstructing the intermediate curve from the interpolated curvature values. However, when both input curves are closed, this strategy can lead to open intermediate curves. We present a new algorithm for solving this problem, which finds the closed curve whose curvature is closest to the interpolated values. Our method relies on the definition of a suitable metric for measuring the distance between two planar curves and an appropriate discretization of the signed curvature functions. The second method, which constructs smooth bijective mappings with prescribed behaviour along the domain boundary, exploits the properties of harmonic maps. These maps can be approximated in different ways, and we discuss their respective advantages and disadvantages. We further present a simple procedure for reducing their distortion and demonstrate the effectiveness of our approach with examples. The last method relies on a reformulation of complex barycentric mappings, which allows us to modify the "speed" along the edges to create complex bijective mappings. We provide some initial results and an optimization procedure which creates complex bijective maps.
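The curvature-based blending step above can be sketched in a few lines: linearly interpolate two signed-curvature samplings and reconstruct a curve by integrating the tangent angle. This is a minimal illustration under a uniform arc-length discretization, with illustrative names; it is not the thesis's algorithm, and in particular it does not enforce closure of the intermediate curve, which is precisely the problem the thesis addresses.

```python
import math

def curve_from_curvature(kappa, ds, start=(0.0, 0.0), theta0=0.0):
    """Reconstruct a planar curve from sampled signed curvature by
    integrating the tangent angle (explicit Euler, uniform step ds)."""
    pts = [start]
    theta = theta0
    x, y = start
    for k in kappa:
        theta += k * ds            # d(theta)/ds = signed curvature
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts

def blend_curvatures(kappa_a, kappa_b, t):
    """Linear interpolation of two signed-curvature samplings."""
    return [(1.0 - t) * a + t * b for a, b in zip(kappa_a, kappa_b)]

# Example: a unit circle has constant signed curvature 1; reconstructing it
# from n uniform samples yields a closed regular polygon approximation.
n = 2000
ds = 2.0 * math.pi / n
circle = curve_from_curvature([1.0] * n, ds)
```

For two closed input curves, feeding `blend_curvatures(...)` into `curve_from_curvature` generally produces an open intermediate curve, motivating the closest-closed-curve formulation described above.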
In the second part we present two main applications of bijective mappings. The first is in the context of finite element simulations, where the discretization of the computational domain plays a central role. In the standard discretization, the domain is triangulated with a mesh and its boundary is approximated by a polygon. We present an approach which combines parametric finite elements with smooth bijective mappings, leaving the choice of approximation spaces free. This approach allows us to represent arbitrarily complex geometries on coarse meshes with curved edges, regardless of the complexity of the domain boundary. The main idea is to use a bijective mapping to automatically warp the volume of a simple parametrization domain to the complex computational domain, thus creating a curved mesh of the latter. The second application addresses the meshing problem and the possibility of solving finite element simulations on polygonal meshes. In this context we present several methods to discretize the bijective mapping in order to create polygonal and piecewise polynomial meshes.
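The warps discussed throughout this abstract are built from generalized barycentric coordinates. As a concrete illustration, here is a minimal sketch of mean value coordinates for a point strictly inside a polygon, and of the induced polygon-to-polygon map (an illustrative reading of the standard formula, not code from the thesis):

```python
import math

def mean_value_coordinates(polygon, p):
    """Mean value coordinates of a point p strictly inside a closed
    polygon given as a list of (x, y) vertices."""
    n = len(polygon)
    d = [(vx - p[0], vy - p[1]) for vx, vy in polygon]  # spokes p -> v_i
    r = [math.hypot(dx, dy) for dx, dy in d]
    # tan(alpha_i / 2) for the angle between consecutive spokes i, i+1
    t = []
    for i in range(n):
        j = (i + 1) % n
        cross = d[i][0] * d[j][1] - d[i][1] * d[j][0]
        dot = d[i][0] * d[j][0] + d[i][1] * d[j][1]
        t.append(cross / (r[i] * r[j] + dot))
    w = [(t[i - 1] + t[i]) / r[i] for i in range(n)]    # unnormalised
    s = sum(w)
    return [wi / s for wi in w]

def barycentric_map(source_poly, target_poly, p):
    """Warp p by transferring its coordinates from source to target."""
    lam = mean_value_coordinates(source_poly, p)
    return (sum(l * q[0] for l, q in zip(lam, target_poly)),
            sum(l * q[1] for l, q in zip(lam, target_poly)))
```

The coordinates sum to one and reproduce the point itself (linear precision), so mapping to an affinely transformed target applies that affine map exactly; for non-convex targets, however, the map may fold over, which is the bijectivity problem the thesis tackles.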

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This need first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations - data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain - still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, the topology of brain vessels is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join along minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph matching techniques from computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches.
Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analogue equivalents derived from clinical angiographies. In addition, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations in an isogeometric analysis framework, where the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validate the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
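The minimum-spanning-tree extraction mentioned above reduces, once pairwise geodesic costs between vessel nodes are available, to a classical graph problem. A minimal generic sketch using Prim's algorithm (names and the toy graph are illustrative; this is not the VTrails implementation):

```python
import heapq

def minimum_spanning_tree(n, edges):
    """Prim's algorithm. `edges[u]` is a list of (cost, v) pairs; in the
    vascular-graph setting the costs would be geodesic path lengths."""
    visited = [False] * n
    tree, total = [], 0.0
    heap = [(0.0, 0, -1)]              # (cost, node, parent)
    while heap:
        cost, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if parent >= 0:
            tree.append((parent, u, cost))
            total += cost
        for c, v in edges[u]:
            if not visited[v]:
                heapq.heappush(heap, (c, v, u))
    return tree, total

# Toy example: 4 nodes with symmetric costs
edges = [[(1.0, 1), (4.0, 2)],
         [(1.0, 0), (2.0, 2), (5.0, 3)],
         [(4.0, 0), (2.0, 1), (1.0, 3)],
         [(5.0, 1), (1.0, 2)]]
tree, total = minimum_spanning_tree(4, edges)   # total cost 4.0
```

Running the tree extraction on an over-connected geodesic graph prunes redundant connections while preserving the connectedness that the framework optimises for.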

    Computational Multiscale Methods

    Computational multiscale methods play an important role in many modern computer simulations in the material sciences, involving different time scales and different scales in space. Besides addressing various computational challenges, the meeting brought together applications from many disciplines and scientists from various scientific communities.

    Multiscale methods for fabrication design

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 135-146). Modern manufacturing technologies such as 3D printing enable the fabrication of objects with extraordinary complexity. Arranging materials to form functional structures can achieve a much wider range of physical properties than the constituent materials alone. Many applications have been demonstrated in the fields of mechanics, acoustics, optics, and electromagnetics. Unfortunately, it is difficult to design objects manually in the large combinatorial space of possible designs. Computational design algorithms have been developed to automatically design objects with specified physical properties. However, many types of physical properties are still very challenging to optimize, because predictive and efficient simulations are not available for problems such as high-resolution non-linear elasticity or dynamics with friction and impact. For simpler problems such as linear elasticity, where accurate simulation is available, the simulation resolution handled by desktop workstations is still orders of magnitude below available printing resolutions. We propose to speed up the simulation and inverse design of fabricable objects by using multiscale methods. Our method computes coarse-scale simulation meshes with data-driven material models. It improves simulation efficiency while preserving the characteristic deformation and motion of elastic objects. The first step in our method is to construct a library of microstructures together with their material properties, such as Young's modulus and Poisson's ratio. The range of achievable material properties is called the material property gamut.
We developed an efficient sampling method to compute the gamut, focusing on finding samples near and outside the currently sampled gamut. Next, with a pre-computed gamut, functional objects can be simulated and designed using microstructures instead of the base materials. This allows us to simulate and optimize complex objects at a much coarser scale to improve simulation efficiency. The speed improvement leads to designs with as many as a trillion voxels, matching printer resolutions. It also enables computational design of dynamic properties that can be faithfully reproduced in reality. In addition to efficient design optimization, the gamut representation of the microstructure envelope provides a way to discover templates of microstructures with extremal physical properties. In contrast to work where such templates are constructed by hand, our work provides the first computational method to automatically discover microstructure templates that arise from voxel representations. by Desai Chen. Ph. D.
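The idea of "finding samples near and outside the currently sampled gamut" can be illustrated with a simple farthest-point heuristic in material-property space: among candidate microstructures, prefer the one whose properties lie farthest from everything sampled so far. This is a hypothetical stand-in for the thesis's sampling method, intended only to convey the gamut-expansion intuition; all names are ours.

```python
import math

def expand_gamut(samples, candidates):
    """Farthest-point heuristic: pick the candidate property point (e.g. a
    (Young's modulus, Poisson's ratio) pair) whose distance to the current
    sample set is largest, i.e. the one most likely to lie outside the
    sampled gamut."""
    def dist_to_set(c):
        return min(math.dist(c, s) for s in samples)
    return max(candidates, key=dist_to_set)

# Toy property space: two samples, three candidates; the outlier wins.
samples = [(0.0, 0.0), (1.0, 0.0)]
candidates = [(0.5, 0.1), (3.0, 0.0), (1.0, 1.0)]
best = expand_gamut(samples, candidates)
```

In practice one would iterate, simulating the chosen microstructure, adding its measured properties to `samples`, and repeating until the gamut stops growing.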

    ISCR Annual Report: Fiscal Year 2004


    Adaptive construction of surrogate functions for various computational mechanics models

    In most science and engineering fields, numerical simulation models are often used to replicate physical systems. Attempting to imitate the true behavior of complex systems results in computationally expensive simulation models. These models are more often than not associated with a number of parameters that may be uncertain or variable. Propagating variability from the input parameters of a simulation model to the output quantities is important for better understanding the system behavior. Variability propagation for complex systems requires repeated runs of costly simulation models with different inputs, which can be prohibitively expensive. Thus, for efficient propagation, the total number of model evaluations needs to be as few as possible. An efficient way to account for variations in the output of interest with respect to these parameters is to develop black-box surrogates. This involves replacing the expensive high-fidelity simulation model by a much cheaper model (the surrogate) built from a limited number of high-fidelity simulations on a set of points called the design of experiments (DoE). The obvious challenge in surrogate modeling is to efficiently deal with simulation models that are expensive and contain a large number of uncertain parameters. Moreover, replicating different types of physical systems results in simulation models that vary based on the type of output (discrete or continuous models), the extent of model output information (knowledge of the output, of the output gradients, or of both), and whether the model is stochastic or deterministic in nature. All these variations in information from one model to the other demand different surrogate modeling algorithms for maximum efficiency. In this dissertation, simulation models related to application problems in the field of solid mechanics are considered that belong to each of the above-mentioned classes of models.
Different surrogate modeling strategies are proposed to deal with these models, and their performance is demonstrated and compared with that of existing surrogate modeling algorithms. Because of their non-intrusive nature, the developed algorithms can easily be extended to simulation models of similar classes in any other field of application.
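The surrogate workflow described above can be made concrete with a deliberately small example: evaluate an "expensive" model on a design of experiments, fit a cheap polynomial surrogate by least squares, and query the surrogate instead of the model. This is purely illustrative (the dissertation develops adaptive strategies far beyond a fixed quadratic fit), and all names are ours.

```python
def fit_quadratic_surrogate(doe, responses):
    """Least-squares quadratic surrogate y ~ a + b*x + c*x**2 fitted to
    (doe, responses); solves the 3x3 normal equations by elimination."""
    # Normal equations A^T A coef = A^T y for the basis [1, x, x^2]
    A = [[1.0, x, x * x] for x in doe]
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
           for i in range(3)]
    aty = [sum(A[k][i] * responses[k] for k in range(len(A))) for i in range(3)]
    # Gaussian elimination with partial pivoting on the augmented system
    M = [row[:] + [b] for row, b in zip(ata, aty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (M[r][3] - sum(M[r][c] * coef[c] for c in range(r + 1, 3))) / M[r][r]
    return lambda x: coef[0] + coef[1] * x + coef[2] * x * x

# Stand-in for an expensive simulation, sampled on a small DoE
expensive = lambda x: (x - 1.0) ** 2
doe = [0.0, 0.5, 1.0, 1.5, 2.0]
surrogate = fit_quadratic_surrogate(doe, [expensive(x) for x in doe])
```

After the five "expensive" evaluations on the DoE, every subsequent query hits only the cheap surrogate, which is the essence of the variability-propagation speed-up.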