308 research outputs found

    Smoothness-Increasing Accuracy-Conserving (SIAC) filtering and quasi interpolation: A unified view

    Filtering plays a crucial role in postprocessing and analyzing data in scientific and engineering applications. Various application-specific filtering schemes have been proposed based on particular design criteria. In this paper, we focus on establishing the theoretical connection between quasi-interpolation and a class of B-spline-based kernels specifically designed for postprocessing the discontinuous Galerkin (DG) method, known as Smoothness-Increasing Accuracy-Conserving (SIAC) filters. SIAC filtering, as the name suggests, aims to increase the smoothness of the DG approximation while conserving the inherent accuracy of the DG solution (superconvergence). The superconvergence properties of SIAC filtering have been studied in the literature. In this paper, we present theoretical results that connect SIAC filtering to long-standing concepts in approximation theory such as quasi-interpolation and polynomial reproduction. This connection bridges the gap between the two related disciplines and provides a decisive advance both in designing new filters and in the mathematical analysis of their properties. In particular, we derive a closed-form expression for the convolution of SIAC kernels with polynomials. We also compare and contrast cardinal spline functions, as an example of filters designed for image-processing applications, with SIAC filters of the same order, and study their properties.
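As a concrete illustration of the link between SIAC filtering and polynomial reproduction, the sketch below (illustrative code, not the paper's implementation) builds a symmetric kernel from 2r+1 shifted central B-splines and determines its coefficients from the moment conditions that are equivalent to reproducing polynomials up to degree 2r; for a DG approximation of degree k one typically takes r = k and B-splines of order k+1. The function names and the choice r = 2, spline order 3 are assumptions made for this example.

```python
import numpy as np

def central_bspline(x, n):
    """Central B-spline of order n (degree n - 1), supported on [-n/2, n/2]."""
    x = np.asarray(x, dtype=float)
    if n == 1:
        return np.where((x >= -0.5) & (x < 0.5), 1.0, 0.0)
    return ((x + n / 2) * central_bspline(x + 0.5, n - 1)
            + (n / 2 - x) * central_bspline(x - 0.5, n - 1)) / (n - 1)

def siac_coefficients(r, n, num_quad=4000):
    """Coefficients c_g, g = -r..r, of K(x) = sum_g c_g B_n(x - g), chosen so that
    the kernel has zeroth moment 1 and vanishing moments 1..2r, i.e. it
    reproduces polynomials up to degree 2r."""
    shifts = np.arange(-r, r + 1)
    half = n / 2.0 + r                         # support radius of the full kernel
    t = np.linspace(-half, half, num_quad)
    dt = t[1] - t[0]
    A = np.zeros((2 * r + 1, 2 * r + 1))
    for j, g in enumerate(shifts):
        Bg = central_bspline(t - g, n)
        for m in range(2 * r + 1):
            A[m, j] = np.sum(Bg * t**m) * dt   # m-th moment of the shifted spline
    rhs = np.zeros(2 * r + 1)
    rhs[0] = 1.0
    return shifts, np.linalg.solve(A, rhs)

def siac_kernel(x, r, n):
    shifts, c = siac_coefficients(r, n)
    return sum(cg * central_bspline(x - g, n) for g, cg in zip(shifts, c))

# Check polynomial reproduction: the convolution of K with p(x) = x^2 is x^2.
r, n = 2, 3                                    # 2r + 1 = 5 B-splines of order 3
half = n / 2.0 + r
t = np.linspace(-half, half, 8001)
K = siac_kernel(t, r, n)
x0 = 0.7
conv = np.sum(K * (x0 - t) ** 2) * (t[1] - t[0])   # (K * p)(x0)
print(conv, x0**2)                                 # agree up to quadrature error
```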

    Isogeometric Approximation of Variational Problems for Shells

    The interaction of applied geometry and numerical simulation is a growing field in the interplay of computer graphics, computational mechanics and applied mathematics known as isogeometric analysis. In this thesis we apply and analyze Loop subdivision surfaces as an isogeometric tool because they provide great flexibility in handling surfaces of arbitrary topology combined with higher-order smoothness. Compared with finite element methods, isogeometric methods are known to require far fewer degrees of freedom for the modeling of complex surfaces, but at the same time the assembly of the isogeometric matrices is much more time-consuming. Therefore, we implement the isogeometric subdivision method and analyze the experimental convergence behavior for different quadrature schemes. The mid-edge quadrature combines robustness and efficiency, where efficiency is additionally increased via lookup tables. For the first time, the lookup tables allow simulations with control meshes of arbitrary closed connectivity without an initial subdivision step, i.e., triangles can have more than one vertex with valence different from six. Geometric evolution problems have many applications in materials science, surface processing and modeling, bio-mechanics, elasticity and physical simulations. These evolution problems are often based on the gradient flow of a geometric energy depending on the first and second fundamental forms of the surface. The isogeometric approach allows a conforming higher-order spatial discretization of these geometric evolutions. To overcome a time-error-dominated scheme, we combine higher-order space and time discretizations, where the time discretization is based on implicit Runge-Kutta methods. We prove that the energy diminishes in every time step in the fully discrete setting under mild time-step restrictions, which is the crucial characteristic of a gradient flow. The overall setup allows for a general type of fourth-order energies. Among others, we perform experiments for Willmore flow with respect to different metrics. In the last chapter of this thesis we apply the time-discrete geodesic calculus in shape space to the space of subdivision shells. By approximating the squared Riemannian distance by a suitable energy, this approach defines a discrete path energy for a consistent computation of geodesics, logarithm and exponential maps and parallel transport. As approximation we pick an elastic shell energy, which measures the deformation of a shell by membrane and bending contributions of its mid-surface. Bézier curves are a fundamental tool in computer-aided geometric design. We extend them to the subdivision shell space by generalizing the de Casteljau algorithm. Since the evaluation of a Bézier curve depends on all of its control data, we introduce B-splines and cardinal splines in shape space by gluing together piecewise Bézier curves in a smooth way. We show examples of quadratic and cubic Bézier curves, quadratic and cubic B-splines as well as cardinal splines in subdivision shell space.
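The generalized de Casteljau algorithm mentioned in the abstract replaces straight-line interpolation between control points by geodesic interpolation in the shell space. The following sketch shows only this algorithmic skeleton, with Euclidean averaging as a stand-in for the discrete geodesic interpolation between subdivision shells; the `interpolate` hook and the planar example are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def de_casteljau(control_points, t, interpolate=None):
    """Evaluate a Bezier curve at parameter t by repeated pairwise interpolation.

    In Euclidean space `interpolate` is straight-line interpolation; in a shape
    space such as the space of subdivision shells it would be replaced by a
    (discrete) geodesic interpolation between two shapes, which is the
    generalization referred to in the abstract."""
    if interpolate is None:
        interpolate = lambda p, q, s: (1.0 - s) * p + s * q   # Euclidean geodesic
    pts = [np.asarray(p, dtype=float) for p in control_points]
    while len(pts) > 1:
        pts = [interpolate(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Sanity check with a planar cubic Bezier curve.
ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(de_casteljau(ctrl, 0.5))   # midpoint of the curve: [0.5, 0.75]
```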

    Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials

    Measuring displacement and strain fields at low observable scales in complex microstructures remains a challenge in experimental mechanics, often because of the combination of low-definition images with poor texture at this scale. The problem is particularly acute in the case of cellular materials, when imaged by conventional micro-tomographs, for which complex, highly non-linear local phenomena can occur. As the validation of numerical models and the identification of mechanical properties of materials must rely on accurate measurements of displacement and strain fields, the design and implementation of robust and faithful image correlation algorithms must be conducted. With cellular materials, the use of digital volume correlation (DVC) faces a paradox: in the absence of markings or exploitable texture on or in the struts or cell walls, the available speckle is formed by the material architecture itself. This leads to the inability of classical DVC codes to measure kinematics at the cellular and, a fortiori, sub-cellular scales, precisely because the interpolation basis of the displacement field cannot account for the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of the thesis is to develop a DVC technique for the measurement of displacement fields in cellular materials at the scale of their architecture. The proposed solution consists in assisting DVC by a weak elastic regularization using an automatic image-based mechanical model. The proposed method introduces a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two particular methods are considered: the finite element method (FEM) and the finite cell method (FCM). The FCM is a fictitious domain method that consists in immersing the complex geometry in a high-order structured grid and does not require meshing. In this context, various discretization parameters are delicate to choose. In this work, these parameters are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the mechanical image-based models to regularize digital image correlation (DIC), several virtual experiments are performed in two dimensions in order to finely analyze the influence of the introduced regularization lengths for different input mechanical behaviors (elastic, elasto-plastic and geometrically non-linear) and in comparison with ground truth. We show that the method can estimate complex local displacement and strain fields with speckle-free, low-definition images, even in non-linear regimes such as local buckling. Finally, a three-dimensional generalization is performed through the development of a DVC framework. It takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry, considering either an immersed structured B-spline grid of arbitrary order or a finite element mesh. Experimental evidence of the efficiency and accuracy of the proposed approach is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
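The idea of assisting image correlation with a weak mechanical regularization can be caricatured in one dimension: minimize the correlation residual plus a small quadratic penalty on displacement differences, and solve with Gauss-Newton iterations. The sketch below is a toy analogue under these simplifications; the first-difference (Tikhonov-like) operator stands in for the image-based elastic model of the thesis, and all names and parameter values are assumptions for the example.

```python
import numpy as np

def regularized_correlation_1d(f, g, n_nodes=6, lam=1e-1, n_iter=30):
    """Toy 1D analogue of regularized DIC/DVC: find nodal displacements u that
    minimize  sum_x [f(x) - g(x + u(x))]^2 + lam * ||D u||^2,  where u is
    interpolated linearly between nodes and D is a first-difference operator
    standing in for the elastic (image-based) regularization."""
    x = np.arange(f.size, dtype=float)
    nodes = np.linspace(0.0, f.size - 1.0, n_nodes)
    # Hat (P1) shape functions evaluated at every pixel.
    N = np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) /
                   (nodes[1] - nodes[0]))
    D = np.diff(np.eye(n_nodes), axis=0)          # regularization operator
    dg = np.gradient(g)                           # deformed-image gradient
    u = np.zeros(n_nodes)
    for _ in range(n_iter):                       # Gauss-Newton iterations
        xu = np.clip(x + N @ u, 0, f.size - 1)
        res = f - np.interp(xu, x, g)             # correlation residual
        J = -np.interp(xu, x, dg)[:, None] * N    # d(res)/du
        H = J.T @ J + lam * D.T @ D
        b = -J.T @ res - lam * D.T @ (D @ u)
        u += np.linalg.solve(H, b)
    return nodes, u

# Synthetic test: the deformed signal g is f shifted by a smooth displacement.
x = np.arange(200, dtype=float)
f = np.sin(0.15 * x) + 0.5 * np.sin(0.05 * x + 1.0)
true_u = 3.0 * np.sin(2.0 * np.pi * x / 200.0)
g = np.interp(x - true_u, x, f)                   # small-strain approximation
nodes, u = regularized_correlation_1d(f, g)
print(np.round(u, 2))                             # close to true_u at the nodes
```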

    Curve Skeleton and Moments of Area Supported Beam Parametrization in Multi-Objective Compliance Structural Optimization

    This work addresses the end-to-end virtual automation of structural optimization, up to the derivation of a parametric geometry model that can be used for application areas such as additive manufacturing or the verification of the structural optimization result with the finite element method. It is investigated in general whether a holistic design in structural optimization can be achieved with the weighted sum method and then automatically parameterized with curve skeletonization and cross-section regression, in order to virtually verify the result and control the local size for additive manufacturing. In this work, a holistic design is understood as a design that considers various compliances as an objective function. The parameterization uses the automated determination of beam parameters by so-called curve skeletonization with subsequent cross-section shape parameter estimation based on moments of area, especially for multi-objective optimized shapes. An essential contribution is the linking of the parameterization with the results of the structural optimization, e.g., to include properties such as boundary conditions, load conditions, sensitivities or even density variables in the curve-skeleton parameterization. The parameterization focuses on guiding the skeletonization based on the information provided by the optimization and the finite element model. In addition, the cross-section detection considers circular, elliptical, and tensor-product spline cross-sections that can be applied to various shape descriptors such as convolutional surfaces, subdivision surfaces, or constructive solid geometry. The shape parameters of these cross-sections are estimated using stiffness distributions, moments of area of 2D images, and convolutional neural networks with a loss function tailored to moments of area. Each final geometry is designed by extruding the cross-section along the appropriate curve segment of the beam and joining it to other beams by using only unification operations. The multi-objective structural optimization, which considers 1D, 2D and 3D elements, focuses on cases that can be modeled by the Poisson equation and linear elasticity. This enables the development of designs in application areas such as thermal conduction, electrostatics, magnetostatics, potential flow, linear elasticity and diffusion, which can be optimized in combination or individually. Due to the simplicity of the cases defined by the Poisson equation, no experts are required, so that many conceptual designs can be generated and reconstructed by ordinary users with little effort. Specifically for 1D elements, element stiffness matrices for tensor-product spline cross-sections are derived, which can be used to optimize a variety of lattice structures and automatically convert them into free-form surfaces. For 2D elements, non-local trigonometric interpolation functions are used, which should significantly increase the interpretability of the density distribution. To further improve the optimization, a parameter-free mesh deformation is embedded so that the compliances can be further reduced by locally shifting the node positions. Finally, the proposed end-to-end optimization and parameterization is applied to verify a linear elasto-static optimization result and to satisfy the local size constraint for manufacturing, by selective laser melting, a heat-transfer optimization result for a CPU heat sink.
For the elasto-static case, the parameterization is adjusted until a certain criterion (displacement) is satisfied, while for the heat transfer case, the manufacturing constraints are satisfied by automatically changing the local size with the proposed parameterization. This heat sink is then manufactured without manual adjustment and experimentally validated to limit the temperature of a CPU to a certain level.
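The moments-of-area step used for cross-section shape estimation can be sketched directly: from a binary cross-section image, compute area, centroid and central second moments of area, then derive an equivalent ellipse (semi-axes and orientation). The code below shows only this single step under the assumption of a clean binary mask; the stiffness-weighted distributions, spline control-grid regression and CNN-based estimation described above are not reproduced, and all names are illustrative.

```python
import numpy as np

def moments_of_area(mask, pixel_size=1.0):
    """Area, centroid and central second moments of area of a binary image."""
    ys, xs = np.nonzero(mask)
    dA = pixel_size ** 2
    area = xs.size * dA
    xc, yc = xs.mean() * pixel_size, ys.mean() * pixel_size
    x, y = xs * pixel_size - xc, ys * pixel_size - yc
    Ixx = np.sum(y * y) * dA                # second moment about the x-axis
    Iyy = np.sum(x * x) * dA                # second moment about the y-axis
    Ixy = np.sum(x * y) * dA                # product of area
    return area, (xc, yc), (Ixx, Iyy, Ixy)

def equivalent_ellipse(area, Ixx, Iyy, Ixy):
    """Semi-axes and orientation of the ellipse with the same area and moments."""
    M = np.array([[Iyy, Ixy], [Ixy, Ixx]])  # coordinate second-moment matrix
    evals, evecs = np.linalg.eigh(M)        # eigenvalues sorted ascending
    b, a = 2.0 * np.sqrt(evals / area)      # ellipse: lambda_max = A * a^2 / 4
    angle = np.arctan2(evecs[1, 1], evecs[0, 1]) % np.pi   # major-axis direction
    return a, b, angle

# Synthetic check: ellipse with semi-axes 30 px and 15 px, rotated by 30 degrees.
yy, xx = np.mgrid[0:201, 0:201] - 100.0
phi = np.deg2rad(30.0)
xr = xx * np.cos(phi) + yy * np.sin(phi)
yr = -xx * np.sin(phi) + yy * np.cos(phi)
mask = (xr / 30.0) ** 2 + (yr / 15.0) ** 2 <= 1.0
area, _, (Ixx, Iyy, Ixy) = moments_of_area(mask)
a, b, angle = equivalent_ellipse(area, Ixx, Iyy, Ixy)
print(round(a, 1), round(b, 1), round(float(np.rad2deg(angle)), 1))  # ~30.0 15.0 30.0
```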

    B-splines for sparse grids : algorithms and application to higher-dimensional optimization

    In simulation technology, computationally expensive objective functions are often replaced by cheap surrogates, which can be obtained by interpolation. Full-grid interpolation methods suffer from the so-called curse of dimensionality, rendering them infeasible if the parameter domain of the function is higher-dimensional (four or more parameters). Sparse grids constitute a discretization method that drastically eases the curse, while the approximation quality deteriorates only insignificantly. However, conventional basis functions such as piecewise linear functions are not smooth (continuously differentiable). Hence, these basis functions are unsuitable for applications in which gradients are required. One example of such an application is gradient-based optimization, in which the availability of gradients greatly improves the speed of convergence and the accuracy of the results. This thesis demonstrates that hierarchical B-splines on sparse grids are well suited for obtaining smooth interpolants for higher dimensionalities. The thesis is organized in two main parts: In the first part, we derive new B-spline bases on sparse grids and study their implications for theory and algorithms. In the second part, we consider three real-world applications in optimization: topology optimization, biomechanical continuum mechanics, and dynamic portfolio choice models in finance. The results reveal that the optimization problems of these applications can be solved accurately and efficiently with hierarchical B-splines on sparse grids.
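The claim that sparse grids ease the curse of dimensionality can be made concrete by enumerating the points of a regular sparse grid (without boundary points) and comparing their number with the corresponding full grid. The sketch below uses the standard level-sum construction |l|_1 <= n + d - 1 on which hierarchical (B-)spline bases are defined; the chosen level and dimensions are arbitrary, and the code is a generic illustration rather than code from the thesis.

```python
from itertools import product

def sparse_grid_points(dim, level):
    """Points of a regular sparse grid without boundary points: the union of the
    hierarchical subgrids of multi-level l (l_i >= 1) with |l|_1 <= level + dim - 1.
    The level-l subgrid consists of the points i_k / 2**l_k with odd i_k."""
    pts = set()
    for l in product(range(1, level + 1), repeat=dim):
        if sum(l) > level + dim - 1:
            continue
        ranges = [range(1, 2 ** lk, 2) for lk in l]   # odd indices only
        for idx in product(*ranges):
            pts.add(tuple(i / 2.0 ** lk for i, lk in zip(idx, l)))
    return pts

level = 5
for dim in (2, 4, 6):
    n_sparse = len(sparse_grid_points(dim, level))
    n_full = (2 ** level - 1) ** dim
    print(f"d = {dim}: {n_sparse} sparse grid points vs. {n_full} full grid points")
```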

    Novel mesh generation method for accurate image-based computational modelling of blood vessels
