60 research outputs found
Topology Optimization of Structures with High Spatial Definition Considering Minimum Weight and Stress Constraints
Programa Oficial de Doutoramento en Enxeñaría Civil. 5011V01 [Abstract]
The first formulation of Topology Optimization was proposed in 1988. Since then,
many contributions have been presented with the purpose of improving its efficiency
and extending its applicability. In this thesis, a topology optimization algorithm is
developed that obtains the minimum-weight structure able to support different loads.
For this purpose, the requirement that stresses remain below a maximum value has
been incorporated into its development.
Although the structural topology optimization problem with stress constraints has
previously been formulated with several different approaches, a Damage Constraint
approach is developed in this thesis to incorporate them in a different way. The main
objective of this modification is to reduce the CPU time required to solve the
topology optimization problem. This reduction makes it possible to solve problems with
a larger number of design variables, which in turn enables solutions with high
spatial definition.
Moreover, two different approaches are used to define the material distribution in the
domain: a uniform density-per-element formulation and a material density distribution
defined by isogeometric interpolation. The first approach uses the Finite Element Method
(FEM) to solve the structural analysis and takes the relative density of each element
of the mesh as design variable, while the second uses Isogeometric
Analysis (IGA) for the structural analysis and takes the values of the relative density
at a certain number of control points as design variables.
On the other hand, the optimization is addressed using Sequential Linear Programming,
which requires a first-order sensitivity analysis. All sensitivities are
obtained through analytic derivatives, using both direct differentiation and the adjoint
variable method. Finally, some application examples are solved by means of both
methods (FEM and IGA) in the two-dimensional and three-dimensional space.
Ministerio de Economía y Competitividad; DPI2015-68341-R
Ministerio de Economía y Competitividad; RTI2018-093366-B-I00
Xunta de Galicia; GRC2014/039
Xunta de Galicia; GRC2018/4
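In generic terms, the minimum-weight, stress-constrained problem addressed in the thesis can be stated as follows (a standard discrete formulation shown only for orientation; the symbols are assumptions, not the thesis's exact notation):

```latex
\begin{aligned}
\min_{\boldsymbol{\rho}} \quad & W(\boldsymbol{\rho}) = \sum_{e} \rho_e \, \gamma \, V_e
  && \text{(structural weight)} \\
\text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}
  && \text{(state equation, FEM or IGA)} \\
& \widehat{\sigma}(\mathbf{u},\boldsymbol{\rho}) \le \sigma_{\max}
  && \text{(stress constraint)} \\
& 0 < \rho_{\min} \le \rho_e \le 1
  && \text{(relative densities as design variables)}
\end{aligned}
```

In the Sequential Linear Programming approach mentioned above, this problem is solved as a sequence of linearizations around the current design, each requiring the first-order sensitivities obtained by direct differentiation or the adjoint variable method.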
Isogeometric shape optimization of smoothed petal auxetic structures via computational periodic homogenization
An important feature that drives the auxetic behaviour of star-shaped auxetic structures is the hinge-like connection at the vertices. This feature poses a great challenge for manufacturing and may lead to significant stress concentrations. To overcome these problems, we introduced smoothed petal-shaped auxetic structures, where the hinges are replaced by smoothed connections. To accommodate the curved features of the petal-shaped auxetics, a parametrisation modelling scheme using multiple NURBS patches is proposed. Next, an integrated shape design framework using isogeometric analysis is adopted to improve the structural performance. To ensure a minimum thickness for each member, a geometry sizing constraint is imposed via piecewise bounding polynomials. This geometry sizing constraint, in the context of isogeometric shape optimization, is particularly interesting due to the non-interpolatory nature of the NURBS basis. The effective Poisson's ratio is used directly as the objective function, and an adjoint sensitivity analysis is carried out. The optimized designs, smoothed petal auxetic structures, are shown to achieve low negative Poisson's ratios, while the difficulties of manufacturing the hinges are avoided. For the case with six petals, in-plane isotropy is achieved.
Singapore MOE Tier 2 Grant R30200013911
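The multi-patch NURBS parametrisation above rests on standard rational B-spline evaluation. A minimal sketch of that machinery (a generic Cox-de Boor recursion, not the paper's code; the quarter-circle example is an illustrative assumption):

```python
import numpy as np

def frac(num, den):
    """Convention 0/0 = 0 used in the Cox-de Boor recursion."""
    return num / den if den != 0.0 else 0.0

def bspline_basis(i, p, u, U):
    """i-th B-spline basis function of degree p on knot vector U, at parameter u."""
    if p == 0:
        # half-open knot spans; close the last non-empty span so u = U[-1] is covered
        if U[i] <= u < U[i + 1] or (u == U[-1] and U[i] < U[i + 1] == U[-1]):
            return 1.0
        return 0.0
    left = frac(u - U[i], U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = frac(U[i + p + 1] - u, U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, p, U, ctrl, w):
    """Evaluate a NURBS curve: weighted (rational) combination of B-spline bases."""
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(ctrl))])
    R = N * w                      # weighted basis values
    return (R @ ctrl) / R.sum()    # projective division

# Quarter unit circle as a single rational quadratic patch (exact representation)
p = 2
U = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])

for u in np.linspace(0.0, 1.0, 9):
    x, y = nurbs_point(u, p, U, ctrl, w)
    assert abs(np.hypot(x, y) - 1.0) < 1e-12   # every point lies on the unit circle
```

The non-interpolatory nature of the NURBS basis mentioned in the abstract is visible here: the middle control point (1, 1) is not on the curve, which is why geometric (e.g. minimum-thickness) constraints cannot be imposed directly on control-point locations.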
GRIDS-Net: Inverse shape design and identification of scatterers via geometric regularization and physics-embedded deep learning
This study presents a deep learning based methodology for both remote sensing
and design of acoustic scatterers. The ability to determine the shape of a
scatterer, either in the context of material design or sensing, plays a
critical role in many practical engineering problems. This class of inverse
problems is extremely challenging due to their high-dimensional, nonlinear, and
ill-posed nature. To overcome these technical hurdles, we introduce a geometric
regularization approach for deep neural networks (DNN) based on non-uniform
rational B-splines (NURBS) and capable of predicting complex 2D scatterer
geometries in a parsimonious dimensional representation. Then, this geometric
regularization is combined with physics-embedded learning and integrated within
a robust convolutional autoencoder (CAE) architecture to accurately predict the
shape of 2D scatterers in the context of identification and inverse design
problems. An extensive numerical study is presented in order to showcase the
remarkable ability of this approach to handle complex scatterer geometries
while generating physically-consistent acoustic fields. The study also assesses
and contrasts the role played by the (weakly) embedded physics in the
convergence of the DNN predictions to a physically consistent inverse design.
A sharp interface isogeometric strategy for moving boundary problems
The proposed methodology is first utilized to model stationary and propagating cracks. The crack face is enriched with the Heaviside function, which captures the displacement discontinuity. Meanwhile, the crack tips are enriched with asymptotic displacement functions to reproduce the tip singularity. The enriching degrees of freedom associated with the crack tips are chosen as stress intensity factors (SIFs) such that these quantities can be directly extracted from the solution without a posteriori integral calculations.
As a second application, the Stefan problem is modeled with a hybrid function/derivative enriched interface. Since the interface geometry is explicitly defined, normals and curvatures can be analytically obtained at any point on the interface, allowing for complex boundary conditions dependent on curvature or normal to be naturally imposed. Thus, the enriched approximation naturally captures the interfacial discontinuity in temperature gradient and enables the imposition of Gibbs-Thomson condition during solidification simulation.
Lastly, shape optimization through the configuration of finite-sized heterogeneities is studied. The optimization relies on the recently derived configurational derivative, which describes the sensitivity of an arbitrary objective with respect to arbitrary design modifications of a heterogeneity inserted into a domain. The THB-splines, which serve as the underlying approximation, produce a sufficiently smooth solution near the boundaries of the heterogeneity for accurate calculation of the configurational derivatives. (Abstract shortened by ProQuest.)
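In partition-of-unity form, the enriched approximation described above can be sketched as follows (a generic XFEM/GFEM-style ansatz with standard notation; not necessarily the exact formulation of the thesis):

```latex
\mathbf{u}^h(\mathbf{x}) =
  \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
+ \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
+ \sum_{k \in K} N_k(\mathbf{x}) \sum_{l} F_l(r,\theta)\,\mathbf{b}_{kl}
```

Here H is the Heaviside function capturing the displacement jump across the crack face and F_l(r, θ) are the asymptotic tip functions; choosing the tip degrees of freedom b_kl so that they coincide with the stress intensity factors is what allows the SIFs to be read directly from the solution.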
Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials
Measuring displacement and strain fields at low observable scales in complex microstructures still remains a challenge in experimental mechanics, often because of the combination of low definition images with poor texture at this scale. The problem is particularly acute in the case of cellular materials, when imaged by conventional micro-tomographs, for which complex highly non-linear local phenomena can occur. As the validation of numerical models and the identification of mechanical properties of materials must rely on accurate measurements of displacement and strain fields, the design and implementation of robust and faithful image correlation algorithms must be conducted. With cellular materials, the use of digital volume correlation (DVC) faces a paradox: in the absence of markings of exploitable texture on or in the struts or cell walls, the available speckle will be formed by the material architecture itself.
This leads to the inability of classical DVC codes to measure kinematics at the cellular and, a fortiori, sub-cellular scales, precisely because the interpolation basis of the displacement field cannot account for the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of the thesis is to develop a DVC technique for the measurement of displacement fields in cellular materials at the scale of their architecture. The proposed solution consists in assisting DVC with a weak elastic regularization using an automatic image-based mechanical model. The proposed method introduces a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two particular methods are considered: the finite element method (FEM) and the finite cell method (FCM). The FCM is a fictitious domain method that consists in immersing the complex geometry in a high-order structured grid and does not require meshing. In this context, various discretization parameters are delicate to choose. In this work, these parameters are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the mechanical image-based models to regularize DIC, several virtual experiments are performed in two dimensions in order to finely analyze the influence of the introduced regularization lengths for different input mechanical behaviors (elastic, elasto-plastic and geometrically non-linear) and in comparison with ground truth. We show that the method can estimate complex local displacement and strain fields with speckle-free low-definition images, even in non-linear regimes such as local buckling.
Finally, a three-dimensional generalization is performed through the development of a DVC framework. It takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry. It considers either an immersed structured B-spline grid of arbitrary order or a finite-element mesh. Experimental evidence is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
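The mechanically assisted correlation described above can be summarized, in a generic form (the symbols are illustrative assumptions, not the thesis's exact functional), as a penalized DVC problem:

```latex
\min_{\mathbf{u}} \;
\underbrace{\int_{\Omega} \big( f(\mathbf{x}) - g(\mathbf{x} + \mathbf{u}(\mathbf{x})) \big)^2 \, \mathrm{d}\mathbf{x}}_{\text{image correlation (DVC)}}
\; + \; \lambda(\ell_{\mathrm{reg}}) \,
\underbrace{\big\| \mathbf{K}\,\mathbf{u} \big\|^2}_{\text{equilibrium gap of the image-based model}}
```

f and g are the reference and deformed volumes, K is the stiffness matrix of the automatically built FEM or FCM model, and the weight λ sets the regularization length ℓ_reg below which the elastic model dominates the measured kinematics, realizing the separation of scales described in the abstract.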
An adaptive space-time phase field formulation for dynamic fracture of brittle shells based on LR NURBS
We present an adaptive space-time phase field formulation for dynamic
fracture of brittle shells. Their deformation is characterized by the
Kirchhoff-Love thin shell theory using a curvilinear surface description. All
kinematical objects are defined on the shell's mid-plane. The evolution
equation for the phase field is determined by the minimization of an energy
functional based on Griffith's theory of brittle fracture. Membrane and bending
contributions to the fracture process are modeled separately and a thickness
integration is established for the latter. The coupled system consists of two
nonlinear fourth-order PDEs and all quantities are defined on an evolving
two-dimensional manifold. Since the weak form requires C^1-continuity,
isogeometric shape functions are used. The mesh is adaptively refined based on
the phase field using Locally Refinable (LR) NURBS. Time is discretized based
on a generalized-α method using adaptive time-stepping, and the
discretized coupled system is solved with a monolithic Newton-Raphson scheme.
The interaction between surface deformation and crack evolution is demonstrated
by several numerical examples showing dynamic crack propagation and branching.
Supplementary movies are available at
https://av.tib.eu/series/641/supplemental+videos+of+the+paper+an+adaptive+space+time+phase+field+formulation+for+dynamic+fracture+of+brittle+shells+based+on+lr+nurb
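For orientation, the energy minimized by such phase-field fracture models can be written in its common second-order (AT2) bulk form (shown only as a standard illustration; the shell formulation of the paper separates membrane and bending contributions and leads to the fourth-order PDEs mentioned above):

```latex
E(\mathbf{u}, \phi) =
\int_{\Omega} (1-\phi)^2 \, \Psi_{\mathrm{el}}(\mathbf{u}) \, \mathrm{d}v
\; + \; G_c \int_{\Omega} \left( \frac{\phi^2}{2\ell} + \frac{\ell}{2} \, |\nabla \phi|^2 \right) \mathrm{d}v
```

φ ∈ [0, 1] is the fracture phase field, G_c the critical energy release rate, and ℓ the regularization length; stationarity of E yields the coupled evolution equations whose minimization is referred to in the abstract.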
Learning Relaxation for Multigrid
During the last decade, Neural Networks (NNs) have proved to be extremely
effective tools in many fields of engineering, including autonomous vehicles,
medical diagnosis and search engines, and even in art creation. Indeed, NNs
often decisively outperform traditional algorithms. One area that is only
recently attracting significant interest is using NNs for designing numerical
solvers, particularly for discretized partial differential equations. Several
recent papers have considered employing NNs for developing multigrid methods,
which are a leading computational tool for solving discretized partial
differential equations and other sparse-matrix problems. We extend these new
ideas, focusing on so-called relaxation operators (also called smoothers),
which are an important component of the multigrid algorithm that has not yet
received much attention in this context. We explore an approach for using NNs
to learn relaxation parameters for an ensemble of diffusion operators with
random coefficients, for Jacobi-type smoothers and for 4-color Gauss-Seidel
smoothers. The latter yield exceptionally efficient and easy-to-parallelize
Successive Over Relaxation (SOR) smoothers. Moreover, this work demonstrates
that learning relaxation parameters on relatively small grids using a two-grid
method and Gelfand's formula as a loss function can be implemented easily.
These methods efficiently produce nearly-optimal parameters, thereby
significantly improving the convergence rate of multigrid algorithms on large
grids.
Comment: This research was carried out under the supervision of Prof. Irad Yavneh and Prof. Ron Kimmel.
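The use of Gelfand's formula, ρ(M) = lim_k ||M^k||^{1/k}, as a convergence measure can be illustrated on a damped-Jacobi smoother for the 1D Poisson problem (a generic sketch, not the paper's neural-network setup; the grid size and the damping ω = 2/3 are illustrative assumptions):

```python
import numpy as np

n, omega = 31, 2.0 / 3.0

# 1D Poisson matrix, stencil [-1, 2, -1] (Dirichlet boundaries)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Damped-Jacobi iteration matrix M = I - omega * D^{-1} A, with D = 2 I here
M = np.eye(n) - (omega / 2.0) * A

# Gelfand's formula: rho(M) = lim_{k->inf} ||M^k||^{1/k}; since M is
# symmetric, the 2-norm makes the estimate exact for every k
k = 20
rho_gelfand = np.linalg.norm(np.linalg.matrix_power(M, k), 2) ** (1.0 / k)
rho_true = np.max(np.abs(np.linalg.eigvalsh(M)))

# Smoothing factor: worst damping over the high-frequency (oscillatory) modes,
# using the known Fourier modes of the 1D Poisson operator
theta = np.arange(1, n + 1) * np.pi / (n + 1)      # mode angles
lam = 1.0 - omega * (1.0 - np.cos(theta))          # eigenvalue of M per mode
smoothing_factor = np.max(np.abs(lam[theta >= np.pi / 2.0]))

print(rho_gelfand, rho_true, smoothing_factor)     # smoothing factor is 1/3
```

This separates the two quantities a learned relaxation must trade off: the overall spectral radius (close to 1, since smooth modes converge slowly and are left to coarse grids) and the smoothing factor on oscillatory modes (1/3 for ω = 2/3), which is what a relaxation parameter search would actually optimize.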