
    A machine learning approach as a surrogate for a finite element analysis: Status of research and application to one dimensional systems

    Current maintenance intervals of mechanical systems are scheduled a priori based on the life of the system, resulting in expensive maintenance scheduling and often undermining the safety of passengers. Going forward, the actual usage of a vehicle will be used to predict stresses in its structure and, therefore, to define a specific maintenance schedule. Machine learning (ML) algorithms can be used to map a reduced set of data coming from real-time measurements of a structure into a detailed, high-fidelity finite element analysis (FEA) model of the same system. As a result, the FEA-based ML approach can directly estimate the stress distribution over the entire system during operation, improving the ability to define ad hoc, safe, and efficient maintenance procedures. The paper first reviews the current state of the art of ML methods applied to finite elements. A surrogate finite element approach based on ML algorithms is then proposed to estimate the time-varying response of a one-dimensional beam. Several ML regression models, such as decision trees and artificial neural networks, have been developed, and their performance is compared for direct estimation of the stress distribution over a beam structure. The surrogate finite element models based on ML algorithms are able to estimate the response of the beam accurately, with artificial neural networks providing the most accurate results.
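The mapping from a few measured quantities to a full stress field can be sketched with standard regression tools. The snippet below is a minimal illustration, not the paper's setup: it uses an analytic cantilever bending-stress formula as a stand-in for the high-fidelity FEA model, and a scikit-learn decision tree (one of the regressor families the abstract mentions) as the surrogate; all dimensions and loads are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Stand-in "high-fidelity" model: analytic bending stress of a cantilever
# beam under a tip load P (all dimensions are invented for illustration).
L_beam = 2.0      # beam length [m]
c = 0.05          # distance from neutral axis to outer fibre [m]
I = 1.0e-6        # second moment of area [m^4]

def bending_stress(P, x):
    """sigma(x) = P * (L - x) * c / I, the quantity an FEA solve would give."""
    return P * (L_beam - x) * c / I

# "Real-time measurements": random tip loads and query positions.
rng = np.random.default_rng(0)
P = rng.uniform(100.0, 1000.0, 5000)
x = rng.uniform(0.0, L_beam, 5000)
X = np.column_stack([P, x])
y = bending_stress(P, x)

# Train a decision-tree surrogate and score it on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
r2 = surrogate.score(X_te, y_te)   # coefficient of determination
```

A neural-network regressor (e.g. `sklearn.neural_network.MLPRegressor`) slots into the same train/score pattern, which is how model families can be compared in the way the abstract describes.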

    A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning

    Many geological materials, such as shale, mudstone, carbonate rock, limestone and rock salt, are multi-porosity porous media in which pores of different scales may co-exist in the host matrix. When fractures propagate in these multi-porosity materials, these pores may enlarge and coalesce and therefore change the magnitude and the principal directions of the effective permeability tensors. The pore fluid inside the cracks and the pores of the host matrix may interact and exchange fluid mass, but the difference in hydraulic properties of these pores often means that a single homogenized effective permeability tensor field is insufficient to characterize the evolving hydraulic properties of these materials at smaller time scales. Furthermore, the complexity of the hydro-mechanical coupling process and the mechanical and hydraulic anisotropy originating from micro-fracture and plasticity at the grain scale also make it difficult to propose, implement and validate separate macroscopic constitutive laws for numerical simulations. This article presents a hybrid data-driven method designed to capture the multiscale hydro-mechanical coupling effect of porous media with pores of various sizes. At each scale, data-driven models generated from supervised machine learning are hybridized with classical constitutive laws in a directed graph that represents the numerical model. By using sub-scale simulations to generate a database to train material models, an offline homogenization procedure replaces the up-scaling procedure to generate cohesive laws for localized physical discontinuities at both the grain and specimen scales. Through a proper homogenization procedure that preserves spatial length scales, the proposed method enables field-scale simulations to gather insights from meso-scale and grain-scale microstructural attributes. This method proves to be much more computationally efficient than the classical DEM–FEM or FEM² approach, while at the same time being more robust and flexible than the classical surrogate modeling approach. Owing to the use of the bridging-scale technique, the proposed model may provide multiple opportunities to incorporate different types of simulations and experimental data across different length scales for machine learning. Numerical issues are also discussed.
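The idea of replacing a hand-derived constitutive law with a model trained on sub-scale simulation data can be illustrated in a few lines. The sketch below is hypothetical: a Kozeny-Carman-like porosity-permeability closure plays the role of the sub-scale simulation database, and a log-space least-squares fit stands in for the supervised models hybridized into the directed graph; none of the constants come from the article.

```python
import numpy as np

# Toy stand-in for a sub-scale simulation database: effective permeability
# k_eff as a function of porosity phi (Kozeny-Carman-like closure; the
# prefactor and exponents are illustrative, not from the paper).
rng = np.random.default_rng(1)
phi = rng.uniform(0.05, 0.35, 400)
k_eff = 1e-12 * phi**3 / (1.0 - phi)**2          # "simulated" training data

# Data-driven surrogate: a least-squares fit in log space replaces the
# hand-derived constitutive law inside the macro-scale solver.
A = np.column_stack([np.ones_like(phi), np.log(phi), np.log(1.0 - phi)])
coef, *_ = np.linalg.lstsq(A, np.log(k_eff), rcond=None)

def k_surrogate(p):
    """Trained closure: porosity in, effective permeability out."""
    return np.exp(coef[0] + coef[1] * np.log(p) + coef[2] * np.log(1.0 - p))

# Relative error of the surrogate at an unseen porosity value.
k_ref = 1e-12 * 0.2**3 / 0.8**2
err = abs(k_surrogate(0.2) - k_ref) / k_ref
```

In the article's setting the trained model is one node of a directed graph of constitutive relations; here it is a single scalar closure, which keeps the structure of the idea visible.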

    Determination of Aggregate Elastic Properties of Powder-Beds in Additive Manufacturing Using Convolutional Neural Networks

    The most popular strategy for the estimation of effective elastic properties of powder-beds in Additively Manufactured structures (AM structures) is through either the Finite Element Method (FEM) or the Discrete Element Method (DEM). Both of these techniques, however, are computationally expensive for practical applications. This paper presents a novel Convolutional Neural Network (CNN) regression approach to estimate the effective elastic properties of powder-beds in AM structures. In this approach, the time-consuming DEM is used for CNN training purposes and not at run time. The DEM is used to model the interactions of powder particles and to evaluate the macro-level continuum-mechanical state variables (volume averages of stress and strain). For training, the DEM code creates a dataset comprising hundreds of AM structures with their corresponding mechanical properties. The approach utilizes methods from deep learning to train a CNN capable of reducing the computational time needed to predict the effective elastic properties of the aggregate. The saving in computational time can reach 99.9995% compared to DEM, and on average, the difference in predicted effective elastic properties between the DEM code and the trained CNN is less than 4%. The resulting sub-second computational time can be considered a step towards the development of a near real-time process control system capable of predicting the effective elastic properties of the aggregate at any given stage of the manufacturing process.
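The DEM-trains-CNN pipeline can be caricatured with a toy image-to-property regression. In the sketch below, everything is an assumption: random binary 16x16 "powder-bed" images replace the DEM microstructures, the effective modulus is taken to grow linearly with packing density, and a single 3x3 averaging convolution with global average pooling and a least-squares head stands in for the trained CNN.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "DEM" dataset: binary powder-bed images whose effective modulus is
# assumed to grow linearly with packing density (an invented rule, not
# the paper's DEM model).
def make_bed(density):
    return (rng.random((16, 16)) < density).astype(float)

densities = rng.uniform(0.3, 0.8, 200)
images = np.stack([make_bed(d) for d in densities])
E_eff = 10.0 + 50.0 * images.mean(axis=(1, 2))   # "ground-truth" modulus [GPa]

# Minimal CNN-style readout: one 3x3 averaging convolution (valid
# padding), global average pooling, then a least-squares linear head.
kernel = np.full((3, 3), 1.0 / 9.0)

def conv_pool(img):
    conv = sum(kernel[i, j] * img[i:i + 14, j:j + 14]
               for i in range(3) for j in range(3))
    return conv.mean()                            # global average pooling

feats = np.array([conv_pool(im) for im in images])
A = np.column_stack([np.ones_like(feats), feats])
w, *_ = np.linalg.lstsq(A, E_eff, rcond=None)     # fit the linear head
pred = A @ w
max_rel_err = np.max(np.abs(pred - E_eff) / E_eff)
```

A real CNN learns its filters by gradient descent instead of fixing them, but the structure (convolutional features, pooling, regression head, DEM-generated labels) is the same.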

    Casing structural integrity and failure modes in a range of well types: a review.

    This paper focuses on the factors contributing to casing failure, their failure mechanisms and the resulting failure modes. The casing is a critical component in a well and the main mechanical structural barrier element, providing the conduit for oil and gas production over the well lifecycle and beyond. Casings are normally subjected to material degradation, varying local loads, stresses induced during stimulation, natural fractures, and slip and shear during their installation and operation, leading to different kinds of casing failure modes. The review also covers recent developments in casing integrity assessment techniques and their respective limitations. A taxonomy of the major causes and cases of casing failure in different well types is presented, together with an overview of casing utilisation trends and the failure mix by grade. The trend of casing utilisation in the different wells examined shows deep-water and shale gas horizontal wells employing higher tensile grades (P110 & Q125) due to their characteristics. Additionally, this review presents the casing failure mix by grade, with P110 recording the highest number of failure cases owing to its stiffness and its heavy use in injection, shale gas, deep-water and high-pressure, high-temperature (HPHT) wells with high failure probability. A summary of existing tools used for the assessment of well integrity issues and their respective limitations is provided and conclusions are drawn.

    Theory and Practice of Tunnel Engineering

    Tunnel construction is expensive when compared to the construction of other engineering structures. As such, there is always a need to develop more sophisticated and effective methods of construction. There are many long and large tunnels with various purposes in the world, especially for highways, railways, water conveyance, and energy production. Tunnels can be designed effectively by means of two- and three-dimensional numerical models. Ground-structure interaction is one of the significant factors affecting economical and safe design. This book presents recent data on tunnel engineering to improve the theory and practice of the construction of underground structures. It provides an overview of tunneling technology and includes chapters that address analytical and numerical methods for rock load estimation and support-system design, as well as advances in measurement systems for underground structures. The book discusses the empirical, analytical, and numerical methods of tunneling practice worldwide.

    Evaluation of process-structure-property relationships of carbon nanotube forests using simulation and deep learning

    This work aims to explore process-structure-property relationships of carbon nanotube (CNT) forests. CNTs have superior mechanical, electrical and thermal properties that make them suitable for many applications. Yet, due to a lack of manufacturing control, there is a huge performance gap between the promising properties of individual CNTs and the properties of CNT forests, which hinders their adoption in potential industrial applications. In this research, computational modelling, in-situ electron microscopy for CNT synthesis, and data-driven, high-throughput deep convolutional neural networks are employed not only to accelerate the implementation of CNTs in various applications but also to establish a framework for building validated predictive models that can be easily extended to achieve application-tailored synthesis of any material. A time-resolved and physics-based finite element simulation tool is developed in MATLAB to investigate the synthesis of CNT forests, especially to study CNT-CNT interactions, the mechanical forces they generate, and their role in ensemble structure and properties. A companion numerical model with a similar construct is then employed to examine forest mechanical properties in compression. In addition, in-situ experiments are carried out inside an Environmental Scanning Electron Microscope (ESEM) to nucleate and synthesize CNTs. The findings may primarily be used to expand knowledge of forest growth and self-assembly and to validate the assumptions of the simulation package. The SEM images can also be used as a database for constructing a deep learning model to grow CNTs by design. The chemical vapor deposition parameter space of CNT synthesis is so vast that it is not feasible to investigate all conceivable combinations in terms of time and cost. Hence, simulated CNT forest morphology images are used to train machine learning algorithms that are able to predict CNT synthesis conditions based on desired properties. Exceptionally high prediction accuracies of R² > 0.94 are achieved for buckling load and stiffness, as well as accuracies of > 0.91 for the classification task. This high classification accuracy supports the discovery of CNT forest synthesis-structure relationships so that their promising performance can be adopted in real-world applications. We foresee this work as a meaningful step towards creating an unsupervised simulation using machine learning techniques that can seek out the desired CNT forest synthesis parameters to achieve desired property sets for diverse applications.
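The classification task described above (predicting a synthesis class from morphology-derived features) can be sketched with a deliberately simple stand-in for the deep CNN. The features, class structure, and all numbers below are invented for illustration; a nearest-centroid rule replaces the network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the simulated-morphology database: two synthesis
# classes (0 = sparse forest, 1 = dense forest) described by two
# image-derived features.  Feature values are illustrative only.
n = 300
labels = rng.integers(0, 2, n)
feats = np.column_stack([
    rng.normal(8.0 + 4.0 * labels, 1.0),     # mean CNT diameter [nm]
    rng.normal(0.2 + 0.5 * labels, 0.08),    # areal density [-]
])

# Minimal classifier (nearest class centroid on scaled features), in
# place of the paper's deep convolutional network, to show the shape of
# the synthesis-classification task.
mu = np.stack([feats[labels == k].mean(axis=0) for k in (0, 1)])
sd = feats.std(axis=0)                        # per-feature scaling
dist = np.linalg.norm((feats[:, None, :] - mu) / sd, axis=2)
pred = dist.argmin(axis=1)
accuracy = (pred == labels).mean()
```

The real pipeline works on raw morphology images, so the feature extraction itself is learned; this sketch only shows the downstream decision problem.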

    Linear attention coupled Fourier neural operator for simulation of three-dimensional turbulence

    Modeling three-dimensional (3D) turbulence with neural networks is difficult because 3D turbulence is highly nonlinear with many degrees of freedom, and the corresponding simulation is memory-intensive. Recently, the attention mechanism has been shown to be a promising approach to boost the performance of neural networks on turbulence simulation. However, the standard self-attention mechanism uses O(n^2) time and space with respect to the input dimension n, and this quadratic complexity has become the main bottleneck for applying attention to 3D turbulence simulation. In this work, we resolve this issue with the concept of a linear attention network. The linear attention approximates the standard attention by adding two linear projections, reducing the overall self-attention complexity from O(n^2) to O(n) in both time and space. The linear attention coupled Fourier neural operator (LAFNO) is developed for the simulation of 3D turbulence. Numerical simulations show that the linear attention mechanism provides a 40% error reduction at the same level of computational cost, and that LAFNO can accurately reconstruct a variety of statistics and instantaneous spatial structures of 3D turbulence. The linear attention method would be helpful for improving neural network models of 3D nonlinear problems involving high-dimensional data in other scientific domains.
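The O(n^2) to O(n) reduction comes from re-associating the attention product: with a positive feature map phi, phi(Q) (phi(K)^T V) needs only a d-by-d intermediate instead of the n-by-n score matrix. Below is a minimal NumPy check of this identity, using the common elu(x)+1 feature map from the linear-attention literature; the exact construction inside LAFNO may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 512, 16                     # sequence length, feature dimension
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))

def phi(x):
    """Positive feature map elu(x) + 1, a common linear-attention choice."""
    return np.where(x > 0, x + 1.0, np.exp(x))

Qp, Kp = phi(Q), phi(K)
norm = (Qp @ Kp.sum(axis=0))[:, None]     # shared row-wise normalizer

# Quadratic form: (Qp @ Kp.T) materializes an n x n score matrix.
attn_quadratic = (Qp @ Kp.T) @ V / norm

# Linear form: Kp.T @ V is only d x d, so time and memory scale as O(n).
attn_linear = Qp @ (Kp.T @ V) / norm

max_diff = np.abs(attn_quadratic - attn_linear).max()
```

Because matrix multiplication is associative, the two results agree to floating-point precision; only the cost differs.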

    Deep Ritz Method with Adaptive Quadrature for Linear Elasticity

    In this paper, we study the deep Ritz method for solving the linear elasticity equation from a numerical analysis perspective. A modified Ritz formulation using the H^{1/2}(Γ_D) norm is introduced and analyzed for the linear elasticity equation in order to deal with the (essential) Dirichlet boundary condition. We show that the resulting deep Ritz method provides the best approximation among the set of deep neural network (DNN) functions with respect to the "energy" norm. Furthermore, we demonstrate that the total error of the deep Ritz simulation is bounded by the sum of the network approximation error and the numerical integration error, disregarding the algebraic error. To effectively control the numerical integration error, we propose an adaptive quadrature-based numerical integration technique with a residual-based local error indicator. This approach enables efficient approximation of the modified energy functional. Through numerical experiments involving smooth and singular problems, as well as problems with stress concentration, we validate the effectiveness and efficiency of the proposed deep Ritz method with adaptive quadrature.
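The core energy-minimization idea can be shown on a 1D model problem. The sketch below is a hand-made Ritz method, not the paper's DNN formulation: a two-term polynomial ansatz that satisfies the essential boundary condition is optimized against a quadrature approximation of the energy functional, with a plain midpoint rule standing in for the paper's adaptive quadrature.

```python
import numpy as np

# Ritz method for the 1D model problem -u'' = f on (0, 1) with u(0) = 0
# (essential BC) and traction-free end u'(1) = 0, taking f = 1.  The
# trial functions {x, x^2} satisfy u(0) = 0 by construction, mirroring
# how the deep Ritz method builds the Dirichlet condition into (or
# penalizes it onto) the network ansatz.
f = 1.0
basis = [lambda x: x, lambda x: x**2]
basis_d = [lambda x: np.ones_like(x), lambda x: 2.0 * x]   # derivatives

# Midpoint quadrature on (0, 1); in the paper this integration error is
# what the adaptive scheme controls.
xq = (np.arange(2000) + 0.5) / 2000.0

def quad(vals):
    return vals.mean()            # midpoint rule, interval length 1

# Minimizing E(u) = 1/2 * int (u')^2 - int f*u over the trial space is a
# quadratic problem, so the minimizer solves a small linear system.
K = np.array([[quad(di(xq) * dj(xq)) for dj in basis_d] for di in basis_d])
b = np.array([f * quad(v(xq)) for v in basis])
coef = np.linalg.solve(K, b)

u_mid = coef[0] * 0.5 + coef[1] * 0.25    # Ritz solution at x = 0.5
u_exact = 0.5 - 0.125                      # exact u(x) = x - x^2/2
```

With a DNN ansatz the energy is no longer quadratic, so the linear solve is replaced by gradient-based training, but the quadrature-approximated energy being minimized is the same object.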

    A machine learning based material homogenization technique for masonry structures

    Cutting-edge methods in the computational analysis of structures have been developed over the last decades. Such modern tools are helpful for assessing the safety of existing buildings. Two main finite element (FE) modeling approaches have been developed in the field of masonry structures, namely the micro scale and the macro scale. While micro modeling distinguishes between the masonry components in order to accurately represent the typical masonry damage mechanisms in the material constituents, macro modeling considers a single continuum material with smeared properties so that large-scale masonry models can be analyzed. Both techniques have demonstrated their advantages in different structural applications. However, each approach comes with disadvantages. For example, the use of micro modeling is limited to small-scale structures, since the computational effort becomes too expensive for large-scale applications, while macro modeling cannot precisely take into account the complex interaction among masonry components (brick units and mortar joints). Multi-scale techniques have been proposed to combine the accuracy of micro modeling and the computational efficiency of macro modeling. Such procedures consider linked FE analyses at both scales and are based on the concept of a representative volume element (RVE). The analysis of an RVE takes into account the microstructural behavior of the component materials and scales it up to the macro level. In spite of being a very accurate tool for the analysis of masonry structures, multi-scale techniques still exhibit a high computational cost in connecting the FE analyses at the two scales. Machine learning (ML) tools have been used successfully to train specific models by feeding them large volumes of data from different fields, e.g. autonomous driving, face recognition, etc.
    This thesis proposes the use of ML to develop a novel homogenization strategy for the in-plane analysis of masonry structures, in which a continuous nonlinear material law is calibrated using relevant data derived from micro-scale analysis. The proposed method is based on an ML tool that links the macro and micro scales of the analysis by training a macro-model smeared-damage constitutive law with benchmark data from numerical tests on RVE micro-models. In this context, nonlinear numerical tests on masonry micro-models executed in a virtual laboratory provide the benchmark data feeding the ML training procedure. The adopted ML technique allows the accurate and efficient simulation of the anisotropic behavior of masonry material by means of a tensor mapping procedure. The final stage of this novel homogenization method is the definition of a calibrated continuum constitutive model for structural application at the masonry macro scale. The developed technique is applied to the in-plane homogenization of a Flemish bond masonry wall. Evaluation examples based on the simulation of physical laboratory tests show the accuracy of the method when compared with sophisticated micro modeling of the entire structure. Finally, an application example of the novel homogenization technique is given for the pushover analysis of a masonry heritage structure.
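The training step, calibrating a macro-scale law from RVE "virtual laboratory" data, can be illustrated in the linear-elastic limit, where it reduces to recovering a homogenized stiffness from strain-stress pairs. Everything below is a toy stand-in (an invented orthotropic plane-stress stiffness and a least-squares fit) for the thesis' nonlinear smeared-damage calibration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "virtual laboratory": an orthotropic plane-stress stiffness (values
# in MPa, loosely masonry-like but not calibrated to the thesis data)
# generates stress-strain pairs, playing the role of the RVE micro-model.
C_true = np.array([[5000.0, 1200.0,    0.0],
                   [1200.0, 2500.0,    0.0],
                   [   0.0,    0.0,  900.0]])   # Voigt order: [xx, yy, xy]

eps = rng.uniform(-1e-3, 1e-3, (50, 3))         # imposed macro strains
sig = eps @ C_true.T                             # "measured" macro stresses

# Training step: recover the homogenized stiffness by least squares, a
# linear stand-in for training the smeared-damage constitutive law.
C_fit, *_ = np.linalg.lstsq(eps, sig, rcond=None)
C_fit = C_fit.T

err = np.abs(C_fit - C_true).max()
```

In the nonlinear damage setting the fitted object is a trained model rather than a constant matrix, but the data flow (RVE tests in, calibrated macro law out) is the same.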

    Experimental and Data-driven Workflows for Microstructure-based Damage Prediction

    Material fatigue is the most frequent cause of mechanical failure. The degradation mechanisms that govern component lifetime under comparatively pronounced cyclic loads are well understood. Under loads in the macroscopically elastic regime, however, i.e. in (very) high cycle fatigue, the internal structure of a material and the interaction of crystallographic defects determine the lifetime. Under these circumstances, the internal degradation phenomena at the microscopic scale are largely reversible and do not lead to the formation of critical damage capable of continuous growth. However, depending on the local microstructural conditions, some grain ensembles in polycrystalline metals are susceptible to damage initiation, crack formation and crack growth, and therefore act as weak points. Components subjected to such loads consequently often exhibit a pronounced scatter in lifetime. Because a comprehensive mechanistic understanding of these degradation processes across different materials is lacking, current modeling efforts usually predict the mean lifetime and its variance with unsatisfactory accuracy. This, in turn, complicates component design and necessitates safety factors during dimensioning. A remedy is to collect extensive data on influencing factors and their effect on the formation of initial fatigue damage. Data scarcity still hampers data scientists and modeling experts who attempt to infer microstructural dependencies, train data-driven predictive models, or parameterize physical, rule-based models despite small sample sizes and incomplete feature spaces.
The fact that only a few critical damage sites occur relative to the entire specimen volume, and that high cycle fatigue exhibits a multitude of different dependencies, imposes several requirements on data acquisition and processing. Most importantly, the measurement techniques must be sensitive enough to capture nuanced variations in the specimen state, the entire routine must be efficient, and correlative microscopy must link spatial information from different measurements. The main objective of this work is to establish a workflow that remedies the data scarcity, so that the future virtual design of components becomes more efficient, reliable and sustainable. To this end, this work proposes a combined experimental and data-processing workflow to generate multimodal datasets on fatigue damage, focusing on the occurrence of local slip bands, crack initiation, and the growth of microstructurally short cracks. The workflow combines fatigue testing of mesoscale specimens to increase the sensitivity of damage detection, complementary characterization, multimodal registration and data fusion of the heterogeneous data, and image-processing-based damage localization and assessment. Mesoscale bending-resonance testing makes it possible to reach the high cycle fatigue regime in comparatively short times while improving the resolution of damage evolution. Depending on the complexity of the individual image-processing tasks and on data availability, either rule-based image processing or representation learning is applied in a targeted manner. Semantic segmentation of damage sites, for example, ensures that important fatigue features can be extracted from microscopic images.
A high degree of automation is emphasized along the workflow, and the generalizability of individual workflow elements was investigated wherever possible. The workflow is applied to a ferritic steel (EN 1.4003). The resulting dataset links, among other things, large distortion-corrected microstructure data with damage localization and its cyclic evolution. In the course of this work, the dataset is examined with respect to its information content through detailed analytical studies of individual damage formation. In this way, novel quantitative insights into microstructure-induced plastic deformation and crack-arrest mechanisms were obtained. In addition, grain-wise feature vectors and binary damage categories derived from the dataset are used to train a random forest classifier and to evaluate its predictive performance. The proposed workflow has the potential to lay the foundation for future data mining and data-driven modeling of microstructure-sensitive fatigue. It enables the efficient acquisition of statistically representative datasets with a high information content and can be extended to a wide range of materials.
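The grain-wise classification step can be sketched with scikit-learn's random forest, which the abstract names. The features, the damage rule, and all numbers below are synthetic placeholders, not the EN 1.4003 dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic grain-wise feature vectors (Schmid factor, grain size) with
# an invented probabilistic damage rule: large, favourably oriented
# grains are more likely to form slip bands.  Purely illustrative.
n = 1000
schmid = rng.uniform(0.3, 0.5, n)
size = rng.uniform(5.0, 50.0, n)                 # grain diameter [um]
logit = 20.0 * (schmid - 0.42) + 0.1 * (size - 25.0)
p_damage = 1.0 / (1.0 + np.exp(-logit))
damaged = rng.random(n) < p_damage               # binary damage label

# Train a random forest on the grain-wise features and evaluate its
# predictive performance on held-out grains.
X = np.column_stack([schmid, size])
X_tr, X_te, y_tr, y_te = train_test_split(X, damaged, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                      # held-out accuracy
```

Because the synthetic label is probabilistic, perfect accuracy is not attainable; the classifier can only approach the noise floor of the invented damage rule, which mirrors the inherent scatter the abstract describes.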