
    Advanced simulation methods for stochastic finite elements and reliability of structures

    204 pages. In this doctoral thesis, to address the computational cost of the Monte Carlo method, two methodologies are first formulated for computing the probability of failure of a stochastic system. In the first methodology, artificial neural networks are used within the subset simulation method, while in the second they are used within Monte Carlo simulation. Both methodologies reduce the computational cost of subset simulation and of Monte Carlo simulation, respectively. Next, an adaptive formulation of the spectral stochastic finite element method with Galerkin schemes is developed, using the variability response function to determine the spatial distribution of the Karhunen-Loeve terms; this reduces the number of polynomial-basis coefficients that must be computed and, consequently, the density of the augmented matrices of the method. Combined with iterative solution methods, this adaptive formulation improves the computational efficiency of the method. Finally, a parametric investigation of the behaviour of the spectral finite element method is carried out for various values of the stochastic-field parameters and is used to assess the computational performance of the method relative to the Monte Carlo method.
    This thesis presents a series of methodologies that have been implemented in the framework of SFEM and reliability analysis in order to reduce the computational effort involved. The first methodology is a neural network-based subset simulation, in which neural networks are trained and then used as robust meta-models to increase the efficiency of subset simulation with minimal additional computational effort. In the second methodology, neural networks are used in the framework of MCS for computing the reliability of stochastic structural systems by providing robust neural network estimates of the structural response. The third methodology constructs an adaptive sparse polynomial chaos (PC) expansion of the response of stochastic systems in the framework of the spectral stochastic finite element method (SSFEM). The proposed methodology utilizes the concept of the variability response function (VRF) to compute an a priori, low-cost estimate of the spatial distribution of the second-order error of the response as a function of the number of terms used in the truncated Karhunen-Loeve series representation of the random field involved in the problem. Finally, a parametric study of Monte Carlo simulation versus SSFEM in large-scale systems is performed.
    Dimitris G. Giovanis
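
The surrogate-based Monte Carlo scheme of the second methodology can be sketched as follows. This is a minimal pure-Python illustration, not the thesis code: the limit-state function, the use of a linear least-squares fit in place of the trained neural network, and all sample sizes are assumptions made for the example.

```python
import math
import random

def g_expensive(x1, x2):
    """Hypothetical 'expensive' limit-state function; failure when g <= 0.
    Exact reference: pf = P(x1 + x2 > 2.5*sqrt(2)) = Phi(-2.5), about 6.2e-3."""
    return 2.5 * math.sqrt(2.0) - (x1 + x2)

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

random.seed(0)

# Step 1: a small design of expensive model runs (the meta-model's training set).
train = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
g_train = [g_expensive(x1, x2) for x1, x2 in train]

# Step 2: fit a cheap surrogate by least squares -- a linear model here,
# standing in for the trained neural network.
F = [[1.0, x1, x2] for x1, x2 in train]
A = [[sum(Fi[a] * Fi[b] for Fi in F) for b in range(3)] for a in range(3)]
rhs = [sum(F[i][a] * g_train[i] for i in range(len(F))) for a in range(3)]
c0, c1, c2 = solve3(A, rhs)

# Step 3: plain Monte Carlo on the surrogate only -- no further expensive runs.
N = 100_000
fails = sum(
    1 for _ in range(N)
    if c0 + c1 * random.gauss(0, 1) + c2 * random.gauss(0, 1) <= 0.0
)
pf = fails / N
print(f"surrogate-based Monte Carlo: pf ~ {pf:.4f}")
```

The expensive model is evaluated only 30 times; the 100,000 Monte Carlo samples hit only the cheap surrogate, which is the source of the cost reduction.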

    High-dimensional interpolation on the Grassmann manifold using Gaussian processes

    This paper proposes a novel method for performing interpolation of high-dimensional systems. The proposed method projects the high-dimensional full-field solution into a lower-dimensional space where interpolation is computationally more tractable. The method employs spectral clustering, a class of machine learning techniques that uses the eigen-structure of a similarity matrix to partition data into disjoint clusters based on the similarity of the points, in order to effectively identify areas of the parameter space where sharp changes of the solution field occur. To do so, we derive a similarity matrix based on the pairwise distances between the high-dimensional solutions of the stochastic system projected onto the Grassmann manifold. The distances are calculated using appropriately defined metrics, and the similarity matrix is used to cluster the data based on their similarity. Points that belong to the same cluster are projected onto the tangent space (an inner-product flat space) defined at the Karcher mean of these points, and a Gaussian process is used for interpolation on the tangent space of the Grassmann manifold to predict the solution without requiring full model evaluations.
    Methodological developments presented herein have been supported by the Office of Naval Research.
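
For intuition, the Grassmann metric underlying the pairwise distances can be illustrated in the simplest case of one-dimensional subspaces, where the single principal angle theta = arccos(|<u, v>|) is itself the geodesic distance. A minimal sketch (the vectors are illustrative; the paper works with higher-dimensional subspaces via full principal-angle decompositions):

```python
import math

def unit(v):
    """Normalize a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def grassmann_distance_1d(u, v):
    """Geodesic distance between the lines spanned by u and v in R^n.
    For one-dimensional subspaces the single principal angle is
    theta = arccos(|<u, v>|), and the Grassmann distance equals theta."""
    u, v = unit(u), unit(v)
    c = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(min(1.0, c))  # clamp guards against round-off above 1

d_same = grassmann_distance_1d([1, 2, 3], [2, 4, 6])  # same line -> ~0
d_orth = grassmann_distance_1d([1, 0, 0], [0, 1, 0])  # orthogonal lines -> pi/2
print(d_same, d_orth)
```

The absolute value makes the distance independent of the sign of the spanning vectors, so it is a genuine metric on subspaces rather than on vectors.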

    Structural reliability analysis from sparse data

    Over the past several decades, major advances have been made in probabilistic methods for assessing structural reliability, a critical feature of these methods being that the probability models of the random variables are known precisely. However, when data are scant it is rare to identify a unique probability distribution that fits the data, a fact that introduces uncertainty into the estimation of the probability of failure, since the location of the limit surface in the probability space is also uncertain. The objective of the proposed work is to realistically assess the uncertainty in the probability-of-failure estimates of the First Order Reliability Method (FORM) resulting from the limited amount of data.
    Methodological developments presented herein have been supported by the Office of Naval Research with Dr. Paul Hess as program officer.
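
As background, FORM reduces the reliability problem to a reliability index beta with pf = Phi(-beta); for a limit state that is linear in independent normal variables this is exact. The sketch below (with assumed, illustrative distributions for a capacity R and a demand S) also shows how an uncertain estimated mean, as arises with sparse data, propagates into an uncertain pf:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical linear limit state g = R - S (failure when g <= 0) with
# independent normals: capacity R ~ N(5, 1), demand S ~ N(3, 1).
mu_R, sd_R = 5.0, 1.0
mu_S, sd_S = 3.0, 1.0

# For a limit state linear in independent normal variables, the FORM
# reliability index and failure probability are exact:
beta = (mu_R - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)
pf = Phi(-beta)
print(f"beta = {beta:.3f}, pf = {pf:.4f}")  # beta ~ 1.414, pf ~ 0.0786

# With sparse data, mu_R itself is uncertain; sweeping an interval for the
# estimated mean shows how uncertain the FORM estimate becomes.
for mu_R_est in (4.5, 5.0, 5.5):
    b = (mu_R_est - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)
    print(f"assumed mu_R = {mu_R_est}: pf = {Phi(-b):.4f}")
```

Even a modest half-unit uncertainty in the estimated mean changes pf by a factor of a few, which is the effect the paper sets out to quantify rigorously.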

    A survey of unsupervised learning methods for high-dimensional uncertainty quantification in black-box-type problems

    Constructing surrogate models for uncertainty quantification (UQ) on complex partial differential equations (PDEs) having inherently high-dimensional (of order O(10^2) or higher) stochastic inputs (e.g., forcing terms, boundary conditions, initial conditions) poses tremendous challenges. The curse of dimensionality can be addressed with suitable unsupervised learning techniques used as a pre-processing tool to encode inputs onto lower-dimensional subspaces while retaining their structural information and meaningful properties. In this work, we review and investigate thirteen dimension reduction methods, including linear and nonlinear, spectral, blind source separation, and convex and non-convex methods, and utilize the resulting embeddings to construct a mapping to quantities of interest via polynomial chaos expansions (PCE). We refer to the general proposed approach as manifold PCE (m-PCE), where "manifold" corresponds to the latent space resulting from any of the studied dimension reduction methods. To investigate the capabilities and limitations of these methods, we conduct numerical tests for three physics-based systems (treated as black boxes) having high-dimensional stochastic inputs of varying complexity, modeled as both Gaussian and non-Gaussian random fields, to investigate the effect of the intrinsic dimensionality of the input data. We demonstrate both the advantages and limitations of the unsupervised learning methods and conclude that a suitable m-PCE model provides a cost-effective approach compared to alternative algorithms proposed in the literature, including recently proposed expensive deep neural network-based surrogates, and can be readily applied for high-dimensional UQ in stochastic PDEs.
    Comment: 45 pages, 14 figures
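
The m-PCE idea of "encode the inputs to a low-dimensional latent space, then fit a cheap polynomial map to the quantity of interest" can be sketched on a toy problem. In this pure-Python illustration the synthetic data, the power-iteration PCA used as the encoder, and the degree-2 least-squares fit standing in for the PCE are all assumptions made for the example:

```python
import math
import random

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

random.seed(1)
DIM, N = 20, 200

# Synthetic "high-dimensional" inputs with one intrinsic latent coordinate z:
# x_i = z_i * direction + small noise; black-box QoI y_i = z_i**2.
direction = unit([math.sin(j + 1.0) for j in range(DIM)])
Z = [random.uniform(-1.0, 1.0) for _ in range(N)]
X = [[z * direction[j] + random.gauss(0.0, 0.01) for j in range(DIM)] for z in Z]
y = [z * z for z in Z]

# Step 1 -- unsupervised dimension reduction: leading principal direction
# via power iteration on the (uncentred) sample covariance.
v = unit([1.0] * DIM)
for _ in range(50):
    s = [sum(X[i][j] * v[j] for j in range(DIM)) for i in range(N)]
    v = unit([sum(s[i] * X[i][j] for i in range(N)) for j in range(DIM)])

# Step 2 -- cheap polynomial surrogate on the 1-D latent coordinate
# (a degree-2 least-squares fit standing in for the PCE).
t = [sum(X[i][j] * v[j] for j in range(DIM)) for i in range(N)]
F = [[1.0, ti, ti * ti] for ti in t]
A = [[sum(Fi[a] * Fi[b] for Fi in F) for b in range(3)] for a in range(3)]
rhs = [sum(F[i][a] * y[i] for i in range(N)) for a in range(3)]
c0, c1, c2 = solve(A, rhs)

pred = [c0 + c1 * ti + c2 * ti * ti for ti in t]
rmse = math.sqrt(sum((p - yi) ** 2 for p, yi in zip(pred, y)) / N)
print(f"surrogate RMSE on latent coordinate: {rmse:.4f}")
```

Because the data are intrinsically one-dimensional, a single latent coordinate suffices; the paper's point is choosing the encoder (and latent dimension) well when the intrinsic structure is not known in advance.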

    UQpy v4.1: Uncertainty quantification with Python

    This paper presents the latest improvements introduced in Version 4 of UQpy, the Uncertainty Quantification with Python library. In the latest version, the code was restructured to conform with the latest Python coding conventions and refactored to simplify previously tightly coupled features and to improve its extensibility and modularity. To improve the robustness of UQpy, software engineering best practices were adopted. A new software development workflow significantly improved collaboration between team members, and continuous integration and automated testing ensured the robustness and reliability of software performance. Continuous deployment of UQpy allows its automated packaging and distribution in a system-agnostic format via multiple channels, while a Docker image enables use of the toolbox regardless of operating-system limitations.
