Machine Learning Aided Stochastic Elastoplastic and Damage Analysis of Functionally Graded Structures
Elastoplastic and damage analyses, which serve as key indicators of the nonlinear performance of engineering structures, have been extensively investigated over the past decades. However, with the development of advanced composite materials such as functionally graded materials (FGMs), evaluating the nonlinear behaviour of these advantageous materials remains a tough challenge. Moreover, although structural system parameters are widely treated as deterministic, it has been shown that inevitable and fluctuating uncertainties in these properties are inherently associated with the structural models and the nonlinear analysis process. Such fluctuations can alter the actual elastoplastic and damage behaviour of FGM structures, so that approximate results no longer reflect the actual structural safety conditions. Consequently, a robust stochastic nonlinear analysis framework that meets the requirements of modern composite engineering practice is required.
In this dissertation, a novel uncertain nonlinear analysis framework, namely the machine learning aided stochastic elastoplastic and damage analysis framework, is presented for FGM structures. The proposed approach is a favorable alternative for determining structural reliability when full-scale testing is not achievable, substantially reducing the manpower and computational effort spent in practical engineering applications. Within the developed framework, a novel extended support vector regression (X-SVR) with a Dirichlet feature mapping approach is introduced and incorporated for the subsequent uncertainty quantification. By establishing the governing relationship between the uncertain system parameters and any concerned structural output, a comprehensive probabilistic profile of that output, including means, standard deviations, probability density functions (PDFs), and cumulative distribution functions (CDFs), can be obtained through a sampling scheme.
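The surrogate-plus-sampling pipeline described above can be sketched in miniature. This is only an illustrative stand-in, not the dissertation's method: an ordinary least-squares quadratic surrogate replaces X-SVR, and the cantilever-deflection response, loads, and distribution of the Young's modulus are all hypothetical choices.

```python
import random

# Toy "expensive" structural response: tip deflection (m) of a cantilever,
# u(E) = P*L^3 / (3*E*I), with Young's modulus E the uncertain parameter.
P, L, I = 10e3, 2.0, 8e-6          # load (N), span (m), second moment (m^4)

def response(E_GPa):
    return P * L**3 / (3.0 * E_GPa * 1e9 * I)

# --- Step 1: train a surrogate on a handful of solver runs -----------------
# Scaled coordinate s in [-1, 1] covering E in [170, 230] GPa.
train_s = [i / 4.0 - 1.0 for i in range(9)]
train_u = [response(200.0 + 30.0 * s) for s in train_s]

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations."""
    Sx = [sum(x**k for x in xs) for k in range(5)]
    A = [[Sx[0], Sx[1], Sx[2]],
         [Sx[1], Sx[2], Sx[3]],
         [Sx[2], Sx[3], Sx[4]]]
    b = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for cc in range(col, 3):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][cc] * coef[cc] for cc in range(r + 1, 3))) / A[r][r]
    return coef

a, bq, cq = fit_quadratic(train_s, train_u)
surrogate = lambda s: a + bq * s + cq * s * s

# --- Step 2: cheap Monte Carlo on the surrogate ----------------------------
rng = random.Random(7)
samples_s = [(rng.gauss(200.0, 10.0) - 200.0) / 30.0 for _ in range(50_000)]
sur = [surrogate(s) for s in samples_s]
mean = sum(sur) / len(sur)
std = (sum((u - mean)**2 for u in sur) / len(sur))**0.5
print(f"tip deflection: mean = {mean*1e3:.3f} mm, std = {std*1e3:.3f} mm")
```

Once the surrogate is fitted from a few expensive runs, the 50,000 sampling evaluations are essentially free, and empirical PDFs/CDFs can be read off the same sample set.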
Consequently, by applying the machine learning aided stochastic elastoplastic and damage analysis framework to real-life engineering applications, the advantages of next-generation uncertainty quantification can be realised, delivering appreciable contributions to both structural safety evaluation and structural design.
Time-Independent Reliability Analysis of Bridge System Based on Mixed Copula Models
Actual structural systems have many failure modes. Because the performance functions of these failure modes share the same random sources, nonlinear correlations usually exist among the various failure modes. Handling these nonlinear correlations is one of the main scientific problems in structural system reliability. In this paper, mixed copula models for the time-independent reliability analysis of series, parallel, series-parallel, and parallel-series systems are presented for two-component and multi-component systems with multiple failure modes. These mixed copula models, which account for the nonlinear correlation between failure modes, are obtained by choosing optimal copula functions with Bayesian selection criteria and the Monte Carlo sampling (MCS) method. A numerical example illustrates the feasibility and application of the models for structural system reliability.
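The copula-plus-MCS idea can be illustrated with a deliberately small sketch. This is not the paper's Bayesian-selected model: the mixture here is just a Clayton copula blended with the independence copula, and the two marginal failure probabilities are made-up numbers.

```python
import random

# Two-component series system: component i fails when its (CDF-transformed)
# performance U_i falls below its marginal failure probability p_i.
p1, p2 = 0.05, 0.05
theta, lam = 2.0, 0.6        # Clayton parameter; mixture weight on Clayton

def sample_pair(rng):
    """Draw (U1, U2) from the mixed copula lam*Clayton + (1-lam)*independence."""
    u1, w = rng.random(), rng.random()
    if rng.random() < lam:
        # Conditional-inversion sampler for the Clayton copula.
        u2 = (u1**(-theta) * (w**(-theta / (1.0 + theta)) - 1.0) + 1.0)**(-1.0 / theta)
    else:
        u2 = w
    return u1, u2

rng = random.Random(42)
n = 200_000
fails = 0
for _ in range(n):
    u1, u2 = sample_pair(rng)
    if u1 < p1 or u2 < p2:        # series system: any component failing fails it
        fails += 1
pf_mc = fails / n

# Closed-form check: P(fail) = p1 + p2 - C_mix(p1, p2).
C_clayton = (p1**(-theta) + p2**(-theta) - 1.0)**(-1.0 / theta)
pf_exact = p1 + p2 - (lam * C_clayton + (1.0 - lam) * p1 * p2)
print(f"series-system failure probability: MC = {pf_mc:.4f}, exact = {pf_exact:.4f}")
```

Note how the positive dependence injected by the Clayton component pulls the failure probability below the independence value of p1 + p2 - p1*p2, which is exactly the effect a mixed copula model is meant to capture.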
Transmission Expansion Planning: computational challenges toward real-size networks
The importance of the transmission network for supplying electricity demand is undeniable, and Transmission Expansion Planning (TEP) studies are key to a reliable power system. With increasing sources of uncertainty, such as more intermittent energy resources, mobile and controllable demands, and fast technology improvements for PVs and energy storage devices, the need for systematic ways of solving this complex problem has grown. One of the main barriers to deploying optimization-based TEP studies is computational intractability, which is the main motivation for this research.
The aim of this work is to investigate the computational challenges associated with systematic TEP studies for large-scale problems and to develop algorithms that improve computational performance. In the first step, we investigate the impact of adding security constraints (a NERC standard requirement) to the TEP optimization problem, and develop the Variable Contingency List (VCL) algorithm to pre-screen security constraints and add only those that may affect the feasible region. This significantly decreases the size of the problem compared to considering all security constraints. We then evaluate the impact of the size of the candidate line list (the number of binary variables) on TEP and develop a heuristic algorithm to decrease the size of this list.
In the next step, we integrate uncertainties into the TEP optimization problem and formulate it as a two-stage stochastic program. Adding uncertainties increases the size of the problem significantly, which leads us to develop a three-level filter, introducing an important scenario identification index (ISII) and a similar scenario elimination (SSE) technique, to decrease the number of security constraints in stochastic TEP in a systematic and tractable way.
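The flavour of similar-scenario elimination can be sketched generically. This is a hypothetical stand-in for the dissertation's SSE technique: a greedy distance-based reduction in which near-duplicate scenarios are merged and their probabilities aggregated, with the threshold and scenario data invented for illustration.

```python
# Hypothetical sketch of distance-based scenario reduction: near-duplicate
# scenarios are merged and their probabilities aggregated, shrinking the
# stochastic TEP problem before any security constraints are generated.

def reduce_scenarios(scenarios, probs, tol):
    """Greedily keep a scenario unless it lies within `tol` (Euclidean
    distance) of an already-kept one; merge probabilities on elimination."""
    kept, kept_probs = [], []
    for s, p in zip(scenarios, probs):
        for i, k in enumerate(kept):
            if sum((a - b)**2 for a, b in zip(s, k))**0.5 < tol:
                kept_probs[i] += p      # similar: fold its probability in
                break
        else:
            kept.append(s)
            kept_probs.append(p)
    return kept, kept_probs

# Load-growth scenarios (demand multipliers at three buses), equal weights.
scens = [(1.00, 1.02, 0.98), (1.01, 1.02, 0.99), (1.20, 1.25, 1.18),
         (0.85, 0.80, 0.88), (1.19, 1.24, 1.18), (1.00, 1.03, 0.98)]
probs = [1 / 6] * 6
kept, kp = reduce_scenarios(scens, probs, tol=0.05)
print(len(kept), "scenarios kept; probabilities:", [round(p, 3) for p in kp])
```

Here six scenarios collapse to three representatives whose probabilities still sum to one, so the reduced two-stage program remains a valid (if approximate) stochastic model.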
We then investigate the scalability of the stochastic TEP formulation. We develop a configurable decomposition framework that decomposes the original problem into subproblems that can be solved independently and in parallel. The framework can use both the progressive hedging (PH) and Benders decomposition (BD) algorithms to decompose and parallelize a large-scale problem both vertically and horizontally. We have also developed a bundling algorithm that improves the performance of the PH algorithm and the overall performance of the framework.
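The cutting-plane mechanics behind Benders decomposition for a two-stage stochastic program can be shown on a toy problem. This is only a single-variable, single-cut L-shaped sketch with invented data, not the dissertation's configurable PH/BD framework: first-stage capacity x is bought at unit cost c, and each scenario penalises unmet demand.

```python
# Minimal single-cut L-shaped (Benders) sketch on a toy problem:
#   min  c*x + E[ q * max(d_s - x, 0) ]   over  x in [0, 20]
c, q = 1.0, 3.0
demands = [5.0, 10.0, 15.0]          # equiprobable demand scenarios
lo, hi = 0.0, 20.0

def recourse(x):
    """Expected second-stage cost and a subgradient at x."""
    Q = sum(q * max(d - x, 0.0) for d in demands) / len(demands)
    g = sum(-q if x < d else 0.0 for d in demands) / len(demands)
    return Q, g

def solve_master(cuts):
    """min_x c*x + max_k (a_k + b_k*x): evaluate the piecewise-linear
    envelope at the endpoints and at all pairwise line intersections."""
    lines = [(c + b, a) for a, b in cuts]           # (slope, intercept)
    xs = [lo, hi]
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, a1), (m2, a2) = lines[i], lines[j]
            if m1 != m2:
                xi = (a2 - a1) / (m1 - m2)
                if lo <= xi <= hi:
                    xs.append(xi)
    best = min(xs, key=lambda x: max(m * x + a for m, a in lines))
    return best, max(m * best + a for m, a in lines)

cuts = [(0.0, 0.0)]                   # Q >= 0 gives a valid starting cut
UB, x_best = float("inf"), None
for _ in range(20):
    x_m, LB = solve_master(cuts)      # master gives a lower bound
    Q, g = recourse(x_m)              # subproblem gives cost + subgradient
    if c * x_m + Q < UB:
        UB, x_best = c * x_m + Q, x_m
    if UB - LB < 1e-8:                # bounds met: optimal
        break
    cuts.append((Q - g * x_m, g))     # theta >= Q(x_m) + g*(x - x_m)
print(f"optimal cost ~ {UB:.4f} at x = {x_best:.2f}")
```

The loop is the essence of BD: the master relaxation supplies lower bounds, the (here trivially solvable) scenario subproblems supply upper bounds and cuts, and the gap closes in a handful of iterations on this instance.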
We have implemented our work on a reduced ERCOT network with more than 3000 buses to demonstrate the practicality of the proposed method for large-scale problems.
Where Quantum Complexity Helps Classical Complexity
Quantum computing has introduced novel approaches to computational challenges of varying complexity. Adapting problem-solving strategies is crucial to harnessing its full potential; nonetheless, the capabilities of quantum computing have well-defined boundaries. This paper aggregates prior research efforts dedicated to solving intricate classical computational problems through quantum computing. The objective is to systematically compile an exhaustive inventory of these solutions and to categorize a collection of demanding problems that await further exploration.
Biorenewable value chain optimisation with multi-layered value chains and advanced analytics
A crucial element of the quest to curb carbon dioxide emissions is a biobased economy, which will rely on the development of economically and environmentally sustainable biorefining systems enabling full exploitation of lignocellulosic biomass (and its macrocomponents cellulose, hemicellulose, and lignin) for the co-production of biofuels and bioderived platform chemicals. This thesis aims to develop comprehensive modelling frameworks that provide, through optimisation techniques, holistic decision-making for the strategic design and systematic planning of advanced biorefining supply chain networks. Modelling the behaviour of the entire value chain, covering both upstream and downstream aspects within a temporal and geographical context, is therefore of great importance in this study.
A deterministic, spatially explicit, multi-echelon and multi-period Mixed Integer Linear Programming prototype modelling framework is developed for the identification of profit-optimal strategic and operating decisions for a full supply chain system, integrated with a technology superstructure of multiple biomass feedstocks, bioproducts and processing portfolios. The potential dimensionality reduction of the resulting large-scale optimisation problem is explored by utilising a bilevel decomposition algorithm. The financial sustainability of such biobased supply chains is further analysed through two-stage stochastic optimisation and risk management models, incorporating biomass cultivation yield uncertainties and expected downside risk, respectively. Finally, greenhouse gas emission factors are added to the prototype modelling approach through a multi-objective optimisation scheme to steer decision-making on biorefining supply chain systems under both economic and environmental criteria, comparing two different solution procedures.
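The strategic layer of such a supply chain MILP, which sites to open and where each region's biomass should flow, can be caricatured in a few lines. This is a hypothetical three-site, three-region instance with invented costs, solved by brute-force enumeration standing in for a MILP solver.

```python
# Toy strategic decision: choose which candidate biorefinery sites to open so
# total cost (build + transport) is minimised while every supply region is
# served by its cheapest open site.
from itertools import combinations

build = {"A": 50.0, "B": 70.0, "C": 60.0}          # annualised build costs
# transport cost of each region's biomass to each candidate site
trans = {"r1": {"A": 10.0, "B": 25.0, "C": 20.0},
         "r2": {"A": 30.0, "B": 8.0,  "C": 15.0},
         "r3": {"A": 25.0, "B": 12.0, "C": 5.0}}

best_cost, best_open = float("inf"), None
sites = list(build)
for k in range(1, len(sites) + 1):
    for opened in combinations(sites, k):
        cost = sum(build[s] for s in opened)
        cost += sum(min(trans[r][s] for s in opened) for r in trans)
        if cost < best_cost:
            best_cost, best_open = cost, opened
print(f"open {best_open} at total cost {best_cost}")
```

Real instances replace the enumeration with binary open/close variables and continuous flow variables in a MILP; the bilevel decomposition mentioned above exists precisely because that enumeration space explodes.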
The developed models are applied to a Hungarian case study of lignocellulosic biorefining production systems. An additional case study, covering a Southeastern Romanian region and Marseille and concerning a first-generation biorefining supply chain for the production of castor oil, is undertaken to further examine the compatibility and efficiency of the generic deterministic model.
Transmission and interconnection planning in power systems: Contributions to investment under uncertainty and cross-border cost allocation.
<p>Electricity transmission network investments are playing a key role in the integration process of power systems in the European Union. Given the magnitude of investment costs, their irreversibility, and their impact on the overall development of a region, accounting for the role of uncertainties as well as the involvement of multiple parties in the decision process allows for improved and more robust investment decisions. Even though the creation of this internal energy market requires attention to flexibility and strategic decision-making, existing literature and practitioners have not given these topics proper attention. Using portfolios of real options, we present two stochastic mixed integer linear programming models for transmission network expansion planning. We study the importance of explicitly addressing uncertainties, the option to postpone decisions, and other sources of flexibility in the design of transmission networks. In a case study based on the Azores archipelago, we show how renewables penetration can increase by introducing contingency planning into the decision process considering generation capacity uncertainty. We also present a two-party Nash-Coase bargaining transmission capacity investment model. We illustrate optimal fair-share cost allocation policies with a case study based on the Iberian market. Lastly, we develop a new model that considers both interconnection expansion planning under uncertainty and cross-border cost allocation based on portfolios of real options and Nash-Coase bargaining. The model is illustrated using Iberian transmission and market data.</p>
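The bargaining ingredient of such cross-border cost allocation can be made concrete in a stylised form. This is a hypothetical two-party example, not the thesis's model: benefits, cost, and zero disagreement payoffs are invented, and with transferable utility the Nash bargaining solution simply equalises the two parties' net surpluses.

```python
# Stylised two-party Nash bargaining over an interconnector's cost split.
# Parties 1 and 2 gain benefits b1, b2 if the line (cost K) is built;
# the disagreement payoffs are 0. Party 1 pays a share s of K, party 2
# pays the remaining 1 - s.
b1, b2, K = 120.0, 60.0, 100.0       # hypothetical annual benefits and cost

# Nash product: maximise (b1 - s*K) * (b2 - (1 - s)*K) over s in [0, 1].
# A grid search keeps the sketch dependency-free; the optimum equalises
# net surpluses, i.e. b1 - s*K = b2 - (1-s)*K  =>  s = (b1 - b2 + K)/(2K).
best_s = max((i / 10_000 for i in range(10_001)),
             key=lambda s: (b1 - s * K) * (b2 - (1 - s) * K))
s_closed = (b1 - b2 + K) / (2 * K)
print(f"party 1 pays {best_s:.3f} of the cost (closed form {s_closed:.3f})")
```

The party that benefits more ends up shouldering more of the cost (here 80%), which is the fairness intuition behind Nash-Coase cost allocation for interconnectors.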
Decomposition Algorithms in Stochastic Integer Programming: Applications and Computations.
In this dissertation we focus on two main topics. Under the first topic, we develop a new framework for the stochastic network interdiction problem to address ambiguity in the defender's risk preferences. The second topic is dedicated to computational studies of two-stage stochastic integer programs. More specifically, we consider two cases. First, we develop solution methods for two-stage stochastic integer programs with continuous recourse; second, we study computational strategies for two-stage stochastic integer programs with integer recourse. We study a class of stochastic network interdiction problems where the defender has incomplete (ambiguous) preferences. Specifically, we focus on shortest path network interdiction modeled as a Stackelberg game, where the defender (leader) makes an interdiction decision first, then the attacker (follower) selects a shortest path after observing the random arc costs and interdiction effects in the network. We take a decision-analytic perspective in addressing probabilistic risk over network parameters, assuming that the defender's risk preferences over exogenously given probabilities can be summarized by expected utility theory. Although the exact form of the utility function is ambiguous to the defender, we assume that a set of historical data on pairwise comparisons made by the defender is available, which can be used to restrict the shape of the utility function. We use two different approaches to tackle this problem. The first approach conducts utility estimation and optimization separately, by first finding the best fit for a piecewise linear concave utility function according to the available data, and then optimizing the expected utility. The second approach integrates utility estimation and optimization, by modeling the utility ambiguity under a robust optimization framework following Armbruster and Delage (2015) and Hu.
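The leader-follower structure of shortest path interdiction can be demonstrated on a deterministic miniature (no random costs and no utility ambiguity, unlike the dissertation's setting): the defender picks a budget-limited set of arcs to delay, anticipating that the attacker will then take the shortest s-t path. The graph, costs, delay, and budget below are all invented.

```python
import heapq
from itertools import combinations

# arcs: (tail, head) -> base cost; interdicting an arc adds `delay` to it.
arcs = {("s", "a"): 2.0, ("s", "b"): 3.0, ("a", "t"): 2.0,
        ("b", "t"): 2.0, ("a", "b"): 1.0}
delay, k = 4.0, 1                      # interdiction effect and budget

def shortest(costs, src="s", dst="t"):
    """Dijkstra over the arc dict; returns the s-t distance."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for (u, w), cst in costs.items():
            if u == v and d + cst < dist.get(w, float("inf")):
                dist[w] = d + cst
                heapq.heappush(pq, (dist[w], w))
    return float("inf")

# Defender (leader): enumerate interdiction sets, maximising the attacker's
# (follower's) best response, i.e. the post-interdiction shortest path.
best_val, best_set = -1.0, None
for hit in combinations(arcs, k):
    costs = {a: c + (delay if a in hit else 0.0) for a, c in arcs.items()}
    val = shortest(costs)
    if val > best_val:
        best_val, best_set = val, hit
print(f"attacker's best path costs {best_val} after interdicting {best_set}")
```

In the stochastic, ambiguity-aware versions studied above, the inner `shortest` call becomes a scenario-dependent subproblem and the outer maximisation is taken over the defender's expected (or robust) utility rather than the raw path length.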
We conduct extensive computational experiments to evaluate the performance of these approaches on the stochastic shortest path network interdiction problem. In the third chapter, we propose partition-based decomposition algorithms for solving two-stage stochastic integer programs with continuous recourse. The partition-based decomposition method enhances classical decomposition methods (such as Benders decomposition) by utilizing inexact cuts (coarse cuts) induced by a scenario partition. Coarse cut generation can be much less expensive than generating standard Benders cuts when the partition size is small relative to the total number of scenarios. We conduct an extensive computational study to illustrate the advantage of the proposed partition-based decomposition algorithms over state-of-the-art approaches. In chapter four, we concentrate on computational methods for two-stage stochastic integer programs with integer recourse. We consider the partition-based relaxation framework integrated with a scenario decomposition algorithm in order to develop strategies that provide a better lower bound on the optimal objective value within a tight time limit.
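The bound that makes scenario partitions cheap can be seen in one line of algebra: when the recourse cost is convex in the scenario, evaluating a partition cell at its mean scenario under-estimates the cell's expected recourse (Jensen's inequality), and refining the partition tightens the bound. The newsvendor-style recourse and the demand data below are hypothetical, chosen only to make the effect visible.

```python
# Jensen-style coarse bound behind partition-based relaxations: with
# Q(x, d) = q*max(d - x, 0) convex in the scenario d, each partition cell
# evaluated at its mean demand gives a lower bound on the cell's expected
# recourse; refining the partition tightens the bound toward the exact value.
q, x = 3.0, 8.0
demands = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]   # equiprobable

def Q(d):
    return q * max(d - x, 0.0)

def coarse_bound(partition):
    """Probability-weighted sum over cells of Q at each cell's mean demand."""
    total, n = 0.0, sum(len(cell) for cell in partition)
    for cell in partition:
        total += len(cell) * Q(sum(cell) / len(cell))
    return total / n

exact = sum(Q(d) for d in demands) / len(demands)
one_cell = coarse_bound([demands])                      # coarsest partition
two_cells = coarse_bound([demands[:4], demands[4:]])    # refined once
print(one_cell, "<=", two_cells, "<=", exact)
```

Here a single refinement already closes the gap, because Q happens to be linear within each of the two cells; in general the partition is refined adaptively until the coarse bound is tight enough to fathom or cut.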