29 research outputs found

    Stochastic Modeling and Analysis of Power Systems with Intermittent Energy Sources

    Electric power systems continue to increase in complexity because of the deployment of market mechanisms, the integration of renewable generation and distributed energy resources (DER) such as wind and solar, and the penetration of electric vehicles and other price-sensitive loads. These revolutionary changes, and the consequent increase in uncertainty and dynamics, call for significant modifications to power system operation models, including unit commitment (UC), economic load dispatch (ELD) and optimal power flow (OPF). Planning and operation of these “smart” electric grids are expected to be affected significantly because of the intermittent nature of the various supply and demand resources that have recently penetrated the system. The main focus of this thesis is on the application of the Affine Arithmetic (AA) method to power system operational problems. The AA method is an efficient and accurate tool for incorporating uncertainties: it retains the correlations among dependent variables and hence provides less conservative bounds than the Interval Arithmetic (IA) method, and it does not require assumptions about the probability distribution function (pdf) of the random variables. To take advantage of the AA method in power flow analysis, a novel formulation of the power flow problem within an optimization framework that includes complementarity constraints is first proposed. The power flow problem is formulated as a mixed complementarity problem (MCP), which can exploit robust and efficient state-of-the-art nonlinear programming (NLP) and complementarity problem solvers. Based on the proposed MCP formulation, it is formally demonstrated that the Newton-Raphson (NR) solution of the power flow problem is essentially a step of the traditional Generalized Reduced Gradient (GRG) algorithm. The solution of the proposed MCP model is compared with the commonly used NR method on a variety of small-, medium- and large-sized systems in order to examine the flexibility and robustness of the approach. The MCP-based approach is then used in a power flow problem under uncertainty to obtain the operational ranges of the variables with the AA method, considering active and reactive power demand uncertainties. The proposed approach does not rely on the pdf of the uncertain variables and is therefore shown to be more efficient than traditional solution methodologies such as Monte Carlo Simulation (MCS). Also, because of the characteristics of the MCP-based method, the resulting bounds take into consideration the limits of real and reactive power generation. The thesis furthermore proposes a novel AA-based method to solve the OPF problem with uncertain generation sources and hence determine the operating margins of the thermal generators in systems under these conditions. In the AA-based OPF problem, all state and control variables are treated in affine form, comprising a centre value and corresponding noise magnitudes, to represent forecast errors, model errors and other sources of uncertainty without the need to assume a pdf. The AA-based approach is benchmarked against MCS-based intervals and is shown to obtain bounds close to those obtained with the MCS method, although slightly more conservative. Furthermore, the proposed algorithm for the AA-based OPF problem is shown to be efficient, as it does not need pdf approximations of the random variables and does not rely on iterations to converge to a solution. The applicability of the proposed approach is tested on a large real European power system.
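    The central device in this abstract is the affine form: an uncertain quantity is written as a centre value plus a sum of noise terms on [-1, 1], and shared noise symbols carry the correlation between variables. The short Python sketch below is illustrative only (the class name AffineForm and the demand figures are made up here, not taken from the thesis); it shows why this yields tighter bounds than plain interval arithmetic, since subtracting a quantity from itself collapses to zero width.

        # A minimal affine-arithmetic sketch, illustrative only (not thesis code).
        # An affine form is x0 + sum_i x_i*eps_i with eps_i in [-1, 1]; shared noise
        # symbols carry the correlation between quantities.

        class AffineForm:
            _next_symbol = 0

            def __init__(self, centre, noise=None):
                self.centre = centre
                self.noise = dict(noise or {})   # noise symbol index -> partial deviation

            @classmethod
            def from_interval(cls, lo, hi):
                """Create an affine form spanning [lo, hi] with a fresh noise symbol."""
                centre, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
                sym = cls._next_symbol
                cls._next_symbol += 1
                return cls(centre, {sym: radius})

            def __add__(self, other):
                noise = dict(self.noise)
                for k, v in other.noise.items():
                    noise[k] = noise.get(k, 0.0) + v
                return AffineForm(self.centre + other.centre, noise)

            def __sub__(self, other):
                noise = dict(self.noise)
                for k, v in other.noise.items():
                    noise[k] = noise.get(k, 0.0) - v
                return AffineForm(self.centre - other.centre, noise)

            def bounds(self):
                radius = sum(abs(v) for v in self.noise.values())
                return self.centre - radius, self.centre + radius

        # Uncertain demand of 100 MW +/- 10 MW.
        p = AffineForm.from_interval(90.0, 110.0)
        print((p - p).bounds())   # (0.0, 0.0): the shared noise symbol cancels exactly,
                                  # whereas interval arithmetic would give [-20, 20].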

    Parameter estimation of oscillatory systems (with application to circadian rhythms)

    Master's thesis (Master of Engineering).

    The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

    Compressive behaviour of closed-cell aluminium foam at different strain rates

    Closed-cell aluminium foams were fabricated and characterised at different strain rates. Quasi-static and high strain rate compression tests were performed using a universal servo-hydraulic testing machine and a powder gun. The experimental results show a strong influence of strain rate hardening on the mechanical properties, which contributes to a significant, quasi-linear enhancement of the energy absorption capability at high strain rates. The experimental results were further used for the determination of critical deformation velocities and for validation of the proposed computational model. A simple computational model with a homogenised crushable foam material model shows good agreement between the experimental and computational results at the analysed strain rates, and offers efficient (simple, fast and accurate) analysis of the high strain rate deformation behaviour of closed-cell aluminium foam at different loading velocities.

    Multi-objective optimisation: algorithms and application to computer-aided molecular and process design

    Computer-Aided Molecular Design (CAMD) has been put forward as a powerful and systematic technique that can accelerate the identification of new candidate molecules. Given the benefits of CAMD, the concept has been extended to integrated molecular and process design, usually referred to as Computer-Aided Molecular and Process Design (CAMPD). In CAMPD approaches, not only is the interdependence between the properties of the molecules and the process performance captured, but it is also possible to assess the optimal overall performance of a given fluid using an objective function that may be based on process economics, energy efficiency or environmental criteria. Despite the significant advances made in the field of CAM(P)D, challenges remain in handling the complexities arising from the large mixed-integer nonlinear structure-property and process models and from the presence of conflicting performance criteria that cannot easily be merged into a single metric; many of the algorithms proposed to date resort to single-objective, decomposition-based approaches. To overcome these challenges, a novel CAMPD optimisation framework is proposed in the first part of the thesis, in the context of identifying optimal amine solvents for carbon dioxide (CO2) chemical absorption. This requires the development and validation of a model that enables the prediction of process performance metrics for a wide range of solvents for which no experimental data exist. An equilibrium-stage model that incorporates the SAFT-γ Mie group contribution approach is proposed to provide an appropriate balance between accuracy and predictive capability across varying molecular design spaces. To facilitate the convergence of the process-molecular model, a tailored initialisation strategy is established based on the inside-out algorithm. Novel feasibility tests that are capable of recognising infeasible regions of the molecular and process domains are developed and incorporated into an outer-approximation framework to increase solution robustness. The efficiency of the proposed algorithm is demonstrated by applying it to the design of CO2 chemical absorption processes; the algorithm converges successfully in all 150 runs carried out. To derive greater insight into the interplay between solvent and process performance, it is desirable to consider multiple objectives. In the second part of the thesis, we therefore explore the relative performance of five multi-objective optimisation (MOO) solution techniques, modified from the literature to address nonconvex MINLPs, on CAM(P)D problems, in order to gain a better understanding of how well different algorithms identify the Pareto front efficiently. The combination of the sandwich algorithm with a multi-level single-linkage algorithm to solve nonconvex subproblems is found to perform best on average. Next, a robust algorithm for bi-objective optimisation (BOO), the SDNBI algorithm, is designed to address the theoretical and numerical challenges associated with the solution of general nonconvex and discrete BOO problems. The main improvements in the development of the algorithm focus on the effective exploration of nonconvex regions of the Pareto front and the early identification of regions where no additional Pareto solutions exist. The performance of the algorithm is compared with that of the sandwich algorithm and the modified normal boundary intersection method (mNBI) over a set of literature benchmark problems and molecular design problems. The SDNBI algorithm is found to provide the most evenly distributed approximation of the Pareto front, as well as useful information on regions of the objective space that do not contain a nondominated point. The advances in this thesis can accelerate the discovery of novel solvents for CO2 capture that achieve improved process performance. More broadly, the modelling and algorithmic developments presented extend the applicability of CAMPD and of MOO-based CAMD/CAMPD to a wider range of applications.
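    As a point of reference for the terminology above, the following small Python sketch (an assumed illustration, not part of the thesis) filters the nondominated points of a finite set of bi-objective evaluations, which is the discrete analogue of the Pareto front that methods such as SDNBI and mNBI approximate.

        # Assumed illustration (not thesis code): keep only the nondominated points of a
        # finite set of bi-objective (minimisation) evaluations.

        from typing import List, Tuple

        def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
            """a dominates b if it is no worse in both objectives and strictly better in one."""
            return a[0] <= b[0] and a[1] <= b[1] and a != b

        def nondominated(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
            return [p for p in points if not any(dominates(q, p) for q in points)]

        # Hypothetical (cost, emissions) evaluations of candidate solvent/process designs.
        candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0), (2.5, 2.5)]
        print(nondominated(candidates))   # (3.0, 3.5) is dominated by (2.0, 3.0)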

    Approximations in Stochastic Optimization and Their Applications

    Many optimum design problems in engineering lead to optimization models constrained by ordinary (ODE) or partial (PDE) differential equations, and in practice several elements of these problems may be uncertain. Three engineering problems concerning the optimization of vibrations and the optimal design of beam dimensions are considered, with uncertainty entering in the form of a random load or a random Young's modulus. It is shown that two-stage stochastic programming offers a promising approach to solving such problems. The corresponding mathematical models, which involve ODE or PDE constraints, uncertain parameters and multiple criteria, lead to (multi-objective) stochastic nonlinear optimization models. It is also proved for which type of problems the stochastic programming approach (the EO reformulation) must be used and when it is sufficient to solve a simpler deterministic problem (the EV reformulation), a distinction of considerable practical importance for the computational cost of large-scale problems. Computational schemes for this class of problems are proposed, including discretization methods for the random elements and for the ODE or PDE constraints. Using these approximations, the mathematical models are implemented and solved in GAMS. The solution quality is assessed by an interval estimate of the optimality gap computed via a Monte Carlo bounding technique. Parametric analysis of the multi-criteria model yields the efficient frontier. Approximation alternatives for the model with reliability-related probabilistic terms, including mixed-integer nonlinear programming and penalty reformulations, are discussed. Furthermore, the progressive hedging algorithm is implemented and tested on the selected problems with a view to future parallel computation of large engineering problems; the results show that it can be used even when the mathematical conditions guaranteeing convergence are not fulfilled. Finally, the finite difference and finite element methods are compared on the deterministic version of the ODE-constrained problem using GAMS and ANSYS, with closely comparable results.
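    For readers unfamiliar with the Monte Carlo bounding idea mentioned above, the sketch below is an assumed toy newsvendor example in Python (not the thesis's GAMS models). It forms an interval estimate of the optimality gap of a candidate solution: an upper bound from evaluating the candidate on a large independent sample, and a statistical lower bound from averaging optimal sample-average-approximation values over independent batches.

        # Assumed toy example (a newsvendor, not the thesis's engineering models): a Monte
        # Carlo interval estimate of the optimality gap of a candidate solution.

        import numpy as np

        rng = np.random.default_rng(0)
        c, p = 1.0, 2.0                              # unit ordering cost and selling price

        def cost(x, d):
            return c * x - p * np.minimum(x, d)      # negative profit for order x, demand d

        def saa_optimum(sample):
            # For the newsvendor the sample-average optimum is the (p - c)/p demand quantile.
            x = np.quantile(sample, (p - c) / p)
            return x, cost(x, sample).mean()

        demand = lambda n: rng.lognormal(mean=3.0, sigma=0.4, size=n)

        x_hat, _ = saa_optimum(demand(200))          # candidate solution from one batch

        # Statistical upper bound: evaluate the candidate on a large independent sample.
        eval_sample = demand(20_000)
        ub = cost(x_hat, eval_sample).mean()
        ub_se = cost(x_hat, eval_sample).std(ddof=1) / np.sqrt(eval_sample.size)

        # Statistical lower bound: the expected optimal value of a sample-average problem
        # underestimates the true optimum, so average it over independent batches.
        batch_values = [saa_optimum(demand(200))[1] for _ in range(30)]
        lb, lb_se = np.mean(batch_values), np.std(batch_values, ddof=1) / np.sqrt(30)

        print(f"estimated optimality gap: {ub - lb:.3f} (+/- {1.96 * (ub_se + lb_se):.3f})")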

    Global optimization at work

    In many research situations where mathematical models are used, researchers try to find parameter values such that a given performance criterion is at an optimum. If the parameters can be varied in a continuous way, this in general defines a so-called Nonlinear Programming Problem. Methods for Nonlinear Programming usually result in local optima. A local optimum is a solution (a set of parameter values) which is the best with respect to values in the neighbourhood of that solution, but not necessarily the best over the total admissible, feasible set of all possible parameter values. For mathematicians this leads to the research question: how to find the best, global optimum in situations where several local optima exist? This is the field of Global Optimization (GLOP). Literature, books and a dedicated journal have appeared on the field during the last decades. The main focus has been on the mathematical side, i.e. given assumptions on the structure of the problems to be solved, specific global optimization methods and their properties are derived. Cooperation between mathematicians and researchers (in this book called 'the modeller' or 'the potential user') who recognised global optimization problems in practical problems has led to the application of GLOP algorithms to practical optimization problems. Some of those can be found in this book.
In this book we started with the question: given a potential user with an arbitrary global optimization problem, what route can be taken in the GLOP forest to find solutions of the problem? From this first question we proceed by raising new questions. In Chapter 1 we outline the target group of users we have in mind, i.e. agricultural and environmental engineers, designers and OR workers in agricultural science. These groups are not clearly defined, nor mutually exclusive, but have in common that mathematical modelling is used and that there is knowledge of linear programming and possibly of combinatorial optimization. In general, when modellers are confronted with optimization aspects, the first approach is to develop heuristics or to look for standard nonlinear programming codes to generate solutions of the optimization problem. During the search for solutions, multiple local optima may appear. We distinguish two major tracks for the path to be taken from there by the potential user to solve the problem. One track is called the deterministic track and is discussed in Chapters 2, 3 and 4; the other is the stochastic track, discussed in Chapters 5 and 6. The two approaches are intended to reach different goals: the deterministic track aims at approximating (finding) the global optimum with certainty in a finite number of steps, whereas the stochastic track contains some stochastic elements and aims at approaching the optimum in a probabilistic sense as effort grows to infinity. Both tracks are investigated in this book from the viewpoint of a potential user, corresponding to the Popperian way of thinking in science. The final results are new challenging problems and questions for further research. A side question along the way is: how can the user influence the search process, given knowledge of the underlying problem and the information that becomes available during the search?
The deterministic approach. When one starts looking into the deterministic track for a given problem, one runs into the requirement that determines the major difference in applicability of the two approaches: deterministic methods require the availability of explicit mathematical expressions of the functions to be optimized. In many practical situations, which are also discussed in this book, these expressions are not available and deterministic methods cannot be applied. The operations in deterministic methods are based on concepts such as Branch-and-Bound and Cutting, which require bounding of functions and parameters based on so-called mathematical structures. In Chapter 2 we describe these structures and distinguish between those which can be derived directly from the expressions, such as quadratic, bilinear and fractional functions, and other structures which require analysis of the expressions, such as concave and Lipschitz continuous functions. Examples are given of optimization problems revealing their structure. Moreover, we show that symmetry in the model formulation may cause models to have more than one extremum. In Chapter 3 the relationship between GLOP and Integer Programming (IP) is highlighted for several reasons: sometimes practical GLOP problems can be approximated by IP variants and solved by standard Mixed Integer Linear Programming (MILP) techniques; the algorithms of GLOP and IP can be classified in similar ways; the transformability of GLOP problems to IP problems and vice versa shows that difficult problems in one class do not become easier to solve in the other; and the analysis of problems, which is common in Global Optimization, can be used to better understand the complexity of some IP problems. In Chapter 4 we analyze the use of deterministic methods, demonstrating the application of the Branch-and-Bound concept. From the point of view of the potential user, analysis of the expressions is required to find useful mathematical structures (Chapter 2), and interval arithmetic techniques can also be applied directly to the expressions; the elegance of these techniques is the guarantee that the global optimum is certain once it has been discovered and verified; the methods, however, are hard to implement, and thorough use should be made of special data structures to store the necessary information in memory. Two cases are elaborated. The quadratic product design problem illustrates how the level of Decision Support Systems can be reached for low-dimensional problems, i.e. where the number of variables, components or ingredients is less than 10. The other case, the nutrient problem, shows how analysis of the problem yields many useful properties that help to cut away large areas of the feasible space where the optimum cannot be situated. However, it also demonstrates the so-called Curse of Dimensionality: in a realistic situation the problem has so many variables that it is impossible to traverse the complete Branch-and-Bound tree. It is therefore good to keep the use of deterministic methods in perspective: no global optimization method can guarantee to find and verify the global optimum for every practical situation within a human lifetime.
The stochastic approach. The stochastic approach is followed in practice for many optimization problems by combining the generation of random points with standard nonlinear optimization algorithms. From the point of view of the potential user, the methods require no mathematical structure of the problem and are therefore more generally applicable; they are relatively easy to implement; the user is never completely certain that the global optimum has been reached; and the optimum is approximated in a probabilistic sense as effort increases to infinity. In Chapter 5 much attention is paid to what happens when a user wants to spend a limited (not infinite) amount of time on the search for the optimum, preferably less than a human lifetime: what to do when the time for solving the problem is finite? First we looked at the information which becomes available during the search and the instruments with which the user can influence the search. It appeared that, besides the classical instruments which are also available in traditional nonlinear programming, the main instrument is to influence the trade-off between global (random) search and local search (looking for a local optimum). This led to a new question: is there a best way to govern the choice between global and local search, given the information which becomes available? Mathematical analysis with extreme cases led to the comfortable conclusion that a best method of choosing between global and local search, and thus a best global optimization method, does not exist. This holds for cases where no further information on the function to be optimized is available beyond what becomes available during the search, called in the literature the black-box case. The conclusion again shows that mathematical analysis with extreme cases is a powerful tool to demonstrate that so-called magic algorithms, algorithms which are said in scientific journals to be very promising because they perform well on some test cases, can be analyzed and 'falsified' in the way of Popperian thinking. This leads to the conclusion that magic algorithms which will solve all of your problems do not exist. Several side questions derived from the main problem are investigated in this book. In Chapter 6 we place the optimization problem in the context of parameter estimation. One practical question is raised by the phenomenon that every local search leads to a new local optimum. We know from parameter estimation that this is a symptom of so-called non-identifiable systems, in which the minimum is attained on a lower-dimensional surface or curve. Some (non-magic) heuristics are discussed to overcome this problem. There are two side questions of users derived from the general remark "I am not interested in the best (GLOP) solution, but in good points". The first is that of Robust Solutions, introduced in Chapter 4; the other is called Uniform Covering, concerning the generation of points which are nearly as good as the optimum, discussed in Chapter 6. Robust solutions are discussed in the context of product design, where robustness is defined as a measure of the error one can make from the solution such that the solution (product) is still acceptable. Looking for the most robust product means looking for the point which is as far away as possible from the boundaries of the feasible (acceptable) area. For the solution procedures, we examined how the problem appears in practice, where boundaries are given by linear and quadratic surfaces describing properties of the product. For linear boundaries, finding the most robust solution is an LP problem and thus rather easy; for quadratic properties the development of specific algorithms is required. The question of Uniform Covering concerns the desire to have a set of "suboptimal" points, i.e. points with a low function value (given an upper level on the function value); the points lie in a so-called level set. To generate "low" points, one could run a local search many times. However, we do not want the points to be concentrated in one compartment or sub-area of the level set; we want them to be spread equally and uniformly over the region. This is a very difficult problem for which we test and analyze several approaches in Chapter 6. The analysis taught us that it is unlikely that stochastic methods will be proposed which solve problems in an expected calculation time that is polynomial in the number of variables of the problem.
Final result. Whether an arbitrary problem of a user can be solved by GLOP requires analysis. There are many optimization problems which can be solved satisfactorily. Besides the selection of algorithms, the user has various instruments to steer the process: for stochastic methods this mainly concerns the trade-off between local and global search, and for deterministic methods it includes setting bounds and influencing the selection rule in Branch-and-Bound. We hope with this book to have given a tool and guidance towards solution procedures. Moreover, it is an introduction to the further literature on the subject of Global Optimization.
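    As an illustration of the stochastic track described above, the following Python multistart routine is a minimal sketch under assumed choices (not the book's procedures): random global points feed a standard local search, and the number of restarts is the simplest knob for the global versus local search trade-off.

        # Minimal multistart sketch under assumed choices (not from the book).

        import numpy as np
        from scipy.optimize import minimize

        def multistart(f, bounds, n_starts=20, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            best_x, best_f = None, np.inf
            for _ in range(n_starts):
                x0 = rng.uniform(lo, hi)              # global (random) step
                res = minimize(f, x0, bounds=bounds)  # local search
                if res.fun < best_f:
                    best_x, best_f = res.x, res.fun
            return best_x, best_f

        # A multimodal test function with many local optima on [-5, 5]^2.
        rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
        print(multistart(rastrigin, [(-5.0, 5.0), (-5.0, 5.0)]))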

    Optimization Methods and Algorithms for Classes of Black-Box and Grey-Box Problems

    There are many optimization problems in physics, chemistry, finance, computer science, engineering and operations research for which analytical expressions of the objective and/or the constraints are unavailable. These are black-box problems, where derivative information is often unavailable or too expensive to approximate numerically. In the absence of derivative information, it becomes challenging to optimize and to guarantee optimality of the solution. The objective of this Ph.D. work is to propose methods and algorithms that address some of the challenges of black-box optimization (BBO). A top-down approach is taken: an easier class of black-box problems is addressed first, and the difficulty and complexity of the problems is then gradually increased. In the first part of the dissertation, a class of grey-box problems is considered for which the closed form of the objective and/or constraints is unknown, but a global upper bound on the diagonal Hessian elements can be obtained. This allows the construction of an edge-concave underestimator with a vertex polyhedral solution. This lower bounding technique is implemented within a branch-and-bound framework with guaranteed convergence to global optimality, and is applied to the optimization of problems with an embedded system of ordinary differential equations (ODEs). Time-dependent bounds on the state variables and on the diagonal elements of the Hessian are computed by solving an auxiliary set of ODEs derived using differential inequalities. In the second part of the dissertation, general box-constrained black-box problems are addressed for which only simulations can be performed. A novel optimization method, UNIPOPT (Univariate Projection-based Optimization), based on projection onto a univariate space, is proposed. A special function is identified in this space that also contains the global minima of the original function. Computational experiments suggest that UNIPOPT often has better space exploration features than other approaches. The third part of the dissertation addresses general black-box problems with constraints of both known and unknown algebraic form. An efficient two-phase algorithm based on a trust-region framework is proposed for problems involving particularly high function evaluation cost. The performance of the approach is illustrated through computational experiments that evaluate its ability to reduce a merit function and find the optima.
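    The sketch below is a rough, assumed toy illustration in Python (not the dissertation's implementation) of the grey-box bounding idea: given an upper bound on each diagonal Hessian element over a box, subtracting a separable quadratic leaves an edge-concave function whose box minimum sits at a vertex, and recombining the two pieces gives a valid, if loose, lower bound of the kind that can drive a branch-and-bound search.

        # Rough, assumed toy sketch (not the dissertation's code). With d_i >= the maximum
        # of d^2 f / dx_i^2 over the box (and d_i >= 0), phi(x) = f(x) - sum_i d_i/2 * x_i^2
        # is concave along every coordinate, so its box minimum lies at a vertex. Splitting
        # f = phi + separable quadratic then gives a valid, if loose, lower bound.

        import itertools
        import numpy as np

        def lower_bound(f, box, d):
            """Valid lower bound of f over box = [(lo_i, hi_i), ...] given d_i >= f_ii, d_i >= 0."""
            d = np.asarray(d, dtype=float)
            phi = lambda x: f(x) - 0.5 * np.dot(d, np.asarray(x) ** 2)
            # Edge-concave part: minimum over the 2^n vertices of the box.
            phi_min = min(phi(v) for v in itertools.product(*box))
            # Separable quadratic part: componentwise minimum of d_i/2 * x_i^2.
            q_min = sum(0.5 * di * (0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi))
                        for di, (lo, hi) in zip(d, box))
            return phi_min + q_min

        # Toy objective with known diagonal Hessian bounds: f_xx = -sin(x) <= 1, likewise f_yy.
        f = lambda x: np.sin(x[0]) + np.sin(x[1]) + 0.5 * x[0] * x[1]
        box = [(1.0, 3.0), (1.0, 3.0)]
        print("lower bound :", lower_bound(f, box, d=[1.0, 1.0]))
        grid = np.linspace(1.0, 3.0, 201)             # crude sampled minimum for comparison
        print("sampled min :", min(f((x, y)) for x in grid for y in grid))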