
    Numerical studies of space filling designs: optimization of Latin Hypercube Samples and subprojection properties

    Quantitative assessment of the uncertainties tainting the results of computer simulations is nowadays a major topic of interest in both industrial and scientific communities. One of the key issues in such studies is to get information about the output when the numerical simulations are expensive to run. This paper considers the problem of exploring the whole space of variations of the computer model input variables in the context of a large-dimensional exploration space. Various properties of space filling designs are justified: interpoint-distance, discrepancy and minimum spanning tree criteria. A specific class of design, the optimized Latin Hypercube Sample, is considered. Several optimization algorithms from the literature are studied in terms of convergence speed, robustness to subprojection and space filling properties of the resulting design. Some recommendations for building such designs are given. Finally, another contribution of this paper is an in-depth analysis of the space filling properties of the 2D subprojections of the designs.
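    As a rough illustration of the criteria discussed above, the sketch below evaluates the interpoint-distance (maximin) and centered discrepancy criteria for a plain and an optimized Latin Hypercube Sample, including one 2D subprojection. It assumes SciPy's scipy.stats.qmc module (1.8 or later for the optimization option); the dimension and sample size are arbitrary illustration values, not those used in the paper.

```python
# Sketch: evaluating space-filling criteria of Latin Hypercube Samples.
# Assumes SciPy >= 1.8; dimension and sample size are illustrative only.
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

d, n = 10, 100                      # exploration-space dimension, number of runs
rng = np.random.default_rng(0)

plain = qmc.LatinHypercube(d=d, seed=rng).random(n)
# "random-cd" perturbs the LHS to reduce the centered L2 discrepancy
optimized = qmc.LatinHypercube(d=d, optimization="random-cd", seed=rng).random(n)

for name, x in [("plain LHS", plain), ("optimized LHS", optimized)]:
    mindist = pdist(x).min()        # maximin (interpoint-distance) criterion
    disc = qmc.discrepancy(x)       # centered L2 discrepancy
    print(f"{name}: min distance = {mindist:.3f}, discrepancy = {disc:.4f}")

# Subprojection check mentioned in the abstract: the same criteria can be
# evaluated on each 2D projection, e.g. columns (i, j) of the design.
i, j = 0, 1
print("2D subprojection discrepancy:", qmc.discrepancy(optimized[:, [i, j]]))
```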

    Generalized latin hypercube design for computer experiments

    Space filling designs, which satisfy a uniformity property, are widely used in computer experiments. In the present paper the performance of non-uniform experimental designs, which locate more points in a neighborhood of the boundary of the design space, is investigated. These designs are obtained by a quantile transformation of the one-dimensional projections of commonly used space filling designs. This transformation is motivated by logarithmic potential theory, which yields the arc-sine measure as equilibrium distribution. Alternative distance measures yield Beta distributions, which put more weight in the interior of the design space. The methodology is illustrated for maximin Latin hypercube designs in several examples. In particular, it is demonstrated that in many cases the new designs yield a smaller integrated mean square error for prediction. Moreover, the new designs yield substantially better performance with respect to the entropy criterion.
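    A minimal sketch of the quantile-transformation idea follows, assuming the one-dimensional projections of a uniform Latin hypercube design are pushed through a Beta quantile function. Beta(1/2, 1/2) is the arc-sine distribution mentioned above; the alternative Beta parameters shown are illustrative choices, not taken from the paper.

```python
# Sketch: generalized Latin hypercube design via a columnwise quantile
# transformation.  Beta(1/2, 1/2) (the arc-sine law) places more points near
# the boundary of the design space; other Beta parameters shift weight toward
# the interior.  Dimensions and sample size are illustrative only.
import numpy as np
from scipy.stats import qmc, beta

d, n = 5, 50
u = qmc.LatinHypercube(d=d, seed=1).random(n)      # uniform design on [0, 1]^d

def generalized_design(u, a=0.5, b=0.5):
    """Apply the Beta(a, b) quantile function to every column of u."""
    return beta.ppf(u, a, b)

x_arcsine = generalized_design(u)                  # boundary-weighted design
x_interior = generalized_design(u, a=2.0, b=2.0)   # interior-weighted alternative

near_boundary = lambda x: np.mean(np.any((x < 0.1) | (x > 0.9), axis=1))
print("points near a boundary (uniform):  ", near_boundary(u))
print("points near a boundary (arc-sine): ", near_boundary(x_arcsine))
```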

    Two-stage methods for multimodal optimization

    For many practical optimization problems it seems advisable to seek not only a single optimal solution, but a diverse set of good solutions. The rationale behind this view is that a decision maker may want to consider additional criteria which are not included in the optimization problem itself. Reasons for not including them are, for example, that the expert knowledge constituting the additional criteria has not been formalized, or that the evaluation of the additional criteria is more or less subjective. The area comprising single-objective problems with the need to identify a set of solutions is currently called multimodal optimization. In this work, we apply two-stage optimization algorithms, which consist of alternating global and local searches, to these problems. These algorithms are attractive because of their simplicity and their demonstrated performance on multimodal problems. The main focus is on improving the global stages, as local search is already a thoroughly investigated topic. This is done by considering previously sampled points and found optima in the global sampling, thus obtaining a super-uniform distribution. The approach is based on maximizing the minimal distance in a point set, while boundary effects caused by the box-constrained search space are avoided by correction methods. Experiments confirm the superiority of this algorithm over random uniform sampling and other methods in various settings of multimodal optimization.
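    The sketch below illustrates the two-stage idea in broad strokes: the global stage picks, from a pool of random candidates, the point that maximizes the minimal distance to everything evaluated so far, and the local stage is an off-the-shelf bounded quasi-Newton run. The boundary correction shown (mirroring the archive across the box faces) and the toy objective are assumptions standing in for the specific methods studied in the thesis.

```python
# Sketch of a two-stage method for multimodal optimization, under stated
# assumptions: max-min-distance global sampling with a crude mirroring-based
# boundary correction, alternated with L-BFGS-B local searches.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def f(x):                                    # toy multimodal objective (hypothetical)
    return np.sum(np.sin(3 * np.pi * x) ** 2) + np.sum((x - 0.5) ** 2)

dim, lo, hi = 2, 0.0, 1.0
rng = np.random.default_rng(0)
archive = rng.uniform(lo, hi, size=(5, dim))          # previously evaluated points

def global_step(archive, n_candidates=500):
    cand = rng.uniform(lo, hi, size=(n_candidates, dim))
    # Mirror the archive across each face so points near the boundary are not
    # unduly favoured by the max-min-distance criterion.
    mirrored = [archive]
    for k in range(dim):
        for face in (lo, hi):
            m = archive.copy()
            m[:, k] = 2 * face - m[:, k]
            mirrored.append(m)
    dmin = cdist(cand, np.vstack(mirrored)).min(axis=1)
    return cand[np.argmax(dmin)]                      # most isolated candidate

optima = []
for _ in range(20):                                   # alternate global and local stages
    x0 = global_step(archive)
    res = minimize(f, x0, method="L-BFGS-B", bounds=[(lo, hi)] * dim)
    archive = np.vstack([archive, x0, res.x])
    optima.append((res.x, res.fun))

print(sorted(optima, key=lambda t: t[1])[:3])         # best solutions found
```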

    Metamodel-based inverse uncertainty quantification of nuclear reactor simulators under the Bayesian framework

    Mathematical modeling and computer simulations have long been central technical topics in practically all branches of science and technology. Tremendous progress has been achieved in revealing quantitative connections between numerical predictions and real-world observations. However, because computer models are reduced representations of the real phenomena, there are always discrepancies between ideal in silico designed systems and real-world manufactured ones. As a consequence, uncertainties must be quantified along with the simulation outputs to facilitate optimal design and decision making and to ensure robustness, performance and safety margins. Forward uncertainty propagation requires knowledge of the statistical information for the computer model's random inputs, for example the mean, variance, Probability Density Functions (PDFs), and upper and lower bounds. Historically, "expert judgment" or "user self-evaluation" has been used to specify the uncertainty information associated with random input parameters. Such ad hoc characterization is unscientific and lacks mathematical rigor. In this thesis, we attempt to solve this "lack of uncertainty information" issue with inverse Uncertainty Quantification (UQ). Inverse UQ is the process of seeking statistical descriptions of the random input parameters that are consistent with available high-quality experimental data. We formulate the inverse UQ process under the Bayesian framework using the "model updating equation". Markov Chain Monte Carlo (MCMC) sampling is applied to explore the posterior distributions and generate samples from which we can extract statistical information for the uncertain input parameters. To greatly alleviate the computational burden during MCMC sampling, we used systematically and rigorously developed metamodels based on stochastic spectral techniques and Gaussian Process (also known as Kriging) emulators. We demonstrated the developed methodology on three problems with different levels of sophistication: (1) the Point Reactor Kinetics Equation (PRKE) coupled with a lumped-parameter thermal-hydraulics feedback model, based on synthetic experimental data; (2) physical model parameters of the best-estimate system thermal-hydraulics code TRACE, based on steady-state void fraction data from the OECD/NRC BWR Full-size Fine-Mesh Bundle Tests (BFBT) benchmark; and (3) the Fission Gas Release (FGR) model of the fuel performance code BISON, based on Risø-AN3 on-line time-dependent FGR measurement data. Metamodels constructed with generalized Polynomial Chaos Expansion (PCE), Sparse Grid Stochastic Collocation (SGSC) and GP emulators were applied respectively to these three problems to replace the full models during MCMC sampling. We proposed an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain. The improved approach is organized such that the posteriors obtained with data in the inverse UQ domain are informed by data in the validation domain; therefore, over-fitting can be avoided while extrapolation is not required. A sequential approach was also developed for test source allocation (TSA) for inverse UQ and validation. This sequential TSA methodology first selects tests for validation that provide full coverage of the test domain, so that the model discrepancy term does not have to be extrapolated when it is evaluated at the input settings of the tests used for inverse UQ. It then selects, for inverse UQ, tests that reside in the unfilled zones of the test domain, so that inverse UQ can extract the most information about the posteriors of the calibration parameters using only a relatively small number of tests. The inverse UQ process successfully quantified the uncertainties associated with the input parameters in a way that is consistent with the experimental observations. The quantified uncertainties are necessary for future uncertainty and sensitivity studies of nuclear reactor simulators in system design and safety analysis. We applied and extended several advanced metamodeling approaches in nuclear engineering practice to greatly reduce the computational cost. The current research bridges the gap between models and data by addressing the "lack of uncertainty information" issue, as well as providing guidance for improving nuclear reactor simulators through the validation process.
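    The following sketch shows the generic metamodel-based inverse UQ loop in its simplest form: fit a Gaussian Process emulator to a small set of code runs, then run random-walk Metropolis sampling of the calibration-parameter posterior with the emulator replacing the expensive code. The simulator, prior, and data here are hypothetical stand-ins, not the TRACE or BISON models used in the thesis.

```python
# Minimal sketch of metamodel-based inverse UQ with a GP emulator and MCMC,
# assuming a scalar code output, one calibration parameter theta, a uniform
# prior, and synthetic "experimental" data.  Illustrative values only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_code(theta):                      # stand-in for the full simulator
    return np.sin(2.0 * theta) + 0.3 * theta

# 1. Train the GP emulator on a small design of code runs.
theta_train = np.linspace(0.0, 3.0, 15).reshape(-1, 1)
y_train = expensive_code(theta_train).ravel()
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(theta_train, y_train)

# 2. Synthetic experiment: true theta = 1.2, observation noise sigma = 0.05.
sigma = 0.05
y_obs = expensive_code(1.2) + rng.normal(0.0, sigma)

def log_posterior(theta):
    if not 0.0 <= theta <= 3.0:                 # uniform prior on [0, 3]
        return -np.inf
    mu = gp.predict(np.array([[theta]]))[0]     # emulator replaces the code in MCMC
    return -0.5 * ((y_obs - mu) / sigma) ** 2

# 3. Random-walk Metropolis sampling of the posterior.
theta, lp = 1.5, log_posterior(1.5)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[1000:])              # discard burn-in
print("posterior mean/std of theta:", samples.mean(), samples.std())
```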

    Multidisciplinary Optimisation of Radial and Mixed-inflow Turbines for Turbochargers

    Radial and mixed-inflow turbines are widely used in turbocharger applications, yet designing a turbocharger turbine with good performance still presents many challenges. Apart from the traditional requirements such as high efficiency and low stress, the turbine blade is also required to achieve certain performance targets at multiple operating points, high unsteady efficiency under pulsating flow conditions, reduced moment of inertia (MOI) and good vibration characteristics. To meet these challenges it is important to optimise radial and mixed-inflow turbines for aerodynamic performance at multiple operating points and for structural performance subject to MOI, stress and vibration constraints. In this thesis we propose an approach based on a 3D inverse design method that makes such a design optimisation strategy possible on industrial timescales. Using the inverse design method, the turbine blade geometry is computed iteratively from a prescribed blade loading distribution. The turbine's aerodynamic and mechanical performance is evaluated using CFD and Finite Element Analysis (FEA). A linear regression is performed on the results of a linear DOE study, and the number of design parameters is reduced based on a sensitivity analysis of the linear polynomial coefficients. A more detailed DOE with around 60 designs is then generated and Kriging is used to construct a response surface model (RSM). A multi-objective genetic algorithm (MOGA) is then used to search the Kriging response surface for optimal designs that meet multiple constraints and objectives. Radial filament blading is conventionally applied to reduce stress, whereas the inverse-designed blade is fully three-dimensional (3D); two radial filament modification (RFM) methods are therefore proposed to control the stress level of 3D blades. Radial turbines with a backswept leading edge (LE) designed using the inverse design method show improved cycle-averaged efficiency. An optimal design is obtained through the second optimisation, and its performance is evaluated in both aerodynamic and mechanical aspects based on CFD and FEA simulations. The CFD model is validated against the experimental results of the baseline design. The numerical results show that the optimal design performs better in almost all aspects, including improved efficiency at low velocity ratio (U/Cis), reduced maximum stress, reduced MOI, and increased vibration frequencies.
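    The DOE-plus-surrogate step described above can be sketched as follows: a small Latin hypercube DOE, Kriging (Gaussian Process) response surfaces for two conflicting objectives, and a non-dominated filter over a dense candidate set standing in for the MOGA search. The two analytic objectives are hypothetical stand-ins for the CFD and FEA evaluations of efficiency and maximum stress; sizes and names are illustrative only.

```python
# Sketch of surrogate-based multi-objective design screening, under stated
# assumptions: ~60-point Latin hypercube DOE, Kriging response surfaces, and
# a simple Pareto filter in place of the MOGA used in the thesis.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def cfd_efficiency(x):        # hypothetical stand-in: to be maximised
    return -np.sum((x - 0.3) ** 2, axis=1)

def fea_stress(x):            # hypothetical stand-in: to be minimised
    return np.sum((x - 0.7) ** 2, axis=1)

d, n_doe = 4, 60                                        # ~60-design DOE as in the abstract
doe = qmc.LatinHypercube(d=d, seed=0).random(n_doe)
gp_eff = GaussianProcessRegressor(normalize_y=True).fit(doe, cfd_efficiency(doe))
gp_str = GaussianProcessRegressor(normalize_y=True).fit(doe, fea_stress(doe))

# Cheap surrogate predictions over a large candidate population.
cand = np.random.default_rng(1).uniform(size=(5000, d))
f1 = -gp_eff.predict(cand)                              # minimise negative efficiency
f2 = gp_str.predict(cand)                               # minimise predicted stress

def pareto_mask(f1, f2):
    """Non-dominated filter for two minimisation objectives."""
    keep = np.ones(len(f1), dtype=bool)
    for i in range(len(f1)):
        dominated = (f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i]))
        keep[i] = not dominated.any()
    return keep

front = cand[pareto_mask(f1, f2)]
print(f"{len(front)} non-dominated candidate designs for CFD/FEA re-evaluation")
```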