
    Comparison of Gaussian process modeling software

    Gaussian process fitting, or kriging, is often used to create a model from a set of data. Many available software packages do this, but we show that very different results can be obtained from different packages even when using the same data and model. We describe the parameterization, features, and optimization used by eight different fitting packages that run on four different platforms. We then compare these eight packages using various data functions and data sets, revealing stark differences between the packages. In addition to comparing prediction accuracy, we also evaluate the predictive variance, which is important for assessing the precision of predictions and is often used in stopping criteria.
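
    As a concrete point of reference for what such packages compute, below is a minimal sketch of GP fitting in Python using scikit-learn, an assumed stand-in rather than one of the eight packages compared in the paper; it shows how both the predictive mean and the predictive variance come out of a single fitted model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Toy 1-D data set: noisy observations of a smooth function.
        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 10.0, size=(20, 1))
        y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

        # Fit a GP with a squared-exponential (RBF) kernel; the hyperparameters
        # are optimized by maximizing the log marginal likelihood.
        kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
        gp.fit(X, y)

        # return_std=True gives the predictive standard deviation, i.e. the
        # square root of the predictive variance used in stopping criteria.
        X_new = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
        mean, std = gp.predict(X_new, return_std=True)
        print(mean, std**2)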

    Single and Multiresponse Adaptive Design of Experiments with Application to Design Optimization of Novel Heat Exchangers

    Engineering design optimization often involves complex computer simulations. Optimization with such simulation models can be time-consuming and sometimes computationally intractable. In order to reduce the computational burden, the use of approximation-assisted optimization is proposed in the literature. Approximation involves two phases: in the first, the Design of Experiments (DOE) phase, sample points in the input space are chosen. These sample points are then used in the second phase to develop a simplified model, termed a metamodel, which is computationally efficient and can reasonably represent the behavior of the simulation response. The DOE phase is crucial to the success of approximation-assisted optimization. This dissertation proposes a new adaptive method for single and multiresponse DOE for approximation, along with an approximation-based framework for multilevel performance evaluation and design optimization of air-cooled heat exchangers. The dissertation is divided into three research thrusts. The first thrust presents a new adaptive DOE method for single-response deterministic computer simulations, referred to as SFCVT. For SFCVT, the problem of adaptive DOE is posed as a bi-objective optimization problem. The two objectives, a cross-validation error criterion and a space-filling criterion, are chosen based on the notion that the DOE method has to trade off allocating new sample points in regions that are multi-modal and have a sensitive response against allocating sample points in regions that are sparsely sampled. In the second research thrust, a new approach for multiresponse adaptive DOE is developed, termed MSFCVT. Here the approach from the first thrust is extended with the notion that the tradeoff should also consider all responses. SFCVT is compared with three other methods from the literature (i.e., maximum entropy design, maximin scaled distance, and accumulative error). It was found that the SFCVT method leads to better-performing metamodels for the majority of the test problems. The MSFCVT method is also compared with two adaptive DOE methods from the literature and is shown to yield better metamodels, resulting in fewer function calls. In the third research thrust, an approximation-based framework is developed for the performance evaluation and design optimization of novel heat exchangers. There are two parts to this research thrust. The first is a new multi-level performance evaluation method for air-cooled heat exchangers, in which conventional 3D Computational Fluid Dynamics (CFD) simulation is replaced with a 2D CFD simulation coupled with an ε-NTU-based heat exchanger model. In the second part, the methods developed in research thrusts 1 and 2 are used for design optimization of heat exchangers. The optimal solutions from the methods in this thrust have 44% less volume and use 61% less material when compared to current state-of-the-art microchannel heat exchangers. Compared to 3D CFD, the overall computational savings are greater than 95%.
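
    The abstract does not spell out SFCVT's exact formulas, but the bi-objective idea can be sketched as follows; the nearest-neighbour CV-error proxy and the candidate scoring below are illustrative assumptions, not the actual SFCVT criteria.

        import numpy as np

        def loocv_errors(X, y, fit, predict):
            """Leave-one-out cross-validation error at each existing sample."""
            errs = np.empty(len(X))
            for i in range(len(X)):
                mask = np.arange(len(X)) != i
                model = fit(X[mask], y[mask])
                errs[i] = abs(predict(model, X[i:i+1])[0] - y[i])
            return errs

        def score_candidates(X, y, candidates, fit, predict):
            """Two objectives per candidate (both posed for minimization):
            a CV-error criterion, here approximated by the LOOCV error of the
            nearest existing sample, and a space-filling criterion, here the
            distance to the nearest existing sample."""
            errs = loocv_errors(X, y, fit, predict)
            dists = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            f1 = -errs[nearest]      # prefer regions with sensitive, hard-to-fit response
            f2 = -dists.min(axis=1)  # prefer sparsely sampled regions
            return np.stack([f1, f2], axis=1)

        # Toy demo with a linear fit as the metamodel (hypothetical choice).
        fit = lambda X, y: np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
        predict = lambda m, X: np.c_[X, np.ones(len(X))] @ m
        X = np.array([[0.0], [0.4], [1.0]]); y = np.array([0.0, 1.2, 0.5])
        cand = np.array([[0.2], [0.7]])
        print(score_candidates(X, y, cand, fit, predict))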

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 1: Theory

    In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer model output uncertainties be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer model input uncertainties in previous uncertainty, sensitivity and validation studies. Inverse Uncertainty Quantification (UQ) is the process of inversely quantifying input uncertainties from experimental data, so that such ad hoc specifications of the input uncertainty information can be made more precise. In this paper, we used Bayesian analysis to establish the inverse UQ formulation, with systematic and rigorously derived metamodels constructed by Gaussian Process (GP). Due to incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have discrepancy/bias in representing reality, which can cause over-fitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provided a detailed introduction to and comparison of the full and modular Bayesian approaches for inverse UQ, and pointed out their limitations when extrapolated to the validation/prediction domain. Finally, we proposed an improved modular Bayesian approach that can avoid extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain.
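
    The "model updating equation" referred to above is, in its standard Kennedy-O'Hagan form (the paper's exact notation may differ):

        y^{E}(\mathbf{x}) = y^{M}(\mathbf{x}, \boldsymbol{\theta}) + \delta(\mathbf{x}) + \epsilon,
        \qquad \epsilon \sim \mathcal{N}(0, \sigma^{2})

    where y^E(x) is the experimental observation, y^M(x, θ) is the computer-model output at design inputs x and calibration parameters θ, δ(x) is the model discrepancy term, and ε is the measurement error. Inverse UQ then infers the posterior of θ (together with the GP hyperparameters of δ) from this relation.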

    Novel Airside Heat Transfer Surface Designs Using an Integrated Multi-Scale Analysis with Topology and Shape Optimization

    The major limitation of air-to-refrigerant Heat eXchangers (HX) is the air-side thermal resistance, which can account for more than 90% of the overall thermal resistance. Current research on heat transfer augmentation focuses extensively on the secondary heat transfer surfaces (fins). The disadvantages of fins may include reduced heat transfer potential due to the temperature gradient, increased friction resistance, fouling, and additional material consumption. On the other hand, fins contribute to reducing thermal resistance by adding significant secondary surface area. The heat transfer coefficient on the primary surfaces (tubes) is not high enough to minimize thermal resistance without significantly increasing the HX size. One contributing factor is the shape of the tube itself, which is generally limited to circular, oval, or flat. Another important aspect is the tube size; reducing the refrigerant flow channel significantly improves performance and compactness. This characteristic grants microchannel HXs (MCHX) a top position amongst the current state-of-the-art air-to-refrigerant HXs. Although the airside performance of MCHX is also improved, the need for fins has not yet been eliminated. In this paper we investigate three novel surface concepts, using NURBS and ellipse arcs, focusing on airside tube shapes with small flow channels and aiming at the minimization or total elimination of fins. The study consists of designing a 1.0 kW air-to-water HX using an integrated multi-scale analysis with a topology and shape optimization methodology. Typically, such an optimization is unreasonably time-consuming and computationally unaffordable. To overcome these limitations we leverage automated CFD simulations and Approximation Assisted Optimization (AAO), significantly reducing the computational time and resources required for the overall analysis. The resulting optimum designs exhibit capacity similar to a baseline MCHX, with the same flow rates and a 20% smaller approach temperature, more than 20% less pumping power, and more than 20% smaller size, while still reducing entropy generation. Experimental validation of a proof-of-concept design is conducted; the predicted heat capacity agrees within 5% of the measured values, and the air-side pressure drop agrees within 10%.
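
    The paper's CFD and optimization setup cannot be reproduced from the abstract, but the AAO pattern it leverages can be sketched as below; the one-variable toy objective stands in for an automated CFD run, and the loop (DOE, metamodel fit, optimize on the metamodel, verify with the true simulation, enrich the sample) is the general idea rather than the authors' exact algorithm.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_simulation(x):
            """Stand-in for an automated CFD run (hypothetical toy objective)."""
            return (x[0] - 0.3)**2 + 0.5 * np.sin(5 * x[0])

        # Phase 1: initial DOE over the design variable(s).
        X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
        y = np.array([expensive_simulation(x) for x in X])

        for _ in range(10):
            # Phase 2: fit a metamodel to all samples evaluated so far.
            gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(1.0),
                                          normalize_y=True).fit(X, y)

            # Optimize on the cheap metamodel instead of the simulation.
            res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                           x0=np.array([0.5]), bounds=[(0.0, 1.0)])

            # Verify the candidate with one expensive run and enrich the DOE.
            X = np.vstack([X, res.x.reshape(1, -1)])
            y = np.append(y, expensive_simulation(res.x))

        print("best design:", X[y.argmin()], "objective:", y.min())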

    Wavy Fin Profile Optimization Using NURBS for Air-To-Refrigerant Tube-Fin Heat Exchangers with Small Diameter Tubes

    The major limitation of any air-to-refrigerant HX is the air-side thermal resistance, which can account for 90% or more of the overall thermal resistance. For this reason the secondary heat transfer surfaces (fins) play a major role in these HXs by providing additional surface area. Many researchers have investigated extensively how to improve the performance of fins. The most common passive heat transfer augmentation method applied to fins uses surface discontinuities, providing an efficient disruption-reattachment mechanism for the boundary layer. This approach is leveraged by louvers, slits, and even vortex generators. In some applications, however, these concepts are not adequate, especially under heavy fouling or frosting, as is the case in many HVAC&R systems, including heat pumps for cold climates. In such cases a continuous fin surface is required, which is usually either plain or wavy. The latter provides a larger surface area and can induce turbulent flow, improving heat transfer. Normally, wavy fins follow either a smooth sinusoidal or a herringbone profile, longitudinal to the airflow direction. In this paper we propose a novel wavy fin design method using Non-Uniform Rational B-Splines (NURBS) in both the longitudinal and transverse directions. In this method the fin surface is subdivided into 1 x n identical cells with periodic boundaries. The horizontal and vertical edges each independently describe a NURBS curve on a separate plane spanning the third spatial direction. The tools used in this work include automated CFD simulations, metamodeling, and a Multi-Objective Genetic Algorithm (MOGA). The analysis comprises optimizing all wavy fin types, both the conventional ones and the novel designs presented in this paper, and comparing their performance and compactness at fixed hydraulic diameter and Reynolds numbers. In conclusion, design recommendations are made for the use of the proposed novel fins.
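
    The paper's exact parameterization is not given in the abstract; purely as an illustration of how a NURBS curve turns a small set of control points and weights into a smooth fin profile, here is a minimal evaluator using the Cox-de Boor recursion (the cubic control net, weights, and knot vector below are hypothetical, not the optimized designs).

        import numpy as np

        def bspline_basis(i, p, u, knots):
            """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
            if p == 0:
                return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
            left = right = 0.0
            if knots[i + p] > knots[i]:
                left = ((u - knots[i]) / (knots[i + p] - knots[i])
                        * bspline_basis(i, p - 1, u, knots))
            if knots[i + p + 1] > knots[i + 1]:
                right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                         * bspline_basis(i + 1, p - 1, u, knots))
            return left + right

        def nurbs_point(u, ctrl, weights, knots, p=3):
            """Evaluate a NURBS curve point as a weighted rational combination."""
            N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
            w = N * weights
            return (w[:, None] * ctrl).sum(axis=0) / w.sum()

        # A cubic NURBS profile for one periodic fin cell (hypothetical control net).
        ctrl = np.array([[0.0, 0.0], [0.25, 0.6], [0.5, -0.6], [0.75, 0.6], [1.0, 0.0]])
        weights = np.array([1.0, 0.8, 1.2, 0.8, 1.0])
        knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
        for u in np.linspace(0.0, 0.999, 5):
            print(nurbs_point(u, ctrl, weights, knots))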

    Assessing and improving the quality of model transformations

    Software is pervading our society more and more and is becoming increasingly complex. At the same time, software quality demands remain at the same, high level. Model-driven engineering (MDE) is a software engineering paradigm that aims at dealing with this increasing software complexity and improving productivity and quality. Models play a pivotal role in MDE. The purpose of using models is to raise the level of abstraction at which software is developed to a level where concepts of the domain in which the software has to be applied, i.e., the target domain, can be expressed effectively. For that purpose, domain-specific languages (DSLs) are employed. A DSL is a language with a narrow focus, i.e., it is aimed at providing abstractions specific to the target domain. As a consequence, the application of models developed using DSLs is typically restricted to describing concepts existing in that target domain. Reuse of models such that they can be applied for different purposes, e.g., analysis and code generation, is one of the challenges that should be solved by applying MDE. Therefore, model transformations are typically applied to transform domain-specific models to other (equivalent) models suitable for different purposes. A model transformation is a mapping from a set of source models to a set of target models, defined as a set of transformation rules. MDE is gradually being adopted by industry. Since MDE is becoming more and more important, model transformations are becoming more prominent as well. Model transformations are in many ways similar to traditional software artifacts and therefore need to adhere to similar quality standards. The central research question discussed in this thesis is therefore as follows: how can the quality of model transformations be assessed and improved, in particular with respect to development and maintenance? Recall that model transformations facilitate reuse of models in a software development process. We have developed a model transformation that enables reuse of analysis models for code generation. The semantic domains of the source and target language of this model transformation are so far apart that straightforward transformation is impossible, i.e., a semantic gap has to be bridged. To deal with model transformations that have to bridge a semantic gap, the semantics of the source and target language, as well as possible additional requirements, should be well understood. When bridging a semantic gap is not straightforward, we recommend addressing a simplified version of the source metamodel first. Finally, the requirements on the transformation may, if possible, be relaxed to enable automated model transformation. Model transformations that need to transform between models in different semantic domains are expected to be more complex than those that merely transform syntax. The complexity of a model transformation has consequences for its quality. Quality, in general, is a subjective concept and can therefore be defined in different ways. We defined it in the context of model transformation. A model transformation can be considered either as a transformation definition or as the process of transforming a source model into a target model. Accordingly, model transformation quality can be defined in two different ways. The quality of the definition is referred to as its internal quality. The quality of the process of transforming a source model into a target model is referred to as its external quality.
    There are also two ways to assess the quality of a model transformation (both internal and external). It can be assessed directly, i.e., by performing measurements on the transformation definition, or indirectly, i.e., by performing measurements in the environment of the model transformation. We mainly focused on direct assessment of internal quality, but we also addressed external quality and indirect assessment. Given this definition of quality in the context of model transformations, techniques can be developed to assess it. Software metrics have been proposed for measuring various kinds of software artifacts, but hardly any research has been performed on applying metrics to assess the quality of model transformations. For four model transformation formalisms with different characteristics, viz. ASF+SDF, ATL, Xtend, and QVTO, we defined sets of metrics for measuring model transformations developed with these formalisms. While these metric sets can be used to indicate bad smells in the code of model transformations, they cannot be used for assessing quality yet; a relation has to be established between the metric sets and attributes of model transformation quality. For two of the aforementioned metric sets, viz. the ones for ASF+SDF and for ATL, we conducted empirical studies aiming at establishing such a relation. From these empirical studies we learned which metrics serve as predictors for different quality attributes of model transformations. Metrics can be used to quickly acquire insight into the characteristics of a model transformation. These insights enable increasing the overall quality of model transformations and thereby also their maintainability. To support maintenance, and also development in a traditional software engineering process, visualization techniques are often employed. For model transformations this appears to be a feasible approach as well. Currently, however, few visualization techniques are available that are tailored towards analyzing model transformations. One of the most time-consuming activities during software maintenance is acquiring an understanding of the software, and we expect that this holds for model transformations as well. Therefore, we presented two complementary visualization techniques for facilitating model transformation comprehension. The first technique visualizes the dependencies between the components of a model transformation. The second technique analyzes the coverage of the source and target metamodels by a model transformation. The development of the metric sets, and in particular the empirical studies, led to insights concerning the development of model transformations. The proposed visualization techniques are likewise aimed at facilitating the development of model transformations. We applied the insights acquired from the development of the metric sets, as well as the visualization techniques, in the development of a chain of model transformations that bridges a number of semantic gaps. We chose to solve this transformational problem not with one model transformation but with a number of smaller ones, which are more understandable. The language on which the model transformations are defined was subject to evolution; the coverage visualization in particular proved to be beneficial for the co-evolution of the model transformations.
    Summarizing, we defined quality in the context of model transformations and addressed the necessity of a methodology to assess it. To that end, we defined metric sets and performed empirical studies to validate whether they serve as predictors for model transformation quality. We also proposed a number of visualizations to increase model transformation comprehension. The insights acquired from developing the metric sets and from the empirical studies, as well as the visualization tools, proved to be beneficial for developing model transformations.
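
    The thesis's metric sets are not reproduced in this abstract, but as an illustration of the kind of direct measurement involved, here is a sketch that counts simple size metrics over an ATL transformation definition; the regex patterns and the metric choices are assumptions for illustration, not the thesis's actual metric definitions.

        import re

        def atl_size_metrics(source: str) -> dict:
            """Count simple size metrics over an ATL transformation definition.

            The regexes below are illustrative assumptions about ATL surface
            syntax (rules and helpers), not the metrics from the thesis."""
            return {
                "lines": len(source.splitlines()),
                "rules": len(re.findall(r"^\s*(?:lazy\s+)?rule\s+\w+", source, re.M)),
                "helpers": len(re.findall(r"^\s*helper\s", source, re.M)),
            }

        example = """
        module Families2Persons;
        create OUT : Persons from IN : Families;

        helper context Families!Member def: familyName : String = 'N/A';

        rule Member2Female {
          from s : Families!Member
          to   t : Persons!Female
        }
        """
        print(atl_size_metrics(example))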

    A multi-objective evolutionary approach to simulation-based optimisation of real-world problems.

    This thesis presents a novel evolutionary optimisation algorithm that can improve the quality of solutions in simulation-based optimisation. Simulation-based optimisation is the process of finding optimal parameter settings without explicitly examining each possible configuration of settings. An optimisation algorithm generates potential configurations and sends these to the simulation, which acts as an evaluation function. The evaluation results are used to refine the optimisation such that it eventually returns a high-quality solution. The algorithm described in this thesis integrates multi-objective optimisation, parallelism, surrogate usage, and noise handling in a unique way to deal with simulation-based optimisation problems that exhibit these characteristics. In order to handle multiple, conflicting optimisation objectives, the algorithm uses a Pareto approach, in which the set of best trade-off solutions is searched for and presented to the user. The algorithm supports a high degree of parallelism by adopting an asynchronous master-slave parallelisation model in combination with an incremental population refinement strategy. A surrogate evaluation function is adopted to quickly identify promising candidate solutions and filter out poor ones, and a novel technique based on inheritance is used to compensate for the uncertainties associated with the approximate surrogate evaluations. Furthermore, to tackle real-world unpredictability (noise), the algorithm uses a novel technique for multi-objective problems that effectively reduces noise through a dynamic procedure for resampling solutions. The proposed algorithm is evaluated on benchmark problems and two complex real-world problems of manufacturing optimisation. The first real-world problem concerns the optimisation of a production cell at Volvo Aero, while the second concerns the optimisation of a camshaft machining line at Volvo Cars Engine. The results show that the algorithm finds better solutions than existing, similar algorithms for all the problems considered. The new techniques for dealing with surrogate imprecision and noise are identified as key reasons for this good performance.
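
    The Pareto approach mentioned above keeps the set of mutually non-dominated trade-off solutions. A minimal sketch of that filtering step, assuming all objectives are minimized (the toy population is hypothetical):

        import numpy as np

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (minimization)."""
            return np.all(a <= b) and np.any(a < b)

        def pareto_front(objectives):
            """Return the indices of the non-dominated solutions."""
            front = []
            for i, a in enumerate(objectives):
                if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i):
                    front.append(i)
            return front

        # Toy two-objective population (hypothetical evaluation results).
        pop = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
        print(pareto_front(pop))  # -> [0, 1, 3]; [3.0, 4.0] is dominated by [2.0, 3.0]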