Experimental Design Methods Used for Regression and Kriging Meta-Models
Meta-models are used for purposes such as optimization and sensitivity analysis when generating data from the simulation model is very time-consuming. Experimental design is one of the most important stages of meta-model building: it determines the combinations of input variables at which the simulation model will be run. The experimental design must suit the structure of the chosen meta-model. This study reviews and discusses the experimental design methods used in the literature for regression and kriging meta-models.
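For kriging meta-models, the designs discussed in this literature are typically space-filling, Latin hypercube sampling being the most common. The sketch below is a minimal illustration of that idea; the function name and the unit-hypercube domain are assumptions for the example, not details from the abstract:

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Space-filling design: exactly one sample per equal-probability stratum per dimension."""
    rng = np.random.default_rng(rng)
    # Jitter one point inside each of n_points strata, then shuffle stratum order per column.
    u = (rng.random((n_points, n_dims)) + np.arange(n_points)[:, None]) / n_points
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_points), j]
    return u  # points in the unit hypercube [0, 1]^d

design = latin_hypercube(10, 2, rng=42)
print(design.shape)  # (10, 2)
```

Each column of the result contains exactly one value in each of the ten strata, which is the defining property that gives the design its one-dimensional uniformity.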
A hybrid global surrogate modeling software for nuclear reactor cross section estimation
Nuclear fuel cycle (NFC) simulators track the amount and composition of materials as they move through facilities such as mines, fuel fabrication plants, and nuclear reactors. A major task of an NFC simulator is to calculate the evolution of the compositions of batches of nuclear materials as they are transmuted in reactors, decay, and are blended with other batches to create reactor fuel or are reprocessed or disposed of. NFC simulation codes that rely on intermediate data, calculated ahead of time and saved in databases, are attractive because their fidelity can be improved by investing more resources in expanding those databases. Shifting the computational work ahead of the reactor simulation in this way allows fidelity to be improved without increasing runtime computational cost. This dissertation describes a method that attempts to maximize the fidelity increase per unit time invested during this precomputation step. Unlike previous work in the reactor simulation field, this methodology does not limit the number or type of runtime simulation inputs. NUDGE (NUclear Database GEneration software) is an implementation of this methodology. The methodology has two main steps in which new data are added to the databases. The first is exploration, where inputs to the database are selected to be as uniformly distributed as possible within the problem input domain. The second is exploitation, where output information is used to inform the selection of the next point to run. An improvement to exploitation, named Voronoi Cell Adjustment, is described in this dissertation and implemented in NUDGE. This improvement has been shown to increase the average fidelity gain during database building. A study of the scaling of the methodology, a comparison of error metrics, and an exploration of optimal values for several key parameters of the methodology are presented. NUDGE has also been used to create a global surrogate model of an NFC simulation software (named XSgen). This model shows better performance than models generated by other established methods under equal constraints.
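The exploration step described above (selecting database inputs to be as uniformly distributed as possible within the input domain) can be approximated with a maximin rule over a candidate pool: pick the candidate whose nearest already-run point is farthest away. This sketch is an illustrative stand-in, not NUDGE's actual selection algorithm:

```python
import numpy as np

def next_exploration_point(existing, candidates):
    """Return the candidate farthest from its nearest existing point (maximin rule)."""
    # Pairwise distances: (n_candidates, n_existing).
    d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
    # For each candidate, distance to its nearest existing point; take the max.
    return candidates[np.argmax(d.min(axis=1))]

rng = np.random.default_rng(0)
existing = rng.random((5, 2))      # points already in the database (illustrative)
candidates = rng.random((200, 2))  # candidate pool over the input domain
pt = next_exploration_point(existing, candidates)
```

Repeating this rule as points accumulate keeps the database roughly space-filling; an exploitation step would instead weight candidates by observed output behavior.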
Force-Fed Microchannel High Heat Flux Cooling Utilizing Microgrooved Surfaces
Among other applications, the increase in power density of advanced electronic components has created a need for high heat flux cooling. Future processors are anticipated to exceed the current barrier of 1000 W/cm², while the working temperature of such systems is expected to remain more or less the same. Well-known current cooling technologies have shown little promise of meeting these demands.
This dissertation investigated an innovative cooling technology referred to as force-fed heat transfer. Force-fed microchannel heat sinks (FFMHS) utilize enhanced microgrooved surfaces and advanced flow distribution manifolds, which together create a system of short microchannels running in parallel. For a single-phase FFMHS, a numerical model was incorporated into a multi-objective optimization algorithm, and the optimum parameters that generate the maximum heat transfer
coefficients with minimum pumping power were identified. Similar multi-objective optimization procedures were applied to Traditional Microchannel Heat Sinks (TMHS) and Jet Impingement Heat Sinks (JIHS). A comparison at the optimum designs indicates that, for a 1 × 1 cm² base heat sink area, the heat transfer coefficients of FFMHS can be 72% higher than TMHS and 306% higher than JIHS at the same pumping power. For two-phase FFMHS, three different heat sink designs incorporating microgrooved surfaces with microchannel widths between 21 μm and 60 μm were tested experimentally using R-245fa, a dielectric fluid. It was demonstrated that FFMHS can cool higher heat fluxes with lower pumping power than conventional methods.
The flow and heat transfer characteristics in two-phase mode were evaluated using a visualization test setup. It was found that at low hydraulic diameter and low mass flux, the dominant heat transfer mechanism is rapid dynamic bubble expansion leading to an elongated bubble flow regime. At high heat flux, as well as at combinations of high heat flux and large hydraulic diameters, the flow regimes resemble the flow characteristics observed in conventional tubes.
The present research is the first of its kind to develop a better understanding of single-phase and phase-change heat transfer in FFMHS through flow visualization, numerical and experimental modeling of the phenomena, and multi-objective optimization of the heat sink.
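The multi-objective optimization applied here trades the heat transfer coefficient off against pumping power, and its output is a Pareto front of non-dominated designs. A minimal sketch of how such a front is identified; the numbers are invented for illustration, not measurements from the study:

```python
import numpy as np

def pareto_front(h, p):
    """Indices of non-dominated designs: maximize h (heat transfer coeff.),
    minimize p (pumping power)."""
    idx = []
    for i in range(len(h)):
        # Design i is dominated if some design is at least as good on both
        # objectives and strictly better on one.
        dominated = np.any((h >= h[i]) & (p <= p[i]) & ((h > h[i]) | (p < p[i])))
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical candidate designs (heat transfer coefficient, pumping power).
h = np.array([10.0, 20.0, 15.0, 25.0])
p = np.array([1.0, 2.0, 2.0, 4.0])
print(pareto_front(h, p))  # → [0, 1, 3]
```

Design 2 is dropped because design 1 achieves a higher coefficient at the same pumping power; the remaining designs are the trade-off curve a designer would choose from.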
Fundamentals and Applications of Response Surface Methodology
Master's dissertation in Statistics, Mathematics and Computation presented to Universidade Aberta. The optimization of processes and products, the characterization of systems, and the quantification of the impact of input-parameter uncertainty on the system response are of growing importance in research across many areas of society, whether for their economic impact or for the consequences that may ensue. Response Surface Methodology (RSM), in its various approaches, has proven to be a tool of major importance in these fields.
Since the publication of the paper by Box and Wilson (1951), the methodology has been a subject of interest to researchers, in both its fundamentals and its applications. In the traditional approach, the methodology is sequential, and each iteration involves three steps: defining the experimental design, fitting the model, and optimization.
Over these six decades, experimental designs have been developed in response to new applications and objectives, in order to provide the most accurate model possible for the purpose at hand. The models used to approximate the response have evolved from first- and second-order polynomial models, through various nonlinear models, to machine-learning models. Optimization methods have gone through the same process of expansion, in order to meet increasingly demanding challenges.
This path is closely tied to advances in computing and computer simulation. While at first the methodology was applied only to real systems, today the simulation of systems, in many areas and with increasing complexity, relies on metamodels to reduce the associated computational costs. The probabilistic quantification of uncertainty is an excellent example of the application of RSM: the impact of input uncertainties on the system response can be quantified by implementing the methodology with a stochastic approach, which also allows sensitivity analysis.
This work surveys the developments of RSM, at the various stages of its implementation, over the six decades since its introduction. Three applications are presented: in the ceramics industry, in forestry production, and in healthcare, specifically in breast cancer prognosis.
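The three-step RSM iteration (define the experimental design, fit the model, optimize) can be sketched end-to-end on a toy problem. The quadratic response below and its optimum at (1, -0.5) are assumptions invented for the illustration:

```python
import numpy as np

# Toy "true" response with an assumed optimum at (1, -0.5), for illustration only.
def response(x1, x2):
    return 5.0 - (x1 - 1.0) ** 2 - 2.0 * (x2 + 0.5) ** 2

# Step 1: experimental design -- a 3x3 full factorial over the current region.
g = np.array([-1.0, 0.0, 1.0])
X1, X2 = np.meshgrid(g, g)
x1, x2 = X1.ravel(), X2.ravel()
y = response(x1, x2)

# Step 2: fit the second-order model
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
M = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(M, y, rcond=None)[0]

# Step 3: optimize -- solve grad = 0 for the stationary point of the fitted surface.
B = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
xs = np.linalg.solve(B, -b[1:3])
print(np.round(xs, 3))  # recovers the assumed optimum (1, -0.5)
```

In the sequential version of the methodology, this stationary point (or a steepest-ascent direction, when the fit shows no interior optimum) would recenter the design region for the next iteration.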
Human detection of computer simulation mistakes in engineering experiments
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 97-104). This thesis investigates the notion that the more complex the experimental plan, the less likely an engineer is to discover a simulation mistake in a computer-based experiment. The author used an in vitro methodology to conduct an experiment with 54 engineers completing a design task to find the optimal configuration for a device with seven two-level control factors. Participants worked individually, using a prescribed design approach determined by their randomly assigned experimental condition (an adaptive one-factor-at-a-time plan for the control group or a resolution III fractional factorial plan for the treatment group) with a flawed computer simulation of the device. A domain knowledge score was measured by quiz, and success or failure in discovering the flaw was measured by questioning during debriefing. Most participants using the one-factor-at-a-time plan (14 of 17) discovered the flaw, while almost none (1 of 27) using the fractional factorial plan did so. Logistic regression analysis of the dichotomous outcome on treatment condition and domain knowledge score showed that flaw detection ability improved with increased domain knowledge, but that an advantage of two standard deviations in domain knowledge was insufficient to overcome the disadvantage of using the fractional factorial plan. Participant reactions to simulation results were judged by two independent raters for surprise as an indicator of expectation violation.
Contingency analysis of the surprise ratings showed that participants using the fractional factorial plan were significantly less likely (risk ratio ≈ 0.57) to appear surprised when the anomaly was elicited, but there was no difference in tendency to display surprise otherwise. The observed phenomenon has ramifications beyond simulation mistake detection. Cognitive psychologists have shown that the most effective way to learn a new concept is to observe unexpected behavior, investigate the cause, and then integrate the new concept into one's mental model. If using a complex experimental plan hinders an engineer's ability to recognize anomalous data, the engineer risks losing opportunities to develop expertise. Initial screening and sensitivity analysis are recommended as countermeasures when using complex experiments, but more study is needed for verification. By Troy Brendon Savoie, Ph.D.
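The logistic regression analysis described above (a dichotomous flaw-detection outcome regressed on treatment condition and domain knowledge score) can be sketched on simulated data. The sample construction and effect sizes below are invented for illustration and are not the study's data or estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
treatment = rng.integers(0, 2, n)      # 0 = one-factor-at-a-time, 1 = fractional factorial
knowledge = rng.normal(0.0, 1.0, n)    # standardized domain-knowledge score
# Assumed effects, illustrative only: treatment hurts detection, knowledge helps.
logit = 0.5 - 3.0 * treatment + 1.0 * knowledge
detected = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit by gradient ascent on the log-likelihood (a basic stand-in for IRLS).
X = np.column_stack([np.ones(n), treatment, knowledge])
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (detected - p) / n
# beta[1] < 0: the fractional factorial plan lowers the odds of detection;
# beta[2] > 0: higher domain knowledge raises them.
```

The thesis's headline comparison corresponds to asking how many standard deviations of `knowledge` it would take to offset `beta[1]`, i.e. the ratio -beta[1] / beta[2].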
Comparing designs for computer simulation experiments
The use of simulation as a modeling and analysis tool is widespread. Simulation enables virtual experimentation in a validated computer environment. Often the underlying function for the results of a computer simulation experiment has too much curvature to be adequately modeled by a low-order polynomial, and in such cases finding an appropriate experimental design is not easy. This research uses prediction variance over the volume of the design region to evaluate computer simulation experiments, assuming the modeler is interested in fitting a second-order polynomial or a Gaussian process model to the response data. Both space-filling and optimal designs were considered.
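For the second-order polynomial case, evaluating a design by prediction variance typically means computing the scaled prediction variance x'(X'X)⁻¹x of the assumed model at many points in the region and comparing designs on summaries of that quantity. A minimal sketch; the two particular designs compared are illustrative, not the ones studied:

```python
import numpy as np

def scaled_pred_variance(design, points, model):
    """x'(X'X)^{-1}x at each evaluation point, for a given model expansion."""
    X = model(design)
    XtX_inv = np.linalg.inv(X.T @ X)
    F = model(points)
    return np.einsum('ij,jk,ik->i', F, XtX_inv, F)

def quadratic_2d(p):
    """Full second-order model expansion in two factors."""
    x1, x2 = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Two candidate 9-run designs on [-1, 1]^2: a 3x3 factorial vs. a random design.
g = np.array([-1.0, 0.0, 1.0])
grid = np.array([(a, b) for a in g for b in g])
rng = np.random.default_rng(0)
rand = rng.uniform(-1, 1, (9, 2))

# Average scaled prediction variance over the design region.
eval_pts = rng.uniform(-1, 1, (1000, 2))
v_grid = scaled_pred_variance(grid, eval_pts, quadratic_2d).mean()
v_rand = scaled_pred_variance(rand, eval_pts, quadratic_2d).mean()
```

Summaries like these averages (or the maxima, as in fraction-of-design-space plots) give a direct basis for comparing space-filling and optimal designs under the assumed model.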