
    Input–output uncertainty comparisons for discrete optimization via simulation

    When input distributions to a simulation model are estimated from real-world data, they naturally have estimation error, causing input uncertainty in the simulation output. If an optimization via simulation (OvS) method is applied that treats the input distributions as “correct,” then there is a risk of making a suboptimal decision for the real world, which we call input model risk. This paper addresses a discrete OvS (DOvS) problem of selecting the real-world optimal from among a finite number of systems when all of them share the same input distributions estimated from common input data. Because input uncertainty cannot be reduced without collecting additional real-world data—which may be expensive or impossible—a DOvS procedure should reflect the limited resolution provided by the simulation model in distinguishing the real-world optimal solution from the others. In light of this, our input–output uncertainty comparisons (IOU-C) procedure focuses on comparisons rather than selection: it provides simultaneous confidence intervals for the difference between each system’s real-world mean and the best mean of the rest with any desired probability, while accounting for both stochastic and input uncertainty. To make the resolution as high as possible (intervals as short as possible) we exploit the common input data effect to reduce uncertainty in the estimated differences. Under mild conditions we prove that the IOU-C procedure provides the desired statistical guarantee asymptotically as the real-world sample size and simulation effort increase, but it is designed to be effective in finite samples.
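    The "difference between each system's mean and the best mean of the rest" comparison above can be sketched as follows. This is an illustrative simplification, not the IOU-C procedure itself: it uses a Bonferroni correction and reflects only stochastic (simulation) error, ignoring the input uncertainty and common-input-data correlations the paper's procedure handles.

```python
import numpy as np
from statistics import NormalDist

def comparison_intervals(outputs, alpha=0.05):
    """Bonferroni-adjusted simultaneous confidence intervals for each
    system's mean minus the best (largest) mean of the rest.

    outputs: (k, n) array of n simulation replications per system.
    Sketch only: accounts for stochastic error, not input uncertainty.
    """
    k, n = outputs.shape
    means = outputs.mean(axis=1)
    z = NormalDist().inv_cdf(1 - alpha / (2 * k))   # Bonferroni correction
    intervals = []
    for i in range(k):
        rest = np.delete(np.arange(k), i)
        j = rest[np.argmax(means[rest])]            # best of the rest
        diff = means[i] - means[j]
        # standard error of a difference of two independent sample means
        se = np.sqrt(outputs[i].var(ddof=1) / n + outputs[j].var(ddof=1) / n)
        intervals.append((diff - z * se, diff + z * se))
    return intervals
```

A system whose interval lies entirely below zero can be ruled out as real-world optimal; intervals containing zero indicate the data cannot resolve the comparison.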

    Selection of the Most Probable Best

    We consider an expected-value ranking and selection problem where all k solutions' simulation outputs depend on a common uncertain input model. Given that the uncertainty of the input model is captured by a probability simplex on a finite support, we define the most probable best (MPB) to be the solution whose probability of being optimal is the largest. To devise an efficient sampling algorithm to find the MPB, we first derive a lower bound on the large deviation rate of the probability of falsely selecting the MPB, then formulate an optimal computing budget allocation (OCBA) problem to find the optimal static sampling ratios for all solution-input model pairs that maximize the lower bound. We devise a series of sequential algorithms that apply interpretable and computationally efficient sampling rules and prove that their sampling ratios achieve the optimality conditions for the OCBA problem as the simulation budget increases. The algorithms are benchmarked against a state-of-the-art sequential sampling algorithm designed for contextual ranking and selection problems and demonstrated to have superior empirical performance at finding the MPB.
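    Once the conditional means have been estimated, identifying the MPB itself is a simple computation, sketched below. The table of means and the weight vector are illustrative assumptions; the paper's contribution is the sequential sampling rules that allocate simulation effort to estimate this table efficiently, not this final step.

```python
import numpy as np

def most_probable_best(means, weights):
    """Identify the most probable best (MPB).

    means:   (k, m) array -- mean of solution i under input model j
             (in practice estimated by simulation).
    weights: (m,) probability vector over the m candidate input models
             (the probability simplex on a finite support).
    Assumes larger is better.
    """
    means = np.asarray(means)
    weights = np.asarray(weights)
    best = means.argmax(axis=0)        # conditional best under each input model
    k = means.shape[0]
    # probability that each solution is conditionally optimal
    p_best = np.array([weights[best == i].sum() for i in range(k)])
    return int(p_best.argmax()), p_best
```

For example, with two solutions, two input models, means [[1, 3], [2, 0]], and weights [0.4, 0.6], solution 0 is conditionally best with probability 0.6 and is the MPB.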

    Gaussian Markov random fields for discrete optimization via simulation: framework and algorithms

    We consider optimizing the expected value of some performance measure of a dynamic stochastic simulation with a statistical guarantee for optimality when the decision variables are discrete, in particular, integer-ordered; the number of feasible solutions is large; and the model execution is too slow to simulate even a substantial fraction of them. Our goal is to create algorithms that stop searching when they can provide inference about the remaining optimality gap similar to the correct-selection guarantee that ranking and selection offers when all solutions are simulated. Further, our algorithms remain competitive with fixed-budget algorithms that search efficiently but do not provide such inference. To accomplish this we learn and exploit spatial relationships among the decision variables and objective function values using a Gaussian Markov random field (GMRF). Gaussian random fields on continuous domains are already used in deterministic and stochastic optimization because they facilitate the computation of measures, such as expected improvement, that balance exploration and exploitation. We show that GMRFs are particularly well suited to the discrete decision-variable problem, from both a modeling and a computational perspective. Specifically, GMRFs permit the definition of a sensible neighborhood structure, and they are defined by their precision matrices, which can be constructed to be sparse. Using this framework, we create both single and multiresolution algorithms, prove the asymptotic convergence of both, and evaluate their finite-time performance empirically.
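    The sparse-precision-matrix idea can be illustrated on a one-dimensional integer lattice: with a tridiagonal precision matrix, conditioning the GMRF on a few observed solutions is a sparse linear solve. The neighborhood structure and the parameter values below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def gmrf_conditional(n, obs_idx, obs_val, tau=1.0, rho=0.49):
    """Conditional (posterior) means of a zero-mean GMRF on a 1-D integer
    lattice, given noiseless observations at a few nodes.

    The precision matrix Q is sparse and tridiagonal: Q[i,i] = tau,
    Q[i,i+-1] = -tau*rho (a simple neighborhood; values illustrative).
    Uses the Gaussian identity  mu_{A|B} = -Q_AA^{-1} Q_AB x_B.
    """
    main = tau * np.ones(n)
    off = -tau * rho * np.ones(n - 1)
    Q = sparse.diags([off, main, off], [-1, 0, 1], format="csc")
    free = np.setdiff1d(np.arange(n), obs_idx)
    Q_aa = Q[np.ix_(free, free)]                    # stays sparse
    Q_ab = Q[np.ix_(free, obs_idx)]
    mu_free = -spsolve(Q_aa.tocsc(), Q_ab @ np.asarray(obs_val, float))
    mu = np.zeros(n)
    mu[free] = mu_free
    mu[obs_idx] = obs_val
    return mu
```

Because Q_aa inherits the sparsity of Q, the solve scales far better than inverting a dense covariance matrix, which is the computational advantage the abstract highlights.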

    GaAs droplet quantum dots with nanometer-thin capping layer for plasmonic applications

    We report on the growth and optical characterisation of droplet GaAs quantum dots with extremely thin (11 nm) capping layers. To achieve this result, an internal thermal heating step is introduced during the growth, and its role in the morphological properties of the resulting quantum dots is investigated via scanning electron and atomic force microscopy. Photoluminescence measurements at cryogenic temperatures show optically stable, sharp and bright emission from single quantum dots at near-infrared wavelengths. Given the quality of their optical properties and their proximity to the surface, such emitters are ideal candidates for the investigation of near-field effects, such as coupling to plasmonic modes, in order to strongly control the directionality of the emission and/or the spontaneous emission rate, both crucial parameters for quantum photonic applications.

    Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields

    Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study.
    Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, it solves problems of significantly greater size than any other existing algorithm with similar guarantees can handle.
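    Between global updates, algorithms of this family rank candidate solutions by an improvement measure computed from the conditional distribution. A minimal sketch of the classic expected-improvement criterion (maximization form) is shown below; it is a stand-in for, not the exact criterion used by, rGMIA.

```python
from statistics import NormalDist

def expected_improvement(mu, sd, current_best):
    """Expected improvement of a Gaussian belief N(mu, sd^2) over the
    current best observed value, for a maximization problem.
    Used to rank promising feasible solutions between (infrequent)
    global conditional-distribution updates. Illustrative sketch only.
    """
    if sd <= 0:
        return max(mu - current_best, 0.0)   # degenerate (known) value
    z = (mu - current_best) / sd
    nd = NormalDist()
    # E[max(X - best, 0)] for X ~ N(mu, sd^2)
    return sd * (z * nd.cdf(z) + nd.pdf(z))
```

Solutions with high conditional mean or high remaining uncertainty score well, which is the exploration-exploitation balance the abstract refers to.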

    Interlaboratory comparison study of the Colony Forming Efficiency assay for assessing cytotoxicity of nanomaterials

    Nanotechnology has gained importance in the past years as it provides opportunities for industrial growth and innovation. However, the increasing use of manufactured nanomaterials (NMs) in a number of commercial applications and consumer products also raises safety concerns and questions regarding potential unintended risks to humans and the environment. For several years the European Commission’s Joint Research Centre (JRC) has put effort into the development, optimisation and harmonisation of in vitro test methods suitable for screening and hazard assessment of NMs. Work is done in collaboration with international partners, in particular the Organisation for Economic Co-operation and Development (OECD). This report presents the results from an interlaboratory comparison study of the in vitro Colony Forming Efficiency (CFE) cytotoxicity assay performed in the framework of the OECD's Working Party on Manufactured Nanomaterials (WPMN). Twelve laboratories from the European Commission, France, Italy, Japan, Poland, Republic of Korea, South Africa and Switzerland participated in the study coordinated by JRC. The results show that the CFE assay is a suitable and robust in vitro method to assess cytotoxicity of NMs. The assay protocol is well defined and is easily and reliably transferable to other laboratories. The results obtained show good intra- and interlaboratory reproducibility of the assay for both the positive control and the tested nanomaterials. In conclusion, the CFE assay can be recommended as a building block of an in vitro testing battery for NMs toxicity assessment. It could be used as a first-choice method to define dose-effect relationships for other in vitro assays.

    One-Dimensional Modeling of an Entrained Coal Gasification Process Using Kinetic Parameters

    A one-dimensional reactor model was developed to simulate the performance of an entrained flow gasifier under various operating conditions. The model combined the plug flow reactor (PFR) model with the well-stirred reactor (WSR) model. Reaction kinetics was considered together with gas diffusion for the solid-phase reactions in the PFR model, while equilibrium was considered for the gas-phase reactions in the WSR model. The differential and algebraic equations of mass balance and energy balance were solved by a robust ODE solver, i.e., a semi-implicit Runge–Kutta method, and by a nonlinear algebraic solver, respectively. The computed gasifier performance was validated against experimental data from the literature. The difference in product gas concentration from the equilibrium model and the underlying mechanisms were discussed further. The optimal condition was found after parameter studies were made for various operating conditions.
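    The PFR part of such a model reduces to integrating species balances along the reactor axis with a stiff-capable implicit solver. The toy single-reaction balance below (first-order consumption, illustrative rate constant and velocity, solved with SciPy's implicit Radau method) shows the numerical setup in miniature; it is not the paper's kinetic model.

```python
from scipy.integrate import solve_ivp

def pfr_profile(k=5.0, u=1.0, L=1.0, c_in=1.0):
    """Toy plug-flow reactor mass balance  u * dC/dz = -k * C,
    integrated over reactor length L with an implicit (stiff-capable)
    Radau method, in the spirit of the semi-implicit Runge-Kutta
    solver used in the paper. All parameter values are illustrative.
    """
    return solve_ivp(lambda z, c: -k * c / u, (0.0, L), [c_in],
                     method="Radau", dense_output=True)

sol = pfr_profile()
c_out = sol.y[0, -1]   # compare with the analytic solution c_in * exp(-k*L/u)
```

Real gasifier kinetics couple many such equations with the energy balance, which is why a stiff solver is needed in place of an explicit method.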

    Upgrading Hydrothermal Carbonization (HTC) Hydrochar from Sewage Sludge

    As a treatment method for sewage sludge, the hydrothermal carbonization (HTC) process was adopted in this work. HTC offers great economic efficiency in process operation owing to its reduced energy consumption, and it produces a solid fuel upgraded through increased fixed carbon content and heating value. The ash of sewage sludge, however, contains up to 52.55% phosphate, which degrades the efficiency of thermochemical conversion processes such as pyrolysis, combustion, and gasification by causing slagging. In this study, three kinds of organic acids, i.e., oxalic, tartaric, and citric acid, were selected to eliminate phosphorus from hydrochars produced through the HTC of sewage sludge. The efficiency of the phosphorus removal and the properties of the corresponding HTC hydrochars were analyzed by adding 20 mmol of organic acid per 1 g of phosphorus in the HTC sample. In addition, the phosphorus reduction effect and the applicability to an upgrading process were verified. Oxalic acid was selected as the most appropriate organic acid considering the economic efficiency of process operation. Furthermore, the optimal conditions were selected by analyzing the efficiency of the phosphorus elimination and the properties of the HTC hydrochars as a function of the oxalic acid weight fraction.
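    The dosing rule of 20 mmol of acid per gram of phosphorus translates directly into an acid mass for a given sample, as sketched below. The molar mass used (anhydrous oxalic acid, 90.03 g/mol) is an assumption; the dihydrate (126.07 g/mol) would give a different mass.

```python
def oxalic_acid_dose(sample_g, p_wt_frac, mmol_per_g_p=20.0, mw_acid=90.03):
    """Oxalic acid mass (g) needed for an HTC sample at the study's
    dosing of 20 mmol acid per gram of phosphorus.

    sample_g:  sample mass in grams.
    p_wt_frac: phosphorus weight fraction of the sample (assumed known).
    mw_acid:   molar mass of anhydrous oxalic acid (g/mol) -- an
               assumption; the paper does not state which form was used.
    """
    grams_p = sample_g * p_wt_frac                 # phosphorus in the sample
    mol_acid = grams_p * mmol_per_g_p / 1000.0     # mmol -> mol
    return mol_acid * mw_acid
```

For a 100 g sample containing 2 wt% phosphorus, this gives 40 mmol of acid, i.e., about 3.6 g of anhydrous oxalic acid.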