
    Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim

    Large-scale, multidisciplinary engineering designs are difficult due to the complexity and dimensionality of the problems involved. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms to build the surrogates; three different DOE techniques—full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD)—are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using the SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating that database grows rapidly. A data-driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
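    The key cost-saving device, running the expensive simulation only when no sufficiently close design point already exists in the cumulative database, can be sketched as follows. This is a minimal illustration with assumed names (a placeholder expensive_simulation and a Euclidean distance tolerance tol); it is not the WBCSim/SAO implementation itself.

```python
import numpy as np

def expensive_simulation(x):
    """Placeholder for the costly simulation run (hypothetical stand-in)."""
    return float(np.sum(x**2))          # toy objective, for illustration only

class SimulationCache:
    """Cumulative database of (design point, response) pairs.

    The simulation is invoked only when no stored point lies within
    `tol` (Euclidean distance) of the requested design point.
    """
    def __init__(self, tol=1e-2):
        self.tol = tol
        self.points = []     # stored design vectors
        self.values = []     # corresponding responses

    def evaluate(self, x):
        x = np.asarray(x, dtype=float)
        if self.points:
            dists = np.linalg.norm(np.array(self.points) - x, axis=1)
            i = int(np.argmin(dists))
            if dists[i] <= self.tol:          # reuse a nearby stored result
                return self.values[i]
        y = expensive_simulation(x)           # otherwise pay the full cost
        self.points.append(x)
        self.values.append(y)
        return y

# A DOE sample (here simply random points) feeds the cache; a surrogate would
# then be fitted to cache.points / cache.values before each SAO iteration.
cache = SimulationCache(tol=0.05)
rng = np.random.default_rng(0)
training = [cache.evaluate(rng.uniform(-1, 1, size=3)) for _ in range(20)]
```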

    The Environmental Costing Model: a tool for more efficient environmental policymaking in Flanders

    The environmental costing model (Milieu-Kosten-Model or MKM in Dutch) is a tool for assessing the cost-efficiency of environmental policy. The present paper describes the modelling methodology and illustrates it by presenting numerical simulations for selected multi-sector and multi-pollutant emission control problems for Flanders. First, the paper situates the concept of cost-efficiency in the context of Flemish environmental policy and motivates the chosen approach. Second, the structure of the numerical simulation model is laid out. The basic model input is an extensive database of potential emission reduction measures for several pollutants and several sectors. Each measure is characterized by its specific emission reduction potential and average abatement cost. The MKM determines, by means of linear programming techniques, least-cost combinations of abatement measures so as to satisfy possibly multi-pollutant emission standards. Emission reduction targets can be imposed for Flanders as a whole, per sector, or even per installation. The measures can be constrained to satisfy “equal treatment” of sectors and several other political feasibility constraints. Third, the features of the model are illustrated by means of a multi-sector (non-ferrous, chemical and ceramics industry) and multi-pollutant (SO2, NOx) example. Results show clearly that important cost savings are possible by allowing for more flexibility (emission standards for Flanders as a whole instead of per sector). Cost savings from explicitly taking into account the multi-pollutant nature of environmental regulation are modest for the current test version of the database.
    Keywords: environmental economics, cost efficiency, multi-pollutant emission control problem, numerical simulation model, linear programming
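    The least-cost selection step is a standard linear program. The sketch below shows its structure with hypothetical data (the reduction matrix, costs and targets are invented for illustration and do not come from the MKM database): each measure has a per-pollutant reduction potential and an annualised cost, and the solver chooses implementation fractions that meet the SO2 and NOx targets at minimum total cost.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical measures: rows = measures, columns = pollutants (SO2, NOx).
# reduction[i, p] = tonnes of pollutant p removed if measure i is fully applied.
reduction = np.array([[120.0,  10.0],
                      [ 40.0,  90.0],
                      [ 60.0,  60.0],
                      [  0.0, 150.0]])
cost = np.array([300.0, 250.0, 180.0, 400.0])    # annualised cost per measure
target = np.array([150.0, 180.0])                # required reduction per pollutant

# Decision variable x[i] in [0, 1]: fraction of measure i that is implemented.
# Minimise total cost subject to achieved reduction >= target
# (written as <= for linprog's A_ub x <= b_ub convention).
res = linprog(c=cost,
              A_ub=-reduction.T, b_ub=-target,
              bounds=[(0.0, 1.0)] * len(cost),
              method="highs")
print(res.x, res.fun)   # least-cost combination of measures and its total cost
```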

    On Discrete-Event Simulation and Integration in the Manufacturing System Development Process

    Discrete-event simulation (DES) is seldom used in the manufacturing system development process; instead, it is usually used to cure problems in existing systems. This has the effect that the simulation study alone is considered to be the cost driver for the analysis of the manufacturing system. It is argued that this is not an entirely correct view, since the analysis has to be performed anyway and the cost directly related to the simulation study lies mainly in the model realization phase. It is concluded that the simulation study life cycle should preferably coincide with the corresponding manufacturing system's life cycle, to increase the usability of the simulation model and to increase efficiency in the simulation study process. A model is supplied to be used for management and engineering process improvements and for improvements of the organizational issues that support simulation activities. By institutionalizing and utilizing well-defined processes, the perceived complexity related to DES is expected to be reduced over time. Cost is highly correlated with the time consumed in a simulation study. The presented methodology tries to reduce time consumption and lead time in the simulation study by: (i) reducing redundant work, (ii) reducing rework, and (iii) moving labor-intensive activities forward in time. To reduce the time to collect and analyze input data, a framework is provided that aims at delivering high-granularity input data without dependencies. The input data collection framework is designed to provide data for operation and analysis of the manufacturing system in several domains. To reduce the model realization time, two approaches are presented. The first approach supplies a set of modules that enables parameterized models of automated subassembly systems. The second approach builds and runs the simulation model based on a copy of an MRP database, i.e., no manual intervention is required to build the simulation model. The approach is designed to forecast the performance of an entire enterprise. Since the model is generated from a database, the approach is highly scalable. Furthermore, the maintenance of the simulation model is reduced considerably.
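    The idea behind the second approach, generating the simulation model directly from a database copy, can be illustrated with a minimal sketch. The SQLite table, the SimPy process model and the routing below are assumed stand-ins, not the thesis' actual MRP schema or model generator.

```python
import sqlite3
import simpy

# Hypothetical MRP-style table: one row per work centre with its capacity and
# mean processing time. In the thesis the model is generated from a copy of a
# real MRP database; here an in-memory SQLite table stands in for it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE work_center (name TEXT, capacity INTEGER, proc_time REAL)")
db.executemany("INSERT INTO work_center VALUES (?, ?, ?)",
               [("lathe", 1, 4.0), ("mill", 2, 6.0), ("assembly", 1, 3.0)])

env = simpy.Environment()
# Build the simulation resources directly from the database rows, with no manual modelling.
stations = {name: (simpy.Resource(env, capacity=cap), t)
            for name, cap, t in db.execute("SELECT name, capacity, proc_time FROM work_center")}

def job(env, job_id, routing):
    """A job visits each work centre in its routing and occupies it for proc_time."""
    for step in routing:
        resource, proc_time = stations[step]
        with resource.request() as req:
            yield req
            yield env.timeout(proc_time)
    print(f"job {job_id} finished at t={env.now}")

for j in range(3):
    env.process(job(env, j, ["lathe", "mill", "assembly"]))
env.run()
```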

    Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To this end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with each new data point that arrives in the system, without saving enormous quantities of data to create a historical database as is usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application, in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
    Comment: 28 pages; published 21 April 2015 in MDPI's journal Sensors.
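    A minimal sketch of the online learning idea, training a small network one sample at a time with back-propagation instead of over a stored historical database, is given below in NumPy. The network size, inputs and learning rate are illustrative assumptions, not the values used in the paper or on the 8051 MCU.

```python
import numpy as np

class OnlineTempForecaster:
    """Tiny one-hidden-layer network trained sample-by-sample with online
    back-propagation (stochastic gradient descent). Nothing is stored except
    the weights; sizes and learning rate are illustrative."""

    def __init__(self, n_inputs=4, n_hidden=5, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)   # hidden activations (kept for BP)
        return self.W2 @ self.h + self.b2

    def update(self, x, target):
        """One BP step on the newest sample only."""
        y = self.predict(x)
        err = y - target
        grad_h = err * self.W2 * (1.0 - self.h**2)   # back-prop through tanh
        self.W2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(grad_h, x)
        self.b1 -= self.lr * grad_h
        return err

# Usage: feed the last few temperature readings, predict the next one,
# then update the weights as soon as the true reading arrives.
model = OnlineTempForecaster()
window = np.array([21.0, 21.2, 21.4, 21.3])
forecast = model.predict(window)
model.update(window, target=21.5)
```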

    Electromagnetic modelling and simulation of a high-frequency ground penetrating radar antenna over a concrete cell with steel rods

    This work focuses on the electromagnetic modelling and simulation of a high-frequency Ground-Penetrating Radar (GPR) antenna over a concrete cell with reinforcing elements. The development of realistic electromagnetic models of GPR antennas is crucial for accurately predicting GPR responses and for designing new antennas. We used commercial software implementing the Finite Integration Technique (CST Microwave Studio) to create a model representative of a 1.5 GHz Geophysical Survey Systems, Inc. antenna, by exploiting information published in the literature (namely, in the PhD thesis of Dr Craig Warren); our CST model was validated, in a previous work, by comparison with Finite-Difference Time-Domain results and with experimental data, with very good agreement, showing that the software we used is suitable for the simulation of antennas in the presence of targets in the near field. In the current paper, we first describe in detail how the CST model of the antenna was implemented; subsequently, we present new results calculated with the antenna over a reinforced-concrete cell. This cell is one of the reference scenarios included in the Open Database of Radargrams of COST Action TU1208 “Civil engineering applications of Ground Penetrating Radar” and hosts five circular-section steel rods with different diameters, embedded at different depths in the concrete. Comparisons with a simpler model, in which the physical structure of the antenna is not taken into account, are carried out; the significant differences between the results of the realistic model and those of the simplified model confirm the importance of including accurate models of the actual antennas in GPR simulations; they also emphasize how important it is to remove antenna effects as a pre-processing step on experimental GPR data. The simulation results of the antenna over the concrete cell presented in this paper are attached to the paper as ‘Supplementary materials’.

    A Methodology for Vertically Partitioning in a Multi-Relation Database Environment

    Vertical partitioning, in which attributes of a relation are assigned to partitions, is aimed at improving database performance. We extend previous research that is based on a single relation to a multi-relation database environment by including referential integrity constraints, an access-time-based heuristic, and a comprehensive cost model that considers most transaction types, including updates and joins. The algorithm was applied to a real-world insurance CLAIMS database. Simulation experiments were conducted, and the results show a performance improvement of 36% to 65% over the unpartitioned case. Application of our method to small databases resulted in partitioning schemes that are comparable to the optimal ones.
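    To illustrate the flavour of attribute-to-partition assignment, the sketch below builds an attribute affinity matrix from a hypothetical usage matrix and groups attributes greedily. It is a toy stand-in for the paper's cost-model-driven algorithm, which additionally accounts for referential integrity, updates and joins; the table, transactions and threshold are invented for illustration.

```python
import numpy as np

# Hypothetical attribute-usage matrix for one relation of a claims database:
# use[t, a] = access frequency of attribute a by transaction t.
attributes = ["claim_id", "policy_no", "amount", "adjuster", "notes"]
use = np.array([[25, 25, 25,  0,  0],    # transaction 1 (lookups)
                [ 0, 10, 10, 10,  0],    # transaction 2 (updates)
                [ 5,  0,  0,  5,  5]])   # transaction 3 (reports)

# Attribute affinity: how strongly two attributes are accessed together.
affinity = use.T @ use

def greedy_partition(affinity, threshold):
    """Toy heuristic: place two attributes in the same fragment whenever their
    affinity exceeds `threshold`."""
    n = affinity.shape[0]
    fragment = list(range(n))                 # start with one fragment per attribute
    for i in range(n):
        for j in range(i + 1, n):
            if affinity[i, j] >= threshold:
                old, new = fragment[j], fragment[i]
                fragment = [new if f == old else f for f in fragment]
    return fragment

labels = greedy_partition(affinity, threshold=200)
for frag in sorted(set(labels)):
    print([a for a, f in zip(attributes, labels) if f == frag])
```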

    Stochastic frontier analysis by means of maximum likelihood and the method of moments

    Stochastic frontier analysis (Aigner et al., 1977; Meeusen and van den Broeck, 1977) is widely used to estimate individual efficiency scores. The basic idea lies in the introduction of an additive error term consisting of a noise and an inefficiency term. Most often the assumption of a half-normally distributed inefficiency term is applied, but other distributions are also discussed in the relevant literature. The natural estimation method seems to be Maximum Likelihood (ML) estimation because of the parametric assumptions. But simulation results obtained for the half-normal model indicate that a method of moments approach (MOM) (Olson et al., 1980) is superior for small and medium-sized samples when inefficiency does not strongly dominate noise (Coelli, 1995). In this paper we provide detailed simulation results comparing the two estimation approaches for both the half-normal and the exponential specification of inefficiency. Based on the simulation results, we obtain decision rules for the choice of the superior estimation approach. Both estimation methods, ML and MOM, are applied to a sample of German commercial banks, based on the Bankscope database, to estimate cost efficiency scores.
    Keywords: stochastic frontier, Maximum Likelihood, method of moments, bank efficiency
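    The method-of-moments idea can be sketched for the half-normal case: run OLS, then recover the variance parameters from the second and third central moments of the residuals and shift the intercept (COLS-style). The sketch below assumes a production frontier y = Xb + v - u; for the cost-frontier setting used for the banks, the sign of the inefficiency term flips. The moment formulas follow the Olson et al. (1980) approach, and the data are simulated purely for illustration.

```python
import numpy as np

def mom_frontier(y, X):
    """Method-of-moments estimation of y = X b + v - u with v ~ N(0, s_v^2)
    and u half-normal: OLS first, then s_u and s_v from the second and third
    central moments of the residuals. Illustrative sketch only."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    e = y - X1 @ beta
    m2 = np.mean((e - e.mean())**2)
    m3 = np.mean((e - e.mean())**3)            # negative when inefficiency is present
    sigma_u3 = np.sqrt(np.pi / 2) * np.pi / (np.pi - 4) * m3
    sigma_u = max(sigma_u3, 0.0) ** (1.0 / 3.0)
    sigma_v2 = m2 - (1 - 2 / np.pi) * sigma_u**2
    beta = beta.copy()
    beta[0] += sigma_u * np.sqrt(2 / np.pi)    # shift intercept up to the frontier
    return beta, sigma_u, np.sqrt(max(sigma_v2, 0.0))

# Quick check on simulated data with known parameters (s_u = 0.5, s_v = 0.3).
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 1))
u = np.abs(rng.normal(0, 0.5, n))              # half-normal inefficiency
v = rng.normal(0, 0.3, n)
y = 1.0 + 2.0 * x[:, 0] + v - u
print(mom_frontier(y, x))
```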