
    A study of response surface in simulation of emergency room systems

    The purpose of this research was to characterize a response surface with respect to changes made to the input variables of an emergency room system. Response Surface Methodology (RSM) was used to identify the behavior of the response variable with respect to changes in the input variables. Several factors were examined for relevance and significance for the purpose of experimentation. The findings of this research revealed that one factor (nurses) was highly significant to the performance measure (time in the system). However, the interactions between the other factors also played an important role. It was determined that a linear regression model is not useful in predicting time in the system in emergency rooms; non-linear models need to be explored. A series of production rules was derived. These production rules can be used in a variety of situations where a decision on how to modify model inputs needs to be made.
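
    The abstract does not give the fitted model, but a minimal sketch of the kind of response-surface metamodel described here, fitted to simulated time-in-system data, might look as follows; the factor names (nurses, doctors) and all numbers are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical design points: numbers of nurses and doctors in the ER model
# (factor names and values are illustrative, not from the study).
nurses  = np.array([2, 2, 4, 4, 3, 3, 2, 4], dtype=float)
doctors = np.array([1, 3, 1, 3, 2, 2, 2, 2], dtype=float)
# Simulated mean time in system (minutes) for each design point (made up).
time_in_system = np.array([190, 150, 120, 95, 130, 128, 160, 105], dtype=float)

# Second-order response surface: main effects, interaction, and quadratic terms.
X = np.column_stack([
    np.ones_like(nurses),      # intercept
    nurses, doctors,           # main effects
    nurses * doctors,          # interaction (important per the study)
    nurses**2, doctors**2,     # curvature (non-linear behaviour)
])
coef, *_ = np.linalg.lstsq(X, time_in_system, rcond=None)
print("fitted coefficients:", np.round(coef, 3))

# Predict time in system for a new staffing level (nurses=3, doctors=3).
new = np.array([1.0, 3, 3, 9, 9, 9])
print("predicted time in system:", float(new @ coef))
```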

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments; SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
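
    The new dominance concepts for stochastic objectives are not spelled out in the abstract, but the standard Pareto-dominance test that MOEAs such as FPGA, SPGA, and NSGA-II build on can be sketched as follows; the toy objective vectors are invented for illustration.

```python
from typing import Sequence, List

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(front: List[Sequence[float]]) -> List[Sequence[float]]:
    """Filter a set of objective vectors down to its nondominated subset."""
    return [p for p in front if not any(dominates(q, p) for q in front if q is not p)]

# Toy bi-objective example (e.g., cost vs. cycle time, both minimized).
points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(nondominated(points))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```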

    Efficient treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis

    The main goals of this thesis are the development of a computationally efficient framework for the stochastic treatment of various important uncertainties in probabilistic seismic hazard and risk assessment, its application to a newly created seismic risk model of Indonesia, and the analysis and quantification of the impact of these uncertainties on the distribution of estimated seismic losses for a large number of synthetic portfolios modeled after real-world counterparts. The treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis have already been identified as an area that could benefit from increased research attention. Furthermore, it has become evident that the lack of research on the development and application of suitable sampling schemes to increase the computational efficiency of the stochastic simulation represents a bottleneck for applications where model runtime is an important factor. In this research study, the development and state of the art of probabilistic seismic hazard and risk analysis is first reviewed and opportunities for improved treatment of uncertainties are identified. A newly developed framework for the stochastic treatment of portfolio location uncertainty as well as ground motion and damage uncertainty is presented. The framework is then optimized with respect to computational efficiency. Amongst other techniques, a novel variance reduction scheme for portfolio location uncertainty is developed. Furthermore, in this thesis, some well-known variance reduction schemes such as Quasi-Monte Carlo, Latin Hypercube Sampling, and MISER (locally adaptive recursive stratified sampling) are applied for the first time to seismic hazard and risk assessment. The effectiveness and applicability of all the schemes used are analyzed. Several chapters of this monograph describe the theory, implementation, and some exemplary applications of the framework. To conduct these exemplary applications, a seismic hazard model for Indonesia was developed and used for the analysis and quantification of loss uncertainty for a large collection of synthetic portfolios. As part of this work, the new framework was integrated into a probabilistic seismic hazard and risk assessment software suite developed and used by Munich Reinsurance Group. Furthermore, those parts of the framework that deal with location and damage uncertainties are also used by the flood and storm natural catastrophe model development groups at Munich Reinsurance for their risk models.
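
    As a rough illustration of the variance-reduction idea discussed here, the sketch below compares crude Monte Carlo with Latin Hypercube Sampling on a toy loss function of two uniform risk factors; the loss function and sample sizes are assumptions for demonstration only and have no connection to the Indonesian risk model.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
n, d = 1_000, 2

def toy_loss(u: np.ndarray) -> np.ndarray:
    """Illustrative loss as a function of two uniform(0,1) risk factors,
    e.g. ground-motion and damage uncertainty (purely hypothetical)."""
    return np.exp(3.0 * u[:, 0]) * u[:, 1] ** 2

# Crude Monte Carlo estimate of the expected loss.
u_mc = rng.random((n, d))
est_mc = toy_loss(u_mc).mean()

# Latin Hypercube Sampling: stratifies each marginal, typically reducing
# the variance of the estimator for the same number of model runs.
sampler = qmc.LatinHypercube(d=d, seed=0)
u_lhs = sampler.random(n)
est_lhs = toy_loss(u_lhs).mean()

print(f"crude MC estimate: {est_mc:.4f}")
print(f"LHS estimate:      {est_lhs:.4f}")
# Exact value: ((e**3 - 1)/3) * (1/3) ≈ 2.12, useful to gauge which is closer.
```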

    Feasibility Study of Variance Reduction in the THUNDER Campaign-Level Model

    As an Air Force Chief of Staff endorsed topic, the Air Force Studies and Analyses Agency (AFSAA) requested an effective and efficient way to reduce the variance in analysis results from THUNDER. THUNDER is a large-scale discrete event simulation of campaign-level military operations and is used to examine issues involving the utility and effectiveness of air and space power in a theater-level, joint warfare context. Given the large number of stochastic components within THUNDER, results are produced with highly variable measures of effectiveness (MOEs), causing difficulties in evaluating alternative force structures, weapon systems, etc. This work responds to AFSAA's request by examining the application of Common Random Numbers (CRN), Antithetic Variates (AV), Control Variates (CV), and a combination of AVs and CVs. The differences between the 95% confidence-interval halfwidths of the standard output and the variance-reduced output were examined. Analysis of the correlation between MOEs and the random inputs in the CV technique provided insight into the workings of THUNDER. A new, state-of-the-art combined multiple recursive generator was incorporated into THUNDER to synchronize the random inputs for CRN and AV. The result is a methodology for implementing all four variance reduction techniques.
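
    THUNDER itself cannot be reproduced here, but the antithetic-variates idea examined in this work can be sketched on a toy measure of effectiveness; the response function, sample sizes, and the normal-approximation confidence interval below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs = 500          # number of antithetic pairs (1000 runs in total)

def toy_moe(u: np.ndarray) -> np.ndarray:
    """Stand-in for a THUNDER measure of effectiveness driven by U(0,1) inputs
    (purely illustrative; the real model is a large discrete-event simulation)."""
    return 100.0 * np.sqrt(u) + 20.0 * u**2

def halfwidth_95(x: np.ndarray) -> float:
    """Normal-approximation 95% confidence-interval halfwidth of the mean."""
    return 1.96 * x.std(ddof=1) / np.sqrt(len(x))

# Standard (independent) runs.
u_ind = rng.random(2 * n_pairs)
moe_ind = toy_moe(u_ind)

# Antithetic variates: pair each run using u with a run using 1 - u, then
# average within pairs; the induced negative correlation shrinks the variance.
u = rng.random(n_pairs)
moe_av = 0.5 * (toy_moe(u) + toy_moe(1.0 - u))

print(f"independent runs : mean={moe_ind.mean():7.3f}  95% halfwidth={halfwidth_95(moe_ind):.3f}")
print(f"antithetic pairs : mean={moe_av.mean():7.3f}  95% halfwidth={halfwidth_95(moe_av):.3f}")
```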

    Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    This research investigates how aggregation is currently conducted in the simulation of large systems and how suitable aggregation can be achieved. More specifically, it examines how to accurately aggregate hierarchical lower-level (higher-resolution) models into the next higher level in order to reduce the complexity of the overall simulation model. We develop aggregation procedures between two simulation levels (e.g., aggregation of engagement-level models into a mission-level model) to address how much and what information needs to pass from the high-resolution to the low-resolution model in order to preserve statistical fidelity. We present a mathematical representation of the simulation model based on network theory, together with procedures for simulation aggregation that are logical and executable. This research examines the effectiveness of several statistical techniques, including regression and three types of artificial neural networks, as aggregation techniques for predicting outputs of the lower-level model and evaluating their effect as inputs into the next higher-level model. The proposed process is a collection of conventional statistical and aggregation techniques, including one novel concept and extensions to the regression and neural network methods, which are compared to the truth simulation model, in which actual lower-level model outputs are used as direct inputs into the next higher-level model. The aggregation methodology developed in this research provides an analytic foundation that formally defines the steps essential to appropriately and effectively simulating large hierarchical systems.
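
    A minimal sketch of the neural-network metamodelling step described above, in which a fitted metamodel stands in for the high-resolution engagement-level model and feeds a toy mission-level model, might look as follows; the input factors, data, and both models are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical engagement-level (high-resolution) simulation data:
# inputs = [weapon range, sensor quality], output = probability of kill.
X_low = rng.random((200, 2))
p_kill = 0.3 + 0.5 * X_low[:, 0] * X_low[:, 1] + 0.05 * rng.standard_normal(200)
p_kill = np.clip(p_kill, 0.0, 1.0)

# Neural-network metamodel standing in for the engagement-level model.
metamodel = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
metamodel.fit(X_low, p_kill)

def mission_level_model(pk: float) -> float:
    """Toy mission-level (lower-resolution) model: expected targets destroyed
    out of 10 sorties, taking the aggregated p_kill as its input."""
    return 10.0 * pk

# Aggregation: a metamodel prediction replaces a full engagement-level run.
pk_hat = float(metamodel.predict(np.array([[0.8, 0.6]]))[0])
print(f"predicted p_kill = {pk_hat:.3f}, expected kills = {mission_level_model(pk_hat):.2f}")
```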

    Multidisciplinary optimization under uncertainty in the aircraft conceptual design phase

    This research concerns multidisciplinary optimization as deployed in the design of complex systems, with a particular focus on aircraft design. At this stage of design, the uncertainties involved are significant. Effective new methods for modeling and propagating uncertainty are therefore proposed in order to design a reliable and robust system. They draw on adaptive modeling techniques, classical optimization algorithms, and techniques based on artificial intelligence (multi-agent systems).

    Operational management in pot plant production

    Operational management in pot plant production was investigated by means of systems analysis and simulation. A theoretical framework for operational decision-making consisted of elaboration decisions, progress decisions, and adoption decisions. This framework was incorporated in a pot plant nursery model, which simulated the implementation of a given tactical production plan under uncertainty. In this model, crop growth as well as price formation (of the foliage plant Schefflera arboricola 'Compacta') were affected by randomly simulated exogenous conditions, which resulted in plant sizes and plant prices deviating from planning premises. Operational decision-making concerned the adaptation of cultivation schedules (and delivery patterns) in order to restore compatibility between plan and reality. Regression metamodelling was applied to analyze simulation results with respect to differences in annual net farm income due to operational decision-making, tactical planning, price variability, and the grower's attitude to operational price risk. All differences could be explained by individual decision events triggered by the strategy of operational management applied in the particular simulation. In conclusion, the applied methodology was successful in exploring the opportunities for operational management in pot plant production, based on a rather normative approach and integrating theory from various scientific disciplines. Furthermore, the simulation experiments showed a significant impact of operational management on the nursery's performance. Hence, the present study indicates several opportunities for beneficial support of operational management on pot plant nurseries.
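
    A minimal sketch of the regression-metamodelling step described above, fitting a first-order model with interactions to simulated net farm income from a coded experiment, might look as follows; the factor coding and all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical coded factors from a 2^3 simulation experiment (all data invented):
# x1 = operational management (off/on), x2 = price variability (low/high),
# x3 = grower's price-risk attitude (averse/neutral), coded as -1 / +1.
design = np.array([
    [-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
    [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1],
], dtype=float)
# Simulated annual net farm income (thousands, made up) for each run.
income = np.array([92, 105, 85, 103, 94, 108, 83, 106], dtype=float)

# First-order regression metamodel with two-factor interactions.
X = np.column_stack([
    np.ones(len(design)), design,
    design[:, 0] * design[:, 1],
    design[:, 0] * design[:, 2],
    design[:, 1] * design[:, 2],
])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)
labels = ["intercept", "management", "price var", "risk attitude",
          "mgmt x price", "mgmt x risk", "price x risk"]
for name, c in zip(labels, coef):
    print(f"{name:>14s}: {c:+.2f}")
```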

    An Adaptive Simulation-based Decision-Making Framework for Small and Medium sized Enterprises

    The rapid development of key mobile technologies supporting the ‘Internet of Things’, such as 3G, Radio Frequency Identification (RFID), and ZigBee, together with advanced decision-making methods, has significantly improved Decision-Making Systems (DMS) over the last decade. Advanced wireless technology can provide real-time data collection to support a DMS, and effective decision-making techniques based on real-time data can improve Supply Chain (SC) efficiency. However, it is difficult for Small and Medium sized Enterprises (SMEs) to adopt this technology effectively because of the complexity of the technology and methods and the limited resources of SMEs. Consequently, a suitable DMS which can support effective decision making is required in the operation of SMEs in SCs. This thesis develops an adaptive simulation-based DMS for SMEs in the manufacturing sector. The aim is to help SMEs improve their competitiveness by reducing costs and by reacting responsively, rapidly, and effectively to customer demands. The developed adaptive framework is able to answer flexible ‘what-if’ questions by finding, optimising, and comparing solutions under different scenarios, supporting SME managers in making efficient and effective decisions and in building more customer-driven enterprises. The proposed framework consists of simulation blocks separated by data filter and convert layers. A simulation block may include cell simulators, optimisation blocks, and databases. A cell simulator is able to provide an initial solution for a particular scenario, and an optimisation block is able to output a group of optimum solutions based on that initial solution for decision makers. A two-phase optimisation algorithm integrating Conflicted Key Points Optimisation (CKPO) and a Dispatching Optimisation Algorithm (DOA) is proposed for the Jm|STsi,b condition with Lot-Streaming (LS). The features of the integrated optimisation algorithm are demonstrated using a UK-based manufacturing case study. Each simulation block is a relatively independent unit separated by the relevant data layers; thus SMEs are able to design their simulation blocks according to their requirements and constraints, such as small budgets and limited professional staff. A simulation block can communicate with related simulation blocks through the relevant data filter and convert layers, and this constructs a communication and information network to support the DMSs of Supply Chains (SCs). Two case studies have been conducted to validate the proposed simulation framework. An SME that produces gifts in a SC is used to validate the Make-To-Stock (MTS) production strategy with a developed stock-driven simulation-based DMS, and a schedule-driven simulation-based DMS is implemented for a UK-based manufacturing case study using the Make-To-Order (MTO) production strategy. The two simulation-based DMSs are able to provide various data to support management decision making under different scenarios.
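
    The abstract describes the framework only at a high level, but its structure of simulation blocks (cell simulators, optimisation blocks, databases) linked by data filter and convert layers might be sketched as follows; all class and method names are assumptions inferred from the abstract, not the thesis implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Names and structure below are illustrative, inferred from the abstract only.

@dataclass
class CellSimulator:
    """Produces an initial solution (e.g., a schedule) for one scenario."""
    run: Callable[[Dict], Dict]

@dataclass
class OptimisationBlock:
    """Refines an initial solution into a set of candidate solutions."""
    optimise: Callable[[Dict], List[Dict]]

@dataclass
class SimulationBlock:
    """One relatively independent unit: simulator + optimiser + local database."""
    simulator: CellSimulator
    optimiser: OptimisationBlock
    database: List[Dict] = field(default_factory=list)

    def answer_what_if(self, scenario: Dict) -> List[Dict]:
        initial = self.simulator.run(scenario)
        solutions = self.optimiser.optimise(initial)
        self.database.append({"scenario": scenario, "solutions": solutions})
        return solutions

# Toy usage: a stock-driven block answering a 'what-if' demand question.
block = SimulationBlock(
    simulator=CellSimulator(run=lambda s: {"stock": 2 * s["demand"]}),
    optimiser=OptimisationBlock(optimise=lambda sol: [sol, {"stock": sol["stock"] - 10}]),
)
print(block.answer_what_if({"demand": 100}))
```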