
    Panel on future challenges in modeling methodology

    This panel paper presents the views of six researchers and practitioners of simulation modeling. Collectively, we attempt to address a range of key future challenges to modeling methodology. It is hoped that the views expressed in this paper, and the presentations made by the panelists at the 2004 Winter Simulation Conference, will raise awareness and stimulate further discussion on the future of modeling methodology in areas such as modeling problems in business applications, human factors and geographically dispersed networks; rapid model development and maintenance; legacy modeling approaches; markup languages; virtual interactive process design and simulation; standards; and Grid computing.

    SCS: 60 years and counting! A time to reflect on the Society's scholarly contribution to M&S from the turn of the millennium.

    The Society for Modeling and Simulation International (SCS) is celebrating its 60th anniversary this year. Since its inception, the Society has widely disseminated advances in the field of modeling and simulation (M&S) through its peer-reviewed journals. In this paper we profile research published in the journal SIMULATION: Transactions of the Society for Modeling and Simulation International from the turn of the millennium to 2010; the objective is to acknowledge the contribution of the authors and their seminal research papers, their respective universities/departments, and the geographical diversity of the authors' affiliations. A further objective is to contribute towards an understanding of the overall evolution of the discipline of M&S; this is achieved through a classification of M&S techniques and their frequency of use, and an analysis of the sectors that have seen the predominant application of M&S and the context of its application. It is expected that this paper will lead to further appreciation of the contribution of the Society in influencing the growth of M&S as a discipline and, indeed, in steering its future direction.

    Understanding the Impact of Large-Scale Power Grid Architectures on Performance

    Grid balancing, matching supply to demand, is a critical system requirement for the power grid. This balancing has historically been achieved by conventional power generators. However, the increasing level of renewable penetration has brought more variability and uncertainty to the grid (Ela, Diakov et al. 2013; Bessa, Moreira et al. 2014), which has considerable impacts and implications for power system reliability, efficiency, and costs. Energy planners have the task of designing power system infrastructure to provide electricity to the population, wherever and whenever needed. Deciding on the right grid architecture is no easy task, considering consumers' economic, environmental, and security priorities while making efficient use of existing resources. In this research, as one contribution, we explore associations between grid architectures and their performance, that is, their ability to meet consumers' concerns. To do this, we first conduct a correlation analysis study. We propose a generative method that captures path dependency by iteratively creating structurally different grids. The method generates alternative grid architectures by subjecting an initial grid to a heuristic choice method for decision making over a fixed time horizon. Second, we conduct a comparative study to evaluate differences in grid performance. We consider two balancing area operation types, presenting different structures and coordination mechanisms. Both studies are performed using a grid simulation model, Spark! The aim of this model is to offer a meso-scale solution that enables the study of very large power systems over long time horizons, with a sufficient level of fidelity to perform day-to-day grid activities and support architectural questions about the grids of the future. More importantly, the model reconciles long-term planning with short-term grid operations, enabling validation of long-term projections through day-to-day grid operations and response. This is our second contribution.
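
    The abstract above gives no implementation detail, but the generative idea, repeatedly applying a heuristic expansion choice to an initial grid to produce path-dependent, structurally different architectures and then recording structural metrics for a correlation analysis, can be illustrated with a minimal sketch. The expansion rule, cost figures, and metrics below are assumptions for illustration only, not the Spark! model.

```python
# Illustrative sketch (not the Spark! model): grow alternative grid architectures
# from a common initial grid by applying one heuristic expansion choice per year,
# then record simple structural metrics for a correlation analysis.
import random
from dataclasses import dataclass, field

@dataclass
class Grid:
    capacity: dict = field(default_factory=dict)   # node -> installed generation (MW)
    demand: dict = field(default_factory=dict)     # node -> peak demand (MW)
    lines: set = field(default_factory=set)        # undirected transmission lines

def expand(grid: Grid, years: int, rng: random.Random) -> Grid:
    """Apply one expansion decision per year (hypothetical rule: reinforce the
    node with the largest supply deficit, by new generation or a new line)."""
    for _ in range(years):
        deficit = {n: grid.demand[n] - grid.capacity.get(n, 0.0) for n in grid.demand}
        worst = max(deficit, key=deficit.get)
        if rng.random() < 0.5:
            grid.capacity[worst] = grid.capacity.get(worst, 0.0) + 100.0   # add generation
        else:
            other = rng.choice([n for n in grid.demand if n != worst])
            grid.lines.add(frozenset((worst, other)))                      # add a line
    return grid

def metrics(grid: Grid) -> dict:
    """Structural indicators to correlate against simulated performance."""
    return {
        "total_capacity": sum(grid.capacity.values()),
        "line_count": len(grid.lines),
        "reserve_margin": sum(grid.capacity.values()) / sum(grid.demand.values()) - 1.0,
    }

# Generate a family of path-dependent alternatives from the same starting point.
base_demand = {"A": 500.0, "B": 300.0, "C": 200.0}
alternatives = []
for seed in range(10):
    g = Grid(capacity={"A": 400.0}, demand=dict(base_demand))
    alternatives.append(metrics(expand(g, years=20, rng=random.Random(seed))))
print(alternatives[0])
```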

    A model driven approach to web-based traffic simulation.

    As the world population grows, the number of vehicles in traffic increases as well, and traffic becomes more complex. Problems in urban traffic such as congestion, car accidents, and parking difficulties have a large impact on people's lives as well as on the environment. Therefore, researchers, policy makers, decision makers, and planners use expert tools to find the best solutions for traffic and transportation problems. Traffic modeling and simulation has been used for analyzing, designing, planning, and managing urban traffic for many years. Various techniques have been proposed and many tools have been developed by researchers to assist modeling and simulation activities in the traffic domain for more than half a century. However, improving existing methods and developing new tools for traffic simulation are gaining importance due to emerging technologies. Web-based modeling and simulation has been popular in the last decade and holds great promise for collaborative and distributed simulations. Model driven approaches have been employed in the simulation field for a long time and have provided rapid development solutions. In this paper, a model driven Web-based traffic simulation framework is proposed and a prototype implementation is presented.
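
    As a rough illustration of the model driven idea, a platform-independent traffic model declared as data and then transformed into executable simulation entities, the following sketch uses a hypothetical dictionary-based model and a toy congestion function; the names and numbers are illustrative and do not reflect the framework or prototype described in the paper.

```python
# Minimal sketch of a model driven workflow (hypothetical names, not the paper's
# framework): a platform-independent traffic model declared as data is transformed
# into executable simulation objects.
MODEL = {
    "roads": [
        {"id": "R1", "length_m": 500, "speed_limit_kmh": 50},
        {"id": "R2", "length_m": 800, "speed_limit_kmh": 30},
    ],
    "flows": [
        {"road": "R1", "vehicles_per_hour": 600},
        {"road": "R2", "vehicles_per_hour": 200},
    ],
}

class Road:
    def __init__(self, spec: dict):
        self.id = spec["id"]
        self.length_m = spec["length_m"]
        self.free_speed = spec["speed_limit_kmh"] / 3.6  # m/s

    def travel_time_s(self, load_factor: float) -> float:
        # Toy BPR-style congestion function: travel time grows with load.
        return (self.length_m / self.free_speed) * (1 + 0.15 * load_factor ** 4)

def transform(model: dict) -> dict:
    """Model-to-executable transformation: build simulation objects from the
    declarative description."""
    roads = {r["id"]: Road(r) for r in model["roads"]}
    flows = {f["road"]: f["vehicles_per_hour"] for f in model["flows"]}
    return {"roads": roads, "flows": flows}

sim = transform(MODEL)
for road_id, road in sim["roads"].items():
    load = sim["flows"][road_id] / 1000.0  # crude load factor
    print(road_id, round(road.travel_time_s(load), 1), "s")
```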

    Scaling of Distributed Multi-Simulations on Multi-Core Clusters

    DACCOSIM is a multi-simulation environment for continuous-time systems that relies on the FMI standard, eases the design of a multi-simulation graph, and was specially developed for multi-core PC clusters in order to achieve speedup and size-up. However, the distribution of the simulation graph remains complex and is still the responsibility of the simulation developer. This paper introduces DACCOSIM's parallel and distributed architecture and our strategies for achieving efficient multi-simulation graph distribution on multi-core clusters. Performance experiments on two clusters, running up to 81 simulation components (FMUs) and using up to 16 multi-core computing nodes, are shown. Performance measured on our faster cluster exhibits good scalability, but some limitations of the current DACCOSIM implementation are discussed.
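
    The kind of placement problem described above, assigning the FMU components of a multi-simulation graph to cluster nodes while co-locating coupled FMUs and otherwise filling the least-loaded node, can be sketched with a simple greedy heuristic. The loads, couplings, and heuristic below are illustrative assumptions, not DACCOSIM's distribution strategy.

```python
# Hypothetical sketch of the placement problem (not DACCOSIM's algorithm): assign
# FMUs to cluster nodes, preferring a node that already holds a coupled partner
# and otherwise filling the least-loaded node.
fmu_load = {"F1": 4, "F2": 3, "F3": 3, "F4": 2, "F5": 2, "F6": 1}     # relative CPU cost
couplings = {("F1", "F2"), ("F2", "F3"), ("F4", "F5"), ("F5", "F6")}  # data exchanges
NODES = 2

def place(fmu_load: dict, couplings: set, nodes: int):
    placement, load = {}, [0] * nodes
    # Heaviest FMUs first.
    for fmu in sorted(fmu_load, key=fmu_load.get, reverse=True):
        partners = {a if b == fmu else b for a, b in couplings if fmu in (a, b)}
        partner_nodes = {placement[p] for p in partners if p in placement}
        if partner_nodes:
            target = min(partner_nodes, key=lambda n: load[n])   # co-locate with partners
        else:
            target = min(range(nodes), key=lambda n: load[n])    # least-loaded node
        placement[fmu] = target
        load[target] += fmu_load[fmu]
    return placement, load

placement, load = place(fmu_load, couplings, NODES)
cross = sum(1 for a, b in couplings if placement[a] != placement[b])
print(placement, "per-node load:", load, "cross-node couplings:", cross)
```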

    The Simulation Model Partitioning Problem: an Adaptive Solution Based on Self-Clustering (Extended Version)

    This paper is about partitioning in parallel and distributed simulation, that is, decomposing the simulation model into a number of components and properly allocating them to the execution units. An adaptive solution based on self-clustering, which considers both communication reduction and computational load-balancing, is proposed. The implementation of the proposed mechanism is tested using a simulation model that is challenging both in terms of structure and dynamicity. Various configurations of the simulation model and the execution environment have been considered. The obtained performance results are analyzed using a reference cost model. The results demonstrate that the proposed approach is promising and can reduce the simulation execution time on both parallel and distributed architectures.
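
    One way to picture a self-clustering heuristic of the kind described above is a rule where each simulated entity periodically inspects where its recent interactions went and migrates to the partition it talks to most, unless that would unbalance the computational load. The migration rule, event window, and imbalance threshold in the sketch below are illustrative assumptions, not the paper's mechanism.

```python
# Illustrative self-clustering sketch (not the paper's exact mechanism): each
# entity migrates to the partition it interacted with most in the last window,
# unless the move would exceed a load-imbalance threshold.
from collections import Counter

def rebalance(entity_partition: dict, interactions: list, max_imbalance: float = 0.25):
    """entity_partition: entity -> partition id.
    interactions: (sender, receiver) events observed in the last window."""
    target_size = len(entity_partition) / len(set(entity_partition.values()))
    sizes = Counter(entity_partition.values())

    for entity in list(entity_partition):
        # Count this entity's interactions per destination partition.
        per_partition = Counter(
            entity_partition[peer]
            for a, b in interactions
            for peer in ((b,) if a == entity else (a,) if b == entity else ())
        )
        if not per_partition:
            continue
        best, _ = per_partition.most_common(1)[0]
        here = entity_partition[entity]
        # Migrate only if it reduces remote traffic and keeps the load acceptable.
        if best != here and sizes[best] + 1 <= target_size * (1 + max_imbalance):
            sizes[here] -= 1
            sizes[best] += 1
            entity_partition[entity] = best
    return entity_partition

parts = {"e1": 0, "e2": 0, "e3": 0, "e4": 0, "e5": 1, "e6": 1}
events = [("e1", "e5"), ("e1", "e5"), ("e1", "e6"), ("e2", "e3")]
print(rebalance(parts, events))   # "e1" moves to partition 1, which it talks to most
```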

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial-temporal systems such as wildfires, road traffic, and floods. Particle filters are important methods for supporting data assimilation. While particle filters can work effectively with sophisticated simulation models, they have a high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatial-temporal simulation systems, which have a high-dimensional state space and high computation cost by themselves. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial-temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing gains popularity, we also developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
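
    To give a concrete flavor of particle routing after centralized resampling, where some processing units end up with a surplus of particle copies and others with a deficit, here is a minimal greedy sketch that ships surplus particles to deficit PUs. It is an illustrative interpretation only, not the minimal transfer or maximal balance policies developed in the dissertation.

```python
# Greedy illustration of post-resampling particle routing (not the dissertation's
# minimal transfer or maximal balance policies): ship surplus particle copies from
# over-full processing units (PUs) to PUs that are short of particles.
def plan_transfers(current: dict, target: dict):
    """current/target: PU id -> particle count; returns (src, dst, count) moves."""
    surplus = {pu: current[pu] - target[pu] for pu in current if current[pu] > target[pu]}
    deficit = {pu: target[pu] - current[pu] for pu in current if current[pu] < target[pu]}
    moves = []
    for dst, need in deficit.items():
        while need > 0:
            src = max(surplus, key=surplus.get)      # take from the largest surplus
            n = min(need, surplus[src])
            moves.append((src, dst, n))
            surplus[src] -= n
            if surplus[src] == 0:
                del surplus[src]
            need -= n
    return moves

# Example: after centralized resampling, PU0 holds too many copies and PU2 too few.
current = {"PU0": 700, "PU1": 500, "PU2": 300, "PU3": 500}
target = {"PU0": 500, "PU1": 500, "PU2": 500, "PU3": 500}
print(plan_transfers(current, target))   # [('PU0', 'PU2', 200)]
```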