Distributed simulation with COTS simulation packages: A case study in health care supply chain simulation
The UK National Blood Service (NBS) is a publicly funded body responsible for distributing blood and associated products. A discrete-event simulation of the NBS supply chain in the Southampton area has been built using the commercial off-the-shelf simulation package (CSP) Simul8. It models the relationship in the health care supply chain between the NBS Processing, Testing and Issuing (PTI) facility and its associated hospitals. However, as the number of hospitals increases, simulation run time becomes inconveniently long. To address this problem with distributed simulation, researchers have used techniques informed by SISO's CSPI PDG to create a version of Simul8 compatible with the High Level Architecture (HLA). The NBS supply chain model was subsequently divided into several sub-models, each running in its own copy of Simul8. Experimentation shows that this distributed version performs better than its standalone, conventional counterpart as the number of hospitals increases.
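The Simul8/HLA integration itself is tied to the CSPI standards and a commercial runtime infrastructure, but the idea underneath, splitting a model into sub-models that exchange time-stamped messages and only ever advance to the globally earliest event, fits in a short sketch. The Python below is a minimal illustration, not the paper's implementation; the LogicalProcess class, the LOOKAHEAD value and the toy order/shipment events are all hypothetical.

import heapq

LOOKAHEAD = 1.0  # minimum delay on cross-model messages (hypothetical value)

class LogicalProcess:
    """One sub-model with its own event list and local clock."""
    def __init__(self, name):
        self.name = name
        self.events = []  # heap of (timestamp, event description)

    def schedule(self, time, what):
        heapq.heappush(self.events, (time, what))

    def next_time(self):
        return self.events[0][0] if self.events else float("inf")

    def process_next(self, pti):
        time, what = heapq.heappop(self.events)
        print(f"t={time:5.1f}  {self.name}: {what}")
        # Toy behaviour: a hospital order triggers a shipment from PTI,
        # time-stamped at least LOOKAHEAD ahead (the conservative rule that
        # guarantees no process ever receives a message in its past).
        if what == "order placed":
            pti.schedule(time + LOOKAHEAD, "ship blood units")

def run(until=10.0):
    pti = LogicalProcess("PTI")
    hospital = LogicalProcess("Hospital")
    hospital.schedule(0.5, "order placed")
    hospital.schedule(4.0, "order placed")
    # Coordinator: always advance the process holding the globally earliest
    # event, mimicking the time-advance grants an HLA RTI would issue.
    while True:
        lp = min((pti, hospital), key=lambda l: l.next_time())
        if lp.next_time() > until:
            break
        lp.process_next(pti)

run()

The lookahead on cross-model messages is what lets a real conservative synchronization scheme grant time advances safely; here a single coordinator loop plays that role.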
Comparing conventional and distributed approaches to simulation in complex supply-chain health systems
Decision making in modern supply chains can be extremely daunting due to their complex nature. Discrete-event simulation is a technique that can support decision making by providing what-if analysis and evaluation of quantitative data. However, modelling supply chain systems can result in very large and complicated models that take a long time to run, even on today's powerful desktop computers. Distributed simulation has been suggested as a possible solution to this problem, by enabling the use of multiple computers to run models. To investigate this claim, this paper presents experiences in implementing a simulation model with a 'conventional' approach and with a distributed approach. The study takes place in a healthcare setting: the supply chain of blood from donor to recipient. It compares conventional and distributed execution times of a supply chain model built in the simulation package Simul8. The results show that the execution time of the conventional approach increases almost linearly with both the size of the system and the simulation run period, whereas the distributed approach scales more gradually with system size and run time and appears to offer a practical alternative. On this basis, the paper concludes that distributed simulation can be successfully applied in certain situations.
Investigating grid computing technologies for use with commercial simulation packages
As simulation experimentation in industry becomes more computationally demanding, grid computing can be seen as a promising technology with the potential to bind together the computational resources needed to execute such simulations quickly. To investigate how this might be possible, this paper reviews grid technologies that can be used together with commercial-off-the-shelf simulation packages (CSPs) used in industry. The paper identifies two specific forms of grid computing (Public Resource Computing and Enterprise-wide Desktop Grid Computing), and the middleware associated with each (BOINC and Condor), as suitable for grid-enabling existing CSPs. It further proposes three different CSP-grid integration approaches and identifies the most appropriate among them. It is hoped that this research will encourage simulation practitioners to consider grid computing as a technologically viable means of executing CSP-based experiments faster.
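The paper's BOINC and Condor specifics aside, the CSP-grid pattern it builds on is task farming: independent simulation replications become work units distributed to idle machines. A minimal sketch of that pattern, with Python's standard ProcessPoolExecutor standing in for the grid middleware and a Lindley-recursion queue standing in for the CSP model (both choices are this sketch's, not the paper's):

import random
from concurrent.futures import ProcessPoolExecutor

def run_replication(seed, n_customers=10_000):
    """Stand-in for one CSP run: mean waiting time in an M/M/1 queue via the
    Lindley recursion. A real deployment would launch the packaged simulation
    executable as the work unit instead."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    service = rng.expovariate(1.25)            # service rate 1.25
    for _ in range(n_customers):
        gap = rng.expovariate(1.0)             # arrival rate 1.0
        wait = max(0.0, wait + service - gap)  # Lindley: W' = max(0, W + S - A)
        total += wait
        service = rng.expovariate(1.25)
    return total / n_customers

if __name__ == "__main__":
    seeds = range(20)  # one independent replication per work unit
    # The executor plays the role of the desktop-grid middleware: it queues
    # work units and runs them on whatever processors are idle.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_replication, seeds))
    print(f"mean waiting time over {len(results)} replications: "
          f"{sum(results) / len(results):.2f}")

Because replications share nothing, speed-up is close to linear in the number of workers, which is exactly the property that makes desktop grids attractive for simulation experimentation.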
Supporting simulation in industry through the application of grid computing
An increased need for collaborative research, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that give users access to geographically dispersed computing resources administered across multiple computing domains. The term grid computing, or grids, is popularly used to refer to such distributed systems. Simulation is characterized by the need to run multiple sets of computationally intensive experiments. Large-scale scientific simulations have traditionally been the primary beneficiary of grid computing; the application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited to support simulation in industry. It introduces our desktop grid, WinGrid, and presents a case study conducted at a leading European investment bank. Results indicate that grid computing does indeed hold promise for simulation in industry.
Self-Evaluation Applied Mathematics 2003-2008 University of Twente
This report contains the self-study for the research assessment of the Department of Applied Mathematics (AM) of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at the University of Twente (UT). The report provides the information for the Research Assessment Committee for Applied Mathematics, which deals with the mathematical sciences at the three universities of technology in the Netherlands. It describes the state of affairs for the period 1 January 2003 to 31 December 2008.
Can geocomputation save urban simulation? Throw some agents into the mixture, simmer and wait ...
There are indications that the current generation of simulation models in practical, operational uses has reached the limits of its usefulness under existing specifications. The relative stasis in operational urban modeling contrasts with simulation efforts in other disciplines, where techniques, theories, and ideas drawn from computation and complexity studies are revitalizing the ways in which we conceptualize, understand, and model real-world phenomena. Many of these concepts and methodologies are applicable to operational urban systems simulation. Indeed, in many cases, ideas from computation and complexity studies (often clustered, as they apply to geography, under the collective term of geocomputation) are ideally suited to the simulation of urban dynamics. However, there exist several obstructions to their successful use in operational urban geographic simulation, particularly as regards the capacity of these methodologies to handle top-down dynamics in urban systems.

This paper presents a framework for developing a hybrid model for urban geographic simulation and discusses some of the imposing barriers against innovation in this field. The framework infuses approaches derived from geocomputation and complexity with standard techniques that have been tried and tested in operational land-use and transport simulation. Macro-scale dynamics that operate from the top down are handled by traditional land-use and transport models, while micro-scale dynamics that work from the bottom up are delegated to agent-based models and cellular automata. The two methodologies are fused in a modular fashion using a system of feedback mechanisms. As a proof-of-concept exercise, a micro-model of residential location has been developed with a view to hybridization. The model mixes cellular automata and multi-agent approaches and is formulated so as to interface with meso-models at a higher scale.
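As a rough illustration of the kind of coupling the framework proposes (not the authors' model), the Python sketch below lets agents relocate on a cellular grid under a CA-style neighbourhood rule, while a macro layer vetoes moves into zones whose capacity is exhausted; the grid size, zone capacities and appeal rule are all invented for the example.

import random

SIZE = 10            # cells per side of the toy city grid
CAPACITY = [30, 30]  # top-down housing capacity per macro zone (hypothetical)

def zone_of(cell):
    return 0 if cell[0] < SIZE // 2 else 1  # zone 0: west half, zone 1: east

def step(occupied):
    """One micro step: each agent may move to a free neighbouring cell,
    preferring cells with more occupied neighbours (a CA-style rule), but a
    move is vetoed if the macro model says the target zone is full."""
    load = [0, 0]
    for cell in occupied:
        load[zone_of(cell)] += 1
    for agent in list(occupied):
        x, y = agent
        free = [(x + i, y + j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                if 0 <= x + i < SIZE and 0 <= y + j < SIZE
                and (x + i, y + j) not in occupied]
        if not free:
            continue
        def appeal(c):
            return sum((c[0] + i, c[1] + j) in occupied
                       for i in (-1, 0, 1) for j in (-1, 0, 1))
        best = max(free, key=appeal)
        if load[zone_of(best)] >= CAPACITY[zone_of(best)]:
            continue  # macro feedback: zone capacity exhausted, move vetoed
        load[zone_of(agent)] -= 1
        load[zone_of(best)] += 1
        occupied.remove(agent)
        occupied.add(best)

rng = random.Random(42)
occupied = set()
while len(occupied) < 40:
    occupied.add((rng.randrange(SIZE), rng.randrange(SIZE)))
for _ in range(20):
    step(occupied)
print("zone occupancy:", [sum(zone_of(c) == z for c in occupied) for z in (0, 1)])

The bottom-up rule clusters agents, while the top-down capacity constraint keeps aggregate zone totals within what a macro land-use model would permit, which is the feedback structure the paper describes.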
The Life-Cycle Income Analysis Model (LIAM): a study of a flexible dynamic microsimulation modelling computing framework
This paper describes a flexible computing framework designed to create a dynamic microsimulation model, the Life-cycle Income Analysis Model (LIAM). The principal computing characteristics include the degree of modularisation, parameterisation, generalisation and robustness. The paper describes the decisions taken with regard to the type of dynamic model used. The LIAM framework has been used to create a number of different microsimulation models, including an Irish dynamic cohort model, a spatial dynamic microsimulation model for Ireland, an indirect tax and consumption model for the EU15 as part of EUROMOD, and a prototype EU dynamic population microsimulation model for five EU countries. Particular consideration is given to issues of parameterisation, alignment and computational efficiency.
Keywords: flexible; modular; dynamic; alignment; parameterisation; computational efficiency
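LIAM's own alignment machinery is not spelled out in the abstract, but alignment by sorting is a standard device in dynamic microsimulation and shows what the term means: micro-level predicted probabilities are reconciled with an external aggregate control total. A small Python sketch, with the retirement probabilities and the control total invented for illustration:

import random

def align_by_sorting(probabilities, target_count, rng):
    """Alignment by sorting: rank individuals by predicted transition
    probability (tiny random noise breaks ties) and let exactly
    `target_count` of them transition, so the simulated aggregate hits an
    external control total."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: probabilities[i] + rng.uniform(0, 1e-6),
                    reverse=True)
    selected = set(ranked[:target_count])
    return [i in selected for i in range(len(probabilities))]

rng = random.Random(1)
# Hypothetical model output: each person's probability of retiring this year.
p_retire = [rng.betavariate(2, 8) for _ in range(1000)]
outcomes = align_by_sorting(p_retire, target_count=150, rng=rng)
print(f"model-implied retirements: {sum(p_retire):.0f}, aligned: {sum(outcomes)}")

Independent Monte Carlo draws would only hit the control total in expectation; sorting enforces it exactly in every run, at the cost of slightly distorting who transitions.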
SimpactCyan 1.0: an open-source simulator for individual-based models in HIV epidemiology with R and Python interfaces
SimpactCyan is an open-source simulator for individual-based models in HIV epidemiology. Its core algorithm is written in C++ for computational efficiency, while the R and Python interfaces aim to make the tool accessible to the fast-growing community of R and Python users. Transmission, treatment and prevention of HIV infections in dynamic sexual networks are simulated by discrete events. A generic “intervention” event allows model parameters to be changed over time, and can be used to model medical and behavioural HIV prevention programmes. First, we describe a more efficient variant of the modified Next Reaction Method that drives our continuous-time simulator. Next, we outline key built-in features and assumptions of individual-based models formulated in SimpactCyan, and provide code snippets showing how to formulate, execute and analyse models through its R and Python interfaces. Lastly, we give two examples of applications in HIV epidemiology: the first demonstrates how the software can be used to estimate the impact of progressive changes to the eligibility criteria for HIV treatment on HIV incidence; the second illustrates the use of SimpactCyan as a data-generating tool for assessing the performance of a phylodynamic inference framework.
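For a flavour of the Python interface, the sketch below follows the pysimpactcyan usage described in the project's documentation; the exact class, method and parameter names (including the population.* keys) should be checked against the installed version rather than taken from this sketch.

# Sketch of driving SimpactCyan from Python via pysimpactcyan; names here
# are recalled from the documentation and may differ in your installation.
import pysimpactcyan

simpact = pysimpactcyan.PySimpactCyan()

# Override a few defaults: a small population and a 15-year horizon
# (parameter names are illustrative, not verified).
config = {
    "population.nummen": 200,
    "population.numwomen": 200,
    "population.simtime": 15,
}

# Run the simulation; output logs (events, persons) are written to the
# destination directory and a summary is returned.
results = simpact.run(config, destDir="/tmp/simpact-example")
print(results)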
Computational Particle Physics for Event Generators and Data Analysis
High-energy physics data analysis relies heavily on the comparison between experimental and simulated data, as stressed lately by the Higgs search at the LHC and the recent identification of a Higgs-like new boson. The first link in the full simulation chain is event generation, both for background and for expected signals. Nowadays event generators are based on the automatic computation of the matrix element, or amplitude, for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign probabilities for every event to belong to any of a given set of possible processes. This method, originally used for the top mass measurement, although computationally intensive, has shown its power at the LHC in extracting the new boson signal from the background.

Serving both needs, the automatic calculation of matrix elements is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for the lowest-order (tree-level) calculations, but become complex and CPU-time consuming when higher-order calculations involving loop diagrams are necessary, as for QCD processes at the LHC. New calculation techniques for next-to-leading order (NLO) have surfaced, making possible the generation of processes with many final-state particles (up to 6). While NLO calculations are in many cases under control, although not yet fully automatic, even higher-precision calculations involving processes at two loops or more remain a big challenge.

After a short introduction to particle physics and to the related theoretical framework, we review some of the computing techniques that have been developed to make these calculations automatic. The main available packages and some of the most important applications for simulation and data analysis, in particular at the LHC, are also summarized.

Comment: 19 pages, 11 figures. Proceedings of CCP (Conference on Computational Physics), Oct. 2012, Osaka, Japan, in IOP Journal of Physics: Conference Series.
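The matrix element likelihood method mentioned above can be caricatured in a few lines: each event gets a probability under every process hypothesis, and the ratio separates signal from background. In the Python toy below, a Breit-Wigner-shaped "signal matrix element" and an exponential "background" are stand-ins invented for illustration; a real analysis integrates the full squared matrix element over unmeasured degrees of freedom through detector transfer functions.

import math
import random

def p_signal(m):
    """Toy stand-in for |M_signal|^2 integrated over unmeasured quantities:
    a normalized Breit-Wigner resonance at 125 (units arbitrary)."""
    width = 4.0
    return (width / (2 * math.pi)) / ((m - 125.0) ** 2 + (width / 2) ** 2)

def p_background(m):
    """Toy background 'matrix element': a falling exponential in m."""
    return math.exp(-m / 60.0) / 60.0

def event_weight(m):
    """Per-event signal probability, in the spirit of the matrix element
    likelihood method: compare the event under each process hypothesis."""
    s, b = p_signal(m), p_background(m)
    return s / (s + b)

rng = random.Random(0)
# Hypothetical events: mostly background, plus a few resonance-like ones.
masses = [rng.expovariate(1 / 60.0) for _ in range(5)] + \
         [125.0 + rng.gauss(0, 2.0) for _ in range(3)]
for m in masses:
    print(f"m = {m:7.2f}  P(signal|event) = {event_weight(m):.3f}")

The per-event weights can then feed a likelihood fit, which is how the method extracts a small signal from a large background despite its computational cost.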
Discrete events: Perspectives from system theory
Keywords: Systems Theory; differential/integral equations