MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the systems considered, engineers typically perform simulations of system models to obtain statistical estimates of quantitative properties. Similarly, computer scientists working on quantitative analysis use Statistical Model Checking (SMC), in which rigorous mathematical languages (typically logics) express the system properties of interest. Such properties can then be estimated automatically by tools that simulate the model at hand. These property specification languages, though often unpopular among engineers, provide a formal, compact and elegant way to express system properties without hard-coding them in the model definition. This paper presents MultiVeStA, a statistical analysis tool that can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
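The SMC loop the abstract refers to, estimating a property by repeated simulation until a statistical stopping criterion is met, can be sketched as follows. This is a minimal illustration under assumed names, not MultiVeStA's actual algorithm: `simulate_once` is a hypothetical one-run simulator, and the normal-approximation confidence-interval stopping rule is one of several criteria an SMC tool might use.

```python
import random

def simulate_once(p_fail=0.3, rng=random):
    # Hypothetical one-run simulator: returns True when the property of
    # interest holds on this simulated trace (a Bernoulli stand-in here).
    return rng.random() >= p_fail

def smc_estimate(simulate, max_runs=10000, half_width=0.01, z=1.96):
    """Estimate P(property) by repeated simulation, stopping once the
    normal-approximation confidence interval is narrower than half_width."""
    successes, runs = 0, 0
    while runs < max_runs:
        successes += simulate()   # True counts as 1, False as 0
        runs += 1
        p = successes / runs
        # Standard error of a Bernoulli estimate; require a minimum number
        # of runs before trusting the normal approximation.
        if runs >= 100 and z * (p * (1 - p) / runs) ** 0.5 <= half_width:
            break
    return p, runs

rng = random.Random(42)
estimate, runs = smc_estimate(lambda: simulate_once(0.3, rng))
```

A real SMC tool would drive the simulator through an adapter interface and evaluate the property on the generated trace rather than on a coin flip, but the estimate-until-confident control flow is the same.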
Distributed Simulation of Heterogeneous and Real-time Systems
This work describes a framework for distributed simulation of cyber-physical systems (CPS). Modern CPS comprise large numbers of heterogeneous components, typically designed in very different tools and languages that are not, or not easily, composable. Evaluating such large systems requires tools that integrate all components in a systematic, well-defined manner. This work leverages existing frameworks to facilitate that integration and offers validation by simulation. One framework for distributed simulation is CERTI, a tool compliant with the IEEE High-Level Architecture (HLA), which provides the infrastructure for co-simulation of models in various simulation environments as well as hardware components. We use CERTI in combination with Ptolemy II, an environment for modeling and simulating heterogeneous systems. In particular, we focus on models of a CPS, including the physical dynamics of a plant, the software that controls the plant, and the network that enables communication between controllers. We describe the Ptolemy extensions for interaction with HLA and demonstrate the approach on a flight control system simulation.
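The co-simulation pattern described here, with a plant model and a controller advancing in lock-step and exchanging values between time grants, can be sketched in miniature. This is an assumption-laden toy: the `advance(t, inputs) -> outputs` interface and the all-to-all publication step are stand-ins for the far richer time-management and publish/subscribe services that HLA and CERTI actually provide.

```python
def co_simulate(federates, stop_time, step=0.1):
    """Conservative stepped co-simulation loop in the spirit of HLA time
    management: no federate advances past the current grant, and outputs
    are published to the other federates between steps."""
    t = 0.0
    signals = {name: {} for name in federates}
    while t < stop_time:
        t = round(t + step, 10)
        outputs = {name: fed.advance(t, signals[name])
                   for name, fed in federates.items()}
        # Publish each federate's outputs to every other federate.
        for name in federates:
            signals[name] = {k: v for n, out in outputs.items()
                             if n != name for k, v in out.items()}
    return t

class Plant:
    """Trivial physical dynamics: integrate the control input."""
    def __init__(self):
        self.x = 0.0
    def advance(self, t, inputs):
        self.x += inputs.get("u", 0.0) * 0.1
        return {"x": self.x}

class Controller:
    """Proportional controller driving the plant state toward 1.0."""
    def advance(self, t, inputs):
        return {"u": 1.0 - inputs.get("x", 0.0)}

plant = Plant()
final_t = co_simulate({"plant": plant, "ctrl": Controller()}, 1.0)
```

The one-step delay in exchanged values (each federate sees the other's previous output) mirrors the conservative synchronization a real HLA federation must reason about.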
Simulation modelling and visualisation: toolkits for building artificial worlds
Simulation users at all levels make heavy use of compute resources to drive computational simulations for greatly varying application areas of research using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a break-down of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
Mesmerizer: An Effective Tool for a Complete Peer-to-Peer Software Development Life-cycle
In this paper we present what are, in our experience, the best practices in Peer-to-Peer (P2P) application development and how we combined them in a middleware platform called Mesmerizer. We explain how simulation is an integral part of the development process and not just an assessment tool. We then present our component-based event-driven framework for P2P application development, which can be used to execute multiple instances of the same application in a strictly controlled manner over an emulated network layer for simulation and testing, or a single application in a concurrent environment for deployment purposes. We highlight modeling aspects that are of critical importance for designing and testing P2P applications, e.g. the emulation of Network Address Translation and bandwidth dynamics. We show how our simulator scales when emulating low-level bandwidth characteristics of thousands of concurrent peers while preserving a good degree of accuracy compared to a packet-level simulator.
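The idea of running many application instances "in a strictly controlled manner over an emulated network layer" can be sketched with a toy event-driven network in which all message delivery flows through one global, deterministic event queue. This is an illustrative assumption, not Mesmerizer's API: `EmulatedNetwork`, `register`, and the fixed per-link latency are invented for the sketch (a real emulator would also model NAT behaviour and bandwidth dynamics, as the abstract notes).

```python
import heapq

class EmulatedNetwork:
    """Toy event-driven network layer: messages between peer instances
    are delivered through a single global event queue after a per-link
    latency, keeping every instance under strict, reproducible control."""
    def __init__(self):
        self.now = 0.0      # simulated clock
        self.queue = []     # heap of (delivery_time, dst, src, msg)
        self.peers = {}     # peer_id -> message handler

    def register(self, peer_id, handler):
        self.peers[peer_id] = handler

    def send(self, src, dst, msg, latency=0.05):
        heapq.heappush(self.queue, (self.now + latency, dst, src, msg))

    def run(self):
        # Process deliveries strictly in timestamp order.
        while self.queue:
            self.now, dst, src, msg = heapq.heappop(self.queue)
            self.peers[dst](self, src, msg)

log = []
def peer(net, src, msg):
    # A trivial peer instance: record what arrived and when.
    log.append((net.now, src, msg))

net = EmulatedNetwork()
net.register("a", peer)
net.register("b", peer)
net.send("a", "b", "ping")
net.run()
# → log == [(0.05, "a", "ping")]
```

Because the same peer code can be driven either by this queue or by real sockets, simulation becomes part of the development loop rather than a separate assessment step, which is the point the abstract makes.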
Developing a grid computing system for commercial-off-the-shelf simulation packages
Today simulation is becoming an increasingly pervasive technology across major business sectors. Advances in COTS simulation packages and commercial simulation software have made it easier for users to build models, often of large, complex processes. These two factors combined are to be welcomed and, when used correctly, can be of great benefit to organisations that make use of the technology. However, it is also the case that users hungry for answers do not always have the time, or possibly the patience, to wait for results from multiple replications and multiple experiments as standard simulation practice would demand. There is therefore a need to support this advance in the use of simulation within today's business with improved computing technology. Grid computing has been put forward as a potential commercial solution to this requirement. To this end, Saker Solutions and the Distributed Systems Research Group at Brunel University have developed a dedicated grid computing system (SakerGrid) to support the deployment of simulation models across a desktop grid of PCs. The paper identifies the route taken to solve this challenging issue and suggests where the future may lie for this exciting integration of two effective but underused technologies.
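The workload a desktop grid addresses here, many independent replications of the same model under different random seeds, parallelises naturally. The sketch below is an assumption for illustration, not SakerGrid itself: it farms replications out to a local worker pool, where a real grid would dispatch them to idle PCs; `replication` is a stand-in for a packaged model run.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def replication(seed):
    # Hypothetical model run: each replication gets its own seed and
    # returns one observation of the output statistic. A real grid job
    # would launch the simulation package with a seeded experiment file.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000))

def run_experiment(n_reps=20, workers=4):
    # Independent replications are embarrassingly parallel: dispatch them
    # to a pool of workers (idle grid PCs, in SakerGrid's setting) and
    # aggregate the results centrally.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(replication, range(n_reps)))
    return statistics.mean(results), statistics.stdev(results)

mean, sd = run_experiment()
```

Since replications share nothing, the speed-up is limited mainly by job dispatch overhead and the slowest machine, which is why standard practice (many replications, many experiments) is such a good fit for a grid.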
Semantic web services for simulation component reuse and interoperability: An ontology approach
Commercial-off-the-shelf (COTS) Simulation Packages (CSPs) are widely used in industry, primarily due to the economic factors associated with developing proprietary software platforms. Despite their widespread use, CSPs have yet to operate across organizational boundaries. The limited reuse and interoperability of CSPs are affected by the same semantic issues that restrict the inter-organizational use of software components and web services. Current representations of Web components are predominantly syntactic in nature, lacking the fundamental semantic underpinning required to support discovery on the emerging Semantic Web. The authors present new research that partially alleviates the problem of limited semantic reuse and interoperability of simulation components in CSPs. Semantic models, in the form of ontologies, utilized by the authors' Web service discovery and deployment architecture provide one approach to supporting simulation model reuse. Semantic interoperation is achieved through a simulation component ontology that is used to identify required components at varying levels of granularity (including both abstract and specialized components). Selected simulation components are loaded into a CSP, modified according to the requirements of the new model, and executed. The research presented here is based on the development of an ontology, connector software, and a Web service discovery architecture. The ontology is extracted from simulation scenarios involving airport, restaurant and kitchen service suppliers. The ontology engineering framework and discovery architecture provide a novel approach to inter-organizational simulation by adopting a less intrusive interface between participants. Although specific to CSPs, this work has wider implications for the simulation community: the community as a whole stands to benefit from an increased awareness of the state of the art in Software Engineering (for example, ontology-supported component discovery and reuse, and service-oriented computing), and it is expected that this will eventually lead to the development of a Software Engineering-inspired methodology for building simulations in future.
An analysis of internal/external event ordering strategies for COTS distributed simulation
Distributed simulation is a technique used to link together several models so that they can work together (or interoperate) as a single model. The High Level Architecture (HLA) (IEEE 1516-2000) is the de facto standard that defines the technology for this interoperation. The creation of a distributed simulation from models developed in COTS Simulation Packages (CSPs) is of interest; the motivation is to reduce the lead times of simulation projects by reusing models that have already been developed. This paper discusses one of the issues involved in distributed simulation with CSPs: synchronising data sent between models with the simulation of a model by a CSP, the so-called external/internal event ordering problem. The motivation is that the particular algorithm employed can represent a significant overhead on performance.
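The external/internal event ordering problem can be illustrated with a small sketch: externally received, timestamped events must be merged with the CSP's own internal event list so that everything is processed in timestamp order. This is an assumption for illustration, not one of the paper's actual strategies; in particular, the tie-breaking rule (external events win ties here) is exactly the kind of choice whose performance cost the paper analyses.

```python
import heapq

def merged_event_order(internal, external):
    """Merge internally scheduled events with externally received
    (timestamped) events so they are processed in timestamp order.
    The second tuple field breaks timestamp ties: 0 lets external
    events go first, one of several possible ordering strategies."""
    queue = []
    for t, name in internal:
        heapq.heappush(queue, (t, 1, name))   # internal: loses ties
    for t, name in external:
        heapq.heappush(queue, (t, 0, name))   # external: wins ties
    order = []
    while queue:
        t, _, name = heapq.heappop(queue)
        order.append((t, name))
    return order

internal = [(1.0, "machine_breakdown"), (3.0, "end_of_service")]
external = [(1.0, "part_arrival"), (2.0, "order_update")]
order = merged_event_order(internal, external)
# → [(1.0, 'part_arrival'), (1.0, 'machine_breakdown'),
#    (2.0, 'order_update'), (3.0, 'end_of_service')]
```

In a real federation the complication is that the CSP's internal event list is hidden behind the package's own executive, so the merge must be enforced from outside via the synchronisation algorithm, which is where the performance overhead arises.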
Distributed supply chain simulation in GRIDS
Amongst the majority of work done in supply chain simulation, papers have emerged that examine the area of model distribution. The execution of simulations on distributed hosts as a coupled model requires both coordination and a facilitating infrastructure. A distributed environment, the Generic Runtime Infrastructure for Distributed Simulation (GRIDS), is suggested to provide the bonding requirements for such a model. Transparently connecting the distributed components of a supply chain simulation allows the construction of a conceptual simulation while releasing the modeler from the complexities of the underlying network. The infrastructure presented demonstrates scalability without losing flexibility for future extensions based on open industry standards.