1,188 research outputs found

    Simulation modelling and visualisation: toolkits for building artificial worlds

    Get PDF
    Simulation users at all levels make heavy use of compute resources to drive computational simulations across widely varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc, hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. Many different software libraries and methods are available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a breakdown of the main simulation paradigms, and discuss the toolkits and approaches that different researchers have taken to tackle coupled simulation and visualisation in each paradigm.
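
    As a minimal sketch of the coupling discussed above (all names hypothetical, tied to no particular toolkit), the C++ fragment below separates a simulation core from its visualisation layer behind a callback, so either side can be replaced independently:

        // Minimal sketch: a time-stepped simulation core with a pluggable
        // visualisation hook (all names hypothetical). The hook stands in
        // for whatever visualisation library a given toolkit provides.
        #include <cstddef>
        #include <cstdio>
        #include <functional>
        #include <vector>

        struct World {                       // simulation state
            std::vector<double> positions;
            std::vector<double> velocities;
        };

        // one simulation step (simple kinematics as a stand-in for any paradigm)
        void step(World& w, double dt) {
            for (std::size_t i = 0; i < w.positions.size(); ++i)
                w.positions[i] += w.velocities[i] * dt;
        }

        using VisHook = std::function<void(const World&, double /*t*/)>;

        void run(World& w, double dt, int steps, const VisHook& visualise) {
            for (int s = 0; s < steps; ++s) {
                step(w, dt);
                visualise(w, s * dt);        // visualisation decoupled from the core
            }
        }

        int main() {
            World w{{0.0, 1.0}, {0.5, -0.25}};
            // a trivial "visualisation layer": dump frames as CSV for an external tool
            run(w, 0.1, 5, [](const World& world, double t) {
                std::printf("%.1f", t);
                for (double x : world.positions) std::printf(",%.3f", x);
                std::printf("\n");
            });
        }

    Here the callback merely emits CSV frames for an external plotting tool; a real toolkit would substitute calls into its rendering library.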

    Overview on agent-based social modelling and the use of formal languages

    Get PDF
    Transdisciplinary Models and Applications investigates a variety of programming languages used in validating and verifying models in order to assist in their eventual implementation. The book explores different methods of evaluating and formalizing simulation models, enabling computer and industrial engineers, mathematicians, and students working with computer simulations to thoroughly understand the progression from simulation to product, improving the overall effectiveness of modeling systems.

    Multi-agent systems for power engineering applications - Part 1: Concepts, approaches and technical challenges

    Get PDF
    This is the first part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part 1 of the paper examines the potential value of MAS technology to the power industry. It describes fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications. As well as presenting a comprehensive review of the power engineering applications for which MAS are being investigated, it also defines the technical issues that must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part 2 of the paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector, and offers guidance and recommendations on how MAS can be designed and implemented.

    When and how to develop domain-specific languages

    Get PDF
    Domain-specific languages (DSLs) are languages tailored to a specific application domain. They offer substantial gains in expressiveness and ease of use compared with general-purpose programming languages in their domain of application. DSL development is hard, requiring both domain knowledge and language development expertise; few people have both. Not surprisingly, the decision to develop a DSL is often postponed indefinitely, if considered at all, and most DSLs never get beyond the application library stage. While many articles have been written on the development of particular DSLs, there is very limited literature on DSL development methodologies, and many questions remain regarding when and how to develop a DSL. To aid the DSL developer, we identify patterns in the decision, analysis, design, and implementation phases of DSL development. Our patterns try to improve on and extend earlier work on DSL design patterns, in particular by Spinellis (2001). We also discuss domain analysis tools and language development systems that may help to speed up DSL development. Finally, we state a number of open problems.
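
    To make the "application library stage" concrete, the following C++ sketch (illustrative names only) shows a minimal internal, or embedded, DSL: arithmetic expressions are written with ordinary operators but captured as an AST for later evaluation, with no separate parser or concrete syntax:

        // A toy internal DSL (illustrative names only): expressions are
        // written with ordinary C++ operators but captured as an AST --
        // the "application library stage" many DSLs start from.
        #include <iostream>
        #include <memory>

        struct Expr {                                    // AST node interface
            virtual ~Expr() = default;
            virtual double eval() const = 0;
        };
        using ExprPtr = std::shared_ptr<Expr>;

        struct Lit : Expr {                              // literal value
            double v;
            explicit Lit(double v) : v(v) {}
            double eval() const override { return v; }
        };

        struct Add : Expr {                              // addition node
            ExprPtr l, r;
            Add(ExprPtr l, ExprPtr r) : l(std::move(l)), r(std::move(r)) {}
            double eval() const override { return l->eval() + r->eval(); }
        };

        ExprPtr lit(double v) { return std::make_shared<Lit>(v); }
        ExprPtr operator+(ExprPtr a, ExprPtr b) {
            return std::make_shared<Add>(std::move(a), std::move(b));
        }

        int main() {
            ExprPtr e = lit(1) + lit(2) + lit(39);       // DSL-style expression
            std::cout << e->eval() << "\n";              // prints 42
        }

    A full external DSL would add its own concrete syntax and parser on top of such a core; the patterns discussed above address when that extra step is worthwhile.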

    Development of a standard framework for manufacturing simulators

    Get PDF
    Discrete event simulation is now a well established modelling and experimental technique for the analysis of manufacturing systems. Since it was first employed as a technique, much of the research and commercial development in the field has been concerned with easing the considerable task of model specification, in order to improve productivity and reduce the level of modelling and programming expertise required. The main areas of research have been the development of modelling structures that bring modularity to program development, the incorporation of such structures in simulation software systems to alleviate some of the programming burden, and the use of automatic programming systems to develop interfaces that raise the model specification to a higher level of abstraction. A more recent development in the field has been the advent of a new generation of software, often referred to as manufacturing simulators, which incorporate extensive manufacturing system domain knowledge in the model specification interface. Many manufacturing simulators are now commercially available, but their development has not been based on any common standard. This is evident in the differences that exist between their interfaces, internal data representation methods and modelling capabilities. The lack of a standard makes it impossible to reuse any part of a model when a user finds it necessary to move from one simulator to another; in such cases, not only does a new modelling language have to be learnt, but the complete model also has to be developed again, requiring considerable time and effort. The motivation for this research was the need for a standard to improve the reusability of models, as a first step towards interchangeability of such models. A standard framework for manufacturing simulators has been developed. It consists of a data model that is independent of any simulator, and a translation module for converting model specification data into the internal data representation of manufacturing simulators; the translators are application specific, but the methodology is common, and it is illustrated for three popular simulators. The data model provides a minimum common model data specification based on an extensive analysis of existing simulators. It uses dialogues for its interface and the frame knowledge representation method for modular storage of data. The translation methodology uses production rules for data mapping.
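
    The architecture can be sketched in C++ as follows (hypothetical names and target formats; the actual framework uses dialogues and frame-based knowledge representation): a neutral model description is written once, and per-simulator translators map it into each tool's input representation:

        // Sketch of a simulator-independent data model plus per-simulator
        // translators (all names and output formats hypothetical). The
        // translators are application specific; the mapping method is shared.
        #include <iostream>
        #include <string>
        #include <vector>

        struct Machine {                 // neutral data model: one workstation
            std::string name;
            double cycle_time_min;       // minutes per part
        };
        struct Model {
            std::vector<Machine> machines;
        };

        std::string toSimulatorA(const Model& m) {      // keyword-style target
            std::string out;
            for (const auto& mc : m.machines)
                out += "STATION " + mc.name +
                       " CYCLE=" + std::to_string(mc.cycle_time_min) + "\n";
            return out;
        }
        std::string toSimulatorB(const Model& m) {      // XML-style target
            std::string out;
            for (const auto& mc : m.machines)
                out += "<machine name=\"" + mc.name + "\" cycle=\"" +
                       std::to_string(mc.cycle_time_min) + "\"/>\n";
            return out;
        }

        int main() {
            Model m{{{"Lathe", 2.5}, {"Mill", 4.0}}};
            std::cout << toSimulatorA(m) << toSimulatorB(m);  // one model, two targets
        }

    The point of the design is that moving between simulators then requires only a new translator, not a new model.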

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    Full text link
    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored in a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF, or in bitmap formats like JPG or GIF. The result can also be stored in ROOT macros that allow a full recreation and reworking of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which takes care of optimally distributing the work over the available resources in a transparent way.
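
    As a brief illustration, the following ROOT macro (a sketch assuming a standard ROOT installation; run with: root -l demo.C) writes a TTree to a file, projects a branch into a histogram, and performs a Gaussian fit:

        // demo.C -- a toy analysis macro; run with: root -l demo.C
        void demo() {
            TFile f("demo.root", "RECREATE");            // compressed, machine-independent file
            TTree tree("events", "toy events");          // column-wise (vertical) storage
            double energy = 0;
            tree.Branch("energy", &energy, "energy/D");  // one double-precision branch
            TRandom3 rng(42);
            for (int i = 0; i < 10000; ++i) {            // fill 10k toy events
                energy = rng.Gaus(100.0, 15.0);
                tree.Fill();
            }
            TH1F* h = new TH1F("h_energy", "Energy;E;entries", 100, 0, 200);
            tree.Draw("energy>>h_energy", "", "goff");   // project branch into histogram
            h->Fit("gaus", "Q");                         // quiet Gaussian fit (regression)
            tree.Write();                                // persist tree and histogram
            h->Write();
            f.Close();
        }

    The same macro can be developed line by line in the interactive interpreter over small samples, then run with on-the-fly compilation over large data sets, as described above.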