
    Towards Data-driven Simulation Modeling for Mobile Agent-based Systems

    Simulation modeling provides insight into how dynamic systems work. Current simulation modeling approaches are primarily knowledge-driven, which involves a process of converting expert knowledge into models and simulating them to understand more about the system. Knowledge-driven models are useful for exploring the dynamics of systems, but are handcrafted, which means that they are expensive to develop and reflect the bias and limited knowledge of their creators. To address the limitations of knowledge-driven simulation modeling, this dissertation develops a framework towards data-driven simulation modeling that discovers simulation models in an automated way based on data or behavior patterns extracted from systems under study. By using data, simulation models can be discovered automatically and with less bias than through knowledge-driven methods. Additionally, multiple models can be discovered that replicate the desired behavior. Each of these models can be thought of as a hypothesis about how the real system generates the observed behavior. This framework was developed based on the application of mobile agent-based systems. The developed framework is composed of three components: 1) model space specification; 2) search method; and 3) framework measurement metrics. The model space specification provides a formal specification for the general model structure from which various models can be generated. The search method is used to efficiently search the model space for candidate models that exhibit desired behavior. The five framework measurement metrics: flexibility, comprehensibility, controllability, composability, and robustness, are developed to evaluate the overall framework. Furthermore, to incorporate knowledge into the data-driven simulation modeling framework, a method was developed that uses System Entity Structures (SESs) to specify incomplete knowledge to be used by the model search process.
This is significant because knowledge-driven modeling requires a complete understanding of a system before it can be modeled, whereas the framework can find a model even with incomplete knowledge. The developed framework has been applied to mobile agent-based systems, and the results demonstrate that it is possible to discover a variety of interesting models using the framework.
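The search component of such a framework can be sketched in miniature. The following is an illustrative toy, not the dissertation's actual method or API: candidate models are parameter dictionaries, "behavior" is a simulated trace, and a random-mutation hill climb searches the model space for a candidate that replicates an observed trace.

```python
import random

def simulate(model, steps=20):
    """Toy agent model: position drifts each step by the model's speed parameter."""
    pos, trace = 0.0, []
    for _ in range(steps):
        pos += model["speed"]
        trace.append(pos)
    return trace

def fitness(model, observed):
    """Lower is better: mean squared error against the observed behavior trace."""
    trace = simulate(model, steps=len(observed))
    return sum((a - b) ** 2 for a, b in zip(trace, observed)) / len(observed)

def search(observed, generations=200, seed=0):
    """Random-mutation hill climbing over the (here one-dimensional) model space."""
    rng = random.Random(seed)
    best = {"speed": rng.uniform(0.0, 2.0)}
    for _ in range(generations):
        cand = {"speed": best["speed"] + rng.gauss(0.0, 0.1)}
        if fitness(cand, observed) < fitness(best, observed):
            best = cand
    return best

observed = simulate({"speed": 0.5})     # behavior extracted from the "real" system
found = search(observed)                # a discovered model replicating the behavior
assert abs(found["speed"] - 0.5) < 0.1
```

Each surviving candidate is, in the dissertation's terms, a hypothesis about how the real system generates the observed behavior; a richer model space would yield several distinct hypotheses rather than one.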

    Verifying linearizability on TSO architectures

    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
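The behavior that distinguishes TSO from sequential consistency can be illustrated with a small executable model (a sketch, not the paper's formal model): each core's writes enter a FIFO store buffer and reach shared memory only on a flush, while reads forward from the core's own buffer first.

```python
from collections import deque

class Core:
    def __init__(self, memory):
        self.memory = memory
        self.buffer = deque()          # pending (addr, value) writes, FIFO order

    def write(self, addr, value):
        self.buffer.append((addr, value))

    def read(self, addr):
        # Store forwarding: a read sees this core's own latest buffered write first.
        for a, v in reversed(self.buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def flush(self):                   # models the effect of a fence instruction
        while self.buffer:
            a, v = self.buffer.popleft()
            self.memory[a] = v

memory = {}
c0, c1 = Core(memory), Core(memory)

# Classic store-buffering litmus test: under sequential consistency at least one
# core must observe the other's write, but under TSO both reads can return 0
# because both writes still sit in their cores' store buffers.
c0.write("x", 1)
c1.write("y", 1)
r0 = c0.read("y")
r1 = c1.read("x")
assert (r0, r1) == (0, 0)
```

It is exactly this reordering of a write after a later read that makes a sequentially consistent verification argument unsound on TSO, and motivates redefining linearizability for the weaker model.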

    Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, and may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-based system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
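The rule-base-as-specification view can be made concrete with a small forward-chaining sketch (illustrative only, not the tool described in the abstract): rules fire when their conditions hold, and simply simulating the rule set from a given state can flag rules that never fire, a symptom of under- or overspecification.

```python
# Each rule: (name, set of condition facts, concluded fact).
rules = [
    ("heat_on", {"temp_low", "power_ok"}, "temp_rising"),
    ("alarm",   {"temp_high"},            "shutdown"),
    ("vent",    {"temp_rising"},          "temp_stable"),
]

def forward_chain(facts, rules):
    """Fire enabled rules until a fixed point; return final facts and fired rules."""
    facts, fired = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired.append(name)
                changed = True
    return facts, fired

facts, fired = forward_chain({"temp_low", "power_ok"}, rules)
assert "temp_stable" in facts     # chained firing: heat_on enabled vent
assert "alarm" not in fired       # never enabled from this state -- worth review
```

Treating the rule base this way requires no knowledge of the inference engine's algorithm, which is the point the abstract makes: standard analysis and model-checking techniques apply to the rules as an abstract specification.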

    SURE-Autometrics algorithm for model selection in multiple equations

    The ambiguous process of model building can be handled by expert modellers thanks to the tacit knowledge they have acquired through research experience. Meanwhile, practitioners, who are usually non-experts and lack statistical knowledge, face difficulties during the modelling process. Hence, an algorithm offering step-by-step guidance is beneficial for model building, testing and selection. However, most model selection algorithms, such as Autometrics, concentrate only on single-equation modelling, which has limited application. Thus, this study aims to develop an algorithm for model selection in multiple equations, focusing on the seemingly unrelated regression equations (SURE) model. The algorithm is developed by integrating the SURE model with the Autometrics search strategy; hence, it is named SURE-Autometrics. Its performance is assessed using Monte Carlo simulation experiments based on five specification models, three strengths of correlation disturbances and two sample sizes. Two sets of general unrestricted models (GUMs) are then formulated by adding a number of irrelevant variables to the specification models. The performance is measured by the percentage of runs in which the SURE-Autometrics algorithm is able to eliminate the irrelevant variables from the initial GUMs of two, four and six equations. SURE-Autometrics is also validated on two sets of real data by comparing its forecast error measures with those of five model selection algorithms and three non-algorithm procedures. The findings from the simulation experiments suggest that SURE-Autometrics performs well when the number of equations and the number of relevant variables in the true specification model are minimal. Its application to real data indicates that several models are able to forecast accurately if the data have no quality problems. This automatic model selection algorithm is preferable to non-algorithm procedures, which require expertise and extra time.
In conclusion, the performance of model selection in multiple equations using SURE-Autometrics depends upon data quality and the complexity of the SURE model.
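The core idea, starting from a general unrestricted model (GUM) and eliminating irrelevant variables, can be illustrated with a deliberately simplified sketch. This is not the SURE-Autometrics algorithm itself (which uses the Autometrics tree search over SURE systems); it merely shows the elimination step on one equation, using sample correlation as a stand-in for proper significance tests.

```python
import random

def corr(xs, ys):
    """Sample Pearson correlation, implemented directly for self-containment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(1)
n = 200
x1 = [rng.gauss(0, 1) for _ in range(n)]          # relevant variable
x2 = [rng.gauss(0, 1) for _ in range(n)]          # irrelevant, padding the GUM
x3 = [rng.gauss(0, 1) for _ in range(n)]          # irrelevant, padding the GUM
y  = [2.0 * a + rng.gauss(0, 0.5) for a in x1]    # true specification: y = 2*x1 + e

gum = {"x1": x1, "x2": x2, "x3": x3}              # general unrestricted model

def eliminate(y, candidates, threshold=0.3):
    """Keep only candidates with non-negligible association with y."""
    return {name for name, xs in candidates.items()
            if abs(corr(xs, y)) > threshold}

assert eliminate(y, gum) == {"x1"}   # the irrelevant variables are dropped
```

In the real algorithm this pruning is driven by diagnostic and significance testing across all equations jointly, which is what makes extending it from single-equation Autometrics to SURE systems non-trivial.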

    Development of a standard framework for manufacturing simulators

    Discrete event simulation is now a well established modelling and experimental technique for the analysis of manufacturing systems. Since it was first employed as a technique, much of the research and commercial development in the field has been concerned with easing the considerable task of model specification, in order to improve productivity and reduce the level of modelling and programming expertise required. The main areas of research have been the development of modelling structures to bring modularity to program development, the incorporation of such structures in simulation software systems to alleviate some of the programming burden, and the use of automatic programming systems to develop interfaces that raise model specification to a higher level of abstraction. A more recent development in the field has been the advent of a new generation of software, often referred to as manufacturing simulators, which incorporate extensive manufacturing-domain knowledge in the model specification interface. Many manufacturing simulators are now commercially available, but their development has not been based on any common standard. This is evident in the differences that exist between their interfaces, internal data representation methods and modelling capabilities. The lack of a standard makes it impossible to reuse any part of a model when a user finds it necessary to move from one simulator to another. In such cases, not only must a new modelling language be learnt, but the complete model has to be developed again, requiring considerable time and effort. The motivation for the research was the need for a standard to improve the reusability of models, a first step towards the interchangeability of such models. A standard framework for manufacturing simulators has been developed.
It consists of a data model that is independent of any simulator, and a translation module for converting model specification data into the internal data representation of manufacturing simulators; the translators are application specific, but the methodology is common and is illustrated for three popular simulators. The data model provides a minimum common model data specification, based on an extensive analysis of existing simulators. It uses dialogues for the interface and the frame knowledge representation method for modular storage of data. The translation methodology uses production rules for data mapping.
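The framework's architecture, one neutral data model plus per-simulator translation rules, can be sketched as follows. The field and simulator names here are invented for illustration; the thesis's actual data model and rule sets are far richer.

```python
# Simulator-independent model data (the "minimum common model data specification").
neutral_model = {
    "machine": {"name": "Lathe1", "cycle_time": 5.0, "failure_rate": 0.01},
}

# Production rules per simulator: neutral field -> simulator-specific field.
translation_rules = {
    "SimulatorA": {"name": "ID",    "cycle_time": "CycleTime", "failure_rate": "Breakdown"},
    "SimulatorB": {"name": "label", "cycle_time": "proc_time", "failure_rate": "fail_p"},
}

def translate(model, simulator):
    """Apply the simulator's mapping rules to every entity in the neutral model."""
    rules = translation_rules[simulator]
    return {entity: {rules[field]: value for field, value in fields.items()}
            for entity, fields in model.items()}

a = translate(neutral_model, "SimulatorA")
b = translate(neutral_model, "SimulatorB")
assert a["machine"]["CycleTime"] == 5.0
assert b["machine"]["proc_time"] == 5.0
```

The design point is that only the rule tables are simulator specific; the model data and the translation mechanism are reused unchanged, which is what makes a model portable between tools.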

    Towards a Formal Verification Methodology for Collective Robotic Systems

    We introduce a UML-based notation for graphically modeling systems’ security aspects in a simple and intuitive way, and a model-driven process that transforms graphical specifications of access control policies into XACML. These XACML policies are then translated into FACPL, a policy language with a formal semantics, and the resulting policies are evaluated by means of a Java-based software tool.
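To give a flavor of what evaluating such access control policies involves, here is a minimal sketch of XACML-style rule evaluation with a deny-overrides combining algorithm. This is an illustration of the general policy model, not FACPL's formal semantics or the paper's tool.

```python
# A policy is an ordered list of rules; each rule matches request attributes.
policy = [
    {"effect": "Deny",   "match": {"resource": "logs", "role": "guest"}},
    {"effect": "Permit", "match": {"resource": "logs", "role": "admin"}},
]

def evaluate(policy, request, combining="deny-overrides"):
    """Collect the effects of all matching rules, then combine them."""
    decisions = [rule["effect"] for rule in policy
                 if all(request.get(k) == v for k, v in rule["match"].items())]
    if not decisions:
        return "NotApplicable"
    if combining == "deny-overrides" and "Deny" in decisions:
        return "Deny"
    return decisions[0]

assert evaluate(policy, {"resource": "logs", "role": "admin"}) == "Permit"
assert evaluate(policy, {"resource": "logs", "role": "guest"}) == "Deny"
assert evaluate(policy, {"resource": "db",   "role": "admin"}) == "NotApplicable"
```

Giving such evaluation a formal semantics, as FACPL does, is what allows policies derived from graphical models to be analyzed rather than merely executed.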

    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    [Context] The testing of software-intensive systems is performed in different test stages, each having a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where, among others, typical test stages are Software- and Hardware-in-the-loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate relating requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we have proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system by means of our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers.
Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test levels are linked to the requirements and enable the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
TU Berlin, Open-Access-Mittel - 202
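The annotation-to-execution linking can be sketched with a toy example (all names invented, not the paper's markup language): each annotated requirement names the signals it concerns, and per-stage coverage is the share of requirements whose signals were all observed in at least one of that stage's runs.

```python
# Requirements annotated with the signals their verification depends on.
requirements = {
    "REQ-1": {"signals": {"brake_pressure", "speed"}},
    "REQ-2": {"signals": {"steering_angle"}},
}

# Test executions per stage, each run recorded as the set of observed signals.
executions = {
    "SiL": [{"speed", "brake_pressure"}],                # software-in-the-loop run
    "HiL": [{"speed"}, {"steering_angle", "speed"}],     # hardware-in-the-loop runs
}

def coverage(requirements, runs):
    """Fraction of requirements whose annotated signals all appear in some run."""
    covered = sum(1 for req in requirements.values()
                  if any(req["signals"] <= run for run in runs))
    return covered / len(requirements)

assert coverage(requirements, executions["SiL"]) == 0.5   # only REQ-1 covered
assert coverage(requirements, executions["HiL"]) == 0.5   # only REQ-2 covered
```

Comparing such per-stage coverage figures is the simplest form of the paper's question "what am I testing and where": here the two stages cover disjoint requirements, so neither is redundant.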

    Overview on agent-based social modelling and the use of formal languages

    Transdisciplinary Models and Applications investigates a variety of programming languages used in validating and verifying models in order to assist in their eventual implementation. The book explores different methods of evaluating and formalizing simulation models, enabling computer and industrial engineers, mathematicians, and students working with computer simulations to thoroughly understand the progression from simulation to product, improving the overall effectiveness of modeling systems.
    Postprint (author's final draft)