
    Multi-level agent-based modeling - A literature survey

    During the last decade, multi-level agent-based modeling has received significant and rapidly increasing interest. In this article we present a comprehensive and structured review of the literature on the subject. We present the main theoretical contributions and application domains of this concept, with an emphasis on social, flow, biological and biomedical models.

    Toward composing variable structure models and their interfaces: a case of intensional coupling definitions

    In this thesis, we investigate a combination of traditional component-based and variable structure modeling. The focus is on a structurally consistent specification of couplings in modular, hierarchical models with a variable structure. For this, we exploit intensional definitions, as known from logic, and introduce a novel intensional coupling definition, which allows a concise yet expressive specification of complex communication and interaction patterns in static as well as variable structure models, without the need to worry about structural consistency.
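    The core idea can be illustrated with a small sketch (illustrative only, not the thesis's formalism): an extensional coupling definition enumerates concrete sender/receiver pairs, whereas an intensional one states a predicate from which the pairs are derived, so the coupling set stays consistent when the model structure changes.

```python
# Hypothetical sketch: an intensional coupling is a predicate over
# (sender, receiver) pairs, re-evaluated whenever the structure changes.

def intensional_couplings(components, predicate):
    """Derive the concrete coupling set from a predicate (the intension)."""
    return {(s, r) for s in components for r in components
            if s != r and predicate(s, r)}

# Example: couple every producer to every consumer, by naming convention.
components = {"producer1", "producer2", "consumer1"}
rule = lambda s, r: s.startswith("producer") and r.startswith("consumer")
print(sorted(intensional_couplings(components, rule)))

# After a structural change (a component is added), the same intension
# yields a consistent coupling set without manual re-wiring.
components.add("consumer2")
print(sorted(intensional_couplings(components, rule)))
```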

    A Flexible and Scalable Experimentation Layer

    Modeling and simulation frameworks intended for use in different application domains, throughout the complete development process, and in different hardware environments need to be highly scalable. To achieve efficient execution, different simulation algorithms and data structures must be provided so that a concrete model can be computed efficiently on a concrete platform. The support of parallel simulation techniques becomes increasingly important in this context, due to the growing availability of multi-core processors and network-based computers. This leads to more complex simulation systems that are harder to configure correctly. We present an experimentation layer for the modeling and simulation framework JAMES II. It greatly facilitates the configuration and usage of the system and supports distributed optimization, on-demand observation, and various distributed and non-distributed scenarios.

    IJMSSC

    DEVS is a sound Modeling and Simulation (M&S) framework that describes a model in a modular and hierarchical way. It comes with an abstract simulation algorithm which defines its operational semantics. Many variants of such an algorithm have been proposed by DEVS researchers. Yet, the proper interpretation and analysis of the computational complexity of such approaches have not been systematically addressed. As systems become larger and more complex, the efficiency of DEVS simulation algorithms in terms of time complexity becomes a major issue. It is therefore necessary to devise a method for computing this complexity. This paper proposes a generic method to address this issue, taking advantage of the recursion embedded in the triggered-by-message principle of the DEVS simulation protocol. The applicability of the method is shown through the complexity analysis of various DEVS simulation algorithms.
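    The triggered-by-message recursion the paper exploits can be sketched as follows (a minimal illustration, not the actual DEVS protocol): a hierarchical model is a tree of coordinators and simulators, one simulation step propagates a message recursively through the tree, and counting the messages exchanged yields the quantity the complexity analysis reasons about.

```python
# Minimal sketch (not a full DEVS protocol): a coupled model is a tree,
# and a simulation step is driven by messages propagated recursively from
# the root coordinator down to the leaf simulators.

def step(node, counter):
    """Propagate one message through the model tree, counting messages."""
    counter[0] += 1                 # message received by this node
    for child in node.get("children", []):
        step(child, counter)        # coordinators forward to subordinates

# A two-level hierarchy: root coordinator with two coupled submodels.
model = {"children": [
    {"children": [{}, {}]},        # coupled model with two atomic models
    {"children": [{}]},            # coupled model with one atomic model
]}

count = [0]
step(model, count)
print(count[0])  # 6 messages for one step over 6 nodes
```

    Counting messages per step in this way, as a function of depth and fan-out of the tree, is the basis of the kind of time-complexity comparison the paper performs across protocol variants.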

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Front Propagation in Random Media

    This PhD thesis deals with the problem of the propagation of fronts under random circumstances. A statistical model representing the motion of fronts evolving in a medium characterized by microscopic randomness is discussed and expanded, in order to cope with three distinct applications: wild-land fire simulation, turbulent premixed combustion, and biofilm modeling. In the studied formalism, the position of the average front is computed using a sharp-front evolution method, such as the level set method. The microscopic spread of particles around the average front is given by the probability density function of the underlying diffusive process, which is assumed to be known in advance. The adopted statistical front propagation framework allowed a deeper understanding of each studied field of application. The application of this model eventually introduced parameters whose impact on the physical observables of the front spread has been studied with Uncertainty Quantification and Sensitivity Analysis tools. In particular, metamodels for the front propagation system have been constructed in a non-intrusive way, using generalized Polynomial Chaos expansions and Gaussian Processes. The Thesis received funding from the Basque Government through the BERC 2014-2017 program. It was also funded by the Spanish Ministry of Economy and Competitiveness MINECO via the BCAM Severo Ochoa SEV-2013-0323 accreditation. The PhD is funded by La Caixa Foundation through the PhD grant "La Caixa 2014". Funding from "Programma Operativo Nazionale Ricerca e Innovazione" (PONRI 2014-2020), "Innovative PhDs with Industrial Characterization", is kindly acknowledged for a research visit at the department of Mathematics and Applications "Renato Caccioppoli" of University "Federico II" of Naples.
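    A minimal sketch of the sharp-front idea (illustrative only; the grid, speed, and scheme here are assumptions, and the thesis couples this with a statistical model of the microscopic spread): the front is the zero contour of a level-set function phi, advanced in 1D with constant normal speed F by a first-order upwind discretisation of phi_t + F |phi_x| = 0.

```python
# Level-set sketch: the front is the zero crossing of phi, moved at
# constant speed F by explicit Euler steps with upwind differencing.

def evolve_front(phi, dx, dt, speed, steps):
    """Advance phi_t + speed * |phi_x| = 0 with first-order upwinding."""
    phi = list(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi) - 1):
            backward = (phi[i] - phi[i - 1]) / dx
            forward = (phi[i + 1] - phi[i]) / dx
            # Upwind gradient magnitude for positive speed.
            grad = max(backward, 0.0, -forward)
            new[i] = phi[i] - dt * speed * grad
        phi = new
    return phi

# Signed distance to a front initially at x = 0.2 on a unit interval.
n, dx = 101, 0.01
phi0 = [i * dx - 0.2 for i in range(n)]
phi = evolve_front(phi0, dx, dt=0.005, speed=1.0, steps=40)
# The zero crossing has moved from x = 0.2 to x = 0.4 (0.2 + 1.0 * 0.2).
```

    In the thesis's framework the mean front evolves this way, while the probability density function of the diffusive process describes the microscopic spread of particles around it.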

    Simulation Modelling of Distributed-Shared Memory Multiprocessors

    Institute for Computing Systems Architecture. Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction between physical memory locations. As the location of data is transparent, the sources of overhead caused by accessing distant memories are difficult to analyse. This memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose constraints on simulation model design in two respects: the limitation of the simulation framework on model extensibility, and the lack of verification applicability during a simulation run, which delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. The thesis presents a simulation technique to promote model extensibility and proposes a technique for verification applicability, called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulation called DSiMCLUSTER on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and whether the stability of performance can be maximised. DSiMCLUSTER has been validated against a SunFire 15K server and has achieved similar cache miss results, with an average difference of ±6% and a worst case of less than 15%. These results confirm that the techniques used in developing DSiMCLUSTER contribute towards (a) a highly extensible simulation framework that can keep up with the ongoing innovation of DSM architectures, and (b) verification applicability, resulting in an efficient framework for memory analysis experiments on DSM architectures.

    DevOps for Trustworthy Smart IoT Systems

    ENACT is a research project funded by the European Commission under its H2020 program. The project consortium consists of twelve industry and research member organisations spread across the EU. The overall goal of the ENACT project was to provide a novel set of solutions to enable DevOps in the realm of trustworthy Smart IoT Systems. Smart IoT Systems (SIS) are complex systems involving not only sensors but also actuators, with control loops distributed all across the IoT, Edge and Cloud infrastructure. Since smart IoT systems typically operate in a changing and often unpredictable environment, the ability of these systems to continuously evolve and adapt to their new environment is decisive to ensure and increase their trustworthiness, quality and user experience. DevOps has established itself as a software development life-cycle model that encourages developers to continuously bring new features to the system under operation without sacrificing quality. This book reports on the ENACT work to empower the development and operation as well as the continuous and agile evolution of SIS, which is necessary to adapt the system to changes in its environment, such as newly appearing trustworthiness threats.