
    Variance in System Dynamics and Agent Based Modelling Using the SIR Model of Infectious Disease

    Classical deterministic simulations of epidemiological processes, such as those based on System Dynamics, produce a single result for a fixed set of input parameters, with no variance between simulations. Monte-Carlo methods are therefore applied on top of such simulations, varying the input parameters to understand how those changes affect the spread of results. Agent Based simulations, by contrast, can produce different output results on each run, based on the local interactions of the underlying agents and without any changes to the input parameters. In this paper we compare the influence and effect of variation within these two distinct simulation paradigms and show that the Agent Based simulation of the epidemiological SIR (Susceptible, Infectious, and Recovered) model is more effective at capturing the natural variation within SIR than an equivalent model using System Dynamics with Monte-Carlo simulation. To demonstrate this effect, the SIR model is implemented using both System Dynamics (with Monte-Carlo simulation) and Agent Based Modelling, based on previously published empirical data. Comment: Proceedings of the 26th European Conference on Modelling and Simulation (ECMS), Koblenz, Germany, May 2012, pp 9-15, 201
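    The contrast the abstract draws can be sketched in a few lines: a deterministic SIR integration returns the same answer on every call, while an agent-based version produces a different realisation per run with identical parameters. This is an illustrative toy (our construction, not the paper's implementation), with made-up parameter values.

```python
import random

# Deterministic SIR via forward Euler: one fixed answer per parameter set.
def sir_ode(beta, gamma, s0, i0, r0, dt=0.1, steps=300):
    s, i, r = float(s0), float(i0), float(r0)
    n = s0 + i0 + r0
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # S -> I flow
        new_rec = gamma * i * dt          # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Agent-based SIR: each agent holds its own state; random contacts make
# every run a different realisation without touching the input parameters.
def sir_abm(beta, gamma, s0, i0, r0, dt=0.1, steps=300, rng=random):
    states = ["S"] * s0 + ["I"] * i0 + ["R"] * r0
    n = len(states)
    for _ in range(steps):
        n_inf = states.count("I")         # infectious count at start of step
        for k, st in enumerate(states):
            if st == "S" and rng.random() < beta * n_inf / n * dt:
                states[k] = "I"
            elif st == "I" and rng.random() < gamma * dt:
                states[k] = "R"
    return states.count("S"), states.count("I"), states.count("R")

# The ODE run is identical every time; the ABM's final recovered counts vary.
final_r = [sir_abm(0.3, 0.1, 190, 10, 0)[2] for _ in range(10)]
```

    The spread of `final_r` across runs is the "natural variation" the paper attributes to the agent-based paradigm.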

    Investigating biocomplexity through the agent-based paradigm.

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment and the flexible, heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches, which assume component homogeneity in order to relate system observables through mathematical equations. While the homogeneity condition does not lead to loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems is a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines (or agents) to simulate, from the bottom up, macroscopic properties of a system. By recognizing the heterogeneity condition, they offer suitable ontologies for the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms. The integration of an agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its nascence, agent-based modelling has been used to model biological complexity across a broad range of scales (from cells to societies). In this article, we explore the reasons that make agent-based modelling the most precise approach to modelling biological systems that tend to be non-linear and complex.

    Optimizing radiation therapy treatments by exploring tumour ecosystem dynamics in-silico

    In this contribution, we propose a system-level compartmental population dynamics model of tumour cells that interact with the patient's (innate) immune system under the impact of radiation therapy (RT). The resulting in silico model enables us to analyse the system-level impact of radiation on the tumour ecosystem. The Tumour Control Probability (TCP) was calculated for varying conditions concerning therapy fractionation schemes, radio-sensitivity of tumour sub-clones, tumour population doubling time, repair speed and immunological elimination parameters. The simulations exhibit a therapeutic benefit when the initial 3 fractions are applied at an interval of 2 days instead of daily. This effect disappears for fast-growing tumours and in the case of incomplete repair. The results suggest some optimisation potential for combined hyperthermia-radiotherapy. Regarding the sensitivity of the proposed model, cellular repair of radiation-induced damage is a key factor for tumour control. In contrast, the radio-sensitivity of immune cells does not influence the TCP as long as it is higher than that of the tumour cells. The influence of the tumour sub-clone structure is small (if no competition is included). This work demonstrates the usefulness of in silico modelling for identifying optimisation potential.
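    The abstract does not give its TCP formulation, but the textbook building blocks for such a calculation are the linear-quadratic (LQ) survival model and the Poisson TCP formula. A minimal sketch, with illustrative parameter values rather than the paper's:

```python
import math

# Linear-quadratic (LQ) cell survival per fraction of dose d (Gy):
#   SF(d) = exp(-(alpha*d + beta*d**2))
def surviving_fraction(d, alpha=0.3, beta=0.03):
    return math.exp(-(alpha * d + beta * d * d))

# Poisson tumour control probability after n equal fractions applied
# to N0 initial clonogenic cells:  TCP = exp(-N0 * SF(d)**n)
def tcp(n_fractions, d, n0=1e7, alpha=0.3, beta=0.03):
    return math.exp(-n0 * surviving_fraction(d, alpha, beta) ** n_fractions)
```

    Evaluating `tcp(30, 2.0)` gives the control probability for a conventional 30 x 2 Gy schedule; the paper's model adds immune elimination, repair and sub-clone structure on top of this kind of dose-response core.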

    StochKit-FF: Efficient Systems Biology on Multicore Architectures

    The stochastic modelling of biological systems is an informative and, in some cases, very well-suited technique, which may however be more expensive than other modelling approaches, such as differential equations. We present StochKit-FF, a parallel version of StochKit, a reference toolkit for stochastic simulations. StochKit-FF is based on the FastFlow programming toolkit for multicores and exploits the novel concept of selective memory. We evaluate StochKit-FF on a model of HIV infection dynamics, with the aim of extracting information from efficiently run experiments, here in terms of average and variance and, in the longer term, of more structured data. Comment: 14 pages + cover page
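    The core algorithm behind StochKit-style toolkits is Gillespie's stochastic simulation algorithm (SSA). A minimal direct-method sketch on a two-reaction infection model (S + I -> 2I, I -> R); this is an illustration, not StochKit's actual API:

```python
import math
import random

# Gillespie direct method: draw an exponential waiting time from the total
# propensity, then pick the next reaction proportionally to its propensity.
def ssa(s, i, r, beta=0.3, gamma=0.1, t_end=200.0, rng=random):
    t, n = 0.0, s + i + r
    while t < t_end and i > 0:
        a1 = beta * s * i / n                      # infection propensity
        a2 = gamma * i                             # recovery propensity
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        if rng.random() * a0 < a1:
            s, i = s - 1, i + 1                    # S + I -> 2I fires
        else:
            i, r = i - 1, r + 1                    # I -> R fires
    return s, i, r
```

    Running many independent trajectories, which StochKit-FF parallelises across cores, yields the ensemble averages and variances mentioned in the abstract.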

    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.
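    The two prerequisites (versatility and degeneracy) can be illustrated with a toy of our own construction, not the paper's model: each agent performs two functions, capabilities overlap pairwise, and a function stays covered after a local knockout as long as some surviving agent can still perform it.

```python
# Versatility: each agent covers two functions. Degeneracy: capabilities
# overlap across agents. Coverage holds iff every function has some agent.
def coverage(agents, functions):
    return {f for a in agents for f in a} >= set(functions)

functions = list(range(6))
# Agent k covers functions k and k+1 (mod 6): a ring of overlapping roles.
agents = [{k, (k + 1) % 6} for k in range(6)]

assert coverage(agents, functions)      # intact system covers everything
survivors = agents[:3] + agents[4:]     # locally perturb: remove one agent
assert coverage(survivors, functions)   # overlap buffers the loss
```

    Removing any single agent leaves every function covered by its neighbour, a distributed response to a local perturbation in the sense of the abstract.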

    A Review on the Application of Natural Computing in Environmental Informatics

    Natural computing offers new opportunities to understand, model and analyze the complexity of the physical and human-created environment. This paper examines the application of natural computing in environmental informatics, by investigating related work in this research field. Various nature-inspired techniques are presented, which have been employed to solve different relevant problems. Advantages and disadvantages of these techniques are discussed, together with analysis of how natural computing is generally used in environmental research.Comment: Proc. of EnviroInfo 201

    Mobility traces and spreading of COVID-19

    We use human mobility models, in which we are experts, and attach to them a virus infection dynamics taken from the literature, including recent publications, in which we are not experts. This results in a virus spreading dynamics model. The results should be verified, but because of the current time pressure we publish them in their current state. Recommendations for improvement are welcome. We come to the following conclusions:
    1. Complete lockdown works. About 10 days after lockdown, the infection dynamics dies down. This assumes that lockdown is complete, which can be guaranteed in the simulation but not in reality. Still, it gives strong support to the argument that it is never too late for complete lockdown.
    2. As a rule of thumb, we would suggest complete lockdown no later than once 10% of hospital capacities available for COVID-19 are in use, and possibly much earlier. This is based on the following insights: (a) even after lockdown, the infection dynamics continues at home, leading to another tripling of the cases before the dynamics is slowed; (b) there will be many critical cases coming from people who were infected before lockdown, and because of the exponential growth dynamics their number will be large; (c) researchers with more detailed disease progression models should improve upon these statements.
    3. Our simulations say that complete removal of infections at child care, primary schools, workplaces and during leisure activities will not be enough to sufficiently slow down the infection dynamics. It would have been better, but still not sufficient, if initiated earlier.
    4. Infections in public transport play an important role. In the simulations shown later, removing infections in the public transport system reduces the infection speed and the height of the peak by approximately 20%. Evidently, this depends on the infection parameters, which are not well known. This does not point to reducing public transport capacities as a reaction to the reduced demand, but rather to using the spare capacity to lower passenger densities and thus reduce infection rates.
    5. In our simulations, removal of infections at child care, primary schools, workplaces, leisure activities, and in public transport might barely have been sufficient to control the infection dynamics if implemented early on. According to our simulations it is now too late for this, and (even) harsher measures will have to remain in place until a return to such a restrictive, but still somewhat functional, regime becomes possible again.
    Evidently, all of these results have to be taken with care. They are based on preliminary infection parameters taken from the literature, used inside a model that has more transport/movement detail than any other we are aware of but still not enough to describe all aspects of reality, and they suffer from computer code written under time pressure. Optimally, they should be confirmed independently. Short of that, given current knowledge we believe they provide justification for "complete lockdown" at the latest when about 10% of available hospital capacities for COVID-19 are in use (and possibly earlier; we are not experts on hospital capacities). What was not investigated in detail in our simulations was contact tracing, i.e. tracking down the infection chains and moving all people along infection chains into quarantine. The case of Singapore has so far shown that this may be successful. Preliminary simulation of that tactic shows that it is difficult to implement for COVID-19, since the incubation time is rather long, people are contagious before they feel sick, and some may never feel sufficiently sick at all. We will investigate in future work if and how contact tracing can be used together with a restrictive, but not totally locked down, regime.
When opening up after lockdown, it would be important to know the true fraction of people who are already immune, since that would slow down the infection dynamics by itself. For Wuhan, the currently available numbers report that only about 0.1% of the population was infected, which would be very far away from “herd immunity”. However, there have been and still may be many unknown infections (Frankfurter Allgemeine Zeitung GmbH 2020)

    Comparing System Dynamics and Agent-Based Simulation for Tumour Growth and its Interactions with Effector Cells

    There is little research comparing and combining System Dynamics Simulation (SDS) and Agent Based Simulation (ABS). ABS is a paradigm used at many levels of abstraction, including those levels covered by SDS. We believe that establishing frameworks for the choice between these two simulation approaches would contribute to simulation research. Hence, our work aims to establish directions for the choice between SDS and ABS for immune system-related problems. Previously, we compared the use of ABS and SDS for modelling agents' behaviour in an environment with no movement or interactions between the agents. We concluded that for these types of agents it is preferable to use SDS, as it takes up fewer computational resources and produces the same results as the ABS model. To move this research forward, our next research question is: if we introduce interactions between these agents, will SDS still be the most appropriate paradigm? To answer this question for immune system simulation problems, we use, as case studies, models involving interactions between tumour cells and immune effector cells. Experiments show that there are cases where SDS and ABS cannot be used interchangeably, and therefore their comparison is not straightforward. Comment: 8 pages, 8 figures, 2 tables, International Summer Computer Simulation Conference 201
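    The system-dynamics side of such a tumour/effector comparison is typically a Kuznetsov-style ODE pair. A minimal forward-Euler sketch with illustrative placeholder parameters, not necessarily those of the paper's case studies:

```python
# Kuznetsov-type model: tumour cells T grow logistically and are killed by
# effector cells E; effectors are recruited by the tumour, inactivated by
# contact, and die naturally. Parameter values are illustrative only.
def tumour_effector(t0, e0, dt=0.01, steps=10000,
                    a=0.18, b=2e-9, n=1e-7,          # tumour growth / kill
                    s=1e4, p=0.12, g=2e7,            # effector supply / recruitment
                    m=3e-10, d=0.04):                # inactivation / death
    T, E = float(t0), float(e0)
    for _ in range(steps):
        dT = a * T * (1 - b * T) - n * E * T               # logistic growth - kill
        dE = s + p * E * T / (g + T) - m * E * T - d * E   # supply + recruitment - losses
        T = max(T + dT * dt, 0.0)
        E = max(E + dE * dt, 0.0)
    return T, E
```

    An ABS counterpart would replace these aggregate flows with per-cell interaction rules; the paper's point is that once such interactions matter, the two paradigms can stop being interchangeable.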