
    First steps on asynchronous lattice-gas models with an application to a swarming rule

    Lattice-gas cellular automata are often considered a particular case of cellular automata in which additional constraints apply, such as conservation of particles or spatial exclusion. But what about their updating? How should non-perfect synchrony be handled? We propose novel definitions of asynchronism that respect the specific hypotheses of lattice-gas models. These definitions are then applied to a swarming rule in order to explore the robustness of the global emergent behaviour. In particular, we compare the synchronous and asynchronous cases and observe that the anti-alignment of particles is no longer seen once a small critical amount of asynchronism is added.
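    As a minimal sketch of what asynchronous updating means, consider the generic alpha-asynchronous scheme for an ordinary cellular automaton, where each cell applies its rule with probability alpha and keeps its state otherwise. This is the standard CA notion, not the lattice-gas-specific definitions the paper develops; it also illustrates why those definitions are needed, since a particle-conserving rule stops conserving particles under naive per-cell asynchronism.

```python
import random

def step(cells, rule, alpha, rng):
    """One alpha-asynchronous step of a 1-D binary CA (periodic boundary).

    Each cell applies `rule` with probability `alpha`; alpha = 1.0
    recovers the fully synchronous update. This is the generic CA
    notion of asynchronism, not the paper's lattice-gas definitions.
    """
    n = len(cells)
    return [
        int(rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n]))
        if rng.random() < alpha else cells[i]
        for i in range(n)
    ]

# Elementary rule 184 ("traffic"): conserves particles when updated
# synchronously, but naive asynchronism breaks this conservation --
# exactly the kind of hypothesis lattice-gas models must preserve.
def rule184(left, centre, right):
    return (left, centre, right) in {(1, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 1)}

rng = random.Random(0)
state = [rng.randint(0, 1) for _ in range(20)]
for _ in range(50):
    state = step(state, rule184, alpha=0.5, rng=rng)
```

    With alpha = 1.0 the update is deterministic and particle-conserving; with alpha < 1.0 the particle count can drift between steps.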

    An agent-based approach for modelling collective dynamics in animal groups distinguishing individual speed and orientation: Particle model for animal group dynamics

    Collective dynamics in animal groups is a challenging theme for the modelling community and has been treated with a wide range of approaches. This topic is here tackled by a discrete model. In more detail, each agent, represented by a material point, is assumed to move following a first-order Newtonian law that distinguishes speed and orientation. In particular, the latter results from the balance of a given set of behavioural stimuli, each defined by a direction and a weight that quantifies its relative importance. A constraint on the sum of the weights then avoids implausible simultaneous maximization/minimization of all movement traits. Our framework is based on a minimal set of rules and parameters and is able to capture and classify a number of collective group dynamics emerging from different individual preferred behaviours, possibly including attractive, repulsive and alignment stimuli. In the case of a system of animals subject only to the first two behavioural inputs, we also show how analytical arguments allow us to relate a priori the equilibrium interparticle spacing to critical model coefficients. Our approach is then extended to account for the presence of predators with different hunting strategies, which impact the behaviour of a prey population. Hints for model refinement and applications are given in the concluding part of the article.
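    A minimal sketch of the weighted-stimuli idea, assuming orientation is taken as the normalised weighted sum of unit stimulus directions with weights summing to one; the function name and the exact combination rule are illustrative assumptions, not the paper's equations.

```python
import math

def update_orientation(stimuli):
    """Combine behavioural stimuli into a single unit heading.

    `stimuli` is a list of (direction, weight) pairs, each direction a
    unit vector (x, y), with weights summing to 1 as in the constraint
    described in the abstract. Illustrative sketch only.
    """
    assert abs(sum(w for _, w in stimuli) - 1.0) < 1e-9
    x = sum(w * d[0] for d, w in stimuli)
    y = sum(w * d[1] for d, w in stimuli)
    norm = math.hypot(x, y)
    if norm == 0:            # stimuli cancel exactly: no preferred heading
        return (0.0, 0.0)
    return (x / norm, y / norm)

# attraction toward the group, repulsion from a close neighbour, and
# alignment with the mean heading; weights quantify relative importance
heading = update_orientation([
    ((1.0, 0.0), 0.5),   # attraction
    ((0.0, 1.0), 0.3),   # repulsion
    ((1.0, 0.0), 0.2),   # alignment
])
```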

    Working With Incremental Spatial Data During Parallel (GPU) Computation

    Central to many complex systems, spatial actors require an awareness of their local environment to enable behaviours such as communication and navigation. Complex system simulations represent this behaviour with Fixed Radius Near Neighbours (FRNN) search. This algorithm allows actors to store data at spatial locations and then query the data structure to find all data stored within a fixed radius of the search origin. The work within this thesis answers the question: what techniques can be used to improve the performance of FRNN searches during complex system simulations on Graphics Processing Units (GPUs)? It is generally agreed that Uniform Spatial Partitioning (USP) is the most suitable data structure for providing FRNN search on GPUs. However, due to the architectural complexities of GPUs, performance is constrained such that FRNN search remains one of the most expensive stages common to complex system models. Existing innovations to USP highlight a need to take advantage of recent GPU advances, identifying reduced divergence and fewer redundant memory accesses as viable routes to improving the performance of FRNN search. This thesis addresses these with three separate optimisations that can be used simultaneously. Experiments have assessed the impact of the optimisations on the general case of FRNN search found within complex system simulations and demonstrated their impact in practice when applied to full complex system models. The results presented show that the performance of the construction and query stages of FRNN search can be improved by over 2x and 1.3x respectively. These improvements allow complex system simulations to execute faster, enabling increases in scale and model complexity.
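    The USP idea can be sketched serially: points are binned into cells whose side equals the search radius, so every neighbour within the radius of a query point lies in the 3x3 block of cells around it. This is a simplified 2-D sketch of the data structure, not the thesis's implementation; GPU versions typically replace the hash map with a sort of points by cell index.

```python
import math
from collections import defaultdict

def build_grid(points, radius):
    """Uniform Spatial Partitioning: bin 2-D points into square cells
    of side `radius`. A serial sketch; on GPUs construction is usually
    a parallel sort by cell index rather than a hash map."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // radius), int(p[1] // radius))].append(p)
    return grid

def frnn_query(grid, origin, radius):
    """Return all stored points within `radius` of `origin` by scanning
    the 3x3 neighbourhood of the origin's cell."""
    cx, cy = int(origin[0] // radius), int(origin[1] // radius)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in grid.get((cx + dx, cy + dy), ()):
                if math.dist(p, origin) <= radius:
                    hits.append(p)
    return hits

pts = [(0.1, 0.1), (0.4, 0.2), (2.5, 2.5)]
grid = build_grid(pts, radius=0.5)
near = frnn_query(grid, (0.0, 0.0), 0.5)   # finds the two close points
```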

    Flood-pedestrian simulator: an agent-based modelling framework for urban evacuation planning

    Agent-Based Modelling (ABM) is an increasingly used approach for characterisation of human behaviour in evacuation simulation modelling. ABM-based evacuation models used in flood emergency are developed mostly for vehicular scenarios at regional scale. Only a few models exist for simulating evacuations of on-foot pedestrians responding to floods in small and congested urban areas. These models do not include the heterogeneity and variability of individuals’ behaviour influenced by their dynamic interactions with the floodwater properties. This limitation is due to the modelling restrictions pertaining to the computational complexity and the modelling flexibility for agent characterisation. This PhD research has aimed to develop a new ABM-based pedestrian evacuation model that overcomes these challenges through an ABM platform called Flexible Large-scale Agent Modelling Environment for the Graphics Processing Units (FLAME GPU). To achieve this aim, a hydrodynamic model has been integrated into a pedestrian model within the FLAME GPU framework. The dynamic interactions between the flood and pedestrians have been formulated based on a number of behavioural rules driving the mobility states and way-finding decisions of individuals in and around the floodwaters as well as the local changes in the floodwater properties as a result of pedestrians’ crowding. These rules have been progressively improved and their added value has been explored systematically by diagnostically comparing the simulation results obtained from the base setup and the augmented version of the model applied to a synthetic test case. A real-world case study has been further used to specifically evaluate the added value of rules relating the individuals’ way-finding mechanism to various levels of flood-risk perception. 
The findings from this research show that increasing the level of pedestrians’ heterogeneity and including the effect of pedestrians’ crowding on the floodwater hydrodynamics yield considerably different predictions of flood risk and evacuation time. In addition, accounting for pedestrians’ varying levels of flood-risk perception was found to be a determining factor in the analysis of flood risk and evacuation time when there are multiple destinations. Finally, a sensitivity analysis on the simulation results has shown that deviations in the simulation outcomes increase in line with the sophistication of the human behavioural rules.

    Flowing matter

    This open access book, published in the Soft and Biological Matter series, presents an introduction to selected research topics in the broad field of flowing matter, including the dynamics of fluids with a complex internal structure (from nematic fluids to soft glasses) as well as active matter and turbulent phenomena. Flowing matter is a subject at the crossroads between physics, mathematics, chemistry, engineering, biology and earth sciences, and relies on a multidisciplinary approach to describe the emergence of macroscopic behaviours in a system from the coordinated dynamics of its microscopic constituents. Depending on the microscopic interactions, an assembly of molecules or of mesoscopic particles can flow like a simple Newtonian fluid, deform elastically like a solid, or behave in a complex manner. When the internal constituents are active, as for biological entities, one generally observes complex large-scale collective motions. The phenomenology is further complicated by the invariable tendency of fluids to display chaos at large scales or when stirred strongly enough. This volume presents several research topics that address these phenomena, encompassing the traditional micro-, meso- and macro-scale descriptions, and contributes to our understanding of the fundamentals of flowing matter. This book is the legacy of the COST Action MP1305 “Flowing Matter”.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas is presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not, and can be used as an adequate alternative to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
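    The error-estimation idea can be illustrated with two ordinary Gauss-Legendre rules, the higher-order rule standing in for the averaged formula (the actual construction of generalized averaged rules, which requires the Jacobi matrix of the measure, is not shown here; nodes and weights below are standard tabulated values).

```python
import math

# Gauss-Legendre nodes/weights on [-1, 1] (standard tabulated values)
G2 = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]
G3 = [(-math.sqrt(3 / 5), 5 / 9), (0.0, 8 / 9), (math.sqrt(3 / 5), 5 / 9)]

def quad(rule, f):
    """Apply a quadrature rule given as (node, weight) pairs."""
    return sum(w * f(x) for x, w in rule)

f = math.exp
approx = quad(G2, f)
# Difference from a higher-order rule estimates the error of G2; an
# averaged Gaussian formula plays this role when a real positive
# Gauss-Kronrod extension does not exist.
estimate = abs(quad(G3, f) - approx)
true_error = abs((math.e - 1 / math.e) - approx)   # exact integral of exp on [-1, 1]
```

    For this smooth integrand the estimate tracks the true error closely (both are near 8e-3).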

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a wide variety of quadrature rules. Its aim is to provide an overview of recent developments in the analysis of quadrature rules; the computation of error estimates and novel applications are also described.

    On Improving Stochastic Simulation for Systems Biology

    Mathematical modeling and computer simulation are powerful approaches for understanding the complexity of biological systems. In particular, computer simulation represents a strong validation and fast hypothesis-verification tool. Over the years, several successful attempts have been made to simulate complex biological processes such as metabolic pathways, gene regulatory networks and cell signaling pathways. These processes are stochastic in nature, and furthermore they are characterized by evolution on multiple time scales and great variability in the population sizes of molecules. The best-known method for capturing the random time evolution of well-stirred chemically reacting systems is Gillespie's Stochastic Simulation Algorithm (SSA). This Monte Carlo method generates exact realizations of the state of the system by stochastically determining when the next reaction will occur and which reaction it will be. Most of its assumptions and hypotheses are clearly simplifications, but in many cases the method has proved useful for capturing the randomness typical of realistic biological systems. Unfortunately, Gillespie's stochastic simulation method is often slow in practice. This has posed a great challenge and motivated the development of new efficient methods able to simulate stochastic and multiscale biological systems. In this thesis we address the problem of simulating metabolic experiments and develop efficient simulation methods for well-stirred chemically reacting systems. We show how a Systems Biology approach can provide a cheap, fast and powerful method for validating models proposed in the literature. In the present case, we specified the model of the SRI photocycle proposed by Hoff et al. in a suitably developed simulator. This simulator was specifically designed to reproduce in silico wet-lab experiments performed on metabolic networks, with several possible controls exerted on them by the operator.
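    The Direct Method variant of the SSA can be sketched as follows: at each step, the time to the next reaction is exponential with rate equal to the total propensity, and the reaction fired is chosen with probability proportional to its propensity. The birth-death model below is illustrative, not a model from the thesis.

```python
import random

def gillespie_dm(x, reactions, t_end, rng):
    """Gillespie's Direct Method for a well-stirred system.

    `x` is the state (molecule counts); `reactions` is a list of
    (propensity_fn, state_change) pairs. Each step draws WHEN the next
    reaction occurs (exponential with rate a0) and WHICH reaction it is
    (proportional to its propensity). Minimal sketch.
    """
    t = 0.0
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0:                        # no reaction can fire
            break
        t += rng.expovariate(a0)           # time to next reaction
        r, acc = rng.random() * a0, 0.0    # pick reaction j with prob a_j / a0
        for (_, change), a in zip(reactions, props):
            acc += a
            if r < acc:
                x = [xi + ci for xi, ci in zip(x, change)]
                break
    return x

# birth-death: 0 -> A at rate 10; A -> 0 at rate 0.1 per molecule
# (steady-state mean of 100 molecules)
reactions = [(lambda x: 10.0, [1]), (lambda x: 0.1 * x[0], [-1])]
final = gillespie_dm([0], reactions, t_end=100.0, rng=random.Random(1))
```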
Thanks to this, we proved that the screened model correctly explains many light responses, but it was unable to explain some critical experiments, due to unresolvable time-scale problems. This confirms that our simulator is useful for simulating metabolic experiments. Furthermore, it can be downloaded at http://sourceforge.net/projects/gillespie-qdc. In order to accelerate the SSA, we first proposed a data-parallel implementation, on general-purpose graphics processing units, of a revised version of Gillespie's First Reaction Method. Simulations performed on a GeForce 8600M GS graphics card with 16 stream processors showed that the parallel computation halves the execution time, and this performance scales with the number of steps of the simulation. We also highlighted some specific problems of the programming environment when executing non-trivial general-purpose applications. In conclusion, we demonstrated the extreme computational power of these low-cost and widespread technologies, but the limitations that emerged show that we are still far from truly general-purpose applications on GPUs. In our investigation we also attempted to achieve higher simulation speed by focusing on tau-leaping methods. We observed that these methods share a common basic algorithmic convention: the pre-computation of the information necessary to estimate the size of the leap and the number of reactions that will fire in it. Often these pre-processing operations are used to avoid negative populations. The computational cost of performing them is usually proportional to the size of the model (i.e. the number of reactions), so larger models incur larger computational costs. The pre-processing operations yield very efficient simulation when leaps are long and many reactions fire; on the contrary, they are a burden when leaps are short and few reactions occur.
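    A bare-bones tau-leap step shows the contrast with the exact SSA: propensities are frozen over the leap and each reaction fires a Poisson number of times, with none of the pre-computed leap-size selection or negative-population safeguards discussed above. This is an illustrative sketch (with the same hypothetical birth-death model), not the thesis's code.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate (Knuth's method; fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_step(x, reactions, tau, rng):
    """One tau-leap: fire each reaction Poisson(a_j(x) * tau) times,
    holding propensities fixed over the leap. No leap-size selection
    and no negative-population safeguard are included here."""
    counts = [poisson(a(x) * tau, rng) for a, _ in reactions]
    return [
        xi + sum(k * change[i] for k, (_, change) in zip(counts, reactions))
        for i, xi in enumerate(x)
    ]

# birth-death: 0 -> A at rate 10; A -> 0 at rate 0.1 per molecule
reactions = [(lambda x: 10.0, [1]), (lambda x: 0.1 * x[0], [-1])]
state = [100]                      # start at the steady-state mean
rng = random.Random(2)
for _ in range(50):
    state = tau_leap_step(state, reactions, tau=0.1, rng=rng)
```

    With a short leap like tau = 0.1, each step fires only a couple of reactions, which is exactly the regime where per-leap pre-processing costs dominate.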
To deal efficiently with the latter cases, we proposed a method that departs from this trend. The SSALeaping method (SSAL for short) is a new method that lies between the Direct Method (DM) and tau-leaping. SSALeaping adaptively builds leaps and updates the system state stepwise. Unlike methods such as Modified tau-leaping (MTL), SSAL neither shifts from tau-leaping to DM nor pre-selects the largest leap time consistent with the leap condition. Moreover, whereas MTL prevents negative populations by separating critical and non-critical reactions, SSAL generates the reactions to fire sequentially, verifying the leap condition after each reaction selection. We proved that a reaction overdraws one of its reactants if and only if the leap condition is violated; this makes it impossible for a population to become negative, because SSAL stops the leap generation in advance. To test the accuracy and the performance of our method we performed a large number of simulations on realistic biological models. The tests aimed to span the number of reactions fired in a leap and the number of reactions of the system as widely as possible, sometimes over orders of magnitude. Results showed that our method performs better than MTL in many of the tested cases, but not in all. To increase the number of models eligible to be simulated efficiently, we then exploited the complementarity that emerged between SSAL and MTL and proposed a new adaptive method, called Adaptive Modified SSALeaping (AMS). During the simulation, this method switches between SSALeaping and Modified tau-leaping according to conditions on the number of reactions of the model and the predicted number of reactions firing in a leap. We found, both theoretically and experimentally, how to estimate the number of reactions that will fire in a leap and the threshold that determines the switch from one method to the other and vice versa.
Results obtained from realistic biological models showed that in practice AMS performs better than SSAL and MTL, increasing the number of models eligible to be simulated efficiently; indeed, the method correctly selects the better algorithm between SSAL and MTL according to the case. In this thesis we also investigated other new parallelization techniques. The parallelization of biological systems has stimulated the interest of many researchers because the nature of these systems is parallel and sometimes distributed; however, Gillespie's SSA is strictly sequential. We presented a novel exact formulation of the SSA based on the idea of partitioning the volume. We proved the equivalence between our method and DM, and we gave a simple test to show its accuracy in practice. We then proposed a variant of SSALeaping based on the partitioning of the volume, called Partitioned SSALeaping. The main feature we pointed out is that the dynamics of a system in a leap can be obtained by composing the dynamics processed by each sub-volume of the partition. This form of independence gives a different view with respect to existing methods. We have only tested the method on a simple model, showing that it accurately matches the results of DM independently of the number of sub-volumes in the partition. This confirms that the method works and that the independence is effective. We have not yet given a parallel implementation of this method, because this work is still in progress and much remains to be done. Nevertheless, Partitioned SSALeaping is a promising approach for a future parallelization on multi-core (e.g. GPU) or many-core (e.g. cluster) technologies.