    BioNessie - a grid enabled biochemical networks simulation environment

    The simulation of biochemical networks provides insight into, and understanding of, the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator that has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended, through Grid technologies, to exploit a wide variety of high-performance compute resources across the UK in support of larger-scale simulations.
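
    The abstract does not describe BioNessie's interfaces, but the kind of computation such a simulator performs can be sketched minimally: numerically integrating the kinetics of a reaction network. In the Python sketch below, the species, reactions, and rate constants are all illustrative assumptions, not BioNessie code.

```python
# Minimal mass-action simulation of a toy enzyme-substrate network:
# E + S <-> ES -> E + P. All rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

K_F, K_R, K_CAT = 1.0, 0.5, 0.3  # binding, unbinding, catalysis rates

def rates(t, y):
    e, s, es, p = y
    v_bind = K_F * e * s       # E + S -> ES
    v_unbind = K_R * es        # ES -> E + S
    v_cat = K_CAT * es         # ES -> E + P
    return [v_unbind + v_cat - v_bind,   # dE/dt
            v_unbind - v_bind,           # dS/dt
            v_bind - v_unbind - v_cat,   # dES/dt
            v_cat]                       # dP/dt

# Initial concentrations: E = 1, S = 10, ES = P = 0.
sol = solve_ivp(rates, (0.0, 50.0), [1.0, 10.0, 0.0, 0.0])
print("final product concentration:", sol.y[3, -1])
```

    Grid-scale deployments, as described in the paper, would run many such integrations (e.g. parameter scans) in parallel; the numerical core of each job looks like this.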

    RELEASE: A High-level Paradigm for Reliable Large-scale Server Software

    Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes the progress made in its first six months. The project's aim is to scale Erlang's radical concurrency-oriented programming paradigm so that reliable general-purpose software, such as server-based systems, can be built on massively parallel machines. Erlang's computation and reliability models are inherently scalable, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.
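
    As a rough illustration of the concurrency-oriented, "let it crash" style the abstract refers to, here is a minimal Python sketch of a supervised actor with a mailbox. This is an analogy only: RELEASE itself targets the Erlang VM and SD Erlang, and none of the names below come from the project.

```python
# Actor-style sketch of Erlang's supervision/restart reliability model,
# written in Python purely for illustration.
import queue
import threading

def worker(mailbox):
    """Process messages from the mailbox, like an Erlang receive loop."""
    while True:
        msg = mailbox.get()
        if msg == "stop":
            return
        if msg == "boom":
            raise RuntimeError("simulated fault")   # crash, don't defend
        print("processed:", msg)

def supervisor(mailbox):
    """Restart the worker whenever it crashes, supervisor-style."""
    while True:
        try:
            worker(mailbox)
            return                                  # normal termination
        except Exception as exc:
            print(f"worker crashed ({exc!r}); restarting")

mailbox = queue.Queue()
t = threading.Thread(target=supervisor, args=(mailbox,))
t.start()
for msg in ["job-1", "boom", "job-2", "stop"]:
    mailbox.put(msg)
t.join()
```

    The scalability problem RELEASE addresses arises when millions of such lightweight processes run across many cores and nodes, which threads in this sketch deliberately do not capture.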

    Investigating biocomplexity through the agent-based paradigm.

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment and the flexible, heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches, which assume component homogeneity in order to relate system observables through mathematical equations. While the homogeneity condition does not cause a loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems of dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems are a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines, or agents, to simulate macroscopic properties of a system from the bottom up. By recognizing the heterogeneity condition, these approaches offer suitable ontologies for the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms; the integration of an agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its nascence, agent-based modelling has been used to model biological complexity across a broad range of scales, from cells to societies. In this article, we explore the reasons that make agent-based modelling the most precise approach for modelling biological systems that tend to be non-linear and complex.
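
    To make the bottom-up idea concrete, here is a minimal agent-based sketch in Python. The agents, their heterogeneous susceptibilities, and the activation rule are invented for illustration: each agent carries its own state and parameters, pairwise interactions drive the dynamics, and the macroscopic observable is measured over the population rather than assumed in an equation.

```python
# Minimal agent-based sketch: heterogeneous agents interacting pairwise,
# with a macroscopic observable (the fraction of "active" agents)
# emerging bottom-up. All parameters are illustrative.
import random

random.seed(1)
N, STEPS = 200, 2000

class Agent:
    def __init__(self):
        self.susceptibility = random.uniform(0.0, 1.0)  # per-agent heterogeneity
        self.active = False

agents = [Agent() for _ in range(N)]
agents[0].active = True                                 # seed the dynamics

for _ in range(STEPS):
    a, b = random.sample(agents, 2)                     # one local interaction
    if a.active and random.random() < b.susceptibility:
        b.active = True                                 # state spreads by contact

print("active fraction:", sum(ag.active for ag in agents) / N)
```

    A continuum model of the same process would replace the per-agent susceptibilities with a single averaged rate; the heterogeneity the article emphasises lives precisely in what that averaging throws away.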

    Co-Swarming and Local Collapse: Quorum Sensing Conveys Resilience to Bacterial Communities by Localizing Cheater Mutants in Pseudomonas aeruginosa

    Background: Members of swarming bacterial consortia compete for nutrients but also use a cooperation mechanism called quorum sensing (QS), which relies on chemical signals as well as other secreted products ("public goods") necessary for swarming. Deleting various genes of this machinery leads to cheater mutants impaired in various aspects of swarming cooperation. Methodology/Principal Findings: Pairwise consortia made of Pseudomonas aeruginosa, its QS mutants, and B. cepacia cells show that an interspecies consortium can "combine the skills" of its participants, so that the strains can together cross barriers that neither could cross alone. In contrast, deleterious mutants are excluded from consortia either by competition or by local population collapse. According to our modelling, both scenarios are consequences of the QS signalling mechanism itself. Conclusions/Significance: The results indirectly explain why it is advantageous for bacteria to maintain QS systems that can cross-talk among different species and, conversely, why certain QS mutants can be abundant in isolated niches.
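
    A toy version of the cheater dynamics can be written in a few lines. In the Python sketch below, the quorum threshold, cost, and benefit parameters are illustrative assumptions, not the paper's model: cooperators pay to produce signal and public goods while cheaters share the benefit for free, and once cheaters dilute the cooperators below the quorum, public-goods production stops and the local population collapses.

```python
# Toy quorum-sensing model (illustrative only, not the paper's model).
# Cooperators secrete signal and, above a quorum threshold, public goods;
# cheaters benefit from the goods but never pay the production cost.
QUORUM, COST, BENEFIT, DEATH = 50.0, 0.01, 0.03, 0.02

def step(coop, cheat):
    signal = coop                         # each cooperator secretes one unit
    goods = coop if signal >= QUORUM else 0.0
    growth = BENEFIT * goods / max(coop + cheat, 1.0)
    coop += coop * (growth - COST - DEATH)
    cheat += cheat * (growth - DEATH)     # free-rides: no production cost
    return max(coop, 0.0), max(cheat, 0.0)

coop, cheat = 100.0, 10.0
for t in range(300):
    coop, cheat = step(coop, cheat)
print(f"cooperators: {coop:.1f}, cheaters: {cheat:.1f}")
```

    Because cheaters always grow faster than cooperators while goods are produced, they eventually drive the cooperator count below the quorum, after which both populations decline: a minimal caricature of the "local collapse" mechanism described above.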

    Methods and Tools for the Microsimulation and Forecasting of Household Expenditure

    This paper reviews potential methods and tools for the microsimulation and forecasting of household expenditure. It begins with a discussion of a range of approaches to forecasting household populations via agent-based modelling tools, then evaluates approaches to modelling household expenditure. A prototype implementation is described, and the paper concludes with an outline of an approach to be pursued in future work.
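
    As a flavour of what such a microsimulation involves, the following Python sketch ages a population of individual households year by year and aggregates their projected expenditure. The synthetic household sample, growth rate, and expenditure shares are all hypothetical, not taken from the paper's prototype.

```python
# Minimal household-expenditure microsimulation: project each household
# forward under an assumed income-growth and propensity-to-spend model.
# All coefficients and the household sample are illustrative assumptions.
import random

random.seed(0)
households = [{"income": random.lognormvariate(10.0, 0.5),
               "size": random.randint(1, 5)} for _ in range(1000)]

GROWTH, BASE_SHARE, SIZE_EFFECT = 0.02, 0.55, 0.05

def expenditure(h):
    """Spend a share of income that rises with household size."""
    share = min(BASE_SHARE + SIZE_EFFECT * (h["size"] - 1), 0.95)
    return share * h["income"]

for year in range(2025, 2030):
    total = sum(expenditure(h) for h in households)
    print(year, f"aggregate expenditure: {total:,.0f}")
    for h in households:
        h["income"] *= 1.0 + GROWTH     # simple annual uprating step
```

    The agent-based extensions discussed in the paper would additionally evolve the household population itself (formation, dissolution, ageing) rather than only uprating incomes.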

    On the Design of Generalist Strategies for Swarms of Simulated Robots Engaged in Task-allocation Scenarios

    This study focuses on issues related to the evolutionary design of task-allocation mechanisms for swarm robotics systems whose agents are potentially capable of performing different tasks. Task allocation in swarm robotics refers to a process that distributes robots across concurrent tasks without any central or hierarchical control. In this paper, we investigate a scenario with two concurrent tasks (i.e. foraging and nest patrolling) and two environments in which the task priorities vary. We are interested in generating successful groups made of behaviourally plastic agents (i.e. agents capable of carrying out different tasks in different environmental conditions), which can adapt their task preferences to those of their group mates as well as to the environmental conditions. We compare the results of three evolutionary design approaches, which differ in the agents' genetic relatedness (i.e. groups of clones versus groups of unrelated individuals) and/or the selection criteria used to create new populations (i.e. single- versus multi-objective evolutionary optimisation algorithms). Our results indicate that the evolutionary approach based on genetically unrelated individuals combined with a multi-objective evolutionary optimisation algorithm has a better success rate than an evolutionary approach based on genetically related agents. Moreover, the multi-objective approach, when compared to a single-objective approach with genetically unrelated individuals, significantly limits the tendency towards task specialisation by favouring the emergence of generalist agents, without introducing extra computational costs. The significance of this result is discussed in view of the relationship between individual behavioural skills and swarm effectiveness.
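
    The flavour of multi-objective selection for task allocation can be illustrated with a very small Pareto-style evolutionary loop. In the Python sketch below, the genome encoding, task demands, and mutation scheme are assumptions for illustration, not the study's actual algorithms: each genome is a set of per-agent response thresholds, and the two objectives score foraging and patrolling coverage separately so that neither task dominates selection.

```python
# Illustrative sketch: evolving per-agent task thresholds for a swarm that
# must split between foraging and patrolling. Encoding, demands, and
# mutation are assumptions for illustration, not the study's setup.
import random

random.seed(2)
SWARM, POP, GENS = 20, 30, 50
NEED_F, NEED_P = 14, 6                  # agents demanded per task

def evaluate(genome):
    """Score the two tasks separately so the objectives trade off."""
    foragers = sum(1 for thr in genome if random.random() > thr)
    patrollers = SWARM - foragers
    return (min(foragers, NEED_F) / NEED_F,     # foraging coverage
            min(patrollers, NEED_P) / NEED_P)   # patrolling coverage

def dominates(a, b):
    """Pareto dominance between two objective tuples."""
    return all(x >= y for x, y in zip(a, b)) and a != b

pop = [[random.random() for _ in range(SWARM)] for _ in range(POP)]
for _ in range(GENS):
    scored = [(evaluate(g), g) for g in pop]
    front = [g for s, g in scored                       # non-dominated set
             if not any(dominates(s2, s) for s2, _ in scored)]
    pop = [[min(max(thr + random.gauss(0.0, 0.05), 0.0), 1.0)
            for thr in random.choice(front)] for _ in range(POP)]

best = max(pop, key=lambda g: sum(evaluate(g)))
print("foraging / patrolling coverage:", evaluate(best))
```

    A single-objective variant would collapse the two coverage scores into one sum, which is exactly the setting in which specialist solutions can crowd out the generalist agents the study favours.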