
    Parallel Load Balancing Strategies for Ensembles of Stochastic Biochemical Simulations

    The evolution of biochemical systems in which some chemical species are present in only small numbers of molecules is strongly influenced by discrete and stochastic effects that cannot be accurately captured by continuous, deterministic models. The budding yeast cell cycle provides an excellent example of the need to account for stochastic effects in biochemical reactions. To obtain statistics of cell cycle progression, a stochastic simulation algorithm must be run thousands of times with different initial conditions and parameter values, and managing the computational expense involved requires the large ensemble of runs to be executed in parallel. The CPU time for each individual task is unknown before execution, so a simple strategy of assigning an equal number of tasks to each processor can lead to considerable work imbalance and loss of parallel efficiency. Moreover, deterministic analysis approaches are ill-suited for assessing the effectiveness of load balancing algorithms in this context. This paper presents a new probabilistic framework for analyzing the performance of dynamic load balancing algorithms applied to large ensembles of stochastic biochemical simulations. Two particular load balancing strategies (point-to-point and all-redistribution) are discussed in detail, and simulation results with a stochastic budding yeast cell cycle model confirm the theoretical analysis. While this work is motivated by cell cycle modeling, the proposed analysis framework is general and can be applied directly to any ensemble simulation of biological systems where many tasks are mapped onto each processor and the individual compute times vary considerably among tasks.
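
    As a rough illustration of why dynamic task assignment matters here (this is an assumption-laden sketch, not the paper's MPI-based point-to-point or all-redistribution strategies), the following Python snippet lets idle workers pull the next simulation task as soon as they finish instead of receiving a fixed block up front; run_realization is a hypothetical stand-in for one stochastic simulation run.

```python
# Illustrative sketch (not the paper's MPI point-to-point or all-redistribution
# strategies): with per-task run times unknown in advance, letting idle workers
# pull the next task (chunksize=1) balances load better than assigning equal
# static blocks up front. run_realization is a hypothetical stand-in for one
# stochastic simulation run (e.g. one SSA trajectory of the cell cycle model).
import random
import multiprocessing as mp

def run_realization(seed):
    """Stand-in for one stochastic simulation; its run time varies from task to task."""
    rng = random.Random(seed)
    steps = rng.randint(10_000, 500_000)   # unknown before execution
    x = 0.0
    for _ in range(steps):
        x += rng.random() - 0.5
    return x

def dynamic_ensemble(n_tasks, n_workers):
    # chunksize=1 means each worker fetches a new task as soon as it finishes,
    # which is the essence of dynamic load balancing for this kind of ensemble.
    with mp.Pool(n_workers) as pool:
        return pool.map(run_realization, range(n_tasks), chunksize=1)

if __name__ == "__main__":
    results = dynamic_ensemble(n_tasks=1000, n_workers=8)
    print(len(results), "realizations completed")
```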

    Synthesis and Stochastic Assessment of Cost-Optimal Schedules

    We present a novel approach to synthesizing good schedules for a class of scheduling problems slightly more general than the scheduling problem FJm,a|gpr,r_j,d_j|early/tardy. The idea is to prime the schedule synthesizer with stochastic information more meaningful than performance factors, with the objective of minimizing the expected cost caused by storage or delay. The priming information is obtained by stochastic simulation of the system environment, and the generated schedules are assessed again by simulation. The approach is demonstrated on a non-trivial scheduling problem from lacquer production. The experimental results show that, in all considered scenarios, our approach achieves better results than the extended-processing-times approach.
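
    A minimal sketch of the simulate, synthesize, and assess loop described above, under assumed names and a toy cost model (storage cost for early completion, delay cost for tardiness); the least-slack-first synthesizer is an illustrative placeholder, not the paper's schedule synthesizer.

```python
# Hedged sketch of the simulate -> synthesize -> assess loop; the cost model and
# the least-slack-first dispatcher are illustrative assumptions only.
import random

def simulate_duration(job, rng):
    """Stand-in for stochastic simulation of the environment for one job."""
    return max(0.0, rng.gauss(job["mean"], job["std"]))

def prime(jobs, rng, samples=200):
    """Priming step: estimate expected processing times by simulation."""
    for job in jobs:
        job["expected"] = sum(simulate_duration(job, rng) for _ in range(samples)) / samples
    return jobs

def synthesize(jobs):
    """Toy synthesizer: least-slack-first ordering using the primed expected durations."""
    return sorted(jobs, key=lambda j: j["due"] - j["expected"])

def assess(schedule, rng, runs=1000, early_cost=1.0, tardy_cost=5.0):
    """Assessment step: estimate expected early/tardy cost of the schedule by simulation."""
    total = 0.0
    for _ in range(runs):
        t = 0.0
        for job in schedule:
            t += simulate_duration(job, rng)
            total += early_cost * max(0.0, job["due"] - t) + tardy_cost * max(0.0, t - job["due"])
    return total / runs

rng = random.Random(0)
jobs = [{"name": f"j{i}", "mean": 4 + i, "std": 1.0, "due": 10 * (i + 1)} for i in range(5)]
schedule = synthesize(prime(jobs, rng))
print("expected early/tardy cost:", round(assess(schedule, rng), 2))
```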

    Configuration of Distributed Message Converter Systems using Performance Modeling

    Finding a configuration of a distributed system that satisfies performance goals is a complex search problem involving many design parameters, such as hardware selection, job distribution, and process configuration. Performance models are a powerful tool for analysing potential system configurations; however, their evaluation is expensive, so only a limited number of candidate configurations can be evaluated. In this paper we present a systematic method for finding a satisfactory configuration with feasible effort, based on a two-step approach: first, a hardware configuration is determined using performance estimates; then the software configuration is incrementally optimized by evaluating Layered Queueing Network models. We applied this method to the design of high-performance EDI converter systems in the financial domain, where growing message volumes must be handled due to the increasing importance of B2B interaction.
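
    The two-step idea might be sketched as follows, with evaluate_lqn as a hypothetical stand-in for an expensive Layered Queueing Network evaluation and made-up capacity numbers: a hardware configuration is first chosen from a cheap analytic estimate, and the software configuration (here just a thread count) is then improved one incremental change at a time, so the expensive model is evaluated only for a small number of candidates.

```python
# Hedged sketch of the two-step method; evaluate_lqn is a hypothetical stand-in
# for solving a Layered Queueing Network model, and the capacity numbers are
# made up for illustration.
def estimate_hardware(msg_rate, per_node_capacity):
    """Step 1: coarse analytic estimate of how many nodes are needed."""
    return -(-msg_rate // per_node_capacity)  # ceiling division

def evaluate_lqn(config):
    """Expensive model evaluation (stand-in): returns a predicted mean response time."""
    load = config["msg_rate"] / (config["nodes"] * min(config["threads"], 8) * 80.0)
    return float("inf") if load >= 1.0 else 1.0 / (1.0 - load)

def optimize_software(config, max_threads=32):
    """Step 2: incrementally add converter threads while the model predicts an improvement."""
    best = evaluate_lqn(config)
    while config["threads"] < max_threads:
        trial = dict(config, threads=config["threads"] + 1)
        cost = evaluate_lqn(trial)
        if cost >= best:          # stop at the first change the model rejects
            break
        config, best = trial, cost
    return config, best

config = {"msg_rate": 500, "nodes": estimate_hardware(500, 80), "threads": 1}
config, response_time = optimize_software(config)
print(config, round(response_time, 3))
```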

    MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and large, scalable compute and data-analysis resources because of the high computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computing environment that supports productive and reproducible modeling can be prohibitive for practitioners in systems biology. This creates a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions that can be addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for the development of sharable and reproducible distributed parallel computational experiments.
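
    The general pattern behind such a platform, dispatching an embarrassingly parallel ensemble of realizations over an IPython cluster from an interactive session, can be sketched with ipyparallel as below; this is only an assumption-laden illustration, not MOLNs or the PyURDME API, and run_realization is a placeholder for one reaction-diffusion realization.

```python
# Assumption-laden sketch, not MOLNs/PyURDME code: it only shows the general
# pattern of fanning an embarrassingly parallel ensemble out over an IPython
# (ipyparallel) cluster from an interactive session. Assumes a cluster has
# already been started (e.g. with `ipcluster start`).
import ipyparallel as ipp

def run_realization(seed):
    """Placeholder for one spatial stochastic (reaction-diffusion) realization."""
    import random
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000))

rc = ipp.Client()
view = rc.load_balanced_view()                  # dynamic scheduling across engines
async_result = view.map_async(run_realization, range(1000))
results = async_result.get()                    # gather all realizations
print(len(results), "realizations collected")
```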

    Stochastic accumulation of feature information in perception and memory

    It is now well established that the time course of perceptual processing influences the first second or so of performance in a wide variety of cognitive tasks. Over the last 20 years, there has been a shift from modeling the speed at which a display is processed to modeling the speed at which different features of the display are perceived, and to formalizing how this perceptual information is used in decision making. The first of these models (Lamberts, 1995) was implemented to fit the time course of performance in a speeded perceptual categorization task and assumed a simple stochastic accumulation of feature information. Subsequently, similar approaches have been used to model performance in a range of cognitive tasks including identification, absolute identification, perceptual matching, recognition, visual search, and word processing, again assuming a simple stochastic accumulation of feature information from both the stimulus and representations held in memory. These models are typically fit to data from signal-to-respond experiments, in which the effects of stimulus exposure duration on performance are examined, but response times (RTs) and RT distributions have also been modeled. In this article, we review this approach and explore the insights it has provided into the interplay between perceptual processing, memory retrieval, and decision making in a variety of tasks. In so doing, we highlight how such approaches can continue to contribute usefully to our understanding of cognition.
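
    As an illustration of the basic idea only (not any of the fitted models cited above), the following sketch assumes each stimulus feature becomes available after an exponentially distributed latency, so the probability of a correct speeded response grows with exposure duration as more features are accumulated; all parameter values are arbitrary.

```python
# Illustrative sketch only, not one of the fitted models cited above: each of a
# stimulus's features becomes available after an exponentially distributed
# latency, and accuracy grows with exposure duration as more features arrive.
import random

def p_correct(exposure, n_features=4, rate=8.0, base=0.5, runs=20_000, seed=1):
    """Monte Carlo estimate of accuracy after a given exposure duration (seconds)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(runs):
        # Number of features perceived before the response signal.
        k = sum(rng.expovariate(rate) <= exposure for _ in range(n_features))
        # Interpolate from guessing (no features) to perfect accuracy (all features).
        correct += rng.random() < base + (1.0 - base) * k / n_features
    return correct / runs

for t in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(f"exposure {t:.2f}s -> accuracy {p_correct(t):.3f}")
```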

    A new tool for the performance analysis of massively parallel computer systems

    We present a new tool, GPA, that can generate key performance measures for very large systems. Based on solving systems of ordinary differential equations (ODEs), this method of performance analysis is far more scalable than stochastic simulation. The GPA tool is the first to produce higher-moment analysis from a differential-equation approximation, which in many cases is essential for obtaining an accurate performance prediction. We identify so-called switch points as the source of error in the ODE approximation. We investigate switch-point behaviour in several large models and observe that, as the scale of the model increases, the ODE performance prediction generally improves in accuracy. For the variance measure, we are able to justify theoretically that, in the limit of model scale, the ODE approximation can be expected to tend to the actual variance of the model.
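
    A toy comparison (not GPA itself, and on a far smaller scale) of the two approaches: for a simple birth-death process with assumed rates, the mean of many stochastic simulation runs is compared against the much cheaper ODE (mean-field) approximation dx/dt = birth - death * x.

```python
# Toy comparison, not GPA itself: for a simple birth-death process with assumed
# rates, the mean of many stochastic simulation runs is compared against the much
# cheaper ODE (mean-field) approximation dx/dt = birth - death * x.
import random
from scipy.integrate import solve_ivp

BIRTH, DEATH, X0, T_END = 50.0, 0.5, 10, 10.0

def ssa_run(rng):
    """One Gillespie-style simulation of the birth-death process, returning x at T_END."""
    t, x = 0.0, X0
    while True:
        total_rate = BIRTH + DEATH * x
        t += rng.expovariate(total_rate)
        if t > T_END:
            return x
        x += 1 if rng.random() < BIRTH / total_rate else -1

rng = random.Random(42)
ssa_mean = sum(ssa_run(rng) for _ in range(2000)) / 2000

ode = solve_ivp(lambda t, y: [BIRTH - DEATH * y[0]], (0.0, T_END), [float(X0)])
print(f"SSA mean: {ssa_mean:.1f}   ODE prediction: {ode.y[0][-1]:.1f}")
```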