
    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC: to understand the aspects of MTC applications that can be used to characterize the domain, and to understand the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication or data intensive, and may comprise very short tasks. Hardware and software for MTC must therefore be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited to MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
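
    The task-graph structure described in this abstract can be made concrete with a small sketch. The Python fragment below is illustrative only (the Task record, file names, and dispatch loop are assumptions of this example, not taken from the report): it models an MTC run as a set of short, discrete tasks whose edges are explicit input/output dependencies, and it dispatches each task as soon as its inputs exist, since low dispatch overhead is exactly what MTC middleware must provide.

        from collections import namedtuple

        # Hypothetical task record: a name plus the files it reads and writes.
        Task = namedtuple("Task", ["name", "inputs", "outputs"])

        tasks = [
            Task("t1", inputs=[], outputs=["a.dat"]),
            Task("t2", inputs=["a.dat"], outputs=["b.dat"]),
            Task("t3", inputs=["a.dat"], outputs=["c.dat"]),
            Task("t4", inputs=["b.dat", "c.dat"], outputs=["d.dat"]),
        ]

        def run_mtc(tasks):
            """Dispatch every task whose input files have already been produced."""
            produced, pending = set(), list(tasks)
            while pending:
                ready = [t for t in pending if all(i in produced for i in t.inputs)]
                if not ready:
                    raise RuntimeError("cycle or missing input in task graph")
                for t in ready:
                    print("dispatch", t.name)  # a real system hands this to a low-overhead dispatcher
                    produced.update(t.outputs)
                    pending.remove(t)

        run_mtc(tasks)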

    Parallel Load Balancing Strategies for Ensembles of Stochastic Biochemical Simulations

    The evolution of biochemical systems in which some chemical species are present in only small numbers of molecules is strongly influenced by discrete and stochastic effects that cannot be accurately captured by continuous and deterministic models. The budding yeast cell cycle provides an excellent example of the need to account for stochastic effects in biochemical reactions. To obtain statistics of cell cycle progression, a stochastic simulation algorithm must be run thousands of times with different initial conditions and parameter values. To manage the computational expense involved, the large ensemble of runs needs to be executed in parallel. The CPU time for each individual task is unknown before execution, so a simple strategy of assigning an equal number of tasks per processor can lead to considerable work imbalance and loss of parallel efficiency; moreover, deterministic analysis approaches are ill-suited for assessing the effectiveness of load balancing algorithms in this context. Since generating an ensemble of stochastic simulation results is computationally intensive, it is important to make efficient use of computer resources. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms when applied to large ensembles of stochastic biochemical simulations. Two particular load balancing strategies (point-to-point and all-redistribution) are discussed in detail. Simulation results with a stochastic budding yeast cell cycle model confirm the theoretical analysis. While this work is motivated by cell cycle modeling, the proposed analysis framework is general and can be applied directly to any ensemble simulation of biological systems in which many tasks are mapped onto each processor and the individual compute times vary considerably among tasks.
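
    As a rough illustration of what dynamic rebalancing buys in this setting, the sketch below (an assumption-laden toy, not the paper's model: task runtimes are drawn from an exponential distribution, and a tiny event-driven loop stands in for real processors) compares a static equal-count assignment with a point-to-point strategy in which an idle processor requests one task from the currently most loaded processor.

        import heapq
        import random

        def simulate(n_procs=4, n_tasks=400, balance=True, seed=0):
            rng = random.Random(seed)
            runtimes = [rng.expovariate(1.0) for _ in range(n_tasks)]   # unknown before execution
            queues = [runtimes[p::n_procs] for p in range(n_procs)]     # equal task counts per processor
            events = [(0.0, p) for p in range(n_procs)]                 # (time processor becomes free, id)
            heapq.heapify(events)
            finish = [0.0] * n_procs
            while events:
                t, p = heapq.heappop(events)
                if not queues[p] and balance:
                    donor = max(range(n_procs), key=lambda q: len(queues[q]))
                    if len(queues[donor]) > 1:                          # point-to-point transfer of one task
                        queues[p].append(queues[donor].pop())
                if queues[p]:
                    dt = queues[p].pop()
                    finish[p] = t + dt
                    heapq.heappush(events, (t + dt, p))
            return max(finish)                                          # makespan of the ensemble

        print("static assignment :", round(simulate(balance=False), 1))
        print("point-to-point    :", round(simulate(balance=True), 1))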

    A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    Ensembles of simulations are employed to estimate the statistics of possible future states of a system and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalance and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor and the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. It is especially significant that a global decrease in load imbalance can be proved for the local rebalancing algorithms, since scalability concerns limit the global rebalancing algorithms on large processor counts. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
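
    The flavor of such a probabilistic analysis can be conveyed with a short Monte Carlo sketch (the runtime distribution, processor count, and the particular imbalance ratio below are assumptions of this illustration, not the paper's definitions): it estimates the expected imbalance of a static equal-count assignment and shows that the imbalance shrinks, but does not vanish, as more tasks are mapped onto each processor, which is why run-time rebalancing still pays off.

        import random
        import statistics

        def expected_imbalance(n_procs, tasks_per_proc, trials=2000, seed=1):
            """Mean of (max load / mean load - 1) over random task-time draws."""
            rng = random.Random(seed)
            ratios = []
            for _ in range(trials):
                loads = [sum(rng.expovariate(1.0) for _ in range(tasks_per_proc))
                         for _ in range(n_procs)]
                ratios.append(max(loads) / statistics.mean(loads) - 1.0)
            return statistics.mean(ratios)

        for k in (1, 10, 100):   # imbalance drops as each processor receives more tasks
            print(f"{k:4d} tasks/proc -> expected imbalance {expected_imbalance(16, k):.2f}")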

    Real-Time Task Migration for Dynamic Resource Management in Many-Core Systems

    Power Management Techniques for Data Centers: A Survey

    With the growing use of the Internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers. For this reason, managing the power consumption of data centers has become essential. In this paper, we highlight the need to achieve energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing the large power dissipation of data centers.
    Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation
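
    Of the techniques named in the keywords, DVFS is the easiest to make concrete. The sketch below is not from the survey; it only restates the standard first-order relation that dynamic power scales with switched capacitance, the square of supply voltage, and clock frequency, using hypothetical operating points.

        def dynamic_power(c_eff, voltage, freq_hz):
            """First-order dynamic power estimate: P ~ C * V^2 * f (watts)."""
            return c_eff * voltage ** 2 * freq_hz

        # Hypothetical operating points: scaling a core from 2.0 GHz at 1.0 V
        # down to 1.0 GHz at 0.8 V cuts dynamic power by roughly a factor of 3.
        full = dynamic_power(1e-9, 1.0, 2.0e9)
        slow = dynamic_power(1e-9, 0.8, 1.0e9)
        print(f"full speed: {full:.2f} W   scaled: {slow:.2f} W   ratio: {full / slow:.1f}x")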