
    Load-Balancing Models for Scheduling Divisible Load on Large Scale Data Grids

    In many data grid applications, data can be decomposed into multiple independent sub-datasets and distributed for parallel execution. This property has been successfully exploited using Divisible Load Theory (DLT), which has proven to be a powerful tool for modeling divisible load problems in large-scale data grids. Load balancing in such an environment plays a critical role in achieving high resource utilization, scheduling applications efficiently through joint consideration of communication and computation time. Several scheduling models have been studied, such as Constraint DLT (CDLT), Task Data Present (TDP) and Genetic Algorithm (GA), but no optimal solution has been reached. At the same time, effective schedulers are required to minimize not only the maximum completion time (makespan) of the jobs, but also the execution time of the schedulers themselves. This thesis proposes several load balancing models for scheduling divisible load on large-scale data grids where both processor and communication link speeds are heterogeneous. The proposed work can be decomposed into three stages. The first stage develops new DLT-based models for multiple-source scheduling; closed-form solutions for the load allocation are derived. The new models are called Adaptive DLT (ADLT) and A2DLT. In the second stage, an Iterative DLT (IDLT) model is proposed. Recursive numerical equations are derived to find the optimal workload assigned to each grid node, and closed-form solutions are derived for the optimal load allocation. Although the IDLT model is proposed for a single source, it has also been applied to the multiple-source case. The third stage integrates the proposed DLT-based models with the GA to address the long scheduler execution time; the integration of the proposed DLT model with a Simulated Annealing (SA) algorithm has also been developed. The experimental results show that the proposed models yield better performance than previous models in terms of makespan and scheduler execution time. The ADLT and A2DLT models reduce the makespan by 21% and 37% respectively compared to the CDLT model. The IDLT model is capable of producing an almost optimal solution for single-source scheduling with low time complexity. In addition, the integration of the proposed DLT model with the GA and SA algorithms has also significantly improved performance; SA is 64.70% better than GA in terms of makespan. Thus, the proposed models can balance processing loads efficiently and can be integrated into existing data grid schedulers to improve their performance.
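    To illustrate the kind of load allocation DLT produces in the simplest setting, the sketch below computes load fractions for a single source distributing work to heterogeneous workers over heterogeneous links, using the classical "equal finish time" recursion. This is a generic textbook DLT formulation, not the ADLT, A2DLT, or IDLT models derived in the thesis; the parameter names (w, z, t_cp, t_cm) are illustrative.

```python
# Minimal sketch of classical single-source Divisible Load Theory (DLT):
# worker i receives fraction alpha[i] of the load so that all workers
# finish at the same instant. w[i] is the inverse compute speed of
# worker i, z[i] the inverse speed of its link; t_cp and t_cm are unit
# computation and communication times. Illustrative only -- not the
# ADLT/A2DLT/IDLT formulas of the thesis.

def dlt_fractions(w, z, t_cp=1.0, t_cm=1.0):
    n = len(w)
    # Unnormalized fractions from the equal-finish-time recursion:
    # alpha[i+1] * (z[i+1]*t_cm + w[i+1]*t_cp) = alpha[i] * w[i] * t_cp
    ratios = [1.0]
    for i in range(n - 1):
        k = (w[i] * t_cp) / (z[i + 1] * t_cm + w[i + 1] * t_cp)
        ratios.append(ratios[-1] * k)
    total = sum(ratios)
    return [r / total for r in ratios]  # fractions sum to 1

if __name__ == "__main__":
    # Three heterogeneous workers: a faster CPU/link yields a larger share.
    print(dlt_fractions(w=[1.0, 2.0, 4.0], z=[0.5, 1.0, 2.0]))
```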

    Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through the use of parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular very fine-grained task parallelism. We also discuss the software management, code peer review and continuous integration testing required for a project of this complexity. Comment: EASC 2014 conference proceedings.
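    As a point of reference for the neighbor-searching discussion, the sketch below shows the conventional cell-list approach to building a neighbor list within a cutoff, the baseline that cluster- and SIMD-oriented schemes reorganize for modern hardware. It is a generic illustration in Python, not GROMACS code; names, the non-periodic box, and the 27-cell scan are assumptions of this sketch.

```python
# Generic cell-list neighbor search within a cutoff (illustrative only,
# not GROMACS code). Particles are binned into cells with edge >= cutoff,
# so each particle only needs to test the 27 surrounding cells instead
# of all N-1 other particles.
from collections import defaultdict
from itertools import product

def neighbor_list(positions, box, cutoff):
    ncell = [max(1, int(box[d] // cutoff)) for d in range(3)]
    cell_size = [box[d] / ncell[d] for d in range(3)]

    # Bin particles into cells.
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        key = tuple(min(int(p[d] / cell_size[d]), ncell[d] - 1) for d in range(3))
        cells[key].append(idx)

    pairs = []
    cutoff2 = cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        # Scan this cell and its 26 neighbors (no periodic images in this sketch).
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for i in members:
                for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    if j <= i:
                        continue  # count each unordered pair once
                    d2 = sum((positions[i][d] - positions[j][d]) ** 2 for d in range(3))
                    if d2 < cutoff2:
                        pairs.append((i, j))
    return pairs
```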

    Smart Grid Technologies in Europe: An Overview

    The old electricity network infrastructure has proven to be inadequate with respect to modern challenges such as alternative energy sources, electricity demand and energy saving policies. Moreover, Information and Communication Technologies (ICT) seem to have reached an adequate level of reliability and flexibility to support a new concept of electricity network: the smart grid. In this work, we analyse the state of the art of smart grids in their technical, management, security, and optimization aspects. We also provide a brief overview of the regulatory aspects involved in the development of a smart grid, mainly from the viewpoint of the European Union.

    Runtime-guided mitigation of manufacturing variability in power-constrained multi-socket NUMA nodes

    This work has been supported by the Spanish Government (Severo Ochoa grants SEV2015-0493, SEV-2011-00067), by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), by the RoMoL ERC Advanced Grant (GA 321253) and the European HiPEAC Network of Excellence. M. Moretó has been partially supported by the Ministry of Economy and Competitiveness under Juan de la Cierva postdoctoral fellowship number JCI-2012-15047. M. Casas is supported by the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund programme of the Marie Curie Actions of the 7th R&D Framework Programme of the European Union (Contract 2013 BP B 00243). This work was also partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-689878). Finally, the authors are grateful to the reviewers for their valuable comments, to the RoMoL team, to Xavier Teruel and Kallia Chronaki from the Programming Models group of BSC, and to the Computation Department of LLNL for their technical support and useful feedback. Peer reviewed. Postprint (published version).