
    CMS distributed computing workflow experience

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN and provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, along with the operational optimization of resource usage. In particular, the variation of the different workflows during the 2010 data-taking period, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is carried out primarily at the second tier of the CMS computing infrastructure: half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production, while the other half is available for user analysis. This paper also summarizes the large throughput of the MC production operation during the 2010 data-taking period and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources, and we discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.
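    As a rough illustration of the Tier-1 processing model described above, the toy sketch below routes a central workflow to the Tier-1 site assumed to hold the custodial copy of its input dataset. The site names, dataset names, and routing rule are hypothetical and far simpler than the actual CMS workload management system.

    ```python
    # Hypothetical sketch: send a central processing workflow to the Tier-1 site
    # that holds the custodial copy of its input dataset. Names are illustrative.

    TIER1_CUSTODY = {
        "/MinimumBias/Run2010A": "FNAL",
        "/Mu/Run2010B": "IN2P3",
    }

    def route_workflow(workflow_type, dataset):
        """Pick an execution site for a Tier-1 workflow (re-reco, skim, MC reprocessing)."""
        site = TIER1_CUSTODY.get(dataset)
        if site is None:
            raise ValueError(f"No Tier-1 custodial site recorded for {dataset}")
        return {"type": workflow_type, "dataset": dataset, "site": site}

    print(route_workflow("re-reconstruction", "/MinimumBias/Run2010A"))
    ```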

    Energy Efficient Algorithms based on VM Consolidation for Cloud Computing: Comparisons and Evaluations

    The cloud computing paradigm has revolutionized the IT industry and made it possible to offer computing as the fifth utility. With the pay-as-you-go model, cloud providers can offer resources to customers dynamically and on demand. Drawing attention from both academia and industry, cloud computing is viewed as one of the backbones of the modern economy. However, the high energy consumption of cloud data centers leads to high operational costs and carbon emissions. Green cloud computing is therefore required to ensure energy efficiency and sustainability, which can be achieved via energy-efficient techniques. One of the dominant approaches is to apply energy-efficient algorithms to optimize resource usage and energy consumption. Various virtual machine (VM) consolidation-based energy-efficient algorithms have been proposed to reduce the energy consumption of cloud computing environments. However, most of them have not been compared comprehensively under the same scenario, and their performance has not been evaluated with the same experimental settings, which makes it hard for users to select the appropriate algorithm for their objectives. To provide insight into existing energy-efficient algorithms and help researchers choose the most suitable one, in this paper we compare several state-of-the-art energy-efficient algorithms in depth from multiple perspectives, including architecture, modelling, and metrics. In addition, we implement and evaluate these algorithms with the same experimental settings in the CloudSim toolkit. The experimental results provide a comprehensive performance comparison of these algorithms. Finally, detailed discussions of these algorithms are provided.
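    As a minimal sketch of the VM consolidation idea behind many of the compared algorithms, the code below packs VMs onto as few hosts as possible using a best-fit-decreasing style placement. The host model, VM sizes, and the single CPU dimension are simplified assumptions, not the specific algorithms evaluated in the paper.

    ```python
    # Best-fit-decreasing style VM placement: a common consolidation heuristic.
    # Hosts left idle after packing can be switched to a low-power state.

    def place_vms(vms, hosts):
        """Place VMs (CPU demand) onto hosts (capacity, current load), preferring
        the host whose remaining capacity is smallest after the placement."""
        allocation = {}
        for vm_id, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
            best_host, best_slack = None, None
            for host_id, (capacity, load) in hosts.items():
                slack = capacity - load - demand
                if slack >= 0 and (best_slack is None or slack < best_slack):
                    best_host, best_slack = host_id, slack
            if best_host is None:
                raise RuntimeError(f"VM {vm_id} cannot be placed")
            capacity, load = hosts[best_host]
            hosts[best_host] = (capacity, load + demand)
            allocation[vm_id] = best_host
        return allocation

    vms = {"vm1": 30, "vm2": 50, "vm3": 20}
    hosts = {"h1": (100, 0), "h2": (60, 0)}
    print(place_vms(vms, hosts))  # packs load onto fewer hosts so idle ones can sleep
    ```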

    Towards Trust-Aware Resource Management

    In a large-scale wide-area system such as the Grid, security is a prime concern. One approach is to be conservative and implement techniques such as sandboxing, encryption, and other access control mechanisms on all elements of the Grid. However, the overhead caused by such a design may negate the advantages of Grid computing. This study examines the integration of the notion of "trust" into resource management so that the allocation process is aware of the security implications. We present a formal definition of trust and discuss a model for incorporating trust into Grid systems. As an example application of the proposed ideas, a resource management algorithm that incorporates trust is presented, and its performance is examined via simulations.
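    A minimal sketch of trust-aware allocation is given below, assuming each resource carries a trust level and each task a required trust threshold; the scoring rule and the sandboxing penalty are illustrative assumptions, not the formal trust model defined in the paper.

    ```python
    # Trust-aware allocation sketch: prefer the fastest resource that meets the
    # task's trust requirement; insufficiently trusted resources would need costly
    # security mechanisms, modelled here as an assumed slowdown factor.

    def allocate(task, resources):
        SANDBOX_PENALTY = 2.5  # assumed overhead of sandboxing/encryption

        def effective_time(res):
            base = task["work"] / res["speed"]
            return base if res["trust"] >= task["min_trust"] else base * SANDBOX_PENALTY

        return min(resources, key=effective_time)

    task = {"work": 100.0, "min_trust": 0.7}
    resources = [
        {"name": "siteA", "speed": 10.0, "trust": 0.9},
        {"name": "siteB", "speed": 20.0, "trust": 0.4},
    ]
    print(allocate(task, resources)["name"])  # siteA wins once the penalty applies to siteB
    ```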

    Scheduling Advance Reservations With Priorities

    Grid computing systems utilize distributively owned and geographically dispersed resources to provide a wide variety of services for various applications. One of the key considerations in Grid computing systems is resource management with quality-of-service constraints, which dictate that submitted tasks should be completed by the Grid in a timely fashion while delivering at least a certain level of service for the duration of execution. Because the Grid is a highly dynamic system, with tasks and resources arriving and departing, it is necessary to make advance reservations of resources to ensure their availability for the different tasks. This paper introduces a new scheduling algorithm for advance reservations. Simulations are performed to compare our algorithm with an existing approach, and the results indicate that the proposed algorithm can improve overall performance by satisfying a larger number of reservation requests.
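    The sketch below shows advance reservations with priorities on a single resource, using a coarse slot table. The preemption rule (evict strictly lower-priority overlapping bookings when needed) is an illustrative assumption, not the algorithm evaluated in the paper.

    ```python
    # Priority-aware advance reservation on one resource with integer time slots.

    class ReservationTable:
        def __init__(self, capacity):
            self.capacity = capacity
            self.booked = []  # each booking: (start, end, demand, priority)

        def _free(self, t, booked):
            return self.capacity - sum(d for s, e, d, _ in booked if s <= t < e)

        def _fits(self, start, end, demand, booked):
            return all(self._free(t, booked) >= demand for t in range(start, end))

        def request(self, start, end, demand, priority):
            """Admit the booking, evicting lower-priority overlapping bookings if needed."""
            if self._fits(start, end, demand, self.booked):
                self.booked.append((start, end, demand, priority))
                return True
            keep = [b for b in self.booked
                    if b[3] >= priority or b[1] <= start or b[0] >= end]
            if self._fits(start, end, demand, keep):
                self.booked = keep + [(start, end, demand, priority)]
                return True  # a real scheduler would then try to re-book the evicted requests
            return False

    table = ReservationTable(capacity=10)
    print(table.request(8, 12, 6, priority=1))   # True
    print(table.request(10, 14, 6, priority=2))  # True: the priority-1 booking is evicted
    print(table.request(9, 11, 6, priority=1))   # False: no capacity and nothing to preempt
    ```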

    Distributed Dynamic Scheduling of Composite Tasks

    This paper examines the issue of dynamically scheduling applications on a wide-area network computing system. We construct a simulation model for the wide-area task allocation problem and study the performance of the proposed algorithm under different conditions. The simulation results indicate that the wide-area scheduling algorithm is sensitive to several parameters, including machine failure rates, local queuing policies, and arrival rates.
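    A toy version of the kind of simulation described above is sketched below: tasks arrive one by one, are routed to the shortest local queue, and may be lost to machine failures. The rates, the shortest-queue routing policy, and the failure model are illustrative assumptions, not the paper's simulation model.

    ```python
    # Toy wide-area scheduling simulation: shortest-queue routing with random failures.
    import random

    def simulate(n_tasks=1000, n_machines=4, fail_prob=0.02, seed=1):
        random.seed(seed)
        queues = [0.0] * n_machines        # outstanding work per machine
        completed, lost, makespan = 0, 0, 0.0
        for _ in range(n_tasks):
            m = min(range(n_machines), key=lambda i: queues[i])  # shortest local queue
            if random.random() < fail_prob:                      # machine fails; task is lost
                lost += 1
                continue
            queues[m] += random.expovariate(1.0)                 # add the task's service demand
            completed += 1
            makespan = max(makespan, queues[m])
        return completed, lost, round(makespan, 2)

    print(simulate())
    ```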

    Towards a Micro-Economic Model for Resource Allocation

    Due to the expected scale of Grid computing systems, highly distributed and extensible resource allocation frameworks need to be developed for such systems. Microeconomic principles such as auctions and commodity markets are two approaches being pursued by several researchers for the Grid resource allocation problem. In this paper, we use a commodity-market-based approach to allocate resources, where resources are classified into different classes based on their hardware components, network connectivity, and operating system. In a commodity market, the prices of the commodities ("resources") are set using individual supply and demand functions. We develop an algorithm to determine the price of a resource, and the simulation results show the performance of the pricing algorithm used in the commodity market.
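    As a minimal sketch of the commodity-market idea, the code below adjusts the price of one resource class iteratively until demand roughly matches supply. The linear supply and demand functions and the step size are illustrative assumptions, not the paper's pricing algorithm.

    ```python
    # Tatonnement-style price adjustment for one resource class:
    # raise the price when demand exceeds supply, lower it otherwise.

    def equilibrium_price(supply, demand, price=1.0, step=0.01, tol=1e-3, max_iter=10000):
        """supply(p) and demand(p) return quantities offered/requested at price p."""
        for _ in range(max_iter):
            excess = demand(price) - supply(price)
            if abs(excess) < tol:
                return price
            price = max(0.0, price + step * excess)
        return price

    supply = lambda p: 20.0 * p          # supply grows with price
    demand = lambda p: 100.0 - 30.0 * p  # demand shrinks with price
    print(round(equilibrium_price(supply, demand), 3))  # ~2.0, where 20p = 100 - 30p
    ```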

    Magnetic Resonance Imaging (MRI) Simulation

    In this paper, we present the implementation of a Magnetic Resonance Imaging (MRI) simulator on a Grid computing architecture. The simulation process is based on the resolution of the Bloch equation [1] in 3D space. The computation kernel of the simulator is distributed to the grid nodes using MPICH-G2 [2]. The results presented show that simulation of 3D MRI data is achieved at a reasonable cost, which opens new perspectives for the use of MRI simulation.
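    A minimal sketch of such a computation kernel is shown below: the Bloch equations for a single isochromat integrated with explicit Euler steps. The field values, relaxation times, and integration scheme are illustrative assumptions; the actual simulator distributes many such kernels across grid nodes with MPICH-G2.

    ```python
    # One-isochromat Bloch equation integration: dM/dt = gamma * (M x B) - relaxation.

    GAMMA = 2.675e8  # proton gyromagnetic ratio (rad/s/T)

    def bloch_step(M, B, T1, T2, M0, dt):
        """Advance the magnetization vector M by one explicit Euler step."""
        Mx, My, Mz = M
        Bx, By, Bz = B
        dMx = GAMMA * (My * Bz - Mz * By) - Mx / T2
        dMy = GAMMA * (Mz * Bx - Mx * Bz) - My / T2
        dMz = GAMMA * (Mx * By - My * Bx) - (Mz - M0) / T1
        return (Mx + dt * dMx, My + dt * dMy, Mz + dt * dMz)

    # Free precession of transverse magnetization in a small off-resonance field.
    M = (1.0, 0.0, 0.0)
    B = (0.0, 0.0, 1e-6)                  # 1 microtesla offset along z
    T1, T2, M0, dt = 1.0, 0.1, 1.0, 1e-6
    for _ in range(1000):
        M = bloch_step(M, B, T1, T2, M0, dt)
    print(tuple(round(m, 4) for m in M))  # slightly decayed, precessed magnetization
    ```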