
    Decentralized Resource Availability Prediction in Peer-to-Peer Desktop Grids

    Grid computing is a form of distributed computing used by an organization to handle its long-running computational tasks. Volunteer computing (desktop grid computing) is a type of grid computing that runs these tasks on idle CPU cycles donated voluntarily by users. In the desktop grid model the resources are not dedicated: a job (computational task) is submitted to a resource for execution only when that resource is idle, and there is no guarantee that a job which has started executing will complete without disruption from user activity (such as a keystroke or mouse movement). The problem becomes more challenging in a Peer-to-Peer (P2P) desktop grid, where there is no central server to decide whether to allocate a job to a resource. In this thesis we propose and implement a P2P desktop grid framework that performs resource availability prediction. We improve the predictability of the system by submitting jobs to machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
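
    As an illustration of the kind of prediction the abstract describes, the sketch below estimates, for each peer, the probability of being idle in a given hour-of-week slot from that peer's observation history, and submits the job to the most promising peer. The class and function names are hypothetical and the scheme is a minimal stand-in, not the framework proposed in the thesis.

# Minimal sketch (hypothetical, not the thesis's predictor): estimate per
# hour-of-week availability from historical idle/busy observations.
from collections import defaultdict
from datetime import datetime


class AvailabilityPredictor:
    def __init__(self):
        # hour-of-week slot (0..167) -> [idle_count, total_count]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, ts: datetime, was_idle: bool) -> None:
        slot = ts.weekday() * 24 + ts.hour
        self.counts[slot][0] += int(was_idle)
        self.counts[slot][1] += 1

    def probability_at(self, ts: datetime) -> float:
        slot = ts.weekday() * 24 + ts.hour
        idle, total = self.counts[slot]
        return idle / total if total else 0.0


def pick_peer(predictors: dict, when: datetime) -> str:
    # Submit the job to the peer most likely to be idle at `when`.
    return max(predictors, key=lambda p: predictors[p].probability_at(when))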

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the IT sector's environmental impact, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers' carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper we discuss the incentives that customers and data centers have to adopt such measures, and propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.

    PFS: A Productivity Forecasting System for Desktop Computers to Improve Grid Applications Performance in Enterprise Desktop Grid

    An Enterprise Desktop Grid (EDG) is a low-cost platform that gathers desktop computers spread over different institutions and uses their idle time to run Grid applications. We argue that computers in these environments have a predictable productivity that affects a Grid application's execution time. In this paper we propose PFS, a computer productivity forecasting system that improves Grid application performance. We simulated 157,500 applications and compared the performance achieved by our proposal against two recent strategies. Our experiments show that a Grid scheduler based on PFS runs applications faster than schedulers based on the other selection strategies.
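
    The abstract does not spell out the forecasting method, so the following sketch only illustrates the general idea of productivity-based selection: forecast each machine's productivity with an exponentially weighted moving average of past observations and pick the machine with the best forecast. The function names and smoothing factor are assumptions, not the PFS algorithm itself.

# Minimal sketch (assumed, not PFS): rank desktop computers by a forecast of
# the fraction of an interval they actually deliver to Grid jobs.

def forecast(history, alpha=0.3):
    """history: chronological list of observed productivities in [0, 1]."""
    estimate = history[0]
    for observed in history[1:]:
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate


def select_computer(productivity_histories):
    """productivity_histories: {machine_name: [p1, p2, ...]}."""
    return max(productivity_histories,
               key=lambda m: forecast(productivity_histories[m]))


if __name__ == "__main__":
    samples = {
        "desktop-a": [0.9, 0.4, 0.5, 0.6],
        "desktop-b": [0.2, 0.8, 0.85, 0.9],
    }
    print(select_computer(samples))  # picks the machine with the best forecast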

    Flexible distributed computing with volunteered resources

    Nowadays, computational grids have evolved to a stage where they can comprise many volunteered resources owned by different individual users and/or institutions, such as desktop grids and volunteer computing grids. This benefits large-scale computing, as more resources are available to exploit. On the other hand, the inherent characteristics of volunteered resources pose challenges to exploiting them efficiently. For example, some resources may be unable to execute certain jobs because the computing resources are heterogeneous. Furthermore, the resources can be volatile, as resource owners usually retain the right to decide when and how to donate the idle Central Processing Unit (CPU) cycles of their computers. In order to utilise volunteered resources efficiently, this research therefore investigated solutions from several angles. Firstly, it proposes a new computational Grid architecture based on Java and Java application migration technologies to provide fundamental support for coping with these challenges. The proposed architecture supports heterogeneous resources, ensures that local activities are not affected by Grid jobs, and enables resources to carry out live, automatic Java application migration. Secondly, it proposes job-scheduling and migration algorithms based on resource availability prediction and/or artificial intelligence techniques. To examine the proposed algorithms, this work includes a series of experiments in both synthetic and practical scenarios and compares the performance of the proposed algorithms with existing ones across a variety of scenarios. According to the critical assessment, each algorithm has its own distinct advantages and performs well when certain conditions are met. In addition, this research analyses the availability patterns of resources in practical volunteer-based grids. The analysis shows that each environment has its own characteristics and that each volunteered resource's availability tends to exhibit only weak correlations across different days and times of day.
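
    The availability analysis mentioned at the end of the abstract can be pictured with a small sketch: encode each day as a vector of hourly 0/1 availability samples for one resource and compute a Pearson correlation between days. The data and helper below are illustrative assumptions, not taken from the thesis.

# Minimal sketch (assumed): day-to-day correlation of one resource's
# hourly availability trace; a weak value suggests limited regularity.
from math import sqrt


def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0


# Two hypothetical days of hourly availability (1 = idle/available).
monday = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
tuesday = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(pearson(monday, tuesday))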

    Power Analysis and Optimization Techniques for Energy Efficient Computer Systems

    Reducing power consumption has become a major challenge in the design and operation of today's computer systems. This chapter describes techniques that address this challenge at different levels of the system hardware, such as the CPU, memory, and internal interconnection network, as well as at different levels of the software stack, such as the compiler, operating system, and user applications. These techniques can be broadly categorized into two types: design-time power analysis and run-time dynamic power management. Mechanisms in the first category use analytical energy models that are integrated into existing simulators to measure a system's power consumption, helping engineers test power-conscious hardware and software at design time. Dynamic power management techniques, on the other hand, are applied at run time to monitor the system workload and adapt the system's behavior dynamically to save energy.
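
    A minimal sketch of one classic run-time dynamic power management policy of the kind such chapters survey, a fixed idle timeout: if a device sees no requests for longer than the timeout, it is put into a low-power state and woken on the next request. The names and the timeout value are illustrative assumptions, not a specific technique from the chapter.

# Minimal sketch (illustrative): fixed-timeout dynamic power management.
from dataclasses import dataclass


@dataclass
class Device:
    state: str = "active"        # "active" or "sleep"
    idle_for_s: float = 0.0      # seconds since the last request


def tick(device: Device, elapsed_s: float, request_arrived: bool,
         timeout_s: float = 5.0) -> None:
    if request_arrived:
        device.state = "active"  # waking up costs latency and energy in reality
        device.idle_for_s = 0.0
        return
    device.idle_for_s += elapsed_s
    if device.state == "active" and device.idle_for_s >= timeout_s:
        device.state = "sleep"   # trade wake-up latency for idle power savings


d = Device()
for second in range(10):
    tick(d, elapsed_s=1.0, request_arrived=(second == 2))
print(d.state)  # "sleep": no requests arrived for longer than the timeout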

    Monitorable network and CPU load statistics and their application to scheduling

    Recent trends in high-speed computing have moved towards the use of networks of workstations as a cost-effective approach to parallel computing. One recently proposed solution uses an existing network of workstation-class computers as a single multiprocessor, and much research is ongoing in this area. This dissertation describes work on process scheduling on networks of workstations, specifically on load analysis. After presenting extensive background in the field, it defines measures of CPU and network load and presents a test parallel application written for PVM, a network-multiprocessing software package. A series of experiments is then detailed whose goal was to discover the relationship between the run time of the test application and the loads on the participating workstations and networks. The experiments include measurements of CPU and network loading during test application runs, under artificially elevated loads, and under quiet conditions. The results are presented and their application to the task-scheduling problem is examined. It is then argued that several easily measured load metrics are useful for task scheduling, because they allow run time to be predicted within a margin of error and allow limiting network segments to be detected and avoided.
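
    A minimal sketch of the kind of run-time prediction the dissertation motivates, under the assumption of a simple linear model: fit run time against easily monitored CPU and network load measures with ordinary least squares, then predict new runs within a margin of error. The data and the model form are illustrative, not the dissertation's.

# Minimal sketch (assumed model): run_time ≈ b0 + b1*cpu_load + b2*net_load.
import numpy as np

# Hypothetical past observations: (avg CPU load, avg network load, run time in s).
observations = np.array([
    [0.10, 0.05, 102.0],
    [0.45, 0.10, 131.0],
    [0.80, 0.30, 188.0],
    [0.30, 0.60, 167.0],
    [0.65, 0.50, 196.0],
])

X = np.column_stack([np.ones(len(observations)),
                     observations[:, 0], observations[:, 1]])
y = observations[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

cpu_load, net_load = 0.5, 0.2
predicted = coef @ np.array([1.0, cpu_load, net_load])
print(f"predicted run time: {predicted:.1f} s")  # useful only within a margin of error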

    Architecting Efficient Data Centers.

    Data center power consumption has become a key constraint on continuing to scale Internet services. As our society’s reliance on “the Cloud” continues to grow, companies require an ever-increasing amount of computational capacity to support their customers. Massive warehouse-scale data centers have emerged, requiring 30 MW or more of total power capacity. Over the lifetime of a typical high-scale data center, power-related costs make up 50% of the total cost of ownership (TCO). Furthermore, the aggregate effect of data center power consumption across the country cannot be ignored: data center energy usage has reached approximately 2% of aggregate consumption in the United States and continues to grow. This thesis addresses the need to increase computational efficiency to address this growing problem. It proposes a new class of power management techniques: coordinated full-system idle low-power modes that increase the energy proportionality of modern servers. First, we introduce the PowerNap server architecture, a coordinated full-system idle low-power mode that transitions in and out of an ultra-low-power nap state to save power during brief idle periods. While effective for uniprocessor systems, PowerNap relies on full-system idleness, and we show that such idleness disappears as the number of cores per processor continues to increase. We expose this problem in a case study of Google Web search, in which we demonstrate that coordinated full-system active power modes are necessary to reach energy proportionality and that PowerNap is ineffective because of the lack of idleness. To recover full-system idleness, we introduce DreamWeaver, architectural support for deep sleep. DreamWeaver allows a server to exchange latency for full-system idleness, making PowerNap-enabled servers effective and providing a better latency-power-savings tradeoff than existing approaches. Finally, this thesis investigates workloads that achieve efficiency through methodical cluster provisioning techniques. Using the popular memcached workload, it provides examples of provisioning clusters for cost-efficiency given latency, throughput, and data set size targets.
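
    The claim that full-system idleness disappears as core counts grow can be illustrated with a back-of-the-envelope calculation: if each core is idle independently with probability p, then all n cores are idle at the same moment only with probability p**n. The independence assumption and the numbers below are illustrative only, not a model from the thesis.

# Back-of-the-envelope illustration under an independence assumption.
for cores in (1, 4, 16, 64):
    p_core_idle = 0.70
    p_full_system_idle = p_core_idle ** cores
    print(f"{cores:>3} cores: P(full-system idle) = {p_full_system_idle:.6f}")
# The probability shrinks from 0.70 at 1 core to roughly 1e-10 at 64 cores,
# which is why coordinated mechanisms are needed to recover usable idleness.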

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding the aspects of MTC applications that can be used to characterize the domain, and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited to MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
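
    The task-graph structure described above can be pictured with a small example: discrete tasks whose explicit input/output dependencies form a DAG, dispatched as soon as their inputs are ready. The graph and task names are illustrative; a real MTC system would dispatch the ready tasks in parallel and minimize per-task overhead.

# Minimal sketch (illustrative): an MTC-style task graph dispatched in
# dependency order using the standard library's topological sorter.
from graphlib import TopologicalSorter  # Python 3.9+

# task -> set of tasks it depends on (its inputs)
graph = {
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "aggregate": {"simulate_a", "simulate_b"},
}

ts = TopologicalSorter(graph)
ts.prepare()
while ts.is_active():
    ready = ts.get_ready()          # tasks whose dependencies are all done
    for task in ready:
        print(f"dispatch {task}")   # these could run concurrently on workers
        ts.done(task)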