Strategies for storage of checkpointing data using non-dedicated repositories on grid systems
Dealing with the large amounts of data generated by long-running parallel applications is one of the most challenging aspects of Grid Computing. Periodic checkpoints might be taken to guarantee application progression, producing even more data. The classical approach is to employ high-throughput checkpoint servers connected to the computational nodes by high-speed networks. In the case of Opportunistic Grid Computing, we do not want to be forced to rely on such dedicated hardware. Instead, we want to use the shared Grid nodes to store application data in a distributed fashion. In this work, we evaluate several strategies to store checkpoints on distributed non-dedicated repositories. We consider the trade-off among computational overhead, storage overhead, and degree of fault tolerance of these strategies. We compare the use of replication, parity information, and information dispersal (IDA). We used InteGrade, an object-oriented Grid middleware, to implement the storage strategies and perform evaluation experiments.
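The trade-off among the three strategies the abstract compares can be sketched with simple storage-overhead formulas and a byte-wise XOR parity demonstration. This is an illustrative sketch only, not InteGrade's implementation; the fragment contents and the (m, n) parameters are hypothetical:

```python
from functools import reduce

def replication_overhead(k):
    # k full copies: tolerates loss of k - 1 repositories, stores k times the data
    return float(k)

def parity_overhead(n):
    # n data fragments plus one XOR parity fragment: tolerates a single loss
    return (n + 1) / n

def ida_overhead(m, n):
    # IDA(m, n): data split into n slices, any m of which reconstruct it;
    # tolerates n - m lost slices at n / m times the original storage
    return n / m

def xor_parity(fragments):
    # Byte-wise XOR of equal-length fragments, as used by the parity strategy
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*fragments))

# If any single data fragment is lost, XOR-ing the survivors with the
# parity fragment recovers it.
data = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(data)
recovered = xor_parity([data[0], data[2], parity])  # reconstruct data[1]
assert recovered == b"efgh"
```

Replication is the cheapest to compute but the most expensive to store (overhead k); parity adds only 1/n extra storage but survives a single failure; IDA sits between the two, trading encoding cost for a tunable n/m overhead and n - m tolerated failures.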
Operating policies for energy efficient large scale computing
PhD Thesis

Energy costs now dominate IT infrastructure total cost of ownership, with datacentre operators predicted to spend more on energy than on hardware infrastructure in the next five years. With Western European datacentre power consumption estimated at 56 TWh/year in 2007 and projected to double by 2020, improvements in the energy efficiency of IT operations are imperative. The issue is further compounded by social and political factors and by strict environmental legislation governing organisations.

One example of such large IT systems is high-throughput cycle-stealing distributed systems such as HTCondor and BOINC, which allow organisations to leverage spare capacity on existing infrastructure to undertake valuable computation. As a consequence of increased scrutiny of the energy impact of these systems, aggressive power management policies are often employed to reduce the energy impact of institutional clusters, but in doing so these policies severely restrict the computational resources available for high-throughput systems. These policies are often configured to quickly transition servers and end-user cluster machines into low-power states after only short idle periods, further compounding the issue of reliability.

In this thesis, we evaluate operating policies for energy efficiency in large-scale computing environments by means of trace-driven discrete event simulation, leveraging real-world workload traces collected within Newcastle University.
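A minimal sketch of trace-driven discrete event simulation for energy accounting might look as follows. The trace, the power figures (`IDLE_W`, `BUSY_W`), and the single-machine FIFO model are illustrative assumptions, not the thesis's simulator:

```python
import heapq

IDLE_W, BUSY_W = 60.0, 180.0  # assumed per-machine power draw in watts

def simulate(trace):
    """Replay a workload trace of (arrival_s, runtime_s) tasks on one FIFO machine.

    Returns (makespan_s, energy_joules): busy intervals are charged at
    BUSY_W, the remaining time up to the makespan at IDLE_W.
    """
    events = list(trace)
    heapq.heapify(events)               # event queue ordered by arrival time
    free_at = 0                         # when the machine next becomes idle
    busy = 0                            # accumulated busy seconds
    while events:
        arrival, runtime = heapq.heappop(events)
        start = max(arrival, free_at)   # queue behind any running task
        free_at = start + runtime
        busy += runtime
    energy = busy * BUSY_W + (free_at - busy) * IDLE_W
    return free_at, energy

makespan, energy = simulate([(0, 100), (30, 50), (200, 80)])
print(makespan, energy)  # → 280 44400.0  (230 s busy, 50 s idle)
```

Replaying the same trace under different power-state policies (for example, adding a sleep state entered after a configurable idle timeout, with a wake-up delay that postpones task starts) lets one compare both the energy consumed and the computational throughput sacrificed.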
The major contributions of this thesis are as follows:
i) Evaluation of novel energy efficient management policies for a decentralised
peer-to-peer (P2P) BitTorrent environment.
ii) Introduction of a novel simulation environment for evaluating the energy efficiency
of large-scale high-throughput computing systems, and proposal of a generalisable
model of energy consumption in high-throughput computing systems.
iii) Proposal and evaluation of resource allocation strategies for energy consumption
in high-throughput computing systems for a real workload.
iv) Proposal and evaluation, for a real workload, of mechanisms to reduce wasted task
execution within high-throughput computing systems and thereby reduce energy consumption.
v) Evaluation of the impact of fault tolerance mechanisms on energy consumption.