
    Competitive Power Down Methods in Green Computing

    For the power-down problem one considers a device that has states OFF and ON, together with a number of intermediate states. The state of the device can be switched at any time. In the OFF state the device consumes zero energy, and in the ON state it works at its full power consumption. Each intermediate state consumes only a fraction of energy proportional to the usage time, but switching back to the ON state incurs a constant setup cost that depends on the current state. Requests for service (i.e., for times when the device has to be in the ON state) are not known in advance; thus power-down problems are studied in the framework of online algorithms, where a system has to react without knowledge of future requests. Online algorithms are analyzed in terms of competitiveness, a measure of performance that compares the solution obtained online with the optimal offline solution for the same problem, where the lowest possible competitiveness is best. Power-down mechanisms are widely used to save energy and were one of the first problems to be studied in green computing. They can be used to optimize energy usage in cloud computing, or for scheduling energy supply in the smart grid. However, many approaches are simplistic, and neither work well in practice nor have a good theoretical underpinning. In fact, it is surprising that only very few algorithmic techniques exist. This thesis widens the algorithmic base for such problems in a number of ways. We study systems with few states, which are especially relevant in real-world applications, and give exact competitive ratios for systems with three and five states. We then introduce a new technique, called “decrease and reset”, where the algorithm automatically attunes itself to the frequency of requests and gives better performance on real-world inputs than currently existing algorithms. We further refine this approach with a budget-based method which keeps a tally of gains and losses as requests are processed.
We also analyze systems with infinitely many states and devise several strategies for transitioning between states. The thesis gives results both in terms of theoretical analysis and of extensive simulations.
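To make the notion of competitiveness concrete, here is a minimal sketch of the classic threshold strategy for a two-state (ON/OFF) device. This is the textbook baseline, not the thesis's "decrease and reset" method; the cost model (idling costs one energy unit per time unit, powering back ON costs a setup cost beta) is an assumption for illustration.

```python
def online_cost(idle_gaps, beta):
    """Threshold algorithm: during each idle gap, stay ON for at most
    beta time units, then switch OFF (paying setup cost beta on wake-up)."""
    cost = 0.0
    for gap in idle_gaps:
        if gap <= beta:
            cost += gap          # idled through the whole gap
        else:
            cost += beta + beta  # idled for beta, then paid setup cost beta
    return cost

def offline_cost(idle_gaps, beta):
    """Optimal offline strategy: knows each gap's length in advance,
    so it pays min(gap, beta) per gap."""
    return sum(min(gap, beta) for gap in idle_gaps)

gaps = [0.5, 3.0, 10.0, 1.0]   # lengths of idle periods between requests
beta = 2.0                     # setup cost of switching back ON
ratio = online_cost(gaps, beta) / offline_cost(gaps, beta)
# For every gap the online cost is at most twice the offline cost,
# so the ratio never exceeds 2: the strategy is 2-competitive.
```

On each gap the online algorithm pays at most `2 * min(gap, beta)`, which is why no input sequence can push the ratio above 2; the thesis's contribution is to go beyond such fixed-threshold strategies for multi-state systems.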

    Alternative Measures for the Analysis of Online Algorithms

    In this thesis we introduce and evaluate several new models for the analysis of online algorithms. In an online problem, the algorithm does not know the entire input from the beginning; the input is revealed in a sequence of steps. At each step the algorithm must make its decisions based on the past and without any knowledge of the future. Many important real-life problems such as paging and routing are intrinsically online, and thus the design and analysis of online algorithms is one of the main research areas in theoretical computer science. Competitive analysis is the standard measure for the analysis of online algorithms. It has been applied to many online problems in diverse areas ranging from robot navigation to network routing, scheduling, and online graph coloring. While in several instances competitive analysis gives satisfactory results, for certain problems it yields unrealistically pessimistic ratios and/or fails to distinguish between algorithms that have vastly differing performance under any practical characterization. Addressing these shortcomings has been the subject of intense research by many of the best minds in the field. In this thesis, building upon recent advances by others, we introduce some new models for the analysis of online algorithms, namely Bijective Analysis, Average Analysis, Parameterized Analysis, and Relative Interval Analysis. We show that they lead to good results when applied to paging and list update algorithms. Paging and list update are two well-known online problems, and paging is one of the main examples of the poor behavior of competitive analysis. We show that LRU is the unique optimal online paging algorithm according to Average Analysis on sequences with locality of reference. Recall that in practice input sequences for paging have high locality of reference, and it has long been established empirically that LRU is the best paging algorithm.
Yet, Average Analysis is the first model that gives a strict separation of LRU from all other online paging algorithms, thus solving a long-standing open problem. We prove a similar result for the optimality of MTF for list update on sequences with locality of reference. A technique for the analysis of online algorithms has to be effective to be useful in the day-to-day analysis of algorithms. While Bijective and Average Analysis succeed at providing fine separation, their application can be, at times, cumbersome. Thus we apply a parameterized or adaptive analysis framework to online algorithms. We show that this framework is effective, can be applied more easily to a larger family of problems, and leads to a finer analysis than the competitive ratio. The conceptual innovation of parameterizing the performance of an algorithm by something other than the input size was first introduced over three decades ago [124, 125]. By now it has been extensively studied and understood in the context of adaptive analysis (for problems in P) and parameterized algorithms (for NP-hard problems); yet, to our knowledge, this thesis is the first systematic application of this technique to the study of online algorithms. Interestingly, competitive analysis can be recast as a particular form of parameterized analysis in which the performance of OPT is the parameter. In general, for each problem we can choose the parameter/measure that best reflects the difficulty of the input. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input sequence. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths among them.
Lastly, we show that, surprisingly, certain randomized algorithms that are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results. We test list update algorithms in the context of a data compression scheme known to have locality of reference. Our experiments show that MTF outperforms other list update algorithms in practice after the Burrows-Wheeler transform (BWT), which is consistent with the intuition that BWT increases locality of reference.
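A minimal sketch of the Move-To-Front (MTF) rule discussed above may help. In the standard list update cost model, accessing the item at position i (1-based) costs i, and MTF then moves the accessed item to the front. The example sequences below are illustrative assumptions, not data from the thesis; they show why MTF is cheap on inputs with locality of reference.

```python
def mtf_cost(items, requests):
    """Total access cost of serving `requests` with Move-To-Front,
    starting from the list order given by `items`."""
    lst = list(items)
    total = 0
    for r in requests:
        i = lst.index(r)            # 0-based position of the requested item
        total += i + 1              # access cost = 1-based position
        lst.insert(0, lst.pop(i))   # move the accessed item to the front
    return total

# A request sequence with high locality (recently used items repeat)
# is served cheaply, since repeats of the front item cost 1 each:
local = mtf_cost("abcde", "aaabbbccc")    # -> 12
# A sequence that cycles through all items keeps paying near-maximal cost:
cyclic = mtf_cost("abcde", "abcdeabcd")   # -> 35
```

Both sequences have nine requests over the same five-item list, yet the local one costs far less, matching the intuition that BWT output, which clusters repeated symbols, favors MTF.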