6 research outputs found

    Energy-efficient algorithms for non-preemptive speed-scaling

    We improve complexity bounds for energy-efficient speed scheduling problems for both the single-processor and multi-processor cases. Energy conservation has become a major concern, so revisiting traditional scheduling problems to take energy consumption into account has been part of the agenda of the scheduling community for the past few years. We consider the energy-minimizing speed scaling problem introduced by Yao et al., where we wish to schedule a set of jobs, each with a release date, deadline and work volume, on a set of identical processors. The processors may change speed as a function of time, and the energy each consumes is the $\alpha$-th power of its speed. The objective is to find a feasible schedule that minimizes the total energy used. We show that in the setting with an arbitrary number of processors where all work volumes are equal, there is a $2(1+\varepsilon)(5(1+\varepsilon))^{\alpha-1}\tilde{B}_{\alpha} = O_{\alpha}(1)$ approximation algorithm, where $\tilde{B}_{\alpha}$ is the generalized Bell number. This is the first constant-factor algorithm for this problem. The algorithm extends to general unequal processor-dependent work volumes, at the cost of a factor of $\left(\frac{(1+r)r}{2}\right)^{\alpha}$ in the approximation, where $r$ is the maximum ratio between two work volumes. We then show this latter problem is APX-hard, even in the special case where all release dates and deadlines are equal and $r$ is 4. In the single-processor case, we introduce a new linear programming formulation of speed scaling and prove that its integrality gap is at most $12^{\alpha-1}$. As a corollary, we obtain a $(12(1+\varepsilon))^{\alpha-1}$ approximation algorithm for a single processor, improving on the previous best bound of $2^{\alpha-1}(1+\varepsilon)^{\alpha}\tilde{B}_{\alpha}$ when $\alpha \ge 25$.
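
    For concreteness: under this model a job of volume $w$ executed at constant speed $s$ occupies $w/s$ time units and consumes $(w/s) \cdot s^{\alpha} = w \cdot s^{\alpha-1}$ energy, so the energy of any candidate non-preemptive schedule can be evaluated and its feasibility checked directly. A minimal sketch in Python (the job fields and the default $\alpha = 3$ are illustrative assumptions, not taken from the paper):

        from dataclasses import dataclass

        @dataclass
        class Job:
            release: float   # earliest start time
            deadline: float  # latest finish time
            work: float      # work volume

        def schedule_energy(jobs, starts, speeds, alpha=3.0):
            """Energy of a non-preemptive single-processor schedule.

            Job j runs without interruption at constant speed speeds[j]
            from starts[j] for work/speed time units.  Power is
            speed**alpha, so the job's energy is
            (work/speed) * speed**alpha = work * speed**(alpha-1).
            Raises if the schedule is infeasible.
            """
            intervals, total = [], 0.0
            for job, start, speed in zip(jobs, starts, speeds):
                finish = start + job.work / speed
                if start < job.release or finish > job.deadline:
                    raise ValueError("job violates its release date or deadline")
                intervals.append((start, finish))
                total += job.work * speed ** (alpha - 1)
            intervals.sort()
            for (_, f1), (s2, _) in zip(intervals, intervals[1:]):
                if s2 < f1:  # single processor: executions must not overlap
                    raise ValueError("overlapping executions on one processor")
            return total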

    Speed-scaling with no Preemptions

    We revisit the non-preemptive speed-scaling problem, in which a set of jobs has to be executed on a single speed-scalable processor or a set of parallel speed-scalable processors between their release dates and deadlines so that the energy consumption is minimized. We adopt the speed-scaling mechanism first introduced in [Yao et al., FOCS 1995], according to which the power dissipated is a convex function of the processor's speed: intuitively, the higher the speed of a processor, the higher the energy consumption. For the single-processor case, we improve the best known approximation algorithm by providing a $(1+\epsilon)^{\alpha}\tilde{B}_{\alpha}$-approximation algorithm, where $\tilde{B}_{\alpha}$ is a generalization of the Bell number. For the multiprocessor case, we present an approximation algorithm of ratio $\tilde{B}_{\alpha}\left((1+\epsilon)\left(1+\frac{w_{\max}}{w_{\min}}\right)\right)^{\alpha}$, improving the best known result by a factor of $\left(\frac{5}{2}\right)^{\alpha-1}\left(\frac{w_{\max}}{w_{\min}}\right)^{\alpha}$. Notice that our result holds in the fully heterogeneous environment, while the previously known result holds only in the more restricted case of parallel processors with identical power functions.
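
    In this line of work the generalized Bell number is commonly defined through a Dobinski-type series, $\tilde{B}_{\alpha} = \frac{1}{e}\sum_{k \ge 1} k^{\alpha}/k!$, i.e. the $\alpha$-th moment of a Poisson(1) random variable, which recovers the ordinary Bell numbers at integer $\alpha$. Assuming that definition, a truncated series gives a quick numerical feel for the ratios above:

        import math

        def generalized_bell(alpha, terms=60):
            """Estimate B~_alpha = (1/e) * sum_{k>=1} k**alpha / k!
            (the alpha-th moment of a Poisson(1) random variable).
            The series converges very fast, so 60 terms are plenty
            for moderate alpha."""
            return math.exp(-1) * sum(k ** alpha / math.factorial(k)
                                      for k in range(1, terms + 1))

        # Integer alpha reproduces the ordinary Bell numbers 1, 2, 5, 15, 52, ...
        print(round(generalized_bell(3)))   # 5
        print(round(generalized_bell(4)))   # 15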

    A survey of offline algorithms for energy minimization under deadline constraints

    Modern computers allow software to adjust power management settings such as processor speed and sleep modes to decrease power consumption, possibly at the price of decreased performance. The impact of these techniques depends mainly on the schedule of the tasks. This article surveys the underlying theoretical results on power management, as well as offline scheduling algorithms that aim to minimize energy consumption under real-time constraints.
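
    The foundational offline result in this area, which such surveys typically cover, is the YDS algorithm of Yao, Demers and Shenker for a single processor with preemption: repeatedly find the time interval of maximum density (total work of the jobs whose whole release-deadline window lies inside the interval, divided by the interval's length), run those jobs at exactly that speed, and contract time around the interval. A rough sketch, simplified for clarity rather than efficiency:

        def yds(jobs):
            """Sketch of the YDS algorithm [Yao et al., FOCS 1995] for
            single-processor preemptive speed scaling under deadlines.

            jobs: list of (release, deadline, work) tuples.
            Returns (speed, jobs_in_phase) pairs: each round finds the
            interval of maximum density, runs its jobs at exactly that
            speed (EDF order keeps them feasible inside the interval),
            then contracts time around the interval and recurses.
            """
            jobs = list(jobs)
            phases = []
            while jobs:
                times = sorted({t for r, d, _ in jobs for t in (r, d)})
                best = (-1.0, None, None)  # (density, job indices, interval)
                for i, t1 in enumerate(times):
                    for t2 in times[i + 1:]:
                        ids = [k for k, (r, d, _) in enumerate(jobs)
                               if r >= t1 and d <= t2]
                        if ids:
                            dens = sum(jobs[k][2] for k in ids) / (t2 - t1)
                            if dens > best[0]:
                                best = (dens, ids, (t1, t2))
                dens, ids, (t1, t2) = best
                phases.append((dens, [jobs[k] for k in ids]))
                shrink = lambda t: min(t, t1) + max(0.0, t - t2)  # delete [t1, t2]
                jobs = [(shrink(r), shrink(d), w)
                        for k, (r, d, w) in enumerate(jobs) if k not in ids]
            return phases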

    Energy-Efficient Transaction Scheduling in Data Systems

    Natural short-term fluctuations in the load of transactional data systems present an opportunity for power savings. For example, a system handling 1000 requests per second on average can expect more than 1000 requests in some seconds and fewer in others. By quickly adjusting processing capacity to match such fluctuations, power consumption can be reduced. Many systems do this already, using dynamic voltage and frequency scaling (DVFS) to reduce processor performance and power consumption when the load is low. DVFS is typically controlled by frequency governors in the operating system or by the processor itself. The work presented in this dissertation shows that transactional data systems can manage DVFS more effectively than the underlying operating system, because data systems have more information about the workload, and more control over that workload, than is available to the operating system. Our goal is to minimize power consumption while ensuring that transaction requests meet specified latency targets. We present energy-efficient scheduling algorithms and systems that manage CPU power consumption and performance within data systems. These algorithms are workload-aware and can accommodate concurrent workloads with different characteristics and latency budgets. The first technique we present is called POLARIS. It directly manages processor DVFS and controls database transaction scheduling. We show that POLARIS can simultaneously reduce power consumption and reduce missed latency targets, relative to operating-system-based DVFS governors. Second, we present PLASM, an energy-efficient scheduler that generalizes POLARIS to support multi-core, multi-processor systems. PLASM controls the distribution of requests to the processors, and it employs POLARIS to manage power consumption locally at each core. We show that PLASM can save power and reduce missed latency targets compared to generic routing techniques such as round-robin.
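
    The core idea of workload-aware DVFS lends itself to a small illustration. The sketch below is hypothetical and not POLARIS itself (the function name, queue representation, and EDF service order are assumptions): given each queued transaction's remaining work and latency budget, pick the lowest available core frequency at which every projected finish time still meets its deadline.

        def lowest_safe_frequency(queue, freqs, now):
            """queue: list of (remaining_work_cycles, absolute_deadline),
            freqs: available core frequencies in Hz.
            Serve transactions earliest-deadline-first and return the
            lowest frequency under which every projected finish time
            meets its deadline; fall back to the maximum frequency."""
            for f in sorted(freqs):
                t, ok = now, True
                for work, deadline in sorted(queue, key=lambda q: q[1]):  # EDF
                    t += work / f              # seconds to execute at frequency f
                    if t > deadline:
                        ok = False
                        break
                if ok:
                    return f
            return max(freqs)

        # e.g. two queued transactions of 50M and 20M cycles with latency
        # budgets expiring at 40 ms and 60 ms: 1.2 GHz misses the first
        # deadline, so 1.8 GHz is chosen.
        q = [(50e6, 0.040), (20e6, 0.060)]
        print(lowest_safe_frequency(q, [1.2e9, 1.8e9, 2.4e9], now=0.0))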

    Robust and Large-Scale Network Optimization in Logistics

    Get PDF
    This thesis explores possibilities and limitations of extending classical combinatorial optimization problems for network flows and network design. We propose new mathematical models for logistics networks that feature commodities with multidimensional properties, e.g. their mass and volume, to capture consolidation effects of commodities with complementary properties. We provide new theoretical insights and solution methods with immediate practical impact, which we test on real-world instances from the automotive, chemical, and retail industries. The first model is for tactical transportation planning with temporal consolidation effects. We propose various heuristics and prove that, for our instances, most of our solutions are within a single-digit percentage of the optimum. We also study problem variants where commodities are routed unsplittably, giving hardness results for various special cases and a dynamic program that finds optimal forest solutions, which overestimate real costs. The second model is for strategic route planning under uncertainty. We provide a robust optimization method that anticipates demand fluctuations by minimizing worst-case costs over a restricted scenario set. We show that the adversary problem is NP-hard. To still find solutions with very good worst-case cost, we derive a carefully relaxed and simplified MILP, which solves well for large instances. It can be extended to include hub decisions, leading to a robust M-median hub location problem. For our instances, we find a price of robustness that is moderate for scenarios using average demand values as lower bounds; trend-based scenarios show a considerable tradeoff between historical average costs and worst-case costs. Another robustness concept is incremental hub chains, which provide solutions for every number of hubs to operate, such that they are robust under changes of this number. A comparison of incremental solutions with M-median solutions obtained with an LP-based search suggests that the price of being incremental is low for our instances. Finally, we investigate the problem of scheduling the maintenance of edges in a network, focusing on maintaining connectivity between two nodes over time. We show that the problem can be solved in polynomial time in arbitrary networks if preemption is allowed. If preemption is restricted to integral time points, the problem is NP-hard, and for the non-preemptive case we show strong non-approximability results.
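
    The robust planning step is only summarized above, but the reformulation behind minimizing worst-case costs over a finite scenario set is the standard epigraph trick: $\min_x \max_s c_s^{\top}x$ becomes $\min z$ subject to $c_s^{\top}x \le z$ for every scenario $s$. A minimal sketch with made-up scenario data (the two-route flow-share model is an illustrative assumption, not the thesis's MILP):

        import numpy as np
        from scipy.optimize import linprog

        scenarios = np.array([[4.0, 1.0],    # per-route costs, scenario 1
                              [1.0, 5.0],    # scenario 2
                              [3.0, 3.0]])   # scenario 3
        n = scenarios.shape[1]

        # variables: flow shares x_1..x_n on the routes, plus epigraph variable z
        c = np.r_[np.zeros(n), 1.0]                        # minimize z
        A_ub = np.c_[scenarios, -np.ones(len(scenarios))]  # c_s . x - z <= 0
        b_ub = np.zeros(len(scenarios))
        A_eq = [np.r_[np.ones(n), 0.0]]                    # shares sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * n + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        print(res.x[:n], res.x[-1])   # robust split and its worst-case cost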