Throughput Maximization in Multiprocessor Speed-Scaling
We are given a set of jobs that have to be executed on a set of
speed-scalable machines that can vary their speeds dynamically using the energy
model introduced in [Yao et al., FOCS'95]. Every job $j$ is characterized by
its release date $r_j$, its deadline $d_j$, its processing volume $p_{i,j}$ if $j$
is executed on machine $i$, and its weight $w_j$. We are also given a budget
of energy $E$ and our objective is to maximize the weighted throughput, i.e.
the total weight of jobs that are completed between their respective release
dates and deadlines. We propose a polynomial-time approximation algorithm where
the preemption of the jobs is allowed but not their migration. Our algorithm
uses a primal-dual approach on a linearized version of a convex program with
linear constraints. Furthermore, we present two optimal algorithms for the
non-preemptive case where the number of machines is bounded by a fixed
constant. More specifically, we consider: {\em (a)} the case of identical
processing volumes, i.e. $p_{i,j} = p$ for every machine $i$ and every job $j$, for which we
present a polynomial-time algorithm for the unweighted version, which becomes a
pseudopolynomial-time algorithm for the weighted throughput version, and {\em
(b)} the case of agreeable instances, i.e. instances for which $r_j \le r_{j'}$ if and only
if $d_j \le d_{j'}$, for which we present a pseudopolynomial-time algorithm. Both
algorithms are based on a discretization of the problem and the use of dynamic
programming.
Throughput Maximization in the Speed-Scaling Setting
We are given a set of jobs and a single processor that can vary its speed
dynamically. Each job $j$ is characterized by its processing requirement
(work) $p_j$, its release date $r_j$ and its deadline $d_j$. We are also given
a budget of energy $E$ and we study the scheduling problem of maximizing the
throughput (i.e. the number of jobs which are completed on time). We propose a
dynamic programming algorithm that solves the preemptive case of the problem,
i.e. when the execution of the jobs may be interrupted and resumed later, in
pseudo-polynomial time. Our algorithm can be adapted to solve the weighted
version of the problem, where every job $j$ is associated with a weight $w_j$ and
the objective is the maximization of the sum of the weights of the jobs that
are completed on time. Moreover, we provide a strongly polynomial-time
algorithm that solves the non-preemptive unweighted case when the jobs have the
same processing requirements. For the weighted case, our algorithm can be
adapted to solve the non-preemptive version of the problem in
pseudo-polynomial time. Comment: submitted to SODA 201
Speed-scaling with no Preemptions
We revisit the non-preemptive speed-scaling problem, in which a set of jobs
have to be executed on a single or a set of parallel speed-scalable
processor(s) between their release dates and deadlines so that the energy
consumption is minimized. We adopt the speed-scaling mechanism first
introduced in [Yao et al., FOCS 1995] according to which the power dissipated
is a convex function of the processor's speed. Intuitively, the higher the
speed of a processor, the higher the energy consumption. For the
single-processor case, we improve the best known approximation algorithm,
obtaining an approximation ratio expressed in terms of a generalization of the
Bell number. For the multiprocessor case, we present an approximation
algorithm that improves on the best known ratio. Notice that our
result holds for the fully heterogeneous environment while the previous known
result holds only in the more restricted case of parallel processors with
identical power functions.
New Results on Online Resource Minimization
We consider the online resource minimization problem in which jobs with hard
deadlines arrive online over time at their release dates. The task is to
determine a feasible schedule on a minimum number of machines. We rigorously
study this problem and derive various algorithms with small constant
competitive ratios for interesting restricted problem variants. As the most
important special case, we consider scheduling jobs with agreeable deadlines.
We provide the first constant ratio competitive algorithm for the
non-preemptive setting, which is of particular interest with regard to the
known strong lower bound of n for the general problem. For the preemptive
setting, we show that the natural algorithm LLF achieves a constant ratio for
agreeable jobs, while for general jobs it has a lower bound of Omega(n^(1/3)).
We also give an O(log n)-competitive algorithm for the general preemptive
problem, which improves upon the known O(p_max/p_min)-competitive algorithm.
Our algorithm maintains a dynamic partition of the job set into loose and tight
jobs and schedules each (temporal) subset individually on separate sets of
machines. The key is a characterization of how the decrease in the relative
laxity of jobs influences the optimum number of machines. To achieve this we
derive a compact expression of the optimum value, which might be of independent
interest. We complement the general algorithmic result by showing lower bounds
that rule out that other known algorithms may yield a similar performance
guarantee.
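The LLF rule analyzed above picks, at each moment, the released job whose laxity (deadline minus current time minus remaining work) is smallest. A minimal unit-time simulation sketch (illustrative only, not the paper's analysis):

```python
def llf_schedule(jobs, horizon):
    """Simulate least-laxity-first (LLF) on one machine in unit time steps.

    jobs: list of (release, deadline, work) tuples.
    Returns the set of job indices that complete by their deadlines.
    """
    remaining = [w for (_, _, w) in jobs]
    finished = set()
    for t in range(horizon):
        # Jobs that are released, unfinished, and whose deadline has not passed.
        ready = [i for i, (r, d, _) in enumerate(jobs)
                 if r <= t < d and remaining[i] > 0]
        if not ready:
            continue
        # Laxity = time until deadline minus remaining work; run the tightest job.
        i = min(ready, key=lambda i: jobs[i][1] - t - remaining[i])
        remaining[i] -= 1
        if remaining[i] == 0:
            finished.add(i)
    return finished

# Two jobs sharing release time 0: LLF serves the tighter job first,
# and both still finish on time.
print(llf_schedule([(0, 2, 1), (0, 4, 2)], horizon=4))
```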
Energy-Efficient Transaction Scheduling in Data Systems
Natural short term fluctuations in the load of transactional data systems present an opportunity for power savings. For example, a system handling 1000 requests per second on average can expect more than 1000 requests in some seconds, fewer in others. By quickly adjusting processing capacity to match such fluctuations, power consumption can be reduced. Many systems do this already, using dynamic voltage and frequency scaling (DVFS) to reduce processor performance and power consumption when the load is low.
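The capacity-matching idea in this paragraph can be sketched as a tiny frequency selector: pick the lowest processor frequency whose throughput capacity still covers the current load. The frequencies and the per-GHz capacity below are hypothetical numbers for illustration, not measurements from this dissertation:

```python
def pick_frequency(load_rps, freqs, capacity_per_ghz=500):
    """Return the lowest frequency (GHz) whose capacity covers the load.

    load_rps: current requests per second.
    freqs: available frequency steps in GHz (hypothetical values).
    capacity_per_ghz: assumed requests/second served per GHz.
    """
    for f in sorted(freqs):
        if f * capacity_per_ghz >= load_rps:
            return f
    return max(freqs)  # saturated: run as fast as possible

# Low load -> low frequency; high load -> top frequency.
print(pick_frequency(900, [1.0, 2.0, 3.0]))   # 2.0 GHz suffices
print(pick_frequency(1600, [1.0, 2.0, 3.0]))  # saturated at 3.0 GHz
```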
DVFS is typically controlled by frequency governors in the operating system or by the processor itself. The work presented in this dissertation shows that transactional data systems can manage DVFS more effectively than the underlying operating system. This is because data systems have more information about the workload, and more control over that workload, than is available to the operating system.
Our goal is to minimize power consumption while ensuring that transaction requests meet specified latency targets. We present energy-efficient scheduling algorithms and systems that manage CPU power consumption and performance within data systems. These algorithms are workload-aware and can accommodate concurrent workloads with different characteristics and latency budgets.
The first technique we present is called POLARIS. It directly manages processor DVFS and controls database transaction scheduling. We show that POLARIS can simultaneously reduce power consumption and reduce missed latency targets, relative to operating-system-based DVFS governors.
Second, we present PLASM, an energy-efficient scheduler that generalizes POLARIS to support multi-core, multi-processor systems. PLASM controls the distribution of requests to the processors, and it employs POLARIS to manage power consumption locally at each core. We show that PLASM can save power and reduce missed latency targets compared to generic routing techniques such as round-robin.
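As a point of reference, the generic round-robin routing that PLASM is compared against can be sketched in a few lines. This is a minimal illustration; `RoundRobinRouter` is a hypothetical name, not part of PLASM:

```python
from itertools import cycle

class RoundRobinRouter:
    """Minimal round-robin dispatcher: requests are spread evenly over
    cores, ignoring each request's characteristics and each core's load.
    This is the generic baseline that load- and power-aware schedulers
    are typically compared against."""

    def __init__(self, n_cores):
        self._cores = cycle(range(n_cores))

    def route(self, request):
        # The request itself is never inspected.
        return next(self._cores)

router = RoundRobinRouter(n_cores=4)
assignments = [router.route(req) for req in range(10)]
print(assignments)  # cores 0..3, repeating
```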
A general framework for handling commitment in online throughput maximization
We study a fundamental online job admission problem where jobs with deadlines
arrive online over time at their release dates, and the task is to determine a
preemptive single-server schedule which maximizes the number of jobs that
complete on time. To circumvent known impossibility results, we make a standard
slackness assumption by which the feasible time window for scheduling a job is
at least $1+\varepsilon$ times its processing time, for some $\varepsilon > 0$.
We quantify the impact that different provider commitment requirements have on
the performance of online algorithms. Our main contribution is one universal
algorithmic framework for online job admission both with and without
commitments. Without commitment, our algorithm achieves the best possible
(deterministic) competitive ratio for this problem. For
commitment models, we give the first non-trivial performance bounds. If the
commitment decisions must be made before a job's slack becomes less than a
$\delta$-fraction of its size, we prove a competitive ratio that depends on
both $\varepsilon$ and $\delta$, for $0 < \delta < \varepsilon$.
When a provider must commit upon starting a job, we prove a bound that depends
on $\varepsilon$ alone. Finally, we observe that for scheduling with commitment
the restriction to the `unweighted' throughput model is essential: if jobs have
individual weights, we rule out competitive deterministic algorithms.
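The slackness assumption above lends itself to a one-line admission check: a job is admissible only if its feasible window is at least $(1+\varepsilon)$ times its processing time. A minimal sketch with hypothetical helper names; the paper's algorithmic framework is far more involved:

```python
def has_slack(release, deadline, work, eps):
    """Slackness assumption: the window [release, deadline] must be at
    least (1 + eps) times the job's processing time."""
    return deadline - release >= (1 + eps) * work

def laxity(deadline, t, remaining):
    """Remaining slack of a partially processed job at time t."""
    return deadline - t - remaining

print(has_slack(0, 15, 10, eps=0.5))  # window 15 >= 1.5 * 10 -> True
print(has_slack(0, 14, 10, eps=0.5))  # window 14 < 15 -> False
```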
Energy and Temperature Management Policies in Computing Systems
Nowadays, the energy consumption and the heat dissipation of computing environments have emerged as crucial issues. Indeed, large data centers consume as much electricity as a city, while modern processors attain high temperatures, degrading their performance and decreasing their reliability. In this thesis, we study various energy- and temperature-aware scheduling problems and we focus on their complexity and approximability.
A dominant technique for saving energy is proper scheduling of the jobs through the operating system combined with appropriate scaling of the processor's speed. This technique is referred to as speed scaling in the literature. The theoretical study of speed scaling was initiated by Yao, Demers and Shenker (1995), who considered the single-processor problem of scheduling preemptively a set of jobs, each one specified by an amount of work, a release date and a deadline, so as to minimize the total energy consumption. In order to measure the energy consumption of a processor, the authors considered the well-known rule according to which the processor's power consumption is P(t) = s(t)^α at each time t, where s(t) is the processor's speed at t and α > 1 is a machine-dependent constant (usually α ∈ [2, 3]). Here, we study speed-scaling problems on a single processor, on homogeneous parallel processors, on heterogeneous environments and on shop environments. In most cases, the objective is the minimization of the energy, but we also address problems in which we are interested in capturing the trade-off between energy and performance.
We tackle speed-scaling problems through different approaches. For non-preemptive problems, we explore the idea of transforming optimal preemptive schedules to non-preemptive ones. Moreover, we exploit the fact that some problems can be formulated as convex programs, and we propose greedy algorithms that produce optimal solutions satisfying the KKT conditions, which are necessary and sufficient for optimality in convex programming. In the context of convex programming and KKT conditions, we also study the design of primal-dual algorithms. Additionally, we solve speed-scaling problems by formulating them as convex cost flow or minimum weighted bipartite matching problems. Finally, we elaborate on approximating energy minimization problems that can be formulated as integer configuration linear programs. We can obtain an approximate solution for such a problem by solving the fractional relaxation of an integer configuration linear program for it and applying randomized rounding.
In this thesis, we solve some new energy-aware scheduling problems and we improve the best-known algorithms for some other problems. For instance, we improve the best-known approximation algorithm for the single-processor non-preemptive energy minimization problem, which is strongly NP-hard. When α = 3, we decrease the approximation ratio from 2048 to 20. Furthermore, we propose a faster optimal combinatorial algorithm for the preemptive migratory energy minimization problem on power-homogeneous processors, while the best-known algorithm was based on solving linear programs. Last but not least, we improve the best-known approximation algorithm for the preemptive non-migratory energy minimization problem on power-homogeneous processors for fractional values of α. Our algorithm can be applied even in the more general case where the processors are heterogeneous and, for α_max = 2.5 (which is the maximum constant α among all processors), we get an improvement of the approximation ratio from 5 to 3.08.
In order to manage the thermal behavior of a computing device, we adopt the approach of Chrobak, Dürr, Hurand and Robert (2011). The main assumption is that some jobs are more CPU-intensive than others and more heat is generated during their execution. So, each job is associated with a heat contribution, which is the impact of the job on the processor's temperature. In this setting, we study the complexity and the approximability of multiprocessor scheduling problems where either there is a constraint on the processors' temperature and our aim is to optimize some performance metric, or the temperature is the optimization goal itself.
Energy consumption and temperature management have become crucial issues in computing systems. Indeed, a large data center consumes as much electricity as a city, and modern processors reach high temperatures, degrading their performance and reliability. In this thesis, we study various scheduling problems that take into account the energy consumption and the temperature of the processors, focusing on their complexity and approximability. To this end, we use the model of Yao et al. (1995) (the speed-scaling model) for energy management and the model of Chrobak et al. (2008) for temperature management.
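As a worked instance of the power rule P(t) = s(t)^α discussed above: running a job of work w at constant speed within a window of length L uses energy (w/L)^α * L = w^α / L^(α-1), and by convexity of the power function a constant speed is optimal for a single job. A small sketch (illustrative only):

```python
def min_energy(work, window, alpha=3.0):
    """Minimum energy to finish `work` units within a window of length
    `window` under power = speed**alpha: run at the constant speed
    work/window, so energy = (work/window)**alpha * window."""
    speed = work / window
    return speed ** alpha * window

# Doubling the window cuts energy by 2**(alpha - 1) = 4 for alpha = 3:
print(min_energy(4, 2))  # speed 2, power 8, energy 16.0
print(min_energy(4, 4))  # speed 1, power 1, energy 4.0
```

This convexity is exactly why slowing down saves energy, and why energy-minimization problems in this setting can be cast as convex programs.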
Modeling and Algorithmic Development for Selected Real-World Optimization Problems with Hard-to-Model Features
Mathematical optimization is a common tool for numerous real-world optimization problems.
However, in some application domains there is a scope for improvement of currently used optimization techniques.
For example, this is typically the case for applications that contain features which are difficult to model, and applications of interdisciplinary nature where no strong optimization knowledge is available.
The goal of this thesis is to demonstrate how to overcome these challenges by considering five problems from two application domains.
The first domain that we address is scheduling in Cloud computing systems, in which we investigate three selected problems.
First, we study scheduling problems where jobs are required to start immediately when they are submitted to the system.
This requirement is ubiquitous in Cloud computing but has not yet been addressed in mathematical scheduling.
Our main contributions are (a) providing the formal model, (b) the development of exact and efficient solution algorithms, and (c) proofs of correctness of the algorithms.
Second, we investigate the problem of energy-aware scheduling in Cloud data centers.
The objective is to assign computing tasks to machines such that the energy required to operate the data center, i.e., the energy required to operate computing devices plus the energy required to cool computing devices, is minimized.
Our main contributions are (a) the mathematical model, and (b) the development of efficient heuristics.
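One way such an assignment heuristic can work is greedy placement by marginal energy under a convex machine power curve. The following is an illustrative sketch under that assumption (ignoring cooling), not the thesis's actual heuristic:

```python
def greedy_assign(task_loads, n_machines, power=lambda u: u ** 2):
    """Greedily place each task on the machine where it adds the least
    energy, assuming machine power is convex in utilization.
    Illustrative heuristic only; the power curve is hypothetical."""
    loads = [0.0] * n_machines
    assignment = []
    for load in sorted(task_loads, reverse=True):  # place big tasks first
        # Marginal energy of adding this task to machine m.
        m = min(range(n_machines),
                key=lambda m: power(loads[m] + load) - power(loads[m]))
        loads[m] += load
        assignment.append(m)
    return assignment, loads

assignment, loads = greedy_assign([3, 3, 2, 2], n_machines=2)
print(loads)  # convexity pushes the greedy toward balance: [5.0, 5.0]
```

With a convex power curve, the cheapest marginal placement is the least-loaded machine, so the greedy naturally balances load.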
Third, we address the problem of evaluating scheduling algorithms in a realistic environment.
To this end, we develop an approach that supports mathematicians in evaluating scheduling algorithms through simulation with realistic instances.
Our main contributions are the development of (a) a formal model, and (b) efficient heuristics.
The second application domain considered is powerline routing.
We are given two points on a geographic area and respective terrain characteristics.
The objective is to find a ``good'' route (which depends on the terrain), connecting both points along which a powerline should be built.
Within this application domain, we study two selected problems.
First, we study a geometric shortest path problem, an abstract and simplified version of the powerline routing problem.
We introduce the concept of the k-neighborhood and contribute various analytical results.
Second, we investigate the actual powerline routing problem.
To this end, we develop algorithms that are built upon the theoretical insights obtained in the previous study.
Our main contributions are (a) the development of exact algorithms and efficient heuristics, and (b) a comprehensive evaluation through two real-world case studies.
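The grid abstraction behind the geometric shortest-path problem can be made concrete with Dijkstra's algorithm over terrain costs. This sketch uses a plain 8-neighborhood and a hypothetical entering-cell cost model; it does not implement the k-neighborhood machinery studied in the thesis:

```python
import heapq

def cheapest_route(cost, start, goal):
    """Dijkstra over a terrain-cost grid with an 8-neighborhood.
    cost[r][c] is the (hypothetical) price of entering cell (r, c);
    the start cell's own cost is not counted."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

terrain = [[1, 9, 1],
           [1, 9, 1],
           [1, 1, 1]]
# The route detours around the expensive middle column.
print(cheapest_route(terrain, (0, 0), (0, 2)))  # prints 4
```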
Some parts of the research presented in this thesis have been published in refereed publications [119], [110], [109].
Cloud Computing cost and energy optimization through Federated Cloud SoS
2017 Fall. Includes bibliographical references. The two most significant differentiators amongst contemporary Cloud Computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a system-architecture optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, our proposed approach creates an alternative paradigm for a Federated Cloud SoS. The proposed paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities for handling sudden variations in service demand as well as for maximizing usage of time-varying green energy supplies. Herein we analyze the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions. We suggest a physical architecture that simulates the primary SoS emergent behavior so as to diminish unwanted outcomes while encouraging desirable results. Finally, in our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means.
The report also analyzes optimal computing generation methods and optimal energy utilization for computing generation, as well as a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.