Reclaiming the energy of a schedule: models and algorithms
We consider a task graph to be executed on a set of processors. We assume
that the mapping is given, say by an ordered list of tasks to execute on each
processor, and we aim at optimizing the energy consumption while enforcing a
prescribed bound on the execution time. While it is not possible to change the
allocation of a task, it is possible to change its speed. Rather than using a
local approach such as backfilling, we consider the problem as a whole and
study the impact of several speed variation models on its complexity. For
continuous speeds, we give a closed-form formula for trees and series-parallel
graphs, and we cast the problem into a geometric programming problem for
general directed acyclic graphs. We show that the classical dynamic voltage and
frequency scaling (DVFS) model with discrete modes leads to an NP-complete
problem, even if the modes are regularly distributed (an important particular
case in practice, which we analyze as the incremental model). On the contrary,
the VDD-hopping model leads to a polynomial solution. Finally, we provide an
approximation algorithm for the incremental model, which we extend for the
general DVFS model.
Comment: A two-page extended abstract of this work appeared as a short
presentation at SPAA 2011, while the long version has been accepted for
publication in "Concurrency and Computation: Practice and Experience".
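For intuition, here is a minimal sketch of the continuous-speed case on a single
chain of tasks (a special case of the series-parallel graphs treated above),
assuming power s^alpha so that a task of work w run at speed s costs
w * s^(alpha - 1) energy; the function and parameter names are illustrative,
not the paper's notation.

```python
def chain_speeds(works, deadline, alpha=3.0):
    """Optimal continuous speed for a linear chain of tasks.

    Assumes power p(s) = s**alpha: a task of work w at speed s takes
    w / s time and costs w * s**(alpha - 1) energy.  Because
    s -> s**(alpha - 1) is convex for alpha > 1, a single uniform
    speed s* = total_work / deadline minimizes energy on a chain.
    """
    s = sum(works) / deadline
    energy = sum(w * s ** (alpha - 1) for w in works)
    return s, energy

# Example: three tasks of total work 6 under deadline 10 -> speed 0.6.
print(chain_speeds([2.0, 3.0, 1.0], 10.0))
```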
Parameterizing by the Number of Numbers
The usefulness of parameterized algorithmics has often depended on what
Niedermeier has called "the art of problem parameterization". In this paper we
introduce and explore a novel but general form of parameterization: the number
of numbers. Several classic numerical problems, such as Subset Sum, Partition,
3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with
Target Sums, have multisets of integers as input. We initiate the study of
parameterizing these problems by the number of distinct integers in the input.
We rely on an FPT result for Integer Linear Programming Feasibility (ILPF) to
show that all the above-mentioned problems
are fixed-parameter tractable when parameterized in this way. In various
applied settings, problem inputs often consist in part of multisets of integers
or multisets of weighted objects (such as edges in a graph, or jobs to be
scheduled). Such number-of-numbers parameterized problems often reduce to
subproblems about transition systems of various kinds, parameterized by the
size of the system description. We consider several core problems of this kind
relevant to number-of-numbers parameterization. Our main hardness result
considers the problem: given a non-deterministic Mealy machine M (a finite
state automaton outputting a letter on each transition), an input word x, and a
census requirement c for the output word specifying how many times each letter
of the output alphabet should be written, decide whether there exists a
computation of M reading x that outputs a word y that meets the requirement c.
We show that this problem is hard for W[1]. If the question is whether there
exists an input word x such that a computation of M on x outputs a word that
meets c, the problem becomes fixed-parameter tractable.
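To illustrate the parameterization itself (not the paper's ILPF machinery,
which invokes FPT algorithms for integer programs with few variables),
Partition with k distinct values v_i of multiplicities m_i asks for integers
0 <= x_i <= m_i whose weighted sum is half the total; the brute-force sketch
below is a hypothetical illustration of that reformulation.

```python
from collections import Counter
from itertools import product

def partition_by_distinct_values(numbers):
    """Decide Partition by searching over multiplicity vectors.

    With k distinct values v_i of multiplicity m_i, one side of a
    partition is described by integers 0 <= x_i <= m_i whose weighted
    sum is half the total.  This brute force enumerates all
    prod(m_i + 1) vectors; the paper instead solves such systems as
    integer linear programs, which are FPT in the number of variables.
    """
    counts = Counter(numbers)
    values, mults = zip(*counts.items())
    total = sum(v * m for v, m in zip(values, mults))
    if total % 2:
        return False
    target = total // 2
    return any(
        sum(x * v for x, v in zip(xs, values)) == target
        for xs in product(*(range(m + 1) for m in mults))
    )

print(partition_by_distinct_values([1, 5]))        # False: {1, 5} has no even split
print(partition_by_distinct_values([2, 2, 3, 3]))  # True: {2, 3} vs {2, 3}
```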
Stochastic Analysis of Power-Aware Scheduling
Energy consumption in a computer system can be reduced by dynamic speed scaling, which adapts the processing speed to the current load. This paper studies the optimal way to adjust speed to balance mean response time and mean energy consumption, when jobs arrive as a Poisson process and processor sharing scheduling is used. Both bounds and asymptotics for the optimal speeds are provided. Interestingly, a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, dynamic speed scaling which allocates a higher speed when more jobs are present significantly improves robustness to bursty traffic and mis-estimation of workload parameters.
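As a toy version of the static scheme mentioned above, one can pick the
busy-period speed numerically; the M/M/1-PS setting with unit-mean jobs, the
cubic power curve, and the energy weight beta are all assumptions for
illustration, not the paper's exact model.

```python
def static_speed(lam, beta=1.0, alpha=3.0, iters=200):
    """Choose a static busy-period speed s > lam for an M/M/1-PS queue.

    Assumed objective: mean response time 1/(s - lam) for unit-mean
    jobs, plus beta times energy per job P(s)/s with power P(s) = s**alpha.
    Both terms are convex in s, so ternary search finds the minimizer.
    """
    def cost(s):
        return 1.0 / (s - lam) + beta * s ** (alpha - 1.0)

    lo, hi = lam + 1e-9, lam + 100.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Example: arrival rate 1 job/s -> run a bit above 1 while busy, halt when idle.
print(static_speed(lam=1.0))
```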
Framework for sustainable TVET-Teacher Education Program in Malaysia Public Universities
Studies have noted that planning to improve the TVET system pays too little
attention to educational aspects such as teaching and learning. In the 21st
Century context, the current teaching paradigm of TVET educators has also been
reported to be failing and in need of a shift. These reported shortcomings
hinder the country from achieving the 5th strategy of the Strategic Plan for
Vocational Education Transformation, which is to transform the TVET system as a
whole. This study therefore aims to develop a framework for a sustainable TVET
Teacher Education (TVET-TE) program in Malaysia. The study adopted an
Exploratory Sequential Mixed-Methods design involving semi-structured
interviews (phase one) and a survey (phase two). Nine experts, chosen through
purposive sampling, took part in phase one; in phase two, 118 TVET-TE program
lecturers were selected as the survey sample through random sampling. After
thematic analysis of the phase-one data and Principal Component Analysis of
the phase-two data, eight domains and 22 elements were identified for the
framework for a sustainable TVET-TE program in Malaysia. The framework embeds
elements of 21st Century Education, thus filling the gap addressed by this
research. The findings also indicate that the developed framework is
unidimensional and valid for development and research on TVET-TE programs in
Malaysia. It is hoped that this research can guide the nation in producing
quality TVET teachers in the future.
Scheduling Algorithms for Procrastinators
This paper presents scheduling algorithms for procrastinators, where the
speed at which a procrastinator executes a job increases as the due date
approaches. We give optimal off-line scheduling policies for linearly
increasing speed functions. We then explain the computational/numerical issues
involved in implementing this policy. We next explore the online setting,
showing that there exist adversaries that force any online scheduling policy to
miss due dates. This impossibility result motivates the problem of minimizing
the maximum interval stretch of any job; the interval stretch of a job is the
job's flow time divided by the difference between its due date and its release
time. We show that
several common scheduling strategies, including the "hit-the-highest-nail"
strategy beloved by procrastinators, have arbitrarily large maximum interval
stretch. Then we give the "thrashing" scheduling policy and show that it is a
\Theta(1)-approximation algorithm for the maximum interval stretch.
Comment: 12 pages, 3 figures.
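To make the metric concrete, a hypothetical helper computing the interval
stretch directly from the definition above (the argument names are
illustrative):

```python
def interval_stretch(release, due, completion):
    """Interval stretch = flow time / (due date - release time).

    Finishing exactly at the due date gives stretch 1; finishing
    after the due date pushes the stretch above 1.
    """
    return (completion - release) / (due - release)

# A job released at t=0, due at t=10, finished at t=15 has stretch 1.5.
print(interval_stretch(0.0, 10.0, 15.0))
```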
Reducing Electricity Demand Charge for Data Centers with Partial Execution
Data centers consume a large amount of energy and incur substantial
electricity cost. In this paper, we study the familiar problem of reducing data
center energy cost with two new perspectives. First, we find, through an
empirical study of contracts from electric utilities powering Google data
centers, that demand charge per kW for the maximum power used is a major
component of the total cost. Second, many services such as Web search tolerate
partial execution of the requests because the response quality is a concave
function of processing time. Data from the Microsoft Bing search engine
confirms this observation.
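To see why the demand charge matters, consider a hypothetical monthly bill with
an energy component and a per-kW demand component; the rates below are made-up
illustrations, not figures from the contracts studied.

```python
def monthly_bill(samples_kw, hours_per_sample=1.0,
                 energy_rate=0.05, demand_rate=12.0):
    """Bill = energy charge ($/kWh) + demand charge ($/kW on the peak).

    Shaving the single highest power sample cuts the demand charge
    directly, which is exactly what partial execution targets.
    """
    energy_kwh = sum(p * hours_per_sample for p in samples_kw)
    peak_kw = max(samples_kw)
    return energy_rate * energy_kwh + demand_rate * peak_kw

# A flat 100 kW month vs. roughly the same energy with one 150 kW spike.
print(monthly_bill([100.0] * 720))             # 3600 + 1200 = 4800
print(monthly_bill([99.93] * 719 + [150.0]))   # ~3600 + 1800 ~= 5400
```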
We propose a simple idea of using partial execution to reduce the peak power
demand and energy cost of data centers. We systematically study the problem of
scheduling partial execution with stringent SLAs on response quality. For a
single data center, we derive an optimal algorithm to solve the workload
scheduling problem. In the case of multiple geo-distributed data centers, the
demand of each data center is controlled by the request routing algorithm,
which makes the problem much more involved. We decouple the two aspects, and
develop a distributed optimization algorithm to solve the large-scale request
routing problem. Trace-driven simulations show that partial execution reduces
cost both for a single data center and, combined with request routing, for
geo-distributed data centers.
Comment: 12 pages.
Lattice QCD Thermodynamics on the Grid
We describe how we simultaneously used nodes of the
EGEE Grid, accumulating ca. 300 CPU-years in 2-3 months, to determine an
important property of Quantum Chromodynamics. We explain how Grid resources
were exploited efficiently and with ease, using a user-level overlay based on
the Ganga and DIANE tools on top of the standard Grid software stack.
Application-specific scheduling and resource selection based on simple but
powerful heuristics allowed us to improve the efficiency of the processing and
to obtain the desired scientific results by a specified deadline. This is also
a demonstration of the combined use
of supercomputers, to calculate the initial state of the QCD system, and Grids,
to perform the subsequent massively distributed simulations. The QCD simulation
was performed on a lattice. Keeping the strange quark mass at
its physical value, we reduced the masses of the up and down quarks until,
under an increase of temperature, the system underwent a second-order phase
transition to a quark-gluon plasma. Then we measured the response of this
system to an increase in the quark density. We find that the transition is
smoothed rather than sharpened. If confirmed on a finer lattice, this finding
makes it unlikely that ongoing experimental searches will find a QCD critical
point at small chemical potential.
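The scheduling heuristics are not spelled out in the abstract; the pull-based
master-worker loop below is a hypothetical sketch in the spirit of the
Ganga/DIANE overlay (the throughput-ranking rule and all names are invented
for illustration).

```python
import heapq

def assign_tasks(tasks, workers, throughput):
    """Pull-style assignment: prefer workers with the best observed rate.

    throughput: dict worker -> tasks/hour measured so far; an invented
    stand-in for the resource-selection heuristics the abstract alludes to.
    """
    # Max-heap on past throughput (negated for heapq's min-heap order).
    ready = [(-throughput.get(w, 0.0), w) for w in workers]
    heapq.heapify(ready)
    assignment = {}
    for task in tasks:
        if not ready:
            break
        _, worker = heapq.heappop(ready)   # fastest idle worker first
        assignment[task] = worker
    return assignment

print(assign_tasks(["lat1", "lat2"], ["ce-a", "ce-b", "ce-c"],
                   {"ce-a": 3.5, "ce-b": 9.0, "ce-c": 1.2}))
# {'lat1': 'ce-b', 'lat2': 'ce-a'}
```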