4,718 research outputs found
Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach
In this paper, we study the k-forest problem in the model of resource
augmentation. In the k-forest problem, given an edge-weighted graph G = (V, E),
a parameter k, and a set of m demand pairs ⊆ V × V, the
objective is to construct a minimum-cost subgraph that connects at least k
demands. The problem is hard to approximate: the best-known approximation
ratio is O(min{√n, √k}). Furthermore, k-forest is as hard to
approximate as the notoriously hard densest k-subgraph problem.
While the -forest problem is hard to approximate in the worst-case, we
show that with the use of resource augmentation, we can efficiently approximate
it up to a constant factor.
First, we restate the problem in terms of the number of demands that are
not connected. In particular, the objective of the k-forest problem can be
viewed as removing at most m − k demands and finding a minimum-cost subgraph
that connects the remaining demands. We use this perspective of the problem to
explain the performance of our algorithm (in terms of the augmentation) in a
more intuitive way.
Specifically, we present a polynomial-time algorithm for the k-forest
problem that, for every ε > 0, removes at most (1 + ε)(m − k) demands and has
cost no more than O(1/ε²) times the cost of an optimal algorithm
that removes at most m − k demands.
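The restated objective hinges on counting how many demand pairs a candidate subgraph actually connects. A minimal sketch of that subroutine using union-find (the names and the tiny example are illustrative, not taken from the paper):

```python
# Count how many demand pairs (s, t) lie in the same component
# of the subgraph induced by a chosen edge set.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def connected_demands(n, edges, demands):
    """Number of demand pairs joined by the chosen edge set."""
    uf = UnionFind(n)
    for u, v in edges:
        uf.union(u, v)
    return sum(1 for s, t in demands if uf.find(s) == uf.find(t))

# Example: the path 0-1-2 connects demands (0, 2) and (1, 2) but not (0, 3).
print(connected_demands(4, [(0, 1), (1, 2)], [(0, 2), (1, 2), (0, 3)]))  # 2
```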
Models and algorithms for energy-efficient scheduling with immediate start of jobs
We study a scheduling model with speed scaling for machines and the immediate start requirement for jobs. Speed scaling improves the system performance, but incurs an energy cost. The immediate start condition implies that each job should be started exactly at its release time. Such a condition is typical for modern Cloud computing systems with abundant resources. We consider two cost functions, one that represents the quality of service and the other that corresponds to the cost of running. We demonstrate that the basic scheduling model to minimize the aggregated cost function with n jobs is solvable in O(n log n) time in the single-machine case and in O(n²m) time in the case of m parallel machines. We also address additional features, e.g., the cost of job rejection or the cost of initiating a machine. In the case of a single machine, we present algorithms for minimizing one of the cost functions subject to an upper bound on the value of the other, as well as for finding a Pareto-optimal solution.
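To make the aggregated objective concrete, here is an illustrative sketch (not the paper's algorithm) of the cost of a single-machine schedule in which each job starts exactly at its release time and runs at its own constant speed. Energy follows the usual power function s**alpha, and the quality-of-service term is taken here to be total completion time; all names and the choice of QoS term are assumptions:

```python
# Cost of an immediate-start schedule: each job begins at its release
# time; the machine must be free by then (jobs sorted by release time).

def schedule_cost(jobs, speeds, alpha=2.0):
    """jobs: list of (release, size); speeds: chosen speed per job."""
    total = 0.0
    prev_finish = 0.0
    for (r, p), s in zip(jobs, speeds):
        assert r >= prev_finish, "immediate start: machine must be free at release"
        finish = r + p / s
        prev_finish = finish
        energy = (s ** alpha) * (p / s)   # power * processing time
        total += finish + energy          # QoS term + energy term
    return total

# Two jobs at unit speed: completions 2 and 4, energies 2 and 1.
print(schedule_cost([(0, 2), (3, 1)], [1.0, 1.0]))  # 9.0
```

Raising a job's speed shrinks its completion time but inflates the energy term, which is exactly the trade-off the aggregated cost function balances.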
Multiprocessor speed scaling for jobs with arbitrary sizes and deadlines
In this paper we study energy-efficient deadline scheduling on multiprocessors, in which each processor consumes power at a rate of s^α when running at speed s, where α ≥ 2. The problem is to dispatch jobs to processors and determine the speed and jobs to run for each processor so as to complete all jobs by their deadlines using the minimum energy. The problem has been well studied for the single processor case. For the multiprocessor setting, constant competitive online algorithms for special cases of unit size jobs or arbitrary size jobs with agreeable deadlines have been proposed by Albers et al. (2007). A randomized algorithm has been proposed for jobs of arbitrary sizes and arbitrary deadlines by Greiner et al. (2009). We propose a deterministic online algorithm for the general setting and show that it is O(log^α P)-competitive, where P is the ratio of the maximum and minimum job size.
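For a single job in this power model, convexity of s^α implies that running at one constant speed is energy-optimal, so the cheapest feasible speed is size divided by the available window. A small sketch (function name is ours, not from the paper):

```python
# Minimum energy to finish one job by its deadline under power s**alpha:
# by convexity, a constant speed is optimal, and the slowest feasible
# constant speed is size / (deadline - release).

def min_energy_single_job(release, deadline, size, alpha=2.0):
    speed = size / (deadline - release)   # slowest feasible constant speed
    time = size / speed                   # equals deadline - release
    return (speed ** alpha) * time        # energy = power * duration

# Size 4 in a window of length 2 forces speed 2: energy = 2**2 * 2 = 8.
print(min_energy_single_job(0, 2, 4))  # 8.0
```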
Online Non-Preemptive Scheduling to Minimize Maximum Weighted Flow-Time on Related Machines
We consider the problem of scheduling jobs to minimize the maximum weighted flow-time on a set of related machines. When jobs can be preempted, this problem is well understood; for example, there exists a constant-competitive algorithm using speed augmentation. When jobs must be scheduled non-preemptively, only hardness results are known. In this paper, we present the first online guarantees for the non-preemptive variant: a constant competitive algorithm for minimizing the maximum weighted flow-time on related machines, obtained by relaxing the problem and assuming that the online algorithm can reject a small fraction of the total weight of jobs. This is essentially the best result possible, given the strong lower bounds on the non-preemptive problem without rejection.
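The objective in question is standard: a job's weighted flow-time is its weight times the time it spends in the system (completion minus release), and the schedule is judged by the maximum over all jobs. A minimal sketch with hypothetical field names:

```python
# Maximum weighted flow-time of a completed schedule.

def max_weighted_flow_time(jobs):
    """jobs: list of (release, completion, weight)."""
    return max(w * (c - r) for r, c, w in jobs)

# A heavy job with a short stay can still dominate a light job that waits long.
print(max_weighted_flow_time([(0, 10, 1), (2, 5, 4)]))  # 12
```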
Rejecting Jobs to Minimize Load and Maximum Flow-time
Online algorithms are usually analyzed using the notion of competitive ratio,
which compares the solution obtained by the algorithm to the optimal offline
solution for the worst possible input sequence. Often this measure
turns out to be too pessimistic, and one popular approach, especially for
scheduling problems, has been that of "resource augmentation", first
proposed by Kalyanasundaram and Pruhs. Although resource augmentation has been
very successful in dealing with a variety of objective functions, there are
problems for which even an arbitrarily large constant speedup cannot lead to a
constant-competitive algorithm. In this paper we propose a "rejection model"
which requires no resource augmentation but which permits the online algorithm
to not serve an ε-fraction of the requests.
The problems considered in this paper are in the restricted assignment
setting, where each job can be assigned only to a subset of machines. For the
load balancing problem, where the objective is to minimize the maximum load on
any machine, we give an O(log²(1/ε))-competitive algorithm which rejects at
most an ε-fraction of the jobs. For the problem of minimizing the maximum
weighted flow-time, we give an O(1/ε⁴)-competitive algorithm which can
reject at most an ε-fraction of the jobs by weight. We also extend this
result to a more general setting where the weight of a job for measuring its
weighted flow-time and its contribution towards the total allowed rejection
weight are different. This is useful, for instance, when we consider the
objective of minimizing the maximum stretch. We obtain an O(1/ε⁶)-competitive
algorithm in this case.
Our algorithms are immediate dispatch, though they may not be immediate
reject. All these problems have very strong lower bounds in the speed
augmentation model.
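A toy illustration of the rejection idea in the restricted assignment setting (not the paper's algorithm): a greedy dispatcher that rejects a job whenever every allowed machine would exceed a load threshold, and reports which jobs it rejected. The threshold rule and all names are our assumptions:

```python
# Greedy restricted assignment with rejection: each job may only go to
# its allowed machines; if even the least-loaded allowed machine would
# exceed the threshold, the job is rejected instead of assigned.

def greedy_with_rejection(num_machines, jobs, threshold):
    """jobs: list of (size, allowed_machines). Returns (loads, rejected)."""
    loads = [0.0] * num_machines
    rejected = []
    for size, allowed in jobs:
        best = min(allowed, key=lambda m: loads[m])
        if loads[best] + size > threshold:
            rejected.append((size, allowed))
        else:
            loads[best] += size
    return loads, rejected

# The second size-3 job is restricted to machine 0 and would overload it.
loads, rejected = greedy_with_rejection(2, [(3, [0]), (3, [0]), (2, [0, 1])], 4)
print(loads, len(rejected))  # [3.0, 2.0] 1
```

The point of the model is that dropping this small rejected fraction is what makes a bounded maximum load achievable at all, since without rejection the restricted job set forces an unbounded load.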
Performance of a linear robust control strategy on a nonlinear model of spatially developing flows
This paper investigates the control of self-excited oscillations in spatially developing flow systems such as jets and wakes using H∞ control theory on a complex Ginzburg-Landau (CGL) model. The coefficients used in this one-dimensional equation, which serves as a simple model of the evolution of hydrodynamic instability waves, are those selected by Roussopoulos & Monkewitz (Physica D 1996, vol. 97, p. 264) to model the behaviour of the near-wake of a circular cylinder. Based on noisy measurements at a point sensor typically located inside the cylinder wake, the compensator uses a linear H∞ filter based on the CGL model to construct a state estimate. This estimate is then used to compute linear H∞ control feedback at a point actuator location, which is typically located upstream of the sensor. The goal of the control scheme is to stabilize the system by minimizing a weighted average of the 'system response' and the 'control effort' while rigorously bounding the response of the controlled linear system to external disturbances. The application of such modern control and estimation rules stabilizes the linear CGL system at Reynolds numbers far above the critical Reynolds number Re_c ≈ 47 at which linear global instability appears in the uncontrolled system. In so doing, many unstable modes of the uncontrolled CGL system are linearly stabilized by the single actuator/sensor pair and the model-based feedback control strategy. Further, the linear performance of the closed-loop system, in terms of the relevant transfer function norms quantifying the linear response of the controlled system to external disturbances, is substantially improved beyond that possible with the simple proportional measurement feedback proposed in previous studies. Above Re ≈ 84, the H∞ control designs significantly outperform the corresponding H2 control designs in terms of their ability to stabilize the CGL system in the presence of worst-case disturbances.
The extension of these control and estimation rules to the nonlinear CGL system on its attractor (a simple limit cycle) stabilizes the full nonlinear system back to the stationary state at Reynolds numbers up to Re ≈ 97 using a single actuator/sensor pair, fixed-gain linear feedback and an extended Kalman filter incorporating the system nonlinearity. © 2004 Cambridge University Press
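For readers unfamiliar with the model, the linear CGL equation is a one-dimensional advection-diffusion-growth equation with complex coefficients. A minimal numerical sketch (forward Euler, central differences, periodic boundary); the coefficient values below are placeholders, not those of Roussopoulos & Monkewitz:

```python
# One explicit time step of the linear complex Ginzburg-Landau equation
#   dA/dt = -U dA/dx + mu A + gamma d2A/dx2
# on a periodic grid, with complex amplitude A and complex diffusion gamma.

def cgl_step(A, dx, dt, U=2.0, mu=0.1, gamma=1.0 - 1.0j):
    n = len(A)
    out = []
    for i in range(n):
        left, right = A[(i - 1) % n], A[(i + 1) % n]   # periodic boundary
        dAdx = (right - left) / (2 * dx)               # central difference
        d2A = (right - 2 * A[i] + left) / dx ** 2
        out.append(A[i] + dt * (-U * dAdx + mu * A[i] + gamma * d2A))
    return out
```

With mu > 0 the uniform state grows, which is the instability a feedback controller of the kind described above must suppress; the time step must be small for this explicit scheme to remain stable.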
Voltage Constrained Heavy Duty Vehicle Electrification: Formulation and Case Study
The electrification of heavy-duty vehicles (HDEVs) is a rapidly emerging
avenue for decarbonization of energy and transportation sectors. Compared to
light duty vehicles, HDEVs exhibit unique travel and charging patterns over
long distances. In this paper, we formulate an analytically tractable model
that considers the routing decisions for the HDEVs and their charging
implications on the power grid. Our model captures the impacts of increased
vehicle electrification on the transmission grid, with particular focus on
HDEVs. We jointly model transportation and power networks coupling them through
the demand generated for charging requirements of HDEVs. In particular, the
voltage constraint violation is explicitly accounted for in the proposed model
given the signifcant amount of charging power imposed by HDEVs. We obtain
optimal routing schedules and generator dispatch satisfying mobility
constraints of HDEVs while minimizing voltage violations in electric
transmission network. Case study based on an IEEE 24-bus system is presented
using realistic data of transit data of HDEVs. The numerical results suggest
that the proposed model and algorithm effectively mitigate the voltage
violation when a significant amount of HDEVs are integrated to the power
transmission network. Such mitigation includes reduction in the voltage
magnitude, geographical dispersion of voltage violations and worst-case voltage
violations at critical nodes.Comment: Accepted at CDC 202
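As a purely illustrative sketch of the kind of check involved, the following linearizes the effect of added charging load on bus voltages through a voltage-sensitivity matrix and flags undervoltage buses. The sensitivity model, the 0.95 p.u. limit as a default, and all names are our assumptions; a real transmission study would use a full power-flow model:

```python
# Linearized undervoltage check: each MW of charging at bus j lowers the
# voltage at bus i by sensitivity[i][j] p.u.; flag buses below v_min.

def voltage_violations(v0, sensitivity, charging, v_min=0.95):
    """v0[i]: base voltage (p.u.); charging[j]: added load (MW) at bus j."""
    violations = {}
    for i, v in enumerate(v0):
        drop = sum(sensitivity[i][j] * p for j, p in enumerate(charging))
        v_new = v - drop
        if v_new < v_min:
            violations[i] = v_min - v_new   # violation magnitude in p.u.
    return violations

# Bus 1 starts near the limit, so 10 MW of charging pushes it below 0.95.
v = voltage_violations([1.0, 0.96], [[0.001, 0.0], [0.0, 0.002]], [10, 10])
print(v)  # {1: 0.01}
```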
Modeling and Algorithmic Development for Selected Real-World Optimization Problems with Hard-to-Model Features
Mathematical optimization is a common tool for numerous real-world optimization problems.
However, in some application domains there is a scope for improvement of currently used optimization techniques.
For example, this is typically the case for applications that contain features which are difficult to model, and applications of interdisciplinary nature where no strong optimization knowledge is available.
The goal of this thesis is to demonstrate how to overcome these challenges by considering five problems from two application domains.
The first domain that we address is scheduling in Cloud computing systems, in which we investigate three selected problems.
First, we study scheduling problems where jobs are required to start immediately when they are submitted to the system.
This requirement is ubiquitous in Cloud computing but has not yet been addressed in mathematical scheduling.
Our main contributions are (a) providing the formal model, (b) the development of exact and efficient solution algorithms, and (c) proofs of correctness of the algorithms.
Second, we investigate the problem of energy-aware scheduling in Cloud data centers.
The objective is to assign computing tasks to machines such that the energy required to operate the data center, i.e., the energy required to operate computing devices plus the energy required to cool computing devices, is minimized.
Our main contributions are (a) the mathematical model, and (b) the development of efficient heuristics.
Third, we address the problem of evaluating scheduling algorithms in a realistic environment.
To this end we develop an approach that supports mathematicians in evaluating scheduling algorithms through simulation with realistic instances.
Our main contributions are the development of (a) a formal model, and (b) efficient heuristics.
The second application domain considered is powerline routing.
We are given two points on a geographic area and respective terrain characteristics.
The objective is to find a "good" route (where "good" depends on the terrain) connecting the two points, along which a powerline should be built.
Within this application domain, we study two selected problems.
First, we study a geometric shortest path problem, an abstract and simplified version of the powerline routing problem.
We introduce the concept of the k-neighborhood and contribute various analytical results.
Second, we investigate the actual powerline routing problem.
To this end, we develop algorithms that are built upon the theoretical insights obtained in the previous study.
Our main contributions are (a) the development of exact algorithms and efficient heuristics, and (b) a comprehensive evaluation through two real-world case studies.
Some parts of the research presented in this thesis have been published in refereed publications [119], [110], [109]
A new hybrid meta-heuristic algorithm for solving single machine scheduling problems
A dissertation submitted in partial fulfilment of the
degree of Master of Science in Engineering (Electrical) (50/50)
in the
Faculty of Engineering and the Built Environment
Department of Electrical and Information Engineering
May 2017
Numerous applications in a wide variety of fields have resulted in a rich history of research
into optimisation for scheduling. Although it is a fundamental form of the problem, the
single machine scheduling problem with two or more objectives is known to be NP-hard.
For this reason we consider the single machine problem a good test bed for solution
algorithms. While there is a plethora of research into various aspects of scheduling
problems, little has been done in evaluating the performance of the Simulated Annealing
algorithm for the fundamental problem, or using it in combination with other techniques.
Specifically, this has not been done for minimising total weighted earliness and tardiness,
which is the optimisation objective of this work.
If we consider a mere ten jobs for scheduling, this results in over 3.6 million possible
solution schedules. It is thus of definite practical necessity to reduce the search space in
order to find an optimal or acceptable suboptimal solution in a shorter time, especially
when scaling up the problem size. This is of particular importance in the application
area of packet scheduling in wireless communications networks, where the tolerance for
computational delays is very low. The main contribution of this work is to investigate
the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo
methods before running the Simulated Annealing algorithm on the pruned search space
can result in overall reduced running times.
The search space is divided into a number of sections and Metropolis-Hastings Markov
Chain Monte Carlo is performed over the sections in order to reduce the search space for
Simulated Annealing by a factor of 20 to 100. Trade-offs are found between the run time
and number of sections of the pre-sampling algorithm, and the run time of Simulated
Annealing, for minimising the percentage deviation of the final result from the optimal
solution cost. Algorithm performance is determined both by computational complexity
and the quality of the solution (i.e. the percentage deviation from the optimal). We
find that the running time can be reduced by a factor of 4.5 to ensure a 2% deviation
from the optimal, as compared to the basic Simulated Annealing algorithm on the full
search space. More importantly, we are able to reduce the complexity of finding the
optimal from O(n·n!) for a complete search, to O(n·N_S) for Simulated Annealing, to
O(n(N_M·r + N_S) + m) for the input variables n jobs, N_S SA iterations, N_M Metropolis-Hastings
iterations, r inner samples and m sections.
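The baseline that the pre-sampling step accelerates can be sketched compactly: plain Simulated Annealing over job orders for total weighted earliness-tardiness on a single machine. The swap neighborhood, the cooling schedule and all parameter values below are illustrative choices, not the dissertation's exact configuration:

```python
import math
import random

def weighted_et(order, jobs):
    """Total weighted earliness-tardiness of a job order.
    jobs[j] = (processing_time, due_date, w_early, w_tardy)."""
    t, cost = 0, 0.0
    for j in order:
        p, d, we, wt = jobs[j]
        t += p
        cost += we * max(d - t, 0) + wt * max(t - d, 0)
    return cost

def simulated_annealing(jobs, iters=5000, temp=10.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    order = list(range(len(jobs)))
    best = cur = weighted_et(order, jobs)
    best_order = order[:]
    for _ in range(iters):
        i, j = rng.sample(range(len(jobs)), 2)       # swap neighborhood
        order[i], order[j] = order[j], order[i]
        cand = weighted_et(order, jobs)
        # Accept improvements always, worsenings with Boltzmann probability.
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        temp *= cooling                              # geometric cooling
    return best_order, best
```

The pre-sampling idea described above would run Metropolis-Hastings over sections of the permutation space first, and hand SA only the most promising section instead of the full n! orders.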