Cardinality Constrained Scheduling in Online Models
Makespan minimization on parallel identical machines is a classical and
intensively studied problem in scheduling, and a classic example for online
algorithm analysis with Graham's famous list scheduling algorithm dating back
to the 1960s. In this problem, jobs arrive over a list and upon an arrival, the
algorithm needs to assign the job to a machine. The goal is to minimize the
makespan, that is, the maximum machine load. In this paper, we consider the
variant with an additional cardinality constraint: the algorithm may assign at
most k jobs to each machine, where k is part of the input. While the offline
(strongly NP-hard) variant of cardinality constrained scheduling is well
understood and an EPTAS exists here, no non-trivial results are known for the
online variant. We fill this gap by making a comprehensive study of various
different online models. First, we show that there is a constant competitive
algorithm for the problem and, further, present a lower bound on the
competitive ratio of any online algorithm. Motivated by the lower bound, we
consider a semi-online variant where, upon arrival of a job of size p, we are
allowed to migrate jobs of total size at most a constant times p. This
constant is called the migration factor of the algorithm. Algorithms with small
migration factors are a common approach to bridge the performance of online
algorithms and offline algorithms. One can obtain algorithms with a constant
migration factor by rounding the size of each incoming job and then applying an
ordinal algorithm to the resulting rounded instance. With this in mind, we also
consider the framework of ordinal algorithms and characterize the competitive
ratio that can be achieved using the aforementioned approaches.Comment: An extended abstract will appear in the proceedings of STACS'2
Optimal Data Collection For Informative Rankings Expose Well-Connected Graphs
Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games.
Comment: 31 pages, 10 figures, 3 tables
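The least-squares ranking step described above can be sketched as follows (a HodgeRank-style fit, not the authors' data-collection procedure; the arc encoding and function name are illustrative). Each arc (i, j, y) records "j is preferred to i by margin y", and we fit a potential r on the vertices so that r[j] - r[i] matches y in the least-squares sense.

```python
# Minimal sketch of least-squares statistical ranking on a comparison
# graph. The Fisher information of this estimator is governed by the
# graph Laplacian L = B.T @ B, which is why the paper's data-collection
# problem reduces to finding graphs with large algebraic connectivity.
import numpy as np

def ls_rank(n, arcs):
    """Fit vertex potentials to pairwise comparisons (i, j, margin)."""
    B = np.zeros((len(arcs), n))  # arc-vertex incidence matrix
    y = np.zeros(len(arcs))
    for a, (i, j, margin) in enumerate(arcs):
        B[a, i], B[a, j] = -1.0, 1.0
        y[a] = margin
    r, *_ = np.linalg.lstsq(B, y, rcond=None)
    return r - r.mean()  # potentials are defined only up to a shift
```

On consistent data such as `[(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]` the fit is exact, returning potentials [-1, 0, 1] after centering.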
Separable Convex Optimization with Nested Lower and Upper Constraints
We study a convex resource allocation problem in which lower and upper bounds
are imposed on partial sums of allocations. This model is linked to a large
range of applications, including production planning, speed optimization,
stratified sampling, support vector machines, portfolio management, and
telecommunications. We propose an efficient gradient-free divide-and-conquer
algorithm, which uses monotonicity arguments to generate valid bounds from the
recursive calls, and eliminate linking constraints based on the information
from sub-problems. This algorithm does not need strict convexity or
differentiability. It produces an ε-approximate solution for the
continuous problem in O(n log m log(B/ε)) time
and an integer solution in O(n log m log B) time, where n is
the number of decision variables, m is the number of constraints, and B is
the resource bound. A complexity of O(n log m) is also achieved
for the linear and quadratic cases. These are the best complexities known to
date for this important problem class. Our experimental analyses confirm the
good performance of the method, which produces optimal solutions for problems
with up to 1,000,000 variables in a few seconds. Promising applications to the
support vector ordinal regression problem are also investigated.
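As a toy illustration of the problem class (not the paper's divide-and-conquer method), the classical greedy works for the integer separable convex problem with a single resource constraint sum(x) = B and no nested bounds: each resource unit goes where the marginal cost increase is smallest, and convexity of each f_i makes this exchange argument optimal.

```python
# Greedy marginal allocation for min sum_i f_i(x_i) s.t. sum_i x_i = B,
# x_i nonnegative integers, each f_i convex. A heap keeps the cheapest
# next unit; the paper's algorithm additionally handles nested lower
# and upper bounds on partial sums, which this sketch omits.
import heapq

def greedy_allocate(fs, B):
    n = len(fs)
    x = [0] * n
    # Heap entries: (marginal cost of the next unit for i, i).
    heap = [(fs[i](1) - fs[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(B):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (fs[i](x[i] + 1) - fs[i](x[i]), i))
    return x
```

For example, minimizing x0^2 + 2*x1^2 subject to x0 + x1 = 3 gives the allocation [2, 1] with cost 6.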
On Neighborhood Tree Search
We consider the neighborhood tree induced by alternating the use of different
neighborhood structures within a local search descent. We investigate the issue
of designing a search strategy operating at the neighborhood tree level by
exploring different paths of the tree in a heuristic way. We show that allowing
the search to 'backtrack' to a previously visited solution and resuming the
iterative variable neighborhood descent by 'pruning' the already explored
neighborhood branches leads to the design of effective and efficient search
heuristics. We describe this idea by discussing its basic design components
within a generic algorithmic scheme and we propose some simple and intuitive
strategies to guide the search when traversing the neighborhood tree. We
conduct a thorough experimental analysis of this approach by considering two
different problem domains, namely, the Single Machine Total Weighted Tardiness
Problem (SMTWTP) and the more sophisticated Location Routing Problem (LRP). We show
that independently of the considered domain, the approach is highly
competitive. In particular, we show that using different branching and
backtracking strategies when exploring the neighborhood tree allows us to
achieve different trade-offs in terms of solution quality and computing cost.
Comment: Genetic and Evolutionary Computation Conference (GECCO'12) (2012)
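The basic variable neighborhood descent that the neighborhood tree generalizes can be sketched as below; the paper's contribution, branching over neighborhood orderings with backtracking and pruning, is omitted here, and the two permutation neighborhoods are illustrative.

```python
# Minimal sketch of variable neighborhood descent (VND): scan
# neighborhoods in order, restart from the first one whenever an
# improving move is found, and stop when no neighborhood improves.

def swap_neighbors(s):
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            t = list(s)
            t[i], t[j] = t[j], t[i]
            yield t

def insert_neighbors(s):
    for i in range(len(s)):
        for j in range(len(s)):
            if i != j:
                t = list(s)
                t.insert(j, t.pop(i))
                yield t

def vnd(s, cost, neighborhoods):
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](s), key=cost)
        if cost(best) < cost(s):
            s, k = best, 0   # improvement: restart from first neighborhood
        else:
            k += 1           # no improvement: try the next neighborhood
    return s
```

With a toy displacement cost, `vnd([2, 0, 1], cost, [swap_neighbors, insert_neighbors])` descends to the identity permutation [0, 1, 2].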
Toward Robust Manufacturing Scheduling: Stochastic Job-Shop Scheduling
Manufacturing plays a significant role in promoting economic development,
production, exports, and job creation, which ultimately contribute to improving
the quality of life. The presence of manufacturing defects is, however,
inevitable, leading to products being discarded, i.e., scrapped. In some cases,
defective products can be repaired through rework. Scrap and rework cause a
longer completion time, which can contribute to the order being shipped late.
In addition, complex manufacturing scheduling becomes much more challenging
when the above uncertainties are present. Motivated by the presence of
uncertainties as well as combinatorial complexity, this paper addresses the
challenge illustrated through a case study of stochastic job-shop scheduling
problems arising within low-volume high-variety manufacturing. To ensure
on-time delivery, high-quality solutions are required, and near-optimal
solutions must be obtained within strict time constraints to ensure smooth
operations on the job-shop floor. To efficiently solve the stochastic job-shop
scheduling (JSS) problem, a recently-developed Surrogate "Level-Based"
Lagrangian Relaxation is used to reduce computational effort while efficiently
exploiting the geometric convergence potential inherent to Polyak's step-sizing
formula, thereby leading to fast convergence. Numerical testing demonstrates
that the new method is more than two orders of magnitude faster as compared to
commercial solvers.
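Polyak's step-sizing formula mentioned above can be illustrated on a simple nondifferentiable convex function whose optimal value f* = 0 is known; the paper's surrogate "level-based" scheme applies a variant of this rule to the Lagrangian dual, where f* must be estimated instead of given.

```python
# Minimal illustration of the Polyak step size for subgradient descent:
# step length (f(x) - f*) / ||g||^2 along the negative subgradient g.
# Here f(x) = |x - 3|, f* = 0, so a single Polyak step lands exactly
# on the minimizer. Not the paper's surrogate level-based scheme.

def polyak_subgradient(f, subgrad, fstar, x0, iters=100):
    x = x0
    for _ in range(iters):
        g = subgrad(x)
        if g == 0:
            break  # a zero subgradient certifies optimality
        # Polyak step: optimality gap over squared subgradient norm.
        x = x - (f(x) - fstar) / (g * g) * g
    return x
```

Starting from x0 = 10 with f(x) = |x - 3|, the first step has length 7 and reaches the minimizer x = 3 immediately, reflecting the fast convergence the rule is known for when f* is available.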