Scheduling with Outliers
In classical scheduling problems, we are given jobs and machines, and have to
schedule all the jobs to minimize some objective function. What if each job has
a specified profit, and we are no longer required to process all jobs -- we can
schedule any subset of jobs whose total profit is at least a (hard) target
profit requirement, while still approximately minimizing the objective
function?
We refer to this class of problems as scheduling with outliers. This model
was initiated by Charikar and Khuller (SODA'06) on the minimum max-response
time in broadcast scheduling. We consider three other well-studied scheduling
objectives: the generalized assignment problem, average weighted completion
time, and average flow time, and provide LP-based approximation algorithms for
them. For the minimum average flow time problem on identical machines, we give
a logarithmic approximation algorithm for the case of unit profits based on
rounding an LP relaxation; we also show a matching integrality gap. For the
average weighted completion time problem on unrelated machines, we give a
constant factor approximation. The algorithm is based on randomized rounding of
the time-indexed LP relaxation strengthened by the knapsack-cover inequalities.
For the generalized assignment problem with outliers, we give a simple
reduction to GAP without outliers to obtain an algorithm whose makespan is
within 3 times the optimum makespan, and whose cost is at most (1 + \epsilon)
times the optimal cost.
Comment: 23 pages, 3 figures
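To make the LP-based approach concrete, the following is a schematic time-indexed relaxation for weighted completion time with an outlier (profit) constraint, strengthened by knapsack-cover inequalities. The notation ($x_{ijt}$, $y_j$, profits $\pi_j$, target $\Pi$) is ours for illustration, and the exact relaxation used in the paper may differ.

\begin{align*}
\min\ \ & \sum_{i,j,t} w_j\, t\, x_{ijt} \\
\text{s.t.}\ \ & \sum_{i,t} x_{ijt} = y_j \qquad \forall j \quad \text{(a selected job is fully scheduled)}\\
& \sum_{j} \sum_{s \le t} p_{ij}\, x_{ijs} \le t \qquad \forall i, t \quad \text{(load on machine $i$ up to time $t$)}\\
& \sum_{j} \pi_j\, y_j \ge \Pi \qquad \text{(hard profit target)}\\
& \sum_{j \notin A} \min\{\pi_j,\, \Pi_A\}\, y_j \ge \Pi_A \qquad \forall A \text{ with } \Pi_A := \Pi - \sum_{j \in A}\pi_j > 0 \quad \text{(knapsack-cover)}\\
& 0 \le x_{ijt},\, y_j \le 1 .
\end{align*}

Here $x_{ijt}$ is the fractional indicator that job $j$ completes at time $t$ on machine $i$; the knapsack-cover inequalities strengthen the plain profit constraint, whose natural LP relaxation alone can have a large integrality gap.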
Minimizing Flow-Time on Unrelated Machines
We consider some flow-time minimization problems in the unrelated machines
setting. In this setting, there is a set of machines and a set of jobs,
and each job $j$ has a machine-dependent processing time of $p_{ij}$ on machine
$i$. The flow-time of a job is the total time the job spends in the system
(its completion time minus its arrival time), and is one of the most natural
quality-of-service measures. We show the following two results: an
$O(\min(\log^2 n, \log n \log P))$-approximation algorithm for minimizing the
total flow-time, and an $O(\log n)$-approximation for minimizing the maximum
flow-time. Here $P$ is the ratio of the maximum to the minimum job size. These are the
first known poly-logarithmic guarantees for both problems.
Comment: The new version fixes some typos in the previous version. The paper is accepted for publication in STOC 2015.
Spatial-temporal data modelling and processing for personalised decision support
The purpose of this research is to undertake the modelling of dynamic data without losing any of the temporal relationships, and to be able to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture is able to transfer spatio-temporal data patterns from a multidimensional input stream into internal patterns in the spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture described above was constructed using MATLAB in several individual modules linked together to form NeuCube (M1). This methodology has been applied to two real-world case studies. Firstly, it has been applied to data for the prediction of stroke occurrences on an individual basis. Secondly, it has been applied to ecological data on aphid pest abundance prediction. The two main objectives for this research when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are not insignificant in terms of health care management and environmental control. As the case studies utilised here represent vastly different application fields, they reveal more of the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, thus both increasing the level of reliance that can be placed on the model created and enhancing our human understanding of the complexities of the world around us without the need for over-simplification.
Keywords
Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction
Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations
We study approximation algorithms for scheduling problems with the objective
of minimizing total weighted completion time, under identical and related
machine models with job precedence constraints. We give algorithms that improve
upon many previous 15-to-20-year-old state-of-the-art results. A major theme in
these results is the use of time-indexed linear programming relaxations. These
are natural relaxations for their respective problems, but surprisingly have not
been studied in the literature.
We also consider the scheduling problem of minimizing total weighted
completion time on unrelated machines. The recent breakthrough result of
[Bansal-Srinivasan-Svensson, STOC 2016] gave a $(1.5 - c)$-approximation for the
problem, for some constant $c > 0$, based on a lift-and-project SDP relaxation.
Our main result is that a $(1.5 - c)$-approximation can also be achieved using a
natural and considerably simpler time-indexed LP relaxation for the problem. We
hope this relaxation can provide new insights into the problem.
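As an illustration of the kind of time-indexed relaxation the abstract refers to, here is a generic single-machine formulation with precedence constraints; the notation and the particular constraints are a standard textbook sketch rather than the exact relaxations used in the paper.

\begin{align*}
\min\ \ & \sum_{j}\sum_{t} w_j\, t\, x_{jt} \\
\text{s.t.}\ \ & \sum_{t} x_{jt} = 1 \qquad \forall j \quad \text{(each job completes at some time)}\\
& \sum_{j}\ \sum_{s=t}^{t+p_j-1} x_{js} \le 1 \qquad \forall t \quad \text{(at most one job runs at time $t$)}\\
& \sum_{s \le t} x_{ks} \le \sum_{s \le t - p_k} x_{js} \qquad \forall\, j \prec k,\ \forall t \quad \text{($k$ finishes at least $p_k$ after its predecessor $j$)}\\
& x_{jt} \ge 0,
\end{align*}

where $x_{jt}$ is the fractional indicator that job $j$ completes at time $t$.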
Scheduling Jobs in Flowshops with the Introduction of Additional Machines in the Future
This is the author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by Elsevier and can be found at: http://www.journals.elsevier.com/expert-systems-with-applications/.
The problem of scheduling jobs to minimize total weighted tardiness in flowshops, with the possibility of evolving into hybrid flowshops in the future, is investigated in this paper. As this research is guided by a real problem in industry, the flowshop considered has considerable flexibility, which stimulated the development of an innovative methodology for this research. Each stage of the flowshop currently has one or several identical machines. However, the manufacturing company is planning to introduce additional machines with different capabilities in different stages in the near future. Thus, the algorithm proposed and developed for the problem is capable of solving not only the current flow line configuration but also the potential new configurations that may result in the future. A meta-heuristic search algorithm based on Tabu search is developed to solve this NP-hard, industry-guided problem. Six different initial solution finding mechanisms are proposed. A carefully planned nested split-plot design is performed to test the significance of different factors and their impact on the performance of the different algorithms. To the best of our knowledge, this research is the first of its kind that attempts to solve an industry-guided problem with the concern for future developments.
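Since the abstract only names the solution approach, the following is a minimal, generic tabu-search skeleton for permutation-flowshop total weighted tardiness, written to illustrate the main loop (a neighbourhood of pairwise swaps, a tabu list, and an aspiration criterion). The data layout, tenure, and the single random initial solution are assumptions for the sketch; the paper's algorithm, its six initial-solution mechanisms, and its handling of hybrid-flowshop configurations are not reproduced here.

import random
from collections import deque

def twt(seq, proc, due, weight):
    """Total weighted tardiness of permutation `seq` in an m-machine permutation
    flowshop. proc[j][k] = processing time of job j on machine k."""
    m = len(proc[0])
    finish = [0.0] * m                      # completion time of the last job on each machine
    total = 0.0
    for j in seq:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k > 0 else 0.0)
            finish[k] = start + proc[j][k]
        total += weight[j] * max(0.0, finish[-1] - due[j])
    return total

def tabu_search(proc, due, weight, iters=2000, tenure=15):
    n = len(proc)
    current = list(range(n))
    random.shuffle(current)                 # one random initial solution (the paper proposes six mechanisms)
    best, best_cost = current[:], twt(current, proc, due, weight)
    tabu = deque(maxlen=tenure)             # recently swapped job pairs
    for _ in range(iters):
        move, cand, cand_cost = None, None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                neigh = current[:]
                neigh[i], neigh[j] = neigh[j], neigh[i]
                cost = twt(neigh, proc, due, weight)
                pair = (min(current[i], current[j]), max(current[i], current[j]))
                # skip tabu moves unless they beat the global best (aspiration criterion)
                if pair in tabu and cost >= best_cost:
                    continue
                if cost < cand_cost:
                    move, cand, cand_cost = pair, neigh, cost
        if cand is None:
            break
        current = cand
        tabu.append(move)
        if cand_cost < best_cost:
            best, best_cost = cand[:], cand_cost
    return best, best_cost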
SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors
We consider the classical problem of minimizing the total weighted flow-time
for unrelated machines in the online \emph{non-clairvoyant} setting. In this
problem, a set of $n$ jobs arrive over time to be scheduled on a set of $m$
machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is
processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online
scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job,
but is not aware of the quantity $p_j$. We present the {\em first} online
algorithm that is {\em scalable} ($(1+\epsilon)$-speed
$O(1/\epsilon^2)$-competitive for any constant $\epsilon > 0$) for the
total weighted flow-time objective. No non-trivial results were known for this
setting, except for the most basic case of identical machines. Our result
resolves a major open problem in online scheduling theory. Moreover, we also
show that no job needs more than a logarithmic number of migrations. We further
extend our result to the objective of minimizing total weighted flow-time plus
energy cost on unrelated machines, and obtain a scalable algorithm for it as
well. The key algorithmic idea is to let jobs migrate selfishly until they
converge to an equilibrium. Towards this end, we define a game where each job's
utility is closely tied to the instantaneous increase in the objective the job
is responsible for, and each machine declares a policy that assigns priorities
to jobs based on when they migrate to it, and the execution speeds. This has a
spirit similar to coordination mechanisms that attempt to achieve near-optimum
welfare in the presence of selfish agents (jobs). To the best of our knowledge,
this is the first work that demonstrates the usefulness of ideas from
coordination mechanisms and Nash equilibria for designing and analyzing online
algorithms.
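The "migrate selfishly until equilibrium" idea can be illustrated with a toy best-response loop. The cost function below is a hypothetical stand-in for the instantaneous increase in weighted flow-time a job is responsible for; the actual SELFISHMIGRATE utilities, machine priority policies, and non-clairvoyant analysis are considerably more involved and are not reproduced here.

def selfish_migrate(weights, rates, max_rounds=100):
    """Toy best-response dynamics: each job repeatedly moves to the machine that
    minimizes a stand-in 'instantaneous cost', until no job wants to move.
    weights[j] = w_j, rates[i][j] = processing rate of job j on machine i.
    The cost model below is a simplification chosen purely for illustration."""
    m, n = len(rates), len(weights)
    # start each job on the machine where its weighted processing rate is smallest
    assign = [min(range(m), key=lambda i: weights[j] / rates[i][j]) for j in range(n)]

    def cost(j, i):
        # stand-in for the instantaneous increase in the objective that job j
        # would be responsible for on machine i: its weighted processing rate
        # times the total weight currently competing for machine i.
        competing = sum(weights[k] for k in range(n) if k != j and assign[k] == i)
        return (weights[j] / rates[i][j]) * (competing + weights[j])

    for _ in range(max_rounds):
        moved = False
        for j in range(n):
            best_i = min(range(m), key=lambda i: cost(j, i))
            if cost(j, best_i) < cost(j, assign[j]) - 1e-12:
                assign[j] = best_i          # job j migrates selfishly
                moved = True
        if not moved:                       # an (approximate) equilibrium reached
            break
    return assign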
Better Unrelated Machine Scheduling for Weighted Completion Time via Random Offsets from Non-Uniform Distributions
In this paper we consider the classic scheduling problem of minimizing total
weighted completion time on unrelated machines when jobs have release times,
i.e., $R\,|\,r_{ij}\,|\,\sum_j w_j C_j$ using the three-field notation. For this
problem, a 2-approximation is known based on a novel convex programming relaxation
(J. ACM 2001 by Skutella). It has been a long-standing open problem whether one
can improve upon this 2-approximation (Open Problem 8 in J. of Sched. 1999 by
Schuurman and Woeginger). We answer this question in the affirmative by giving a
1.8786-approximation. We achieve this via a surprisingly simple linear
programming relaxation, but with a novel rounding algorithm and analysis. A key ingredient of
our algorithm is the use of random offsets sampled from non-uniform
distributions.
We also consider the preemptive version of the problem, i.e.,
$R\,|\,r_{ij}, pmtn\,|\,\sum_j w_j C_j$. We again use the idea of sampling
offsets from non-uniform distributions to give the first better-than-2
approximation for this problem. This improvement also requires the use of a
configuration LP with variables for each job's complete schedule, along with a
more careful analysis. For both the non-preemptive and preemptive versions, we
break the approximation barrier of 2 for the first time.
Comment: 24 pages. To appear in FOCS 2016.
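Random offsets of this kind are in the spirit of classical alpha-point rounding of a time-indexed LP solution. The sketch below shows the mechanics on a single machine: each job draws an offset alpha from a non-uniform density (an exponential-shaped density is used here purely as a placeholder, not the distribution engineered in the paper), its alpha-point is read off the fractional solution, and jobs are list-scheduled in alpha-point order. The paper's actual rounding for unrelated machines with release times differs and is more involved; choosing the offset distribution carefully is what the abstract identifies as the key ingredient.

import math
import random

def sample_offset():
    # Draw alpha in (0, 1] from a non-uniform density f(a) = e^a / (e - 1),
    # via inverse-CDF sampling.  This particular density is a placeholder,
    # not the distribution used in the paper.
    u = random.random()
    return math.log(1.0 + u * (math.e - 1.0))

def alpha_points(frac_done, grid):
    # frac_done[j] lists the cumulative fraction of job j processed by each time
    # in `grid`, read off a fractional (time-indexed) LP solution in which every
    # job is fully processed.  Returns each job's alpha-point: the first grid
    # time by which an alpha_j fraction of the job has been processed.
    points = {}
    for j, cum in frac_done.items():
        a = sample_offset()
        idx = next(t for t, f in enumerate(cum) if f >= a)
        points[j] = grid[idx]
    return points

def schedule_by_alpha_points(frac_done, grid, proc, release):
    # List-schedule jobs non-preemptively on one machine in nondecreasing order
    # of their alpha-points, respecting release times; returns completion times.
    points = alpha_points(frac_done, grid)
    order = sorted(points, key=points.get)
    t, completion = 0.0, {}
    for j in order:
        t = max(t, release[j]) + proc[j]
        completion[j] = t
    return completion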