New Old Algorithms for Stochastic Scheduling
We consider the stochastic identical parallel machine scheduling problem and its online extension, where the objective is to minimize the expected total weighted completion time of a set of jobs that are released over time. We give randomized as well as deterministic online and offline algorithms that achieve the best known performance guarantees in each setting (online or offline, deterministic or randomized). Our analysis is based on a novel linear programming relaxation for stochastic scheduling problems that can be solved online.
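As a point of reference for such guarantees, the classic WSEPT rule (weighted shortest expected processing time first) is optimal for the static single-machine special case of this objective. A minimal sketch with invented toy data, not the paper's LP-based algorithm:

```python
# WSEPT on one machine: schedule jobs in nonincreasing weight / E[p] order.
# By linearity of expectation, the expected weighted completion time of a
# fixed job order depends only on the expected processing times, so it can
# be evaluated directly.

def expected_cost(order):
    t, total = 0.0, 0.0
    for w, ep in order:
        t += ep                 # expected completion time of this job
        total += w * t
    return total

def wsept(jobs):
    return sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)

# (weight, E[processing time]) -- toy data for illustration
jobs = [(3, 2.0), (1, 1.0), (4, 1.5)]
```

For this instance WSEPT schedules the (4, 1.5) job first and never does worse than the arrival order.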
Greed Works -- Online Algorithms For Unrelated Machine Stochastic Scheduling
This paper establishes performance guarantees for online algorithms that
schedule stochastic, nonpreemptive jobs on unrelated machines to minimize the
expected total weighted completion time. Prior work on unrelated machine
scheduling with stochastic jobs was restricted to the offline case, and
required linear or convex programming relaxations for the assignment of jobs to
machines. The algorithms introduced in this paper are purely combinatorial. The
performance bounds are of the same order of magnitude as those of earlier work,
and depend linearly on an upper bound on the squared coefficient of variation
of the jobs' processing times. Specifically, for deterministic processing
times, the competitive ratios are 4 without release times and 7.216 with
release times. As to the technical contribution, the paper shows how dual
fitting techniques can be used for stochastic and nonpreemptive scheduling
problems.
Comment: Preliminary version appeared in IPCO 201
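The marginal-cost flavor of such a combinatorial rule can be sketched as follows: each arriving job is assigned to the machine where it increases the expected weighted completion time the least, assuming each machine processes its jobs in WSPT order. This is a simplified illustration with toy data of our own, not the paper's exact algorithm or its constants:

```python
# Greedy assignment of stochastic jobs to unrelated machines (sketch).
# A job is (weight, [E[p] on each machine]); each machine runs its jobs
# in WSPT order (nonincreasing weight / expected processing time).

def marginal_cost(machine_jobs, w, ep):
    # Expected increase in weighted completion time from inserting a job
    # with weight w and expected processing time ep into the WSPT order:
    # the new job waits for higher-priority jobs, and delays lower-priority ones.
    rho = w / ep
    before = sum(p for wk, p in machine_jobs if wk / p >= rho)   # run earlier
    after = sum(wk for wk, p in machine_jobs if wk / p < rho)    # delayed by us
    return w * (ep + before) + ep * after

def greedy_assign(jobs, m):
    machines = [[] for _ in range(m)]
    for w, eps in jobs:
        i = min(range(m), key=lambda k: marginal_cost(machines[k], w, eps[k]))
        machines[i].append((w, eps[i]))
    return machines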
A Multistage Stochastic Programming Approach to the Dynamic and Stochastic VRPTW - Extended version
We consider a dynamic vehicle routing problem with time windows and
stochastic customers (DS-VRPTW), in which customers may request services
after vehicles have already started their tours. To solve this problem, the goal
is to provide a decision rule for choosing, at each time step, the next action
to perform in light of known requests and probabilistic knowledge on requests
likelihood. We introduce a new decision rule, called Global Stochastic
Assessment (GSA) rule for the DS-VRPTW, and we compare it with existing
decision rules, such as MSA. In particular, we show that GSA fully integrates
nonanticipativity constraints so that it leads to better decisions in our
stochastic context. We describe a new heuristic approach for efficiently
approximating our GSA rule. We introduce a new waiting strategy. Experiments on
dynamic and stochastic benchmarks, which include instances of different degrees
of dynamism, show that our approach is not only competitive with
state-of-the-art methods, but also makes it possible to compute meaningful
offline solutions to fully dynamic problems where no a priori customer
request information is provided.
Comment: Extended version of the same-name study submitted for publication in
conference CPAIOR201
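The core of scenario-based assessment, scoring each candidate action by its average cost over sampled futures, can be sketched generically. The scenario model and cost function below are toy stand-ins of our own, not the paper's GSA rule or its nonanticipativity machinery:

```python
import random

# Score a candidate action by its mean cost over sampled scenarios of
# future requests, then commit to the best-scoring action.

def assess(action, scenarios, cost):
    return sum(cost(action, s) for s in scenarios) / len(scenarios)

def choose(actions, sample_scenario, cost, k=200, seed=0):
    rng = random.Random(seed)
    scenarios = [sample_scenario(rng) for _ in range(k)]
    return min(actions, key=lambda a: assess(a, scenarios, cost))

# Toy stand-in: a future request location is uniform on [0, 6]; the cost
# of committing the vehicle to position a is the distance to that request.
best = choose([0, 3, 10], lambda rng: rng.uniform(0, 6), lambda a, s: abs(a - s))
```

With these toy distributions, position 3 minimizes the expected distance, so the sampled assessment selects it.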
Approximate Dynamic Programming via Sum of Squares Programming
We describe an approximate dynamic programming method for stochastic control
problems on infinite state and input spaces. The optimal value function is
approximated by a linear combination of basis functions with coefficients as
decision variables. By relaxing the Bellman equation to an inequality, one
obtains a linear program in the basis coefficients with an infinite set of
constraints. We show that a recently introduced method, which obtains convex
quadratic value function approximations, can be extended to higher order
polynomial approximations via sum of squares programming techniques. An
approximate value function can then be computed offline by solving a
semidefinite program, without having to sample the infinite constraint. The
policy is evaluated online by solving a polynomial optimization problem, which
also turns out to be convex in some cases. We experimentally validate the
method on an autonomous helicopter testbed using a 10-dimensional helicopter
model.
Comment: 7 pages, 5 figures. Submitted to the 2013 European Control
Conference, Zurich, Switzerland
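The Bellman-inequality relaxation can be illustrated on a toy problem. With a single basis function, the linear program in the basis coefficients collapses to a one-variable ratio test; any feasible coefficient yields a pointwise lower bound on the true cost-to-go. The tiny MDP and the basis phi(s) = 1 + s^2 below are our own illustrative choices, not the paper's helicopter model:

```python
# LP approach to approximate dynamic programming on a toy finite MDP:
# relax V = TV to V <= TV and maximize the (single) basis coefficient.
GAMMA = 0.9
STATES = range(5)
ACTIONS = (0, 1)               # 0: stay, 1: try to move one state down

def cost(s, a):
    return 1 + s + a           # running cost plus control cost (toy choice)

def transition(s, a):          # list of (probability, next state)
    if a == 0 or s == 0:
        return [(1.0, s)]
    return [(0.8, s - 1), (0.2, s)]

def phi(s):                    # single basis function (toy choice)
    return 1 + s * s

def fit_alpha():
    # Largest alpha with alpha*phi <= T(alpha*phi): each (s, a) pair gives
    # one linear constraint, so the one-variable LP is a minimum of ratios.
    ratios = []
    for s in STATES:
        for a in ACTIONS:
            coeff = phi(s) - GAMMA * sum(p * phi(s2) for p, s2 in transition(s, a))
            ratios.append(cost(s, a) / coeff)  # coeff > 0: phi never grows, GAMMA < 1
    return min(ratios)

def value_iteration(n=2000):
    V = [0.0] * len(STATES)
    for _ in range(n):
        V = [min(cost(s, a) + GAMMA * sum(p * V[s2] for p, s2 in transition(s, a))
                 for a in ACTIONS) for s in STATES]
    return V
```

Because the fitted approximation satisfies the Bellman inequality everywhere, it lower-bounds the exact value function computed by value iteration.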
A Learning Theoretic Approach to Energy Harvesting Communication System Optimization
A point-to-point wireless communication system is studied in which the
transmitter is equipped with an energy harvesting device and a rechargeable
battery. Both the energy and the data arrivals at the transmitter are modeled
as Markov processes. Delay-limited communication is considered assuming that
the underlying channel is block fading with memory, and the instantaneous
channel state information is available at both the transmitter and the
receiver. The expected total transmitted data during the transmitter's
activation time is maximized under three different sets of assumptions
regarding the information available at the transmitter about the underlying
stochastic processes. A learning theoretic approach is introduced, which does
not assume any a priori information on the Markov processes governing the
communication system. In addition, online and offline optimization problems are
studied for the same setting. Full statistical knowledge and causal information
on the realizations of the underlying stochastic processes are assumed in the
online optimization problem, while the offline optimization problem assumes
non-causal knowledge of the realizations in advance. Comparing the optimal
solutions in all three frameworks, the performance loss due to the lack of the
transmitter's information regarding the behaviors of the underlying Markov
processes is quantified.
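A learning-theoretic policy of this kind can be sketched with tabular Q-learning on a toy energy-harvesting model. The battery dynamics, reward, and hyperparameters below are illustrative assumptions of ours, not the paper's system model:

```python
import random

# Toy energy-harvesting transmitter: battery level in {0, 1, 2}; one energy
# unit arrives with probability 0.5 each slot (capped at 2).  Action 1
# transmits one unit for reward 1 when the battery is nonempty; action 0
# idles.  Q-learning needs no a priori model of the arrival process.
CAP, GAMMA, ALPHA, EPS = 2, 0.9, 0.1, 0.1

def step(battery, action, rng):
    reward = 0.0
    if action == 1 and battery > 0:
        battery -= 1
        reward = 1.0
    if rng.random() < 0.5:          # stochastic energy arrival
        battery = min(CAP, battery + 1)
    return battery, reward

def q_learn(n_steps=20000, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(CAP + 1)]
    s = 0
    for _ in range(n_steps):
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < EPS else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a, rng)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
    return Q
```

With a full battery, idling wastes harvested energy that the cap would otherwise discard, so the learned Q-values favor transmitting in that state.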