Optimization over Integers with Robustness in Cost and Few Constraints
Robust optimization is an approach to optimization under uncertainty that has recently attracted attention from both theorists and practitioners. While there is an elaborate and powerful machinery for continuous robust optimization problems, results on robust combinatorial optimization and robust linear integer programs are still scarce and rarely general. In a seminal paper, Bertsimas and Sim (2003) show that for an arbitrary linear 0-1 problem over which one can optimize, one can also optimize the cost-robust counterpart. They explicitly note that this method is confined to binary problems. We present a result of this type for general integer programs. Further, we extend the result to integer programs with uncertainty in one constraint.
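The budgeted uncertainty model behind this line of work can be sketched for a fixed binary solution: the adversary may raise the cost of at most Gamma chosen items by their maximum deviations. A minimal Python sketch; the function name and data are illustrative, not from the paper:

```python
def robust_cost(x, c, d, gamma):
    """Budgeted (Bertsimas-Sim style) robust cost of a fixed 0/1 solution.

    x: 0/1 selection vector, c: nominal costs, d: maximum cost deviations,
    gamma: adversary's budget (number of items whose cost may be raised).
    """
    nominal = sum(ci * xi for ci, xi in zip(c, x))
    # the adversary raises the cost of the gamma chosen items hurting most
    deviations = sorted((dj for dj, xj in zip(d, x) if xj), reverse=True)
    return nominal + sum(deviations[:int(gamma)])

# example: select items 0 and 2; the adversary may perturb one of them
print(robust_cost([1, 0, 1], c=[3, 5, 2], d=[2, 1, 4], gamma=1))  # 5 + 4 = 9
```

Evaluating a fixed solution like this is easy; the papers' contribution concerns optimizing over x, where the robust counterpart must remain tractable.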
The robust single machine scheduling problem with uncertain release and processing times
In this work, we study the single machine scheduling problem with uncertain
release times and processing times of jobs. We adopt a robust scheduling
approach, in which the measure of robustness to be minimized for a given
sequence of jobs is the worst-case objective function value from the set of all
possible realizations of release and processing times. The objective function
value is the total flow time of all jobs. We discuss some important properties
of robust schedules for zero and non-zero release times, and illustrate the
added complexity in robust scheduling given non-zero release times. We propose
heuristics based on variable neighborhood search and iterated local search to
solve the problem and generate robust schedules. The algorithms are tested and
their solution performance is compared with optimal solutions or lower bounds
through numerical experiments based on synthetic data.
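The worst-case measure described above can be sketched for a finite scenario set. The paper's uncertainty sets for release and processing times may differ; enumerating a handful of discrete scenarios is an illustrative simplification:

```python
from itertools import product

def total_flow_time(seq, r, p):
    """Total flow time of job sequence seq with release times r, processing times p."""
    t = 0.0
    flow = 0.0
    for j in seq:
        t = max(t, r[j]) + p[j]   # machine idles if job j is not yet released
        flow += t - r[j]          # flow time = completion time - release time
    return flow

def worst_case_flow(seq, r_scenarios, p_scenarios):
    """Worst-case total flow time of a fixed sequence over finite scenario sets
    (a toy discretization of the robust objective, not the paper's model)."""
    return max(total_flow_time(seq, r, p)
               for r, p in product(r_scenarios, p_scenarios))

# two jobs, two release-time scenarios, two processing-time scenarios
print(worst_case_flow([0, 1], [[0, 0], [0, 2]], [[1, 1], [2, 2]]))  # 6.0
```

A robust heuristic such as the paper's variable neighborhood search would then search over sequences to minimize this worst-case value.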
Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)
We explore the use of the Cell Broadband Engine (Cell/BE for short) for
combinatorial optimization applications: we present a parallel version of a
constraint-based local search algorithm that has been implemented on a
multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16
SPUs per blade). This algorithm was chosen because it fits the Cell/BE
architecture very well and requires neither shared memory nor communication
between processors, while retaining a compact memory footprint. We study the
performance on several large optimization benchmarks and show that this
achieves mostly linear speedups, sometimes even super-linear ones. This is
possible because the parallel implementation might explore simultaneously
different parts of the search space and therefore converge faster towards the
best sub-space and thus towards a solution. Besides getting speedups, the
resulting times exhibit a much smaller variance, which benefits applications
where a timely reply is critical.
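The communication-free design described above can be mimicked with independent local-search walks, each running from its own seed with no shared state. A toy Python stand-in (hill climbing on a bitstring), not the paper's constraint-based solver:

```python
import random

def local_search(cost, n, seed, steps=1000):
    """One independent min-conflict walk over n-bit strings. No shared state,
    mirroring the communication-free per-SPU design in the abstract."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = cost(x)
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] ^= 1                 # flip one bit
        c = cost(x)
        if c <= best:
            best = c              # keep improving (or equal-cost) moves
        else:
            x[i] ^= 1             # undo worsening moves
    return best, x

# each "SPU" runs its own seed; in the parallel version these walks would run
# concurrently and the first walk to reach a solution wins
target = [1, 0, 1, 1, 0, 1, 0, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
results = [local_search(cost, len(target), seed=s) for s in range(4)]
print(min(r[0] for r in results))  # 0: at least one walk matches the target
```

Because the walks explore different regions independently, the chance that at least one finishes quickly grows with the number of walks, which is one intuition behind the super-linear speedups reported.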
Best-case performance of quantum annealers on native spin-glass benchmarks: How chaos can affect success probabilities
Recent tests performed on the D-Wave Two quantum annealer have revealed no
clear evidence of speedup over conventional silicon-based technologies. Here,
we present results from classical parallel-tempering Monte Carlo simulations
combined with isoenergetic cluster moves of the archetypal benchmark problem-an
Ising spin glass-on the native chip topology. Using realistic uncorrelated
noise models for the D-Wave Two quantum annealer, we study the best-case
resilience, i.e., the probability that the ground-state configuration is not
affected by random fields and random-bond fluctuations found on the chip. We
thus compute classical upper-bound success probabilities for different types of
disorder used in the benchmarks and predict that an increase in the number of
qubits will require either error correction schemes or a drastic reduction of
the intrinsic noise found in these devices. We outline strategies to develop
robust as well as hard benchmarks for quantum annealing devices, and for any
other computing paradigm affected by noise.
Comment: 8 pages, 5 figures
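The best-case resilience studied above has a brute-force toy analogue at small sizes: perturb the couplings and check whether the ground state survives. This sketch uses exhaustive enumeration rather than the paper's parallel-tempering Monte Carlo, and all names and parameters are illustrative:

```python
from itertools import product
import random

def ground_state(J, n):
    """Brute-force ground state of an Ising spin glass with couplings J[(i, j)]."""
    def energy(s):
        return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return min(product([-1, 1], repeat=n), key=energy)

def resilience(J, n, sigma, trials=200, seed=0):
    """Fraction of Gaussian coupling perturbations (std sigma) that leave the
    ground state unchanged -- a toy analogue of the best-case resilience the
    paper estimates on the native chip topology."""
    rng = random.Random(seed)
    s0 = ground_state(J, n)
    hits = 0
    for _ in range(trials):
        Jn = {e: Jij + rng.gauss(0, sigma) for e, Jij in J.items()}
        gs = ground_state(Jn, n)
        if gs == s0 or gs == tuple(-x for x in s0):  # equal up to global flip
            hits += 1
    return hits / trials

# a 3-spin ferromagnetic chain: tiny noise should never flip the ground state
J = {(0, 1): 1.0, (1, 2): 1.0}
print(resilience(J, 3, sigma=0.01))
```

Instances with small energy gaps would show much lower resilience, which is the mechanism behind the chaos effects the abstract refers to.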
Robust optimization with incremental recourse
In this paper, we consider an adaptive approach to address optimization
problems with uncertain cost parameters. Here, the decision maker selects an
initial decision, observes the realization of the uncertain cost parameters,
and then is permitted to modify the initial decision. We treat the uncertainty
using the framework of robust optimization in which uncertain parameters lie
within a given set. The decision maker optimizes to obtain the best cost
guarantee under worst-case analysis. The recourse decision is
"incremental"; that is, the decision maker is permitted to change the initial
solution by a small fixed amount. We refer to the resulting problem as the
robust incremental problem. We study robust incremental variants of several
optimization problems. We show that the robust incremental counterpart of a
linear program is itself a linear program if the uncertainty set is polyhedral.
Hence, it is solvable in polynomial time. We establish the NP-hardness for
robust incremental linear programming for the case of a discrete uncertainty
set. We show that the robust incremental shortest path problem is NP-complete
when costs are chosen from a polyhedral uncertainty set, even in the case that
only one new arc may be added to the initial path. We also address the
complexity of several special cases of the robust incremental shortest path
problem and the robust incremental minimum spanning tree problem.
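At toy sizes, the min-max-min structure of the robust incremental problem can be enumerated directly over 0/1 vectors. The NP-hardness results above are precisely why such enumeration does not scale; everything here is an illustrative sketch, not the paper's formulation:

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def robust_incremental_value(n, scenarios, k, feasible=lambda x: True):
    """Brute-force min_x max_c min_{y: d(x,y) <= k} c.y over feasible 0/1
    vectors, with a discrete uncertainty set `scenarios` and an incremental
    recourse budget k (Hamming distance)."""
    pts = [x for x in product([0, 1], repeat=n) if feasible(x)]
    def recourse(x, c):
        # best feasible modification of x within distance k, under costs c
        return min(sum(ci * yi for ci, yi in zip(c, y))
                   for y in pts if hamming(x, y) <= k)
    return min(max(recourse(x, c) for c in scenarios) for x in pts)

# pick exactly one of two items under two cost scenarios
feas = lambda x: sum(x) == 1
print(robust_incremental_value(2, [(1, 3), (3, 1)], k=0, feasible=feas))  # 3
print(robust_incremental_value(2, [(1, 3), (3, 1)], k=2, feasible=feas))  # 1
```

With no recourse (k = 0) the decision maker must hedge against both scenarios; with enough recourse the problem collapses to reacting optimally to each scenario, which illustrates the value of incremental adjustment.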