A DC Programming Approach for Solving Multicast Network Design Problems via the Nesterov Smoothing Technique
This paper continues our effort initiated in [9] to study Multicast
Communication Networks, modeled as bilevel hierarchical clustering problems, by
using mathematical optimization techniques. Given a finite number of nodes, we
consider two different models of multicast networks by identifying a certain
number of nodes as cluster centers, and at the same time, locating a particular
node that serves as a total center so as to minimize the total transportation
cost through the network. The fact that the cluster centers and the total
center have to be among the given nodes makes this problem a discrete
optimization problem. Our approach is to reformulate the discrete problem as a
continuous one and to apply the Nesterov smoothing technique to the
Minkowski gauges that are used as distance measures. This approach enables us
to propose two implementable DCA-based algorithms for solving these problems.
Numerical results and practical applications are provided to illustrate our
approach.
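As a small illustration of the smoothing step, consider the Euclidean norm, the simplest Minkowski gauge. This sketch is our own, not the paper's algorithm; the closed form below follows from solving Nesterov's inner maximization.

```python
import numpy as np

def nesterov_smoothed_norm(x, mu):
    """Nesterov smoothing of the Euclidean norm (the simplest Minkowski gauge):
    f_mu(x) = max_{||u|| <= 1} <u, x> - (mu/2) ||u||^2.
    Solving the inner maximization in closed form gives the Huber-type
    expression below, which is smooth and satisfies
    f_mu(x) <= ||x|| <= f_mu(x) + mu/2.
    """
    n = np.linalg.norm(x)
    if n <= mu:
        return n * n / (2.0 * mu)   # quadratic regime near the origin
    return n - mu / 2.0             # linear regime away from the origin

# The smoothed function approximates the norm uniformly within mu/2:
x = np.array([3.0, 4.0])               # ||x|| = 5
print(nesterov_smoothed_norm(x, 0.1))  # 4.95 = 5 - mu/2
```

Because the smoothed function is differentiable with Lipschitz gradient, it can replace the nonsmooth gauge inside a DC decomposition while controlling the approximation error through mu.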
The Network Improvement Problem for Equilibrium Routing
In routing games, agents pick their routes through a network to minimize
their own delay. A primary concern for the network designer in routing games is
the average agent delay at equilibrium. A number of methods to control this
average delay have received substantial attention, including network tolls,
Stackelberg routing, and edge removal.
A related approach with arguably greater practical relevance is that of
making investments in improvements to the edges of the network, so that, for a
given investment budget, the average delay at equilibrium in the improved
network is minimized. This problem has received considerable attention in the
literature on transportation research and a number of different algorithms have
been studied. To our knowledge, none of this work gives guarantees on the
output quality of any polynomial-time algorithm. We study a model for this
problem introduced in transportation research literature, and present both
hardness results and algorithms that obtain nearly optimal performance
guarantees.
- We first show that a simple algorithm obtains good approximation guarantees
for the problem. Despite its simplicity, for affine delays the 4/3
approximation ratio obtained by the algorithm cannot be improved.
- To obtain better results, we then consider restricted topologies. For
graphs consisting of parallel paths with affine delay functions we give an
optimal algorithm. However, for graphs that consist of a series of parallel
links, we show the problem is weakly NP-hard.
- Finally, we consider the problem in series-parallel graphs, and give an
FPTAS for this case.
Our work thus formalizes the intuition held by transportation researchers
that the network improvement problem is hard, and presents topology-dependent
algorithms that have provably tight approximation guarantees.
Comment: 27 pages (including abstract), 3 figures
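The 4/3 ratio for affine delays echoes the classical Pigou network, a textbook two-link instance sketched here for intuition (illustrative only, not taken from the paper):

```python
# Classical Pigou network: one unit of flow over two parallel links with
# affine delays l1(x) = x and l2(x) = 1.

def equilibrium_avg_delay():
    # Wardrop equilibrium: every used route has minimal delay. Routing all
    # flow on link 1 gives l1(1) = 1 = l2(0), so x1 = 1 is an equilibrium.
    x1 = 1.0
    return x1 * x1 + (1.0 - x1) * 1.0

def optimal_avg_delay():
    # Social optimum minimizes x^2 + (1 - x): setting 2x - 1 = 0 gives x = 1/2.
    x1 = 0.5
    return x1 * x1 + (1.0 - x1) * 1.0

print(equilibrium_avg_delay() / optimal_avg_delay())  # ratio 4/3
```

The gap between the selfish equilibrium (average delay 1) and the social optimum (average delay 3/4) is exactly the 4/3 bound that constrains any improvement algorithm evaluated against equilibrium delay.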
Data-driven Inverse Optimization with Imperfect Information
In data-driven inverse optimization an observer aims to learn the preferences
of an agent who solves a parametric optimization problem depending on an
exogenous signal. Thus, the observer seeks the agent's objective function that
best explains a historical sequence of signals and corresponding optimal
actions. We focus here on situations where the observer has imperfect
information, that is, where the agent's true objective function is not
contained in the search space of candidate objectives, where the agent suffers
from bounded rationality or implementation errors, or where the observed
signal-response pairs are corrupted by measurement noise. We formalize this
inverse optimization problem as a distributionally robust program minimizing
the worst-case risk that the {\em predicted} decision ({\em i.e.}, the decision
implied by a particular candidate objective) differs from the agent's {\em
actual} response to a random signal. We show that our framework offers rigorous
out-of-sample guarantees for different loss functions used to measure
prediction errors and that the emerging inverse optimization problems can be
exactly reformulated as (or safely approximated by) tractable convex programs
when a new suboptimality loss function is used. We show through extensive
numerical tests that the proposed distributionally robust approach to inverse
optimization often attains better out-of-sample performance than
state-of-the-art approaches.
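The role of a suboptimality loss can be illustrated on a small finite toy problem. This sketch uses our own assumptions (a finite action set and a linear parametric objective) and is not the paper's distributionally robust formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the agent picks, among a finite action set in R^2, the action
# minimizing a linear objective theta . a perturbed by an exogenous signal.
actions = rng.uniform(0.0, 1.0, size=(20, 2))

def agent_response(theta, signal):
    costs = actions @ theta + signal * actions[:, 0]
    return actions[np.argmin(costs)]

# Historical signal-response pairs generated by a hidden true objective.
theta_true = np.array([1.0, 2.0])
signals = rng.uniform(0.0, 1.0, size=50)
responses = [agent_response(theta_true, s) for s in signals]

def suboptimality_loss(theta, signal, response):
    # Gap between the cost of the observed response and the best cost
    # achievable under the candidate objective theta; nonnegative by
    # construction, and zero iff the response is optimal for theta.
    costs = actions @ theta + signal * actions[:, 0]
    return float(response @ theta + signal * response[0] - costs.min())

# Observer: choose the candidate objective with the smallest average loss.
grid = [np.array([a, b]) for a in np.linspace(0.5, 2.0, 4)
                         for b in np.linspace(0.5, 3.0, 6)]
avg_loss = [np.mean([suboptimality_loss(t, s, r)
                     for s, r in zip(signals, responses)]) for t in grid]
theta_hat = grid[int(np.argmin(avg_loss))]
```

Note that the suboptimality loss is invariant to positive scaling of theta, so the observer can only hope to recover the true objective up to such ambiguity; the paper's distributionally robust program additionally hedges against noise and misspecification.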