A new perspective on the complexity of interior point methods for linear programming
In a dynamical systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. Thus the stiffness of such vector fields plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we further employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, due to the generality of our vector field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
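The dynamical-systems view above can be made concrete with a toy sketch (illustrative only, not the paper's interior point method): forward Euler applied to the gradient-flow ODE is exactly gradient descent, and the stiffness of the vector field caps the stable step size.

```python
# Sketch: forward Euler on the gradient-flow ODE  x'(t) = -f'(x(t))
# recovers gradient descent, and the stiffness (here, the curvature L)
# of the vector field limits the usable step size, as in the abstract's
# dynamical-systems view. The quadratic objective is purely illustrative.

def euler_gradient_flow(grad, x0, h, steps):
    """Forward Euler on x' = -grad(x), i.e. gradient descent with step h."""
    x = x0
    for _ in range(steps):
        x = x - h * grad(x)  # one Euler step = one gradient-descent step
    return x

# f(x) = 0.5 * L * x^2 has gradient L*x; the ODE x' = -L*x is stiff for large L.
L = 100.0
grad = lambda x: L * x

stable   = euler_gradient_flow(grad, x0=1.0, h=0.5 / L, steps=200)  # h < 2/L: converges
unstable = euler_gradient_flow(grad, x0=1.0, h=3.0 / L, steps=200)  # h > 2/L: diverges
```

With h below the stability threshold 2/L the iterates contract toward the minimizer; above it they blow up, which is why a stiff search-direction field forces tiny steps.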
Direct search based on probabilistic descent in reduced spaces
Derivative-free algorithms seek the minimum value of a given objective
function without using any derivative information. The performance of these
methods often worsens as the dimension increases, a phenomenon predicted by
their worst-case complexity guarantees. Nevertheless, recent algorithmic
proposals have shown that incorporating randomization into otherwise
deterministic frameworks could alleviate this effect for direct-search methods.
The best guarantees and practical performance are obtained when employing a
random vector and its negative, which amounts to drawing directions in a random
one-dimensional subspace. Unlike for other derivative-free schemes, however,
the properties of these subspaces have not been exploited.
In this paper, we study a generic direct-search algorithm in which the
polling directions are defined using random subspaces. Complexity guarantees
for such an approach are derived thanks to probabilistic properties related to
both the subspaces and the directions used within these subspaces. By
leveraging results on random subspace embeddings and sketching matrices, we
show that better complexity bounds are obtained for randomized instances of our
framework. A numerical investigation confirms the benefit of randomization,
particularly when done in subspaces, when solving problems of moderately large
dimension.
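A minimal sketch of the kind of direct-search scheme described above, polling a random unit direction and its negative with an expand/shrink step-size rule; the sufficient-decrease constant and the step-size updates are illustrative choices, not the paper's exact framework.

```python
import math
import random

def direct_search(f, x, alpha=1.0, iters=200, seed=0):
    """Direct search polling a random direction and its negative.
    Step size doubles on a successful poll and halves otherwise."""
    rng = random.Random(seed)
    n = len(x)
    fx = f(x)
    for _ in range(iters):
        # Draw a random unit direction (a random one-dimensional subspace).
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(di * di for di in d))
        d = [di / nrm for di in d]
        for s in (+1.0, -1.0):                 # poll the direction and its negative
            y = [xi + s * alpha * di for xi, di in zip(x, d)]
            fy = f(y)
            if fy < fx - 1e-8 * alpha * alpha:  # sufficient decrease
                x, fx, alpha = y, fy, 2.0 * alpha
                break
        else:
            alpha *= 0.5                        # unsuccessful poll: shrink the step
    return x, fx

x, fx = direct_search(lambda v: sum(t * t for t in v), [1.0] * 5)
```

On this 5-dimensional sphere function the objective drops well below its initial value of 5.0; only two function evaluations are needed per iteration, regardless of dimension.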
Characterizing Optimal Adword Auctions
We present a number of models for the adword auctions used for pricing
advertising slots on search engines such as Google and Yahoo!. We begin with a
general problem formulation which allows the privately known valuation per
click to be a function of both the identity of the advertiser and the slot. We
present a compact characterization of the set of all deterministic incentive
compatible direct mechanisms for this model. This new characterization allows
us to conclude that there are incentive compatible mechanisms for this auction
with a multi-dimensional type-space that are {\em not} affine maximizers. Next,
we discuss two interesting special cases: slot independent valuation and slot
independent valuation up to a privately known slot and zero thereafter. For
both of these special cases, we characterize revenue maximizing and efficiency
maximizing mechanisms and show that these mechanisms can be computed with a
worst-case computational complexity bounded in terms of the number of bidders
and the number of slots. Next, we
characterize optimal rank based allocation rules and propose a new mechanism
that we call the customized rank based allocation. We report the results of a
numerical study that compare the revenue and efficiency of the proposed
mechanisms. The numerical results suggest that the customized rank-based
allocation rule is significantly superior to the other rank-based allocation
rules.
Comment: 29 pages; work was presented at a) Second Workshop on Sponsored
Search Auctions, Ann Arbor, MI; b) INFORMS Annual Meeting, Pittsburgh; c)
Decision Sciences Seminar, Fuqua School of Business, Duke University
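As a rough illustration of a rank-based allocation rule of the kind discussed above (not the paper's customized rule), slots can be assigned to bidders in decreasing order of their bids:

```python
# A minimal, hypothetical rank-based allocation for slot auctions:
# bidders are ranked by bid and the top slots are assigned in rank order.
# This is an illustration only, not the paper's customized rank-based rule.

def rank_based_allocation(bids, num_slots):
    """Assign slots 0..num_slots-1 to the highest bidders, in bid order."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return {bidder: slot for slot, bidder in enumerate(ranked[:num_slots])}

alloc = rank_based_allocation([4.0, 9.0, 1.0, 6.0], num_slots=2)
# bidder 1 (bid 9.0) gets slot 0, bidder 3 (bid 6.0) gets slot 1
```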
Improved Smoothed Analysis of 2-Opt for the Euclidean TSP
The 2-opt heuristic is a simple local search heuristic for the Travelling
Salesperson Problem (TSP). Although it usually performs well in practice, its
worst-case running time is poor. Attempts to reconcile this difference have
used smoothed analysis, in which adversarial instances are perturbed
probabilistically. We are interested in the classical model of smoothed
analysis for the Euclidean TSP, in which the perturbations are Gaussian. This
model was previously used by Manthey \& Veenstra, who obtained smoothed
complexity bounds polynomial in the number of points n, the dimension d, and
the perturbation strength. However, their analysis only works for a restricted
range of dimensions. The only previous analysis covering the remaining
dimensions was performed by Englert, R\"oglin \& V\"ocking, who used a
different perturbation model which can be translated to Gaussian
perturbations. Their model yields bounds polynomial in n and in the
perturbation strength, and super-exponential in d. As no direct analysis
existed for Gaussian perturbations that yields polynomial bounds for all d, we
perform this missing analysis. Along the way, we improve all existing smoothed
complexity bounds for Euclidean 2-opt.
Comment: 31 pages, 3 figures. Accepted for presentation at ISAAC 202
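For reference, the 2-opt heuristic analyzed above can be sketched in a few lines: repeatedly replace two tour edges by two shorter ones (equivalently, reverse a tour segment) until no improving move exists. The implementation below is a generic textbook version, not the paper's analysis setup.

```python
import math

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Apply improving 2-opt moves until none remains (a 2-opt local optimum)."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            # Skip j choices that would pick two adjacent (shared-city) edges.
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                # Replace edges (i,i+1) and (j,j+1) by (i,j) and (i+1,j+1).
                a, b = points[tour[i]], points[tour[i + 1]]
                c, d = points[tour[j]], points[tour[(j + 1) % n]]
                if (math.dist(a, c) + math.dist(b, d)
                        < math.dist(a, b) + math.dist(c, d) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
tour = two_opt(points, [0, 1, 2, 3])  # starts from a self-crossing tour
```

On the unit square, the crossing tour of length 2 + 2*sqrt(2) is uncrossed by a single 2-opt move into the optimal tour of length 4. The worst-case number of such moves is what smoothed analysis bounds.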
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing a Nash equilibrium.
Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute k independent
instances of a function, using less than k times the resources needed for one
instance, then our overall success probability will be exponentially small in
k. We establish such theorems for the classical as well as quantum query
complexity of the OR function. This implies slightly weaker direct product
results for all total functions. We prove a similar result for quantum
communication protocols computing k instances of the Disjointness function.
Our direct product theorems imply a time-space tradeoff T^2*S=Omega(N^3) for
sorting N items on a quantum computer, which is optimal up to polylog factors.
They also give several tight time-space and communication-space tradeoffs for
the problems of Boolean matrix-vector multiplication and matrix multiplication.
Comment: 22 pages LaTeX. 2nd version: some parts rewritten, results are
essentially the same. A shorter version will appear in IEEE FOCS 0
Efficient chaining of seeds in ordered trees
We consider here the problem of chaining seeds in ordered trees. Seeds are
mappings between two trees Q and T, and a chain is a subset of non-overlapping
seeds that is consistent with respect to postfix order and ancestrality. This
problem is a natural extension of a similar problem for sequences, and has
applications in computational biology, such as mining a database of RNA
secondary structures. For the chaining problem with a set of m constant size
seeds, we describe an algorithm with complexity O(m^2 log(m)) in time and
O(m^2) in space.
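The sequence version of chaining that the abstract builds on can be sketched with a simple quadratic dynamic program. The sketch below assumes unit-score seeds given as intervals (q_start, q_end, t_start, t_end) in the two sequences; it is an illustrative simplification, not the tree algorithm of the paper.

```python
# Chaining seeds between two sequences: find the largest set of seeds that
# are pairwise non-overlapping and appear in the same order in both
# sequences. O(m^2) dynamic program over seeds sorted by query start.

def best_chain(seeds):
    """seeds: list of (q_start, q_end, t_start, t_end) tuples.
    Returns the size of the largest consistent chain."""
    seeds = sorted(seeds)                      # order by query start
    best = [1] * len(seeds)                    # best chain ending at each seed
    for i, (qs, qe, ts, te) in enumerate(seeds):
        for j in range(i):
            pqs, pqe, pts, pte = seeds[j]
            if pqe < qs and pte < ts:          # seed j precedes i in both orders
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

# The third seed conflicts with the first (same target interval), so the
# best chain uses the three mutually consistent diagonal seeds.
chain = best_chain([(0, 2, 0, 2), (3, 5, 3, 5), (3, 5, 0, 2), (6, 8, 6, 8)])
```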
Joint Bandwidth and Power Allocation with Admission Control in Wireless Multi-User Networks With and Without Relaying
Equal allocation of bandwidth and/or power may not be efficient for wireless
multi-user networks with limited bandwidth and power resources. Joint bandwidth
and power allocation strategies for wireless multi-user networks with and
without relaying are proposed in this paper for (i) the maximization of the sum
capacity of all users; (ii) the maximization of the worst user capacity; and
(iii) the minimization of the total power consumption of all users. It is shown
that the proposed allocation problems are convex and, therefore, can be solved
efficiently. Moreover, the admission control based joint bandwidth and power
allocation is considered. A suboptimal greedy search algorithm is developed to
solve the admission control problem efficiently. The conditions under which the
greedy search is optimal are derived and shown to be mild. The performance
improvements offered by the proposed joint bandwidth and power allocation are
demonstrated by simulations. The advantages of the suboptimal greedy search
algorithm for admission control are also shown.
Comment: 30 pages, 5 figures; submitted to IEEE Trans. Signal Processing in
June 201
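A hypothetical sketch of greedy admission control in the spirit of the abstract: users with minimum bandwidth and power demands are admitted cheapest-first until either shared budget would be exceeded. The demand model and the ordering heuristic are assumptions for illustration, not the paper's formulation.

```python
# Greedy admission control sketch (illustrative model): each user has a
# minimum (bandwidth, power) demand drawn against shared budgets; users are
# admitted in increasing order of normalized combined demand.

def greedy_admission(demands, bw_budget, pw_budget):
    """demands: list of (bandwidth, power) minimum requirements per user.
    Returns the sorted indices of admitted users."""
    order = sorted(range(len(demands)),
                   key=lambda i: demands[i][0] / bw_budget
                               + demands[i][1] / pw_budget)
    admitted, bw, pw = [], 0.0, 0.0
    for i in order:
        b, p = demands[i]
        if bw + b <= bw_budget and pw + p <= pw_budget:
            admitted.append(i)
            bw, pw = bw + b, pw + p
    return sorted(admitted)

users = greedy_admission([(2.0, 1.0), (5.0, 4.0), (1.0, 1.0), (4.0, 5.0)],
                         bw_budget=8.0, pw_budget=6.0)
```

Here users 2, 0, and 1 fit within both budgets in that order, while user 3 would exceed the bandwidth budget and is rejected; the paper additionally characterizes when such a greedy pass is actually optimal.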