Probabilistic analysis of a differential equation for linear programming
In this paper we address the complexity of solving linear programming
problems with a set of differential equations that converge to a fixed point
that represents the optimal solution. Assuming a probabilistic model, where the
inputs are i.i.d. Gaussian variables, we compute the distribution of the
convergence rate to the attracting fixed point. Using the framework of Random
Matrix Theory, we derive a simple expression for this distribution in the
asymptotic limit of large problem size. In this limit, we find that the
distribution of the convergence rate is a scaling function, namely it is a
function of one variable that is a combination of three parameters: the number
of variables, the number of constraints and the convergence rate, rather than a
function of these parameters separately. We also estimate numerically the
distribution of computation times, namely the time required to reach a vicinity
of the attracting fixed point, and find that it is also a scaling function.
Using the problem size dependence of the distribution functions, we derive high
probability bounds on the convergence rates and on the computation times.
Comment: 1+37 pages, LaTeX, 5 eps figures. Version accepted for publication in
the Journal of Complexity. Changes made: presentation reorganized for
clarity; expanded discussion of the measure of complexity in the
non-asymptotic regime (added a new section)
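The role of Random Matrix Theory here can be illustrated with a small numerical sketch. The code below is an illustration, not the paper's actual dynamical system: for a linear gradient flow dx/dt = -(AᵀA)x with an i.i.d. Gaussian matrix A, trajectories decay toward the fixed point at a rate given by the smallest eigenvalue of AᵀA, a classical Random Matrix Theory quantity whose distribution over instances can be sampled directly. The function name and the 1/m normalization are assumptions made for this sketch.

```python
import numpy as np

def convergence_rate(n, m, rng):
    """Convergence rate of the gradient flow dx/dt = -(A.T @ A) x
    for an m x n matrix A with i.i.d. standard Gaussian entries."""
    A = rng.standard_normal((m, n))
    # Trajectories decay like exp(-lambda_min * t), so the rate is the
    # smallest eigenvalue of the (normalized) Wishart matrix A.T A / m.
    return np.linalg.eigvalsh(A.T @ A / m)[0]

rng = np.random.default_rng(0)
# Empirical distribution of the rate over 200 random instances.
rates = np.array([convergence_rate(50, 100, rng) for _ in range(200)])
```

Repeating this for several (n, m) pairs and rescaling the sampled rates is the kind of experiment that would expose the scaling-function behaviour the abstract describes.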
A simulation comparison of methods for new product location
Includes bibliographical references (p. 29-31)
A comparison of general-purpose optimization algorithms for finding optimal approximate experimental designs
Several common general-purpose optimization algorithms are compared for finding A- and D-optimal designs for different types of statistical models of varying complexity, including high-dimensional models with five and more factors. The algorithms of interest include exact methods, such as the interior point method, the Nelder–Mead method, the active set method and sequential quadratic programming, and metaheuristic algorithms, such as particle swarm optimization, simulated annealing and genetic algorithms. Several simulations are performed, which provide general recommendations on the utility and performance of each method, including hybridized versions of metaheuristic algorithms for finding optimal experimental designs. A key result is that general-purpose optimization algorithms, both exact methods and metaheuristic algorithms, perform well for finding optimal approximate experimental designs.
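A concrete instance of this kind of comparison is sketched below. It is an illustrative toy, not the paper's benchmark: the five-point candidate grid and the softmax parametrization of the weights are assumptions made here. It uses SciPy's Nelder–Mead method to search for an approximate D-optimal design for quadratic regression on [-1, 1], where the known optimum places weight 1/3 on each of -1, 0 and 1.

```python
import numpy as np
from scipy.optimize import minimize

# Candidate design points on [-1, 1] and quadratic regressors f(x) = (1, x, x^2).
xs = np.linspace(-1.0, 1.0, 5)
F = np.stack([np.ones_like(xs), xs, xs**2], axis=1)

def neg_log_det(z):
    # Softmax keeps the design weights nonnegative and summing to one.
    w = np.exp(z - z.max())
    w /= w.sum()
    M = F.T @ (w[:, None] * F)       # information matrix sum_i w_i f(x_i) f(x_i)^T
    return -np.linalg.slogdet(M)[1]  # D-optimality: maximize log det M

res = minimize(neg_log_det, np.zeros(len(xs)), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-12})
w = np.exp(res.x - res.x.max())
w /= w.sum()
```

Swapping `method="Nelder-Mead"` for other `scipy.optimize.minimize` methods (or for a metaheuristic) on the same objective is the simplest form of the head-to-head comparison the abstract describes.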
Probabilistic Interpretation of Linear Solvers
This manuscript proposes a probabilistic framework for algorithms that
iteratively solve unconstrained linear problems with a positive definite
system matrix. The goal is to replace the point estimates returned by
existing methods with a Gaussian posterior belief over the elements of the
inverse of the system matrix, which can be used to estimate errors. Recent
probabilistic interpretations
of the secant family of quasi-Newton optimization algorithms are extended.
Combined with properties of the conjugate gradient algorithm, this leads to
uncertainty-calibrated methods with very limited cost overhead over conjugate
gradients, a self-contained novel interpretation of the quasi-Newton and
conjugate gradient algorithms, and a foundation for new nonlinear optimization
methods.
Comment: final version, in press at SIAM J Optimization
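The conjugate gradient connection can be made concrete with a plain textbook implementation. The code below is standard CG, not the paper's calibrated probabilistic method; the probabilistic reading, in which each iteration refines a Gaussian belief over the inverse matrix, is only noted in the comments.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=None):
    """Standard conjugate gradients for A x = b with symmetric positive
    definite A. In the probabilistic reading, each iteration can be seen
    as a rank-one update to a Gaussian belief over the elements of the
    inverse of A; here we only return the usual point estimate."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(maxiter if maxiter is not None else len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x
```

The very limited cost overhead claimed in the abstract refers to augmenting iterations like these with uncertainty bookkeeping, rather than replacing the algorithm itself.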