Direct solutions to tropical optimization problems with nonlinear objective functions and boundary constraints
We examine two multidimensional optimization problems that are formulated in
terms of tropical mathematics. The problems are to minimize nonlinear objective
functions, which are defined through the multiplicative conjugate vector
transposition on vectors of a finite-dimensional semimodule over an idempotent
semifield, subject to boundary constraints. The solution approach involves
deriving sharp bounds on the objective functions and then determining the
vectors that attain these bounds. Based on
the approach, direct solutions to the problems are obtained in a compact vector
form. To illustrate, we apply the results to solving constrained Chebyshev
approximation and location problems, and give numerical examples.
Comment: Mathematical Methods and Optimization Techniques in Engineering: Proc. 1st Intern. Conf. on Optimization Techniques in Engineering (OTENG '13), Antalya, Turkey, October 8-10, 2013, WSEAS Press, 2013, pp. 86-91. ISBN 978-960-474-339-
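To make the tropical setting concrete, here is a minimal Python sketch (my own illustration, not the paper's method) of the max-plus semifield, where ⊕ = max and ⊗ = +, and of the multiplicative conjugate transposition x⁻ with entries -x_i. It verifies the standard fact that the Chebyshev distance between two vectors can be written tropically as (b⁻ ⊗ a) ⊕ (a⁻ ⊗ b):

```python
# Max-plus (tropical) semifield: a ⊕ b = max(a, b), a ⊗ b = a + b.

def conj(x):
    """Multiplicative conjugate transposition: negate each entry."""
    return [-xi for xi in x]

def tdot(a, b):
    """Tropical 'inner product' of two vectors: max_i (a_i + b_i)."""
    return max(ai + bi for ai, bi in zip(a, b))

def tropical_chebyshev(a, b):
    """Chebyshev distance max_i |a_i - b_i|, written tropically as
    (b⁻ ⊗ a) ⊕ (a⁻ ⊗ b)."""
    return max(tdot(conj(b), a), tdot(conj(a), b))

a, b = [1.0, 3.0], [2.0, 0.0]
print(tropical_chebyshev(a, b))                    # tropical form: 3.0
print(max(abs(ai - bi) for ai, bi in zip(a, b)))   # direct form: 3.0
```

This identity is why Chebyshev approximation problems translate naturally into tropical objective functions of the kind the abstract describes.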
A Still Simpler Way of Introducing the Interior-Point Method for Linear Programming
Linear programming is now included in undergraduate and postgraduate
algorithms courses for computer science majors. We give a self-contained
treatment of an interior-point method which is particularly tailored to the
typical mathematical background of CS students. In particular, only limited
knowledge of linear algebra and calculus is assumed.
Comment: Updates and replaces arXiv:1412.065
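In the limited-prerequisites spirit of this abstract, the central-path idea behind interior-point methods can be sketched in one dimension (my own illustration, not the paper's algorithm): minimize x over 0 < x < 1 by minimizing the log-barrier function x - μ(log x + log(1-x)), and watch the barrier minimizer approach the true optimum x* = 0 as the barrier weight μ shrinks. Here the stationarity condition reduces to a quadratic that can be solved in closed form:

```python
import math

def barrier_minimizer(mu):
    """Minimizer of x - mu*(log x + log(1-x)) on (0, 1).
    Setting the derivative 1 - mu/x + mu/(1-x) to zero and clearing
    denominators gives x^2 - (1 + 2*mu)*x + mu = 0; the root lying
    in (0, 1) is the smaller one."""
    b = 1.0 + 2.0 * mu
    return (b - math.sqrt(b * b - 4.0 * mu)) / 2.0

# Follow the central path: as mu decreases, the barrier minimizer
# slides toward the true constrained optimum x* = 0.
for mu in [1.0, 0.1, 0.01, 0.001]:
    print(mu, barrier_minimizer(mu))
```

Real interior-point methods follow this same path for multidimensional LPs, using Newton steps instead of a closed-form root.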
Small Extended Formulation for Knapsack Cover Inequalities from Monotone Circuits
Initially developed for the min-knapsack problem, the knapsack cover
inequalities are used in the current best relaxations for numerous
combinatorial optimization problems of covering type. In spite of their
widespread use, these inequalities yield linear programming (LP) relaxations of
exponential size, over which it is not known how to optimize exactly in
polynomial time. In this paper we address this issue and obtain LP relaxations
of quasi-polynomial size that are at least as strong as that given by the
knapsack cover inequalities.
For the min-knapsack cover problem, our main result can be stated formally as
follows: for any ε &gt; 0, there is a (1/ε)^{O(1)} n^{O(log n)}-size LP relaxation
with an integrality gap of at most 2 + ε, where n is the number of items.
Prior to this work, there was no known
relaxation of subexponential size with a constant upper bound on the
integrality gap.
Our construction is inspired by a connection between extended formulations
and monotone circuit complexity via Karchmer-Wigderson games. In particular,
our LP is based on O(log^2 n)-depth monotone circuits with fan-in 2 for
evaluating weighted threshold functions with n inputs, as constructed by
Beimel and Weinreb. We believe that a further understanding of this connection
may lead to more positive results complementing the numerous lower bounds
recently proved for extended formulations.
Comment: 21 pages
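A knapsack cover inequality itself is easy to state and check. For a min-knapsack constraint Σ u_i x_i ≥ D and a subset A of items, the inequality reads Σ_{i∉A} min(u_i, D(A)) x_i ≥ D(A), where D(A) = D - Σ_{i∈A} u_i is the residual demand. The sketch below (hypothetical instance; my own illustration) brute-forces all feasible 0/1 vectors of a small instance to confirm validity:

```python
from itertools import product

def kc_valid(u, D, A):
    """Check the knapsack cover inequality for subset A against every
    feasible 0/1 vector of the min-knapsack constraint sum u_i x_i >= D."""
    n = len(u)
    DA = D - sum(u[i] for i in A)        # residual demand after fixing A
    if DA <= 0:
        return True                       # inequality is vacuous
    rest = [i for i in range(n) if i not in A]
    for x in product([0, 1], repeat=n):
        if sum(u[i] * x[i] for i in range(n)) >= D:   # feasible solution
            lhs = sum(min(u[i], DA) * x[i] for i in rest)
            if lhs < DA:                  # inequality violated
                return False
    return True

u, D = [6, 4, 3, 2], 8                    # hypothetical instance
assert all(kc_valid(u, D, A) for A in [set(), {0}, {1}, {0, 1}, {2, 3}])
```

The truncation min(u_i, D(A)) is what makes the family strong: it prevents a single heavy item from over-counting toward the residual demand while keeping every feasible solution on the correct side.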
A Path Algorithm for Constrained Estimation
Many least squares problems involve affine equality and inequality
constraints. Although there are a variety of methods for solving such
problems, most statisticians find constrained estimation challenging. The
current paper
proposes a new path following algorithm for quadratic programming based on
exact penalization. Similar penalties arise in regularization in model
selection. Classical penalty methods solve a sequence of unconstrained problems
that put greater and greater stress on meeting the constraints. In the limit as
the penalty constant tends to infinity, one recovers the constrained solution.
In the exact penalty method, squared penalties are replaced by absolute value
penalties, and the solution is recovered for a finite value of the penalty
constant. The exact path following method starts at the unconstrained solution
and follows the solution path as the penalty constant increases. In the
process, the solution path hits, slides along, and exits from the various
constraints. Path following in lasso penalized regression, in contrast, starts
with a large value of the penalty constant and works its way downward. In both
settings, inspection of the entire solution path is revealing. Just as with the
lasso and generalized lasso, it is possible to plot the effective degrees of
freedom along the solution path. For a strictly convex quadratic program, the
exact penalty algorithm can be framed entirely in terms of the sweep operator
of regression analysis. A few well chosen examples illustrate the mechanics and
potential of path following.
Comment: 26 pages, 5 figures
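The contrast between quadratic and exact (absolute-value) penalties is visible already in one dimension. A minimal sketch of my own, not the paper's algorithm: minimize (x - 2)² subject to x ≤ 1. With the quadratic penalty ρ·max(0, x-1)² the minimizer never reaches the constraint for finite ρ, while with the exact penalty ρ·max(0, x-1) the solution path x(ρ) = 2 - ρ/2 slides from the unconstrained optimum and hits the constrained solution x = 1 exactly at the finite value ρ = 2:

```python
def quad_penalty_min(rho):
    """Minimizer of (x-2)^2 + rho*max(0, x-1)^2: stationarity for x > 1
    gives x = (2 + rho) / (1 + rho), which exceeds 1 for every finite rho."""
    return (2.0 + rho) / (1.0 + rho)

def exact_penalty_min(rho):
    """Minimizer of (x-2)^2 + rho*max(0, x-1): for rho < 2 the path is
    x = 2 - rho/2; at rho = 2 it reaches the constraint x = 1 and stays."""
    return max(1.0, 2.0 - rho / 2.0)

# The exact-penalty path recovers the constrained solution at finite rho;
# the quadratic penalty does so only in the limit rho -> infinity.
assert exact_penalty_min(2.0) == 1.0
assert quad_penalty_min(1000.0) > 1.0
```

This finite-ρ exactness is what lets the path-following algorithm start at the unconstrained solution and terminate once all constraints are satisfied.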
Hardness Results for Structured Linear Systems
We show that if the nearly-linear time solvers for Laplacian matrices and
their generalizations can be extended to solve just slightly larger families of
linear systems, then they can be used to quickly solve all systems of linear
equations over the reals. This result can be viewed either positively or
negatively: either we will develop nearly-linear time algorithms for solving
all systems of linear equations over the reals, or progress on the families we
can solve in nearly-linear time will soon halt.
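For context, the graph Laplacian referenced above is the matrix L = D - A (degree matrix minus adjacency matrix); its quadratic form sums squared differences across edges, the structure that nearly-linear solvers exploit. A minimal sketch of my own with a small triangle graph:

```python
def laplacian(n, edges):
    """Graph Laplacian L = D - A as a nested list, built edge by edge."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def quad_form(L, x):
    """x^T L x, which equals the sum over edges (u, v) of (x_u - x_v)^2."""
    n = len(L)
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1), (1, 2), (0, 2)]                 # triangle graph
L = laplacian(3, edges)
x = [1.0, 2.0, 4.0]
print(quad_form(L, x))                           # (1-2)^2 + (2-4)^2 + (1-4)^2 = 14.0
assert all(abs(sum(row)) < 1e-12 for row in L)   # each row of L sums to zero
```

Symmetry, diagonal dominance, and zero row sums are exactly the special structure whose slight generalizations the hardness result shows to be as hard as general linear systems.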