On Solving Convex Optimization Problems with Linear Ascending Constraints
In this paper, we propose two algorithms for solving convex optimization
problems with linear ascending constraints. When the objective function is
separable, we propose a dual method which terminates in a finite number of
iterations. In particular, the worst case complexity of our dual method
improves over the best-known result for this problem in Padakandla and
Sundaresan [SIAM J. Optimization, 20 (2009), pp. 1185-1204]. We then propose a
gradient projection method to solve a more general class of problems in which
the objective function is not necessarily separable. Numerical experiments show
that both our algorithms work well on test problems. Comment: 20 pages. The final version of this paper is published in
Optimization Letters.
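The gradient projection method mentioned above targets objectives that are not separable. As a simplified, hypothetical sketch (not the paper's algorithm), the following replaces the ascending-constraint polytope with a single total-sum simplex, for which Euclidean projection has a well-known O(n log n) sorting procedure, and runs fixed-step projected gradient on a non-separable quadratic:

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto {x >= 0, sum(x) = z} via sorting."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # largest index rho with u_rho - (css_rho - z)/rho > 0 (1-based)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - z)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def gradient_projection(grad, x0, step=0.1, iters=500, z=1.0):
    """Fixed-step projected gradient for a smooth convex objective."""
    x = project_simplex(np.asarray(x0, dtype=float), z)
    for _ in range(iters):
        x = project_simplex(x - step * grad(x), z)
    return x

# Non-separable quadratic f(x) = 0.5 x^T Q x - c^T x on the simplex
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, 1.0])
x_star = gradient_projection(lambda x: Q @ x - c, [0.5, 0.5])
```

For this instance the KKT conditions give the optimum (0.25, 0.75); the paper's method handles the more general constraints on partial sums, which the simplex projection above does not.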
Convex separable problems with linear and box constraints
In this work, we focus on separable convex optimization problems with linear
and box constraints and compute the solution in closed-form as a function of
some Lagrange multipliers that can be easily computed in a finite number of
iterations. This allows us to bridge the gap between a wide family of power
allocation problems of practical interest in signal processing and
communications and their efficient implementation in practice. Comment: 5 pages, 2 figures. Published at IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP 2014).
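The closed-form-plus-multiplier structure described above is the familiar water-filling pattern in power allocation. As an illustrative sketch (assuming the standard rate-maximization objective, not necessarily the exact problem family of the paper), each power is a clipped function of one Lagrange multiplier, which bisection pins down:

```python
import numpy as np

def waterfill(noise, P, pmax):
    """Maximize sum_i log(1 + p_i / noise_i) subject to sum(p) = P and
    0 <= p_i <= pmax. The KKT conditions give the closed form
    p_i(lam) = clip(1/lam - noise_i, 0, pmax); bisection on the
    multiplier lam enforces the total power budget."""
    noise = np.asarray(noise, dtype=float)

    def alloc(lam):
        return np.clip(1.0 / lam - noise, 0.0, pmax)

    lo, hi = 1e-9, 1e9          # bracket for the multiplier
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > P:
            lo = lam            # too much power allocated: raise lam
        else:
            hi = lam
    return alloc(0.5 * (lo + hi))

p = waterfill([1.0, 2.0, 4.0], P=3.0, pmax=10.0)
```

With noise levels (1, 2, 4) and budget 3, the water level settles at 3, so the noisiest channel is switched off and the allocation is (2, 1, 0).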
A Decomposition Algorithm for Nested Resource Allocation Problems
We propose an exact polynomial algorithm for a resource allocation problem
with convex costs and constraints on partial sums of resource consumptions, in
the presence of either continuous or integer variables. No assumption of strict
convexity or differentiability is needed. The method solves a hierarchy of
resource allocation subproblems, whose solutions are used to convert
constraints on sums of resources into bounds for separate variables at higher
levels. The resulting time complexity for the integer problem is O(n log m log(B/n)), and the complexity of obtaining an epsilon-approximate
solution for the continuous case is O(n log m log(B/(n epsilon))), n being
the number of variables, m the number of ascending constraints (such that m <= n), epsilon a desired precision, and B the total resource. This
algorithm attains the best-known complexity when m = n, and improves it when
m < n. Extensive experimental analyses are conducted with four
recent algorithms on various continuous problems drawn from theory and
practice. The proposed method achieves higher performance than previous
algorithms, addressing all problems with up to one million variables in less
than one minute on a modern computer. Comment: Working Paper -- MIT, 23 pages.
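The decomposition above reduces nested constraints to per-variable bounds in simpler resource allocation subproblems. As background, the textbook greedy (marginal allocation) method solves the basic integer subproblem without nested constraints, handing each unit of resource to the variable with the smallest marginal cost increase; this is a hedged sketch of that classical routine, not the paper's faster algorithm:

```python
import heapq

def greedy_allocate(fs, B):
    """Minimize sum_i f_i(x_i) over nonnegative integers with sum(x) = B,
    for convex f_i, by repeatedly granting one unit to the variable whose
    cost increases least (marginal allocation with a min-heap)."""
    n = len(fs)
    x = [0] * n
    # heap of (marginal cost of the next unit, variable index)
    heap = [(fs[i](1) - fs[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(B):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (fs[i](x[i] + 1) - fs[i](x[i]), i))
    return x

allocation = greedy_allocate([lambda t: t * t, lambda t: 2 * t * t], B=3)
```

For costs t^2 and 2t^2 with B = 3, the marginal costs are (1, 3, 5, ...) and (2, 6, 10, ...), so the greedy picks two units for the first variable and one for the second. Greedy runs in O(B log n), which the paper's divide-and-conquer scheme improves on substantially for large B.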
Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded
Decision trees usefully represent sparse, high-dimensional, and noisy data.
Having learned a function from this data, we may want to thereafter integrate
the function into a larger decision-making problem, e.g., for picking the best
chemical process catalyst. We study a large-scale, industrially-relevant
mixed-integer nonlinear nonconvex optimization problem involving both
gradient-boosted trees and penalty functions mitigating risk. This
mixed-integer optimization problem with convex penalty terms broadly applies to
optimizing pre-trained regression tree models. Decision makers may wish to
optimize discrete models to repurpose legacy predictive models, or they may
wish to optimize a discrete model that represents a data set particularly well.
We develop several heuristic methods to find feasible solutions, and an exact,
branch-and-bound algorithm leveraging structural properties of the
gradient-boosted trees and penalty functions. We computationally test our
methods on a concrete mixture design instance and a chemical catalysis
industrial instance.
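To make the setup concrete, here is a toy, hypothetical sketch: a hand-written two-tree "ensemble" over one variable (real ensembles would come from a library such as LightGBM or XGBoost), a convex quadratic penalty, and a random-search heuristic for feasible solutions. The tree encoding and all names are illustrative assumptions; the paper itself develops an exact branch-and-bound method in addition to heuristics.

```python
import random

# Toy "gradient-boosted" ensemble over x in [0, 10]: each tree is a sorted
# list of (threshold, leaf_value) pairs read as a step function
# (hypothetical encoding for illustration only).
TREES = [
    [(3.0, 1.0), (7.0, -2.0), (float("inf"), 0.5)],
    [(5.0, -1.0), (float("inf"), 2.0)],
]

def tree_value(tree, x):
    # the first bucket whose threshold exceeds x gives the leaf value
    for threshold, value in tree:
        if x < threshold:
            return value
    return tree[-1][1]  # unreachable given the inf sentinel; kept for safety

def objective(x, mu=0.1, target=5.0):
    # ensemble prediction plus a convex quadratic penalty mitigating risk
    return sum(tree_value(t, x) for t in TREES) + mu * (x - target) ** 2

def random_search(iters=2000, seed=0):
    """Simple feasibility heuristic: sample the box and keep the best point."""
    rng = random.Random(seed)
    best_x = rng.uniform(0.0, 10.0)
    best_v = objective(best_x)
    for _ in range(iters):
        x = rng.uniform(0.0, 10.0)
        v = objective(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

best_x, best_v = random_search()
```

On this instance the ensemble-plus-penalty objective is piecewise quadratic with its best region on [3, 5), where the value approaches -3; the heuristic lands near the region's right edge. The nonconvexity comes entirely from the trees' step structure, which is what the exact branch-and-bound exploits.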