A linear fractional optimization over an integer efficient set
Mathematical optimization problems with a goal function have many applications in
various fields, such as finance, management science, and economics. It is therefore
important to have a powerful tool for solving such problems when the main criterion
is nonlinear, in particular fractional, i.e., a ratio of two affine functions. In
this paper, we propose an exact algorithm for optimizing a linear fractional function over
the efficient set of a Multiple Objective Integer Linear Programming (MOILP) problem without
having to enumerate all the efficient solutions. We iteratively add constraints that
eliminate undesirable points and progressively reduce the admissible region. At each
iteration, the solution is evaluated using the reduced-gradient cost vector, and a
new direction that improves the objective function is then defined. The algorithm
was coded in MATLAB and tested on randomly generated instances.
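As a toy illustration of the kind of objective treated above (not the paper's algorithm), the following sketch maximizes a linear fractional function, a ratio of two affine functions, by brute force over a small integer-feasible region. All data (`c`, `d`, `alpha`, `beta`, the knapsack constraint) are hypothetical; exact arithmetic via `Fraction` avoids rounding issues when comparing ratios.

```python
from itertools import product
from fractions import Fraction

# Hypothetical toy instance: maximize (c.x + alpha) / (d.x + beta)
# over integer points of a small knapsack-style region.
c, alpha = [3, 1], 2
d, beta = [1, 2], 5   # denominator stays positive on the region below

def ratio(x):
    num = sum(ci * xi for ci, xi in zip(c, x)) + alpha
    den = sum(di * xi for di, xi in zip(d, x)) + beta
    return Fraction(num, den)

# Feasible set: 0 <= x0, x1 <= 5 with x0 + x1 <= 5.
feasible = [x for x in product(range(6), repeat=2) if x[0] + x[1] <= 5]
best = max(feasible, key=ratio)
```

An exact method such as the one described above replaces this enumeration with iteratively added cuts, but the brute-force version makes the objective concrete.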
Optimising a nonlinear utility function in multi-objective integer programming
In this paper we develop an algorithm to optimise a nonlinear utility
function of multiple objectives over the integer efficient set. Our approach is
based on identifying and updating bounds on the individual objectives as well
as the optimal utility value. This is done using already known solutions,
linear programming relaxations, utility function inversion, and integer
programming. We develop a general optimisation algorithm for use with k
objectives, and we illustrate our approach using a tri-objective integer
programming problem.
Comment: 11 pages, 2 tables; v3: minor revisions, to appear in Journal of Global Optimization
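The bound-updating idea the abstract describes can be caricatured as follows. This sketch maximizes a hypothetical monotone utility over an already-known list of nondominated vectors, pruning with the component-wise ideal point of the remaining vectors; the data and utility are assumptions, and the paper's actual algorithm works without enumerating the efficient set up front.

```python
# Hypothetical nondominated objective vectors of a bi-objective problem
# (both objectives maximized) and an assumed multiplicative utility.
nondominated = [(9, 3), (4, 5), (2, 8)]

def utility(z):
    return z[0] * z[1]

incumbent, best_u = None, float("-inf")
remaining = list(nondominated)
while remaining:
    # Monotone utility: the component-wise max (ideal point) of the
    # remaining vectors upper-bounds the best value still reachable.
    ideal = tuple(max(z[i] for z in remaining) for i in range(2))
    if utility(ideal) <= best_u:
        break  # no remaining vector can improve the incumbent
    z = remaining.pop(0)
    if utility(z) > best_u:
        incumbent, best_u = z, utility(z)
```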
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
Comment: 20 pages, 4 figures
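For context on the Euclidean projection the abstract compares against: the MRF local polytope is (up to coupling constraints) built from probability simplices, and projecting onto a single simplex already requires a dedicated routine. Below is the standard sorting-based simplex projection, shown only as an illustrative building block, not the paper's proposed method.

```python
# Sorting-based Euclidean projection onto the probability simplex
# {x : x >= 0, sum(x) = 1}. A classical algorithm, shown for context.
def project_simplex(v):
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        cumsum += uj
        if uj - (cumsum - 1.0) / j > 0:
            theta = (cumsum - 1.0) / j  # last j passing the test is rho
    return [max(vi - theta, 0.0) for vi in v]
```

A full projection onto the local polytope must additionally respect the marginalization constraints between node and edge simplices, which is why it is as hard as the original problem.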
Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded
Decision trees usefully represent sparse, high-dimensional, and noisy data.
Having learned a function from this data, we may want to thereafter integrate
the function into a larger decision-making problem, e.g., for picking the best
chemical process catalyst. We study a large-scale, industrially-relevant
mixed-integer nonlinear nonconvex optimization problem involving both
gradient-boosted trees and penalty functions mitigating risk. This
mixed-integer optimization problem with convex penalty terms broadly applies to
optimizing pre-trained regression tree models. Decision makers may wish to
optimize discrete models to repurpose legacy predictive models, or they may
wish to optimize a discrete model that particularly well-represents a data set.
We develop several heuristic methods to find feasible solutions, and an exact,
branch-and-bound algorithm leveraging structural properties of the
gradient-boosted trees and penalty functions. We computationally test our
methods on a concrete mixture design instance and an industrial chemical
catalysis instance.
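A minimal sketch of the objective structure described above, with entirely hypothetical trees and penalty: the ensemble prediction is the sum of per-tree outputs, a convex penalty is subtracted, and a tiny discrete domain is searched by brute force. The paper's branch-and-bound method exploits tree structure instead of enumerating.

```python
from itertools import product

# Tiny hypothetical boosted ensemble: each "tree" is an axis-aligned
# split; the model output is the sum of tree outputs.
def tree1(x):
    return 2.0 if x[0] <= 1 else -1.0

def tree2(x):
    return 0.5 if x[1] <= 0 else 1.5

def penalty(x):
    # Convex quadratic penalty mitigating risk from extreme inputs.
    return 0.1 * (x[0] ** 2 + x[1] ** 2)

def objective(x):
    return tree1(x) + tree2(x) - penalty(x)

# Brute-force search over a small discrete input domain.
domain = list(product(range(3), repeat=2))
best = max(domain, key=objective)
```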
On the Bayes-optimality of F-measure maximizers
The F-measure, which was originally introduced in information retrieval,
is nowadays routinely used as a performance metric for problems such as binary
classification, multi-label classification, and structured output prediction.
Optimizing this measure is a statistically and computationally challenging
problem, since no closed-form solution exists. Adopting a decision-theoretic
perspective, this article provides a formal and experimental analysis of
different approaches for maximizing the F-measure. We start with a Bayes-risk
analysis of related loss functions, such as Hamming loss and subset zero-one
loss, showing that optimizing such losses as a surrogate of the F-measure leads
to a high worst-case regret. Subsequently, we perform a similar type of
analysis for F-measure maximizing algorithms, showing that such algorithms are
only approximate and rely on additional assumptions regarding the statistical
distribution of the binary response variables. Furthermore, we present a new
algorithm which is not only computationally efficient but also Bayes-optimal,
regardless of the underlying distribution. To this end, the algorithm requires
only a quadratic (with respect to the number of binary responses) number of
parameters of the joint distribution. We illustrate the practical performance
of all analyzed methods by means of experiments with multi-label classification
problems.
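To make the quantity being maximized concrete (this is only the empirical F1 with threshold tuning on toy data, not the paper's Bayes-optimal algorithm): F1 = 2TP / (2TP + FP + FN), which has no closed-form maximizer, so even the simplest approach must search over decision thresholds. Scores and labels below are hypothetical.

```python
# Empirical F1 from confusion counts, plus brute-force threshold tuning.
def f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

scores = [0.9, 0.7, 0.4, 0.2]   # hypothetical classifier scores
y_true = [1, 0, 1, 0]           # hypothetical binary labels

# Every distinct score is a candidate threshold; keep the best F1.
best_t, best_f = max(
    ((t, f1(y_true, [int(s >= t) for s in scores])) for t in scores),
    key=lambda pair: pair[1],
)
```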
Multi-objective integer programming: An improved recursive algorithm
This paper introduces an improved recursive algorithm to generate the set of
all nondominated objective vectors for the Multi-Objective Integer Programming
(MOIP) problem. We significantly improve the earlier recursive algorithm of
\"Ozlen and Azizo\u{g}lu by using the set of already solved subproblems and
their solutions to avoid solving a large number of IPs. A numerical example is
presented to explain the workings of the algorithm, and we conduct a series of
computational experiments to show the savings that can be obtained. As our
experiments show, the improvement becomes more significant as the problems grow
larger in terms of the number of objectives.
Comment: 11 pages, 6 tables; v2: added more details and a computational study
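To fix ideas about what the nondominated set of a MOIP is (a brute-force definition check on made-up data, not the recursive algorithm described above): a vector is nondominated when no other achievable vector is at least as good in every objective and strictly better in one.

```python
from itertools import product

# Hypothetical bi-objective integer program, both objectives maximized.
def objs(x):
    return (3 * x[0] + x[1], x[0] + 4 * x[1])

# Feasible set: 0 <= x0, x1 <= 3 with 2*x0 + 3*x1 <= 6.
feasible = [x for x in product(range(4), repeat=2) if 2 * x[0] + 3 * x[1] <= 6]

def dominates(a, b):
    # a dominates b: at least as good everywhere, strictly better somewhere.
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

vals = {objs(x) for x in feasible}
nondominated = sorted(v for v in vals if not any(dominates(w, v) for w in vals))
```

The recursive algorithms discussed above generate this set via sequences of single-objective IPs rather than enumeration, which is what makes them scale past toy sizes.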