2,302 research outputs found
Approximated Perspective Relaxations: a Project&Lift Approach
The Perspective Reformulation (PR) of a Mixed-Integer NonLinear Program with semi-continuous variables is obtained by replacing each term in the (separable) objective function with its convex envelope. Solving the corresponding continuous relaxation requires appropriate techniques. Under some rather restrictive assumptions, the Projected PR (P^2R) can be defined, where the integer variables are eliminated by projecting the solution set onto the space of the continuous variables only. This approach produces a simple piecewise-convex problem with the same structure as the original one; however, it prevents the use of general-purpose solvers, in that some variables are then only implicitly represented in the formulation. We show how to construct an Approximated Projected PR (AP^2R) whereby the projected formulation is "lifted" back to the original variable space, with each integer variable expressing one piece of the obtained piecewise-convex function. In some cases, this produces a reformulation of the original problem with exactly the same size and structure as the standard continuous relaxation, but providing substantially improved bounds. In the process we also substantially extend the approach beyond the original P^2R development by relaxing the requirements that the objective function be quadratic and that the left endpoint of the domain of the variables be non-negative. While the AP^2R bound can be weaker than that of the PR, this approach can be applied in many more cases and allows direct use of off-the-shelf MINLP software; it is shown to be competitive with previously proposed approaches in some applications.
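The perspective reformulation described above can be sketched for a single semi-continuous term; the symbols f, l, u, y below are illustrative notation, not taken from the paper:

```latex
% one semi-continuous variable x with binary indicator y:
%   x = 0 when y = 0,  x \in [l, u] when y = 1
\min_{x,\,y}\; f(x) + c\,y
\quad \text{s.t.} \quad l\,y \le x \le u\,y, \quad y \in \{0,1\}
% PR: replace f(x) by its convex envelope over this disjunction,
% namely the perspective of f (with the convention 0 \cdot f(0/0) = 0):
\tilde f(x,y) = y\, f\!\left(x/y\right)
% e.g. for a quadratic term f(x) = a x^2 + b x this gives
\tilde f(x,y) = \frac{a x^2}{y} + b x
```

Relaxing y to [0,1], the perspective term is tighter than f(x) itself, which is the source of the improved bounds the abstract refers to.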
Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks
Bayesian Networks (BNs) represent conditional probability relations among a
set of random variables (nodes) in the form of a directed acyclic graph (DAG),
and have found diverse applications in knowledge discovery. We study the
problem of learning the sparse DAG structure of a BN from continuous
observational data. The central problem can be modeled as a mixed-integer
program with an objective function composed of a convex quadratic loss function
and a regularization penalty subject to linear constraints. The optimal
solution to this mathematical program is known to have desirable statistical
properties under certain conditions. However, the state-of-the-art optimization
solvers are not able to obtain provably optimal solutions to the existing
mathematical formulations for medium-size problems within reasonable
computational times. To address this difficulty, we tackle the problem from
both computational and statistical perspectives. On the one hand, we propose a
concrete early stopping criterion to terminate the branch-and-bound process in
order to obtain a near-optimal solution to the mixed-integer program, and
establish the consistency of this approximate solution. On the other hand, we
improve the existing formulations by replacing the linear "big-M" constraints
that represent the relationship between the continuous and binary indicator
variables with second-order conic constraints. Our numerical results
demonstrate the effectiveness of the proposed approaches.
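The conic strengthening mentioned above can be written for one continuous coefficient beta_j with binary indicator z_j; the symbols are illustrative, not the paper's notation:

```latex
% big-M linking of a coefficient to its binary indicator:
-M z_j \;\le\; \beta_j \;\le\; M z_j
% conic replacement: introduce s_j standing in for \beta_j^2 in the
% quadratic loss and link it to z_j by a rotated second-order cone:
\beta_j^2 \;\le\; s_j\, z_j
% z_j = 0 forces \beta_j = 0; z_j = 1 recovers s_j \ge \beta_j^2,
% i.e. the perspective (convex-envelope) tightening of the quadratic term
```

Unlike the big-M form, the conic constraint involves no artificial bound M, whose choice otherwise governs the tightness of the continuous relaxation.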
Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint
The sparsity constrained rank-one matrix approximation problem is a difficult
mathematical optimization problem which arises in a wide array of useful
applications in engineering, machine learning and statistics, and the design of
algorithms for this problem has attracted intensive research activities. We
introduce an algorithmic framework, called ConGradU, that unifies a variety of
seemingly different algorithms that have been derived from disparate
approaches, and allows for deriving new schemes. Building on the old and
well-known conditional gradient algorithm, ConGradU is a simplified version
with unit step size and yields a generic algorithm which either is given by an
analytic formula or requires a very low computational complexity. Mathematical
properties are systematically developed and numerical experiments are given.
Comment: Minor changes. Final version. To appear in SIAM Review.
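The unit-step conditional gradient idea behind ConGradU can be sketched on one concrete instance, sparsity-constrained leading-eigenvector approximation; the function name and details below are an assumption for illustration, not the paper's code:

```python
import numpy as np

def congradu_sparse_pc(A, k, iters=100, seed=0):
    """ConGradU-style sketch for  max x^T A x  s.t. ||x||_2 = 1, ||x||_0 <= k.

    Each step solves the linear oracle  argmax_{v feasible} <grad, v>:
    over this feasible set that means keeping the k largest-magnitude
    gradient entries and renormalizing. Unit step size means the iterate
    jumps directly to the oracle point (no line search).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = A @ x                        # gradient direction (up to a factor 2)
        idx = np.argsort(np.abs(g))[-k:]  # support of the k largest |entries|
        v = np.zeros(n)
        v[idx] = g[idx]
        nv = np.linalg.norm(v)
        if nv == 0:                      # zero gradient: stop
            break
        x = v / nv                       # unit step: move to the oracle point
    return x
```

Each iterate is closed-form and costs one matrix-vector product plus a partial sort, which matches the abstract's point that the scheme is either analytic or very cheap per step.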