
    Characterizations of Linear Suboptimality for Mathematical Programs with Equilibrium Constraints

    The paper is devoted to the study of a new notion of linear suboptimality in constrained mathematical programming. This concept differs from conventional notions of solutions to optimization-related problems, yet it seems natural and significant from the viewpoint of modern variational analysis and applications. In contrast to standard notions, it admits complete characterizations via appropriate constructions of generalized differentiation in nonconvex settings. In this paper we mainly focus on various classes of mathematical programs with equilibrium constraints (MPECs), whose principal role has been well recognized in optimization theory and its applications. Based on robust generalized differential calculus, we derive new results giving pointwise necessary and sufficient conditions for linear suboptimality in general MPECs and their important specializations involving variational and quasi-variational inequalities, implicit complementarity problems, etc.
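    To fix ideas about the equilibrium (complementarity) constraints involved, here is a toy MPEC of our own devising (the instance is illustrative, not from the paper), solved by enumerating the complementarity branches; each branch reduces to an ordinary convex program:

```python
# Toy MPEC:  min (x - 1)^2 + (y - 1)^2
#            s.t. x >= 0, y >= 0, x * y = 0   (complementarity)
# Enumerating the two branches (x = 0 or y = 0) turns the MPEC into
# two one-dimensional convex problems, solvable in closed form here.
def solve_branch(fix_x_zero):
    if fix_x_zero:              # branch x = 0: minimize over y >= 0
        x, y = 0.0, max(0.0, 1.0)
    else:                       # branch y = 0: minimize over x >= 0
        x, y = max(0.0, 1.0), 0.0
    return (x, y), (x - 1) ** 2 + (y - 1) ** 2

candidates = [solve_branch(True), solve_branch(False)]
best_pt, best_val = min(candidates, key=lambda t: t[1])
# Global minimizers: (0, 1) and (1, 0), both with objective value 1.
# The feasible set (the union of the two nonnegative axes) is
# nonconvex, so standard constraint qualifications fail at the
# solutions; this is why MPECs call for tailored optimality notions.
```

    Branch enumeration is only tractable for tiny problems; it serves here purely to make the structure of the constraints concrete.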

    Hidden Convexity in Partially Separable Optimization

    The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions which at the same time are global optimal solutions of the original nonconvex problems. Such a hidden convexity property was so far limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with some partially separable structure. Among other things, the new hidden convexity results open up the possibility to solve multi-stage robust optimization problems using certain nonlinear decision rules.
    Keywords: convex relaxation of nonconvex problems; hidden convexity; partially separable functions; robust optimization
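    As a small illustration of the single-constraint quadratic case the abstract refers to (the instance below is our own, not from the paper): minimizing an indefinite quadratic form over the unit sphere is a nonconvex problem, yet its convex (semidefinite) relaxation is tight, and the optimal value is simply the smallest eigenvalue.

```python
import numpy as np

# Nonconvex problem: min x'Qx  s.t.  ||x||^2 = 1, with Q indefinite.
# Hidden convexity: the convex (SDP) relaxation
#     min <Q, X>  s.t.  tr(X) = 1, X psd
# is tight for this problem; its optimum equals the smallest
# eigenvalue of Q and is attained at the rank-one matrix X = v v'
# built from the corresponding eigenvector, recovering a global x.
Q = np.array([[1.0, 2.0],
              [2.0, -3.0]])        # indefinite (det < 0)
w, V = np.linalg.eigh(Q)           # eigenvalues in ascending order
x_star = V[:, 0]                   # unit eigenvector for w[0]
val = x_star @ Q @ x_star          # equals w[0], the global minimum
```

    The rank-one structure of the relaxation's solution is exactly what "hidden convexity" means here: the convex surrogate loses nothing.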

    On Degrees of Freedom of Projection Estimators with Applications to Multivariate Nonparametric Regression

    In this paper, we consider the nonparametric regression problem with multivariate predictors. We provide a characterization of the degrees of freedom and divergence for estimators of the unknown regression function which are obtained as outputs of linearly constrained quadratic optimization procedures, namely, minimizers of the least squares criterion with linear constraints and/or quadratic penalties. As special cases of our results, we derive explicit expressions for the degrees of freedom in many nonparametric regression problems, e.g., bounded isotonic regression, multivariate (penalized) convex regression, and additive total variation regularization. Our theory also yields, as special cases, known results on the degrees of freedom of many well-studied estimators in the statistics literature, such as ridge regression, the Lasso and the generalized Lasso. Our results can readily be used to choose the tuning parameter(s) involved in the estimation procedure by minimizing Stein's unbiased risk estimate. As a by-product of our analysis, we derive an interesting connection between bounded isotonic regression and isotonic regression on a general partially ordered set, which is of independent interest.
    Comment: 72 pages, 7 figures, Journal of the American Statistical Association (Theory and Methods), 201
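    As a hedged illustration of the SURE-based tuning the abstract mentions, here is the classical ridge-regression special case on synthetic data (our own construction, not the paper's general theory): the degrees of freedom are the trace of the hat matrix, and SURE is minimized over a grid of penalty values.

```python
import numpy as np

# Ridge regression: df(lam) = tr(H_lam), with hat matrix
# H_lam = X (X'X + lam*I)^{-1} X', plugged into Stein's unbiased
# risk estimate (SURE) to choose the tuning parameter lam.
rng = np.random.default_rng(0)
n, p, sigma = 50, 10, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 1.5]        # sparse true coefficients
y = X @ beta + sigma * rng.standard_normal(n)

def hat_matrix(X, lam):
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

def ridge_df(X, lam):
    return np.trace(hat_matrix(X, lam))   # degrees of freedom

def sure(X, y, lam, sigma2):
    """SURE(lam) = ||y - H y||^2 / n - sigma^2 + 2*sigma^2*df/n."""
    H = hat_matrix(X, lam)
    resid = y - H @ y
    n = len(y)
    return resid @ resid / n - sigma2 + 2.0 * sigma2 * np.trace(H) / n

lams = np.logspace(-2, 3, 60)
lam_star = min(lams, key=lambda l: sure(X, y, l, sigma ** 2))
```

    At `lam = 0` the degrees of freedom equal `p`, and they decrease monotonically as the penalty grows, which is what makes the df term in SURE a useful complexity penalty.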

    Canonical Duality Theory for Global Optimization Problems and Applications

    The canonical duality theory is studied through a discussion of a general global optimization problem and applications to fundamentally important problems. This general problem is a formulation of the minimization problem with inequality constraints, where the objective function and constraints are any convex or nonconvex functions satisfying certain decomposition conditions. It covers convex problems, mixed-integer programming problems and many other nonlinear programming problems. The three main parts of the canonical duality theory are the canonical dual transformation, the complementary-dual principle and the triality theory. The complementary-dual principle is further developed; it conventionally states that each critical point of the canonical dual problem corresponds to a KKT point of the primal problem, with the two sharing the same function value. The new result emphasizes that there exists a one-to-one correspondence between KKT points of the dual problem and of the primal problem, and that each pair of corresponding KKT points shares the same function value, which implies that there is truly no duality gap between the canonical dual problem and the primal problem. The triality theory reveals insightful information about global and local solutions. It is shown that as long as the global optimality condition holds, the primal problem is equivalent to a convex problem in the dual space, which can be solved efficiently by existing convex methods; even if the condition does not hold, the convex problem still provides a lower bound that is at least as good as that given by the Lagrangian relaxation method. It is also shown that, through examining the canonical dual problem, the hidden convexity of the primal problem is easily observable. The canonical duality theory is then applied to three fundamentally important problems. The first is the spherically constrained quadratic problem, also referred to as the trust-region subproblem. The canonical dual problem is one-dimensional, and it is proved that the primal problem, whether its objective function is convex or nonconvex, is equivalent to a convex problem in the dual space. Moreover, conditions are found which form the boundary separating instances into the "hard case" and the "easy case". A canonical primal-dual algorithm is developed which is able to solve the problem efficiently, including the "hard case", and can be used as a unified method for similar problems. The second is the binary quadratic problem, a fundamental problem in discrete optimization. The discussion focuses on lower bounds and analytically solvable cases, which are obtained by analyzing the canonical dual problem with perturbation techniques. The third is a general nonconvex problem with log-sum-exp functions and quartic polynomials. It arises widely in engineering science, and it can be used to approximate nonsmooth optimization problems. The work shows that such problems can still be solved efficiently, via the canonical duality approach, even if they are nonconvex and nonsmooth.
    Doctor of Philosophy
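    As a hedged sketch of the trust-region subproblem mentioned above (this is the standard eigenvalue/bisection approach for the "easy case", not the thesis's canonical primal-dual algorithm): the dual optimality condition is one-dimensional, so a single multiplier lam with ||(A + lam*I)^{-1} b|| = r can be located by bisection.

```python
import numpy as np

def trs(A, b, r, tol=1e-10):
    """Trust-region subproblem: min 0.5*x'Ax + b'x  s.t. ||x|| <= r.

    Easy-case solver: the dual condition is one-dimensional, namely
    find lam >= max(0, -lambda_min(A)) with ||(A + lam*I)^{-1} b|| = r,
    located here by bisection. (The 'hard case', where b is orthogonal
    to the eigenspace of lambda_min(A), is not handled.)"""
    w, V = np.linalg.eigh(A)        # eigendecomposition, w ascending
    beta = V.T @ b
    if w[0] > 0:                    # A positive definite: try interior
        x = np.linalg.solve(A, -b)
        if np.linalg.norm(x) <= r:
            return x
    norm = lambda lam: np.linalg.norm(beta / (w + lam))
    lo = max(0.0, -w[0]) + 1e-12    # keep A + lam*I positive definite
    hi = lo + 1.0
    while norm(hi) > r:             # grow until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(mid) > r else (lo, mid)
    lam = 0.5 * (lo + hi)
    return V @ (-beta / (w + lam))  # x(lam) = -(A + lam*I)^{-1} b

A = np.array([[-2.0, 0.5],
              [0.5, 1.0]])          # indefinite: nonconvex objective
b = np.array([1.0, 0.5])
x = trs(A, b, 1.0)                  # boundary solution, ||x|| = 1
```

    Even though the objective is nonconvex, the one-dimensional dual search returns the global solution, which is the "hidden convexity" phenomenon the abstract describes.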