
    Error bounds for rank constrained optimization problems and applications

    This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set $\mathcal{R}=\{X\in\mathbb{X}\mid {\rm rank}(X)\le\kappa\}$ and a closed convex set $\Omega$. We establish local (global) Lipschitzian type error bounds for estimating the distance from any $X\in\Omega$ ($X\in\mathbb{X}$) to the feasible set and the solution set, respectively, under the calmness at the origin of a multifunction associated to the feasible set, a condition satisfied in particular by three classes of common rank constrained optimization problems. As an application of the local Lipschitzian type error bounds, we show that the penalty problem yielded by moving the rank constraint into the objective is exact, in the sense that its global optimal solution set coincides with that of the original problem once the penalty parameter exceeds a certain threshold. This in particular gives an affirmative answer to the open question of whether the penalty problem (32) in (Gao and Sun, 2010) is exact. As another application, we derive error bounds for the iterates generated by a multi-stage convex relaxation approach to these three classes of rank constrained problems, and show that the bounds are nonincreasing as the number of stages increases.
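    For intuition, the distance to the rank constraint set $\mathcal{R}$ alone (ignoring $\Omega$) is computable in closed form via the Eckart-Young theorem: the nearest matrix of rank at most $\kappa$ is the truncated SVD, so the distance equals the norm of the discarded singular values. A minimal sketch, with illustrative names that are not the paper's notation:

```python
# Sketch only: exact Frobenius distance from X to R = {rank <= kappa},
# via Eckart-Young (nearest rank-kappa matrix is the truncated SVD).
import numpy as np

def dist_to_rank_set(X, kappa):
    """Return dist_F(X, R) and the nearest matrix of rank <= kappa."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    tail = s[kappa:]                                  # singular values dropped by truncation
    X_proj = (U[:, :kappa] * s[:kappa]) @ Vt[:kappa, :]
    return np.linalg.norm(tail), X_proj

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
d, Xk = dist_to_rank_set(X, kappa=2)
print(d, np.linalg.matrix_rank(Xk))                   # distance, 2
```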

    Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression

    This paper is concerned with high-dimensional error-in-variables regression, which aims to identify a small number of important interpretable factors from corrupted data arising in the many applications where measurement errors or missing data cannot be ignored. Motivated by CoCoLasso due to Datta and Zou \cite{Datta16} and the advantage of the zero-norm regularized LS estimator over the Lasso for clean data, we propose a calibrated zero-norm regularized LS (CaZnRLS) estimator by constructing a calibrated least squares loss with a positive definite projection of an unbiased surrogate for the covariance matrix of the covariates, and use the multi-stage convex relaxation approach to compute the CaZnRLS estimator. Under a restricted eigenvalue condition on the true matrix of covariates, we derive the $\ell_2$-error bound of every iterate, establish that the error bound sequence is decreasing, and prove the sign consistency of the iterates after finitely many steps. Statistical guarantees are also provided for the CaZnRLS estimator under two types of measurement errors. Numerical comparisons with CoCoLasso and NCL (the nonconvex Lasso proposed by Loh and Wainwright \cite{Loh11}) demonstrate that CaZnRLS not only has comparable or even better relative RMSE but also identifies the fewest incorrect predictors.
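    The two ingredients named above, an unbiased covariance surrogate and its positive (semi)definite projection, admit a simple sketch under an additive-noise model with known error variance; the eigenvalue-clipping projection below is one common choice and is not necessarily the specific projection the paper uses:

```python
# Hedged illustration: unbiased surrogate for Cov(X) from noisy covariates Z,
# then projection onto the PSD cone. Names and the noise model (additive
# errors with known variance tau2) are assumptions, not the paper's notation.
import numpy as np

def psd_projection(S):
    """Project a symmetric matrix onto the PSD cone by eigenvalue clipping."""
    w, Q = np.linalg.eigh((S + S.T) / 2)
    return (Q * np.maximum(w, 0)) @ Q.T

def calibrated_covariance(Z, tau2):
    """Z'Z/n - tau2*I is unbiased for Cov(true X) under the additive model."""
    n, p = Z.shape
    S = Z.T @ Z / n - tau2 * np.eye(p)   # may be indefinite in high dimensions
    return psd_projection(S)             # restore (semi)definiteness
```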

    KL property of exponent 1/2 for zero-norm composite quadratic functions

    This paper is concerned with a class of zero-norm regularized and constrained composite quadratic optimization problems, which has important applications in fields such as sparse eigenvalue problems, sparse portfolio problems, and nonnegative matrix factorization. For this class of nonconvex and nonsmooth problems, we establish the KL property of exponent 1/2 of the objective function under a suitable assumption, and provide examples illustrating that the assumption holds.
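    For reference, the standard notion being invoked: a proper lower semicontinuous function $f$ has the KL property of exponent 1/2 at a critical point $\bar{x}$ if, for all $x$ near $\bar{x}$ with $f(\bar{x})<f(x)<f(\bar{x})+\eta$,
\[
  \operatorname{dist}\bigl(0,\partial f(x)\bigr)\ \ge\ c\,\bigl(f(x)-f(\bar{x})\bigr)^{1/2}
  \quad\text{for some } c>0,
\]
    the exponent that typically yields linear convergence of first-order methods.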

    A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery

    This paper is concerned with a noisy structured low-rank matrix recovery problem, which can be modeled as a structured rank minimization problem. We reformulate this problem as a mathematical program with a generalized complementarity constraint (MPGCC), and show that its penalty version, yielded by moving the generalized complementarity constraint into the objective, has the same global optimal solution set as the MPGCC whenever the penalty parameter exceeds a threshold. Then, by solving the exact penalty problem in an alternating way, we obtain a multi-stage convex relaxation approach. We provide theoretical guarantees for this approach under a mild restricted eigenvalue condition by quantifying, in the subsequent stages, the reduction of the error and approximate rank bounds of the first-stage convex relaxation (which is exactly the nuclear norm relaxation), and by establishing the geometric convergence of the error sequence in a statistical sense. Numerical experiments on some structured low-rank matrix recovery examples confirm our theoretical findings.
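    The first-stage relaxation mentioned above is the nuclear norm relaxation; the workhorse primitive behind solvers for it is the proximal map of the nuclear norm, i.e. singular value soft-thresholding. A minimal sketch (the name svt and the dense-input setting are illustrative assumptions):

```python
# prox_{tau*||.||_*}(X): soft-threshold the singular values by tau.
import numpy as np

def svt(X, tau):
    """Singular value thresholding, the proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # shrink, dropping small singular values
    return (U * s) @ Vt
```

    Inside a proximal gradient loop for a least squares loss, one iteration would read X = svt(X - step * grad(X), step * lam).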

    GEP-MSCRA for computing the group zero-norm regularized least squares estimator

    This paper is concerned with the group zero-norm regularized least squares estimator which, via the variational characterization of the zero-norm, can be obtained from a mathematical program with equilibrium constraints (MPEC). By developing a global exact penalty for the MPEC, this estimator is shown to arise from an exact penalization problem that not only has a favorable bilinear structure but also yields a recipe for deriving equivalent DC estimators such as the SCAD and MCP estimators. We propose a multi-stage convex relaxation approach (GEP-MSCRA) for computing this estimator and, under a restricted strong convexity assumption on the design matrix, establish its theoretical guarantees, which include decreasing error bounds for the iterates to the true coefficient vector and the coincidence of the iterates, after finitely many steps, with the oracle estimator. Finally, we implement the GEP-MSCRA with the subproblems solved by a semismooth Newton augmented Lagrangian method (ALM) and compare its performance with that of SLEP and MALSAR, solvers for the weighted $\ell_{2,1}$-norm regularized estimator, on synthetic group sparse regression problems and real multi-task learning problems. The numerical comparison indicates that GEP-MSCRA has a significant advantage over SLEP and MALSAR in reducing error and achieving better sparsity.
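    Each stage of such an approach solves a weighted $\ell_{2,1}$-norm regularized least squares subproblem; the basic primitive is the proximal map of the weighted $\ell_{2,1}$-norm, i.e. group-wise soft-thresholding. A hedged sketch, with the function name and group encoding chosen for illustration:

```python
# prox of t * sum_g w_g * ||x_g||_2: shrink each group toward zero,
# zeroing it entirely when its norm falls below the threshold t * w_g.
import numpy as np

def prox_weighted_l21(x, groups, weights, t):
    """groups: list of index arrays; weights: per-group weights w_g."""
    out = x.copy()
    for g, w in zip(groups, weights):
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t * w else (1 - t * w / nrm) * x[g]
    return out
```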

    Equivalent Lipschitz surrogates for zero-norm and rank optimization problems

    This paper proposes a mechanism for producing equivalent Lipschitz surrogates for zero-norm and rank optimization problems by means of the global exact penalty for their equivalent mathematical programs with equilibrium constraints (MPECs). Specifically, we reformulate these combinatorial problems as equivalent MPECs via the variational characterization of the zero-norm and the rank function, show that the penalized problems, yielded by moving the equilibrium constraint into the objective, are globally exact penalizations, and obtain the equivalent Lipschitz surrogates by eliminating the dual variable in the global exact penalty. These surrogates, which include the popular SCAD function from statistics, are also differences of convex (DC) functions whenever the function and constraint set involved in the zero-norm and rank optimization problems are convex. We illustrate an application by designing a multi-stage convex relaxation approach to the rank plus zero-norm regularized minimization problem.
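    As a concrete instance of such a surrogate, the SCAD penalty of Fan and Li with parameters $\lambda>0$ and $a>2$ has an explicit three-piece formula, evaluated below; the default $a=3.7$ is the value commonly recommended in the statistics literature, not something fixed by this paper:

```python
# Standard SCAD penalty, evaluated elementwise:
#   lam*|t|                                for |t| <= lam
#   (2*a*lam*|t| - t^2 - lam^2)/(2(a-1))   for lam < |t| <= a*lam
#   lam^2 (a+1)/2                          for |t| > a*lam
import numpy as np

def scad(t, lam, a=3.7):
    t = np.abs(np.asarray(t, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[~small & ~mid] = lam ** 2 * (a + 1) / 2
    return out
```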

    A proximal MM method for the zero-norm regularized PLQ composite optimization problem

    This paper is concerned with a class of zero-norm regularized piecewise linear-quadratic (PLQ) composite minimization problems, which covers the zero-norm regularized $\ell_1$-loss minimization problem as a special case. For this class of nonconvex nonsmooth problems, we show that its equivalent MPEC reformulation is partially calm on the set of global optima and use this property to derive a family of equivalent DC surrogates. We then propose a proximal majorization-minimization (MM) method, a convex relaxation approach outside the DC algorithm framework, for solving one of the DC surrogates, a semiconvex PLQ minimization problem involving three nonsmooth terms. For this method, we establish global convergence and a linear rate of convergence, and under suitable conditions we show that the limit of the generated sequence is not only a local optimum but also a good critical point in a statistical sense. Numerical experiments on synthetic and real data, with the subproblems of the proximal MM method solved by a dual semismooth Newton method, confirm our theoretical findings, and numerical comparisons with a convergent indefinite-proximal ADMM for the partially smoothed DC surrogate verify its superiority in solution quality and computing time.
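    Schematically, one proximal MM step on a DC surrogate $g-h$ (with $g,h$ convex) majorizes the concave part $-h$ by its linearization and minimizes the resulting convex model plus a proximal term. The sketch below captures only this pattern; solve_convex is a hypothetical placeholder for the inner convex solver (here, a dual semismooth Newton method), not the paper's interface:

```python
# One proximal MM step on f = g - h: since h is convex,
#   f(x) <= g(x) - h(x_k) - <v_k, x - x_k>   for any v_k in subdiff h(x_k),
# so minimizing the right side (plus a proximal term) decreases f.
def proximal_mm_step(x_k, subgrad_h, solve_convex, rho):
    v_k = subgrad_h(x_k)   # subgradient of the concave part at the current point
    # placeholder: argmin_x  g(x) - <v_k, x> + (rho/2)||x - x_k||^2
    return solve_convex(v_k, x_k, rho)
```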

    Error bound of local minima and KL property of exponent 1/2 for squared F-norm regularized factorization

    This paper is concerned with the squared F(robenius)-norm regularized factorization form of noisy low-rank matrix recovery problems. Under a suitable assumption on the restricted condition number of the Hessian matrix of the loss function, we establish an error bound to the true matrix for those local minima whose ranks do not exceed the rank of the true matrix. Then, for the least squares loss function, we establish the KL property of exponent 1/2 for the F-norm regularized factorization function over its global minimum set under a restricted strong convexity assumption. These theoretical findings are confirmed by applying an accelerated alternating minimization method to the F-norm regularized factorization problem.
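    For the least squares loss with a fully observed matrix $M$ (a simplification: the paper treats noisy recovery through a sampling operator), each block update of the alternating minimization $\min_{U,V}\ \tfrac12\|UV^T-M\|_F^2+\tfrac{\lambda}{2}(\|U\|_F^2+\|V\|_F^2)$ is a ridge regression with a closed form. A minimal sketch, with all names illustrative:

```python
# Plain (unaccelerated) alternating minimization; each update solves its
# block exactly: U = M V (V'V + lam I)^{-1}, V = M' U (U'U + lam I)^{-1}.
import numpy as np

def altmin_factorization(M, r, lam=0.1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))
    reg = lam * np.eye(r)
    for _ in range(iters):
        U = M @ V @ np.linalg.inv(V.T @ V + reg)    # ridge update for U
        V = M.T @ U @ np.linalg.inv(U.T @ U + reg)  # ridge update for V
    return U, V
```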

    KL property of exponent 1/2 of $\ell_{2,0}$-norm and DC regularized factorizations for low-rank matrix recovery

    This paper is concerned with the factorization form of the rank regularized loss minimization problem. To cater for the scenario in which only a coarse estimate of the rank of the true matrix is available, an $\ell_{2,0}$-norm regularized term is added to the factored loss function to reduce the rank adaptively; and to account for the ambiguities in the factorization, a balance term is then introduced. For the least squares loss, under a restricted condition number assumption on the sampling operator, we establish the KL property of exponent 1/2 of the nonsmooth factored composite function and its equivalent DC reformulations on the set of their global minimizers. We also confirm the theoretical findings by applying a proximal linearized alternating minimization method to the regularized factorizations.
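    The $\ell_{2,0}$-norm term counts the nonzero columns of a factor, and its proximal map, the natural primitive inside a proximal linearized alternating minimization scheme, is column-wise hard thresholding: zeroing a column $u$ costs $\tfrac12\|u\|^2$ while keeping it costs $t\nu$, so a column is kept iff $\|u\|>\sqrt{2t\nu}$. A hedged sketch with illustrative names:

```python
# prox of t * nu * (# nonzero columns) at U: keep a column iff it is
# "worth" the per-column charge, i.e. ||u||_2 > sqrt(2 * t * nu).
import numpy as np

def prox_col_l20(U, t, nu):
    out = U.copy()
    keep = np.linalg.norm(U, axis=0) > np.sqrt(2 * t * nu)
    out[:, ~keep] = 0.0      # hard-threshold the cheap columns to zero
    return out
```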

    A proximal dual semismooth Newton method for computing zero-norm penalized QR estimator

    This paper is concerned with the computation of the high-dimensional zero-norm penalized quantile regression estimator, defined as a global minimizer of the zero-norm penalized check loss function. To obtain a desirable approximation to this estimator, we reformulate the NP-hard problem as an equivalent augmented Lipschitz optimization problem and exploit its coupled structure to propose a multi-stage convex relaxation approach (MSCRA_PPA), each stage of which inexactly solves a weighted $\ell_1$-regularized check loss minimization problem with a proximal dual semismooth Newton method. Under a restricted strong convexity condition, we provide a theoretical guarantee for MSCRA_PPA by establishing an error bound of each iterate to the true estimator and a linear rate of convergence in a statistical sense. Numerical comparisons on synthetic and real data show that MSCRA_PPA not only has comparable or even better estimation performance, but also requires much less CPU time.
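    For reference, the check loss of quantile regression at level $\tau\in(0,1)$ is $\rho_\tau(r)=r\,(\tau-\mathbb{1}\{r<0\})$, the loss the weighted $\ell_1$-regularized subproblems repeatedly evaluate. A minimal sketch (names illustrative):

```python
# Quantile check loss: weighs positive residuals by tau and
# negative residuals by (tau - 1), then sums.
import numpy as np

def check_loss(residuals, tau):
    r = np.asarray(residuals, dtype=float)
    return np.sum(r * (tau - (r < 0)))
```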