
    Perturbation analysis of a class of composite optimization problems

    In this paper, we study the perturbation analysis of a class of composite optimization problems, a convenient and unified framework for addressing both theoretical and algorithmic questions about constrained optimization problems. This analysis is central to both the theoretical and the computational study of such problems. Under mild assumptions on the objective function, we define a strong second order sufficient condition (SSOSC) for the composite optimization problem and prove that the following conditions are equivalent to each other: the SSOSC together with the nondegeneracy condition, the nonsingularity of Clarke's generalized Jacobian of the nonsmooth system at a Karush-Kuhn-Tucker (KKT) point, and the strong regularity of the KKT point. These results provide an important way to characterize the stability of a KKT point. For the convex composite optimization problem, a special case of the general problem, we establish the equivalence between the primal/dual second order sufficient condition and the dual/primal strict Robinson constraint qualification, and the equivalence between the primal/dual SSOSC and the dual/primal nondegeneracy condition. Moreover, we prove that the dual nondegeneracy condition and the nonsingularity of Clarke's generalized Jacobian of the subproblem arising in the augmented Lagrangian method are equivalent to each other. These theoretical results lay a solid foundation for designing efficient algorithms. Comment: 41 pages
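    To fix notation, a generic composite problem and the nonsmooth KKT system whose Clarke generalized Jacobian is discussed above can be sketched as follows; this is a common textbook formulation under assumptions supplied here (smooth $f$ and $g$, proper closed convex $\theta$), not necessarily the paper's exact setting.

```latex
% Illustrative composite problem (our assumption, not necessarily the paper's exact setting):
%   f and g smooth, \theta proper closed convex.
\min_{x}\; f(x) + \theta\bigl(g(x)\bigr)
% KKT conditions at (\bar{x}, \bar{y}):
\nabla f(\bar{x}) + \nabla g(\bar{x})^{*}\bar{y} = 0,
\qquad \bar{y} \in \partial\theta\bigl(g(\bar{x})\bigr),
% which can be recast as the nonsmooth system R(x, y) = 0 with
R(x,y) \;=\;
\begin{pmatrix}
  \nabla f(x) + \nabla g(x)^{*} y \\
  g(x) - \operatorname{prox}_{\theta}\bigl(g(x) + y\bigr)
\end{pmatrix}.
% Nonsingularity of Clarke's generalized Jacobian of R at a KKT point is the
% object related to strong regularity in the abstract above.
```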

    An efficient algorithm for the $\ell_{p}$ norm based metric nearness problem

    Given a dissimilarity matrix, the metric nearness problem is to find the nearest matrix of distances that satisfies the triangle inequalities. This problem has wide applications, such as sensor networks and image processing. However, it is challenging to obtain even a moderately accurate solution, due to the $O(n^{3})$ metric constraints and the nonsmooth objective function, which is usually a weighted $\ell_{p}$ norm based distance. In this paper, we propose a delayed constraint generation method for the metric nearness problem, with each subproblem solved by the semismooth Newton based proximal augmented Lagrangian method (PALM). Because storing the matrix associated with the metric constraints would require a large amount of memory, we exploit its special structure and avoid storing the constraint matrix explicitly. A pleasing aspect of our algorithm is that it can solve problems involving up to $10^{8}$ variables and $10^{13}$ constraints. Numerical experiments demonstrate the efficiency of our algorithm. On the theoretical side, firstly, under a mild condition, we establish a primal-dual error bound condition, which is essential for analyzing the local convergence rate of PALM. Secondly, we prove the equivalence between the dual nondegeneracy condition and the nonsingularity of the generalized Jacobian for the inner subproblem of PALM. Thirdly, when $q(\cdot)=\|\cdot\|_{1}$ or $\|\cdot\|_{\infty}$, we also prove, without the strict complementarity condition, the equivalence between the dual nondegeneracy condition and the uniqueness of the primal solution.
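    As an illustration of the delayed constraint generation idea (not the paper's semismooth Newton based PALM solver), the sketch below handles the $\ell_2$ case by repeatedly adding violated triangle inequalities and re-solving the restricted least-squares subproblem with an off-the-shelf convex solver; all function and variable names here are ours.

```python
# Illustrative sketch only: delayed constraint generation for the l2 metric
# nearness problem, with each restricted subproblem handed to cvxpy instead of
# the paper's PALM solver. Names are ours, not the paper's.
import itertools
import numpy as np
import cvxpy as cp

def violated_triangles(X, tol=1e-8):
    """Return triples (i, j, k) with X[i, j] > X[i, k] + X[k, j] + tol."""
    n = X.shape[0]
    return [(i, j, k) for i, j, k in itertools.permutations(range(n), 3)
            if i < j and X[i, j] > X[i, k] + X[k, j] + tol]

def metric_nearness_l2(D, max_rounds=20):
    """Nearest (Frobenius norm) matrix to D under the triangle inequalities
    generated so far (delayed generation)."""
    n = D.shape[0]
    active = set()
    X_val = D.copy()
    for _ in range(max_rounds):
        new = violated_triangles(X_val)
        if not new:                      # all triangle inequalities hold
            break
        active.update(new)
        X = cp.Variable((n, n), symmetric=True)
        cons = [cp.diag(X) == 0, X >= 0]
        cons += [X[i, j] <= X[i, k] + X[k, j] for (i, j, k) in active]
        cp.Problem(cp.Minimize(cp.sum_squares(X - D)), cons).solve()
        X_val = X.value
    return X_val

# Example: a small random symmetric dissimilarity matrix.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(6, 6))
D = np.triu(A, 1) + np.triu(A, 1).T
print(metric_nearness_l2(D))
```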

    Learning the hub graphical Lasso model with structured sparsity via an efficient algorithm

    Graphical models have demonstrated their performance in numerous tasks, ranging from biological analysis to recommender systems. However, graphical models with hub nodes are computationally difficult to fit, particularly when the dimension of the data is large. To estimate hub graphical models efficiently, we introduce a two-phase algorithm. The proposed algorithm first generates a good initial point via a dual alternating direction method of multipliers (ADMM), and then warm starts a semismooth Newton (SSN) based augmented Lagrangian method (ALM) to compute a solution that is accurate enough for practical tasks. The sparsity structure of the generalized Jacobian ensures that the algorithm can obtain a good solution very efficiently. Comprehensive experiments on both synthetic and real data show that it clearly outperforms the existing state-of-the-art algorithms. In particular, in some high-dimensional tasks it can save more than 70% of the execution time while still achieving a high-quality estimate. Comment: 28 pages, 3 figures
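    For orientation, the sketch below is a textbook ADMM for the plain graphical lasso, showing the kind of splitting a dual-ADMM warm-start phase builds on; it is not the hub-penalized model or the SSN-based ALM phase from the abstract, and all names are ours.

```python
# Illustrative sketch: textbook ADMM for the *plain* graphical lasso
#   min_{Theta > 0}  tr(S Theta) - log det(Theta) + lam * ||Theta||_1 (off-diagonal).
# Not the hub model or the SSN-ALM phase from the abstract; names are ours.
import numpy as np

def soft_threshold(A, kappa):
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # Theta-update: solve rho*Theta - Theta^{-1} = rho*(Z - U) - S
        # via an eigendecomposition of the right-hand side.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta_eigs) @ Q.T
        # Z-update: soft-threshold off-diagonal entries.
        Z = soft_threshold(Theta + U, lam / rho)
        np.fill_diagonal(Z, np.diag(Theta + U))   # leave the diagonal unpenalized
        # Dual update.
        U += Theta - Z
    return Z

# Example: sample covariance of a small synthetic data set.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
S = np.cov(X, rowvar=False)
print(np.round(graphical_lasso_admm(S, lam=0.2), 2))
```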

    An inexact proximal majorization-minimization algorithm for remote sensing image stripe noise removal

    Stripe noise in remote sensing images severely degrades visual quality and limits the precision of data analysis, and many destriping models have therefore been proposed in recent years. In contrast to these existing models, in this paper we propose a nonconvex model with a DC (difference of convex functions) structure to remove the stripe noise. To solve this model, we exploit the DC structure and apply an inexact proximal majorization-minimization algorithm, with each inner subproblem solved by the alternating direction method of multipliers. It is worth mentioning that we design an implementable stopping criterion for the inner subproblem while convergence can still be guaranteed. Numerical experiments demonstrate the superiority of the proposed model and algorithm. Comment: 19 pages, 3 figures
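    The outer DC structure described above can be illustrated with a generic proximal majorization-minimization loop on a toy $\ell_1$-minus-$\ell_2$ regularized least-squares objective: the concave part is linearized at each outer step and the resulting convex majorant is minimized by proximal gradient. This mirrors only the outer loop, not the paper's destriping model or its ADMM inner solver; all names are ours.

```python
# Illustrative sketch of a proximal majorization-minimization loop for a DC
# objective g(x) - h(x), with g(x) = 0.5*||Ax - b||^2 + lam*||x||_1 and
# h(x) = lam*||x||_2 (a toy l1-minus-l2 penalty). Not the paper's model; names are ours.
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def prox_mm_l1_minus_l2(A, b, lam=0.1, tau=1.0, outer=30, inner=100):
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + tau     # Lipschitz constant of the smooth part
    for _ in range(outer):
        # Linearize h at the current iterate via a subgradient of lam*||x||_2.
        nrm = np.linalg.norm(x)
        s = lam * x / nrm if nrm > 0 else np.zeros(n)
        x_k = x.copy()
        for _ in range(inner):
            # Majorant's smooth part: 0.5||Ax-b||^2 - <s, x> + (tau/2)||x - x_k||^2
            grad = A.T @ (A @ x - b) - s + tau * (x - x_k)
            x = soft_threshold(x - grad / L, lam / L)   # prox of lam*||.||_1
    return x

# Example: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 120))
x_true = np.zeros(120)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.nonzero(np.abs(prox_mm_l1_minus_l2(A, b)) > 1e-3)[0])
```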