1,235 research outputs found

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Get PDF
    Powerful interior-point method (IPM) based commercial solvers, such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix A is dense (possibly structured), or whose corresponding normal matrix AA^T has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers based on iterative linear-system methods, although able to avoid the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data, or LPs requiring an expensive factorization of the normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Unlike classical IPMs, in each iteration of Snipal, iterative methods can be used efficiently to solve simpler yet better-conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to enjoy a finite termination property. Numerical comparisons with Gurobi demonstrate the encouraging potential of Snipal for handling large-scale LP problems where the constraint matrix A has a dense representation or AA^T has a dense factorization even with an appropriate re-ordering.
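    The outer augmented-Lagrangian loop described above can be sketched as follows. This is a minimal illustration, not the paper's Snipal algorithm: the semismooth Newton inner solver is replaced by plain projected-gradient steps for brevity, and the two-variable toy LP and all parameter values are made up for the demo.

```python
import numpy as np

def alm_lp(c, A, b, sigma=1.0, outer=30, inner=500):
    """Augmented Lagrangian method for  min c'x  s.t.  Ax = b, x >= 0.
    Inner subproblems are solved by projected gradient (a simple
    stand-in for the semismooth Newton solver used in Snipal)."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    step = 1.0 / (sigma * np.linalg.norm(A, 2) ** 2)  # 1/Lipschitz constant
    for _ in range(outer):
        for _ in range(inner):
            grad = c + A.T @ y + sigma * A.T @ (A @ x - b)
            x = np.maximum(x - step * grad, 0.0)       # project onto x >= 0
        y = y + sigma * (A @ x - b)                    # multiplier update
    return x, y

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  x* = (1, 0)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = alm_lp(c, A, b)
```

    Note that the inner loop only ever applies A and A^T to vectors; no factorization of AA^T is formed, which is the computational point the abstract emphasizes.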

    Identifying the Alteration Patterns of Brain Functional Connectivity in Progressive Mild Cognitive Impairment Patients: A Longitudinal Whole-Brain Voxel-Wise Degree Analysis

    Get PDF
    Patients with mild cognitive impairment (MCI) are at high risk of developing Alzheimer’s disease (AD), while some of them may remain stable for decades. The underlying mechanism is still not fully understood. In this study, we aimed to explore the connectivity differences between progressive MCI (PMCI) and stable MCI (SMCI) individuals on a whole-brain scale and on a voxel-wise basis, and to reveal the differential dynamic alteration patterns between these two disease subtypes. The resting-state functional magnetic resonance images of PMCI and SMCI patients at baseline and year one were obtained from the Alzheimer’s Disease Neuroimaging Initiative dataset, and progression was determined based on a three-year follow-up. A whole-brain voxel-wise degree map calculated based on graph theory was constructed for each subject, and cross-sectional and longitudinal analyses of the degree maps were then performed between PMCI and SMCI patients. In the longitudinal analyses, compared with the SMCI group, the PMCI group showed decreased long-range degree in the left middle occipital/supramarginal gyrus, while the short-range degree was increased in the left supplementary motor area and middle frontal gyrus and decreased in the right middle temporal pole. A significant longitudinal alteration of decreased short-range degree in the right middle occipital gyrus was found in the PMCI group. Taken together with previous evidence, our current findings may suggest that PMCI, compared with SMCI, represents a more severe presentation of disease along the AD continuum, and that the rapidly reduced degree in the right middle occipital gyrus may have indicative value for disease progression. Moreover, the cross-sectional comparison results and the corresponding receiver-operating-characteristic curve analyses may indicate that the baseline degree difference is not a good predictor of disease progression in MCI patients. Overall, these findings may provide objective evidence and an indicator to characterize progression-related brain connectivity changes in MCI patients.
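    The voxel-wise degree measure at the core of this study can be sketched as follows. This is a hedged toy version, not the study's pipeline: the correlation threshold (0.25), the 75 mm short/long-range cutoff, and the synthetic time series are illustrative assumptions, not the parameters actually used by the authors.

```python
import numpy as np

def degree_maps(ts, coords, r_thresh=0.25, dist_cut=75.0):
    """Voxel-wise degree from resting-state time series.
    ts:     (n_voxels, n_timepoints) array of BOLD signals
    coords: (n_voxels, 3) voxel coordinates in mm
    Returns total, short-range, and long-range degree per voxel."""
    corr = np.corrcoef(ts)                 # voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, 0.0)            # ignore self-connections
    adj = np.abs(corr) > r_thresh          # binarize into a graph
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    short = (adj & (dist <= dist_cut)).sum(axis=1)   # nearby connections
    long_ = (adj & (dist > dist_cut)).sum(axis=1)    # distant connections
    return short + long_, short, long_

rng = np.random.default_rng(0)
ts = rng.standard_normal((20, 100))              # 20 fake voxels, 100 volumes
coords = rng.uniform(0, 150, size=(20, 3))       # fake mm coordinates
total, short, long_ = degree_maps(ts, coords)
```

    Comparing such maps voxel-by-voxel between groups (and between baseline and follow-up) is the basic operation behind the cross-sectional and longitudinal analyses described above.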

    MARS: A second-order reduction algorithm for high-dimensional sparse precision matrices estimation

    Full text link
    Estimation of the precision matrix (or inverse covariance matrix) is of great importance in statistical data analysis. However, as the number of parameters scales quadratically with the dimension p, computation becomes very challenging when p is large. In this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the ℓ1-penalized D-trace loss, with each subproblem solved by a second-order algorithm. In each iteration of our algorithm, we are able to greatly reduce the number of variables in the problem based on the Karush-Kuhn-Tucker (KKT) conditions and the sparse structure of the estimated precision matrix from the previous iteration. As a result, our algorithm is capable of handling datasets with very high dimensions that may go beyond the capacity of existing methods. Moreover, for the subproblem in each iteration, rather than solving the primal problem directly, we develop a semismooth Newton augmented Lagrangian algorithm with global linear convergence on the dual problem to improve efficiency. Theoretical properties of the proposed algorithm are established; in particular, we show that its convergence rate is asymptotically superlinear. The high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparisons to several state-of-the-art solvers.
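    The adaptive sieving idea described above can be sketched in a generic form. This is a hedged illustration, not the MARS algorithm: the ℓ1-penalized D-trace loss is replaced by a small ℓ1-penalized quadratic, the inner solver is plain coordinate descent rather than the paper's semismooth Newton augmented Lagrangian method, and all data are synthetic. What it does show is the KKT-based reduction loop: solve on a small active set, check KKT violations on the full set, and enlarge only where needed.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_solve(Q, b, lam, idx, x, sweeps=2000):
    """Coordinate descent for  min 0.5 x'Qx - b'x + lam*||x||_1,
    restricted to the coordinates in idx (all others stay fixed)."""
    for _ in range(sweeps):
        for i in idx:
            r = b[i] - Q[i] @ x + Q[i, i] * x[i]   # residual excluding x_i
            x[i] = soft(r, lam) / Q[i, i]
    return x

def adaptive_sieving(Q, b, lam, tol=1e-6):
    n = len(b)
    x = np.zeros(n)
    active = list(np.argsort(-np.abs(b))[:2])      # small initial guess set
    while True:
        x = cd_solve(Q, b, lam, active, x)         # solve reduced problem
        grad = Q @ x - b                           # full KKT residual
        viol = [i for i in range(n)
                if i not in active and abs(grad[i]) > lam + tol]
        if not viol:                               # KKT holds everywhere
            return x, active
        active += viol                             # enlarge and re-solve

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
Q = M.T @ M + np.eye(5)          # positive definite toy Hessian
b = 3.0 * rng.standard_normal(5)
x, active = adaptive_sieving(Q, b, lam=1.0)
```

    The payoff in high dimensions is that most coordinates are never entered into `active`, so the expensive inner solver only ever sees a small subproblem.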

    An efficient sieving based secant method for sparse optimization problems with least-squares constraints

    Full text link
    In this paper, we propose an efficient sieving based secant method to address the computational challenges of solving sparse optimization problems with least-squares constraints. A level-set method was introduced in [X. Li, D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves these problems by using the bisection method to find a root of a univariate nonsmooth equation φ(λ) = ϱ for some ϱ > 0, where φ(·) is the value function computed from a solution of the corresponding regularized least-squares optimization problem. When the objective function in the constrained problem is a polyhedral gauge function, we prove that (i) for any positive integer k, φ(·) is piecewise C^k in an open interval containing the solution λ* of the equation φ(λ) = ϱ; (ii) the Clarke Jacobian of φ(·) is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Moreover, an adaptive sieving technique is incorporated into the secant method to effectively reduce the dimension of the level-set subproblems used to compute the value of φ(·). The high efficiency of the proposed algorithm is demonstrated by extensive numerical results.
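    The root-finding step at the heart of this framework can be sketched as follows. This is a hedged toy: the value function here, φ(λ) = ||soft-threshold(z, λ)||_1, is a made-up piecewise-linear, nonincreasing stand-in for the level-set value function, and z and the target ϱ are invented for the demo; only the secant iteration itself matches the method described above.

```python
import numpy as np

def secant_root(phi, target, lam0, lam1, tol=1e-10, max_iter=50):
    """Secant iteration for the scalar equation phi(lam) = target."""
    f0, f1 = phi(lam0) - target, phi(lam1) - target
    for _ in range(max_iter):
        if abs(f1) <= tol:
            return lam1
        # standard secant update using the last two iterates
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        f0, f1 = f1, phi(lam1) - target
    return lam1

# Toy value function: piecewise linear and nonincreasing in lam.
z = np.array([3.0, 1.0, 2.0])
phi = lambda lam: np.maximum(np.abs(z) - lam, 0.0).sum()
lam_star = secant_root(phi, target=2.0, lam0=0.0, lam1=3.0)  # -> 1.5
```

    Each evaluation of φ(·) in the real method requires solving a regularized least-squares subproblem, which is exactly where the adaptive sieving reduction pays off.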