
    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful commercial solvers based on interior-point methods (IPM), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix $A$ is dense (possibly structured), or whose corresponding normal matrix $AA^T$ has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers based on iterative linear-system methods, although it avoids the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data or with normal equations that are expensive to factorize, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Different from classical IPMs, in each iteration of Snipal, iterative methods can be used efficiently to solve simpler yet better conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to have a finite termination property. Numerical comparisons with Gurobi demonstrate the encouraging potential of Snipal for handling large-scale LP problems in which the constraint matrix $A$ has a dense representation or $AA^T$ has a dense factorization even with an appropriate re-ordering.
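    The outer scheme can be made concrete with a small sketch. Below is a minimal, hypothetical proximal augmented Lagrangian loop in Python for min c^T x s.t. Ax = b, x >= 0; for simplicity the semismooth Newton inner solver of Snipal is replaced here by projected-gradient steps, so this illustrates the outer method only, not the authors' implementation, and all names and tolerances are made up.

        import numpy as np

        def prox_alm_lp(A, b, c, sigma=10.0, outer=100, inner=500):
            """Toy proximal ALM for  min c^T x  s.t.  Ax = b, x >= 0.

            Illustrative sketch only: Snipal solves each inner subproblem with
            a semismooth Newton method; projected gradient is used here instead.
            """
            m, n = A.shape
            x, y = np.zeros(n), np.zeros(m)
            # Lipschitz bound for the gradient of the inner objective
            lip = sigma * np.linalg.norm(A, 2) ** 2 + 1.0 / sigma
            for _ in range(outer):
                x_prev = x.copy()
                for _ in range(inner):
                    # gradient of c^T x - y^T(Ax - b) + (sigma/2)||Ax - b||^2
                    #             + (1/(2*sigma))||x - x_prev||^2
                    g = c - A.T @ y + sigma * A.T @ (A @ x - b) + (x - x_prev) / sigma
                    x = np.maximum(x - g / lip, 0.0)  # project onto x >= 0
                y = y - sigma * (A @ x - b)           # multiplier update
                if np.linalg.norm(A @ x - b) <= 1e-8 * (1 + np.linalg.norm(b)):
                    break
            return x, y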

    MARS: A second-order reduction algorithm for high-dimensional sparse precision matrices estimation

    Estimation of the precision matrix (or inverse covariance matrix) is of great importance in statistical data analysis. However, since the number of parameters scales quadratically with the dimension $p$, computation becomes very challenging when $p$ is large. In this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the $\ell_1$-penalized D-trace loss, with each subproblem solved by a second-order algorithm. In each iteration of our algorithm, we greatly reduce the number of variables in the problem based on the Karush-Kuhn-Tucker (KKT) conditions and the sparse structure of the precision matrix estimated in the previous iteration. As a result, our algorithm can handle datasets of very high dimension that may be beyond the capacity of existing methods. Moreover, for the subproblem in each iteration, rather than solving the primal problem directly, we develop a semismooth Newton augmented Lagrangian algorithm with global linear convergence on the dual problem to improve efficiency. Theoretical properties of the proposed algorithm are established; in particular, we show that its convergence rate is asymptotically superlinear. The high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparisons to several state-of-the-art solvers.
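    The sieving idea, i.e., solve a reduced problem on a working set and use KKT residuals to pull violating variables back in, can be illustrated on a simpler problem. The sketch below uses $\ell_1$-regularized least squares as a hypothetical stand-in for the penalized D-trace loss, with plain ISTA as a deliberately simple inner solver; it mirrors the screening logic only and is not the MARS implementation.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def sieving_lasso(X, y, lam, rounds=20, inner=1000):
            """Adaptive-sieving-style loop for min 0.5||Xb - y||^2 + lam*||b||_1.

            Stand-in example: the paper applies this screening idea to the
            l1-penalized D-trace loss with a second-order inner solver.
            """
            n = X.shape[1]
            beta = np.zeros(n)
            active = np.abs(X.T @ y) > lam              # KKT screen at beta = 0
            for _ in range(rounds):
                idx = np.flatnonzero(active)
                if idx.size:                            # solve the reduced problem
                    Xa = X[:, idx]
                    step = 1.0 / max(np.linalg.norm(Xa, 2) ** 2, 1e-12)
                    ba = beta[idx]
                    for _ in range(inner):              # ISTA iterations
                        ba = soft_threshold(ba - step * (Xa.T @ (Xa @ ba - y)),
                                            step * lam)
                    beta[:] = 0.0
                    beta[idx] = ba
                # full KKT check: zero variables must satisfy |X_j^T r| <= lam
                r = X @ beta - y
                violated = (np.abs(X.T @ r) > lam * (1 + 1e-6)) & ~active
                if not violated.any():
                    break
                active |= violated                      # enlarge the working set
            return beta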

    An efficient sieving based secant method for sparse optimization problems with least-squares constraints

    In this paper, we propose an efficient sieving based secant method to address the computational challenges of solving sparse optimization problems with least-squares constraints. A level-set method was introduced in [X. Li, D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves these problems by using the bisection method to find a root of a univariate nonsmooth equation $\varphi(\lambda) = \varrho$ for some $\varrho > 0$, where $\varphi(\cdot)$ is the value function computed by a solution of the corresponding regularized least-squares optimization problem. When the objective function in the constrained problem is a polyhedral gauge function, we prove that (i) for any positive integer $k$, $\varphi(\cdot)$ is piecewise $C^k$ in an open interval containing the solution $\lambda^*$ to the equation $\varphi(\lambda) = \varrho$; and (ii) the Clarke Jacobian of $\varphi(\cdot)$ is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Moreover, an adaptive sieving technique is incorporated into the secant method to effectively reduce the dimension of the level-set subproblems for computing the value of $\varphi(\cdot)$. The high efficiency of the proposed algorithm is demonstrated by extensive numerical results.
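    Because $\varphi$ is available only through solving a regularized least-squares subproblem, the root-finding layer itself is tiny. Here is a minimal sketch, assuming $\varphi$ is handed in as a black-box callable (in the paper it is the value function of the sieved level-set subproblem); the usage at the end with a dummy value function is purely illustrative.

        import numpy as np

        def secant_root(phi, rho, lam0, lam1, tol=1e-10, max_iter=50):
            """Secant iteration for the scalar equation phi(lam) = rho.

            Each evaluation of phi is one subproblem solve, so avoiding
            derivatives is the point of the secant update.
            """
            f0, f1 = phi(lam0) - rho, phi(lam1) - rho
            for _ in range(max_iter):
                if abs(f1) <= tol:
                    return lam1
                if f1 == f0:              # flat segment: no secant step possible
                    break
                lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
                f0, f1 = f1, phi(lam1) - rho
            return lam1

        # toy usage with a dummy, strictly decreasing value function
        lam_star = secant_root(phi=lambda t: np.exp(-t), rho=0.5, lam0=0.0, lam1=1.0)
        # lam_star is approximately ln(2) ~ 0.6931

    The piecewise $C^k$ smoothness and positive Clarke Jacobian of $\varphi$ asserted in the abstract are exactly the properties that guarantee fast local convergence for an update of this form.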

    Physiological Characterization of Cut-to-Cut Yield Variations of Alfalfa Genotypes under Controlled Greenhouse Conditions

    In temperate regions, alfalfa (Medicago sativa) crops are usually harvested 3-6 times per annum. The biomass yields of the first and second cuts in the spring are generally the highest. However, the biomass yields of subsequent cuts decline, with the final one or two cuts producing the lowest yields (Wang et al. 2009). This seasonal reduction in alfalfa biomass yield could be associated with prevailing changes in environmental factors, such as rainfall and heat stress, or with biological characteristics of the alfalfa crop itself. In this study, alfalfa was grown under controlled greenhouse conditions with suitable temperature, light, water and nutrient supply to determine the driving force behind cut-to-cut biomass yield variations among alfalfa genotypes.