
    Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

    In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [22], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the trace norm constraint. We explore a variant of Nesterov's smooth method [20] and interior point methods for computing the penalized least squares estimate. The performance of these methods is then compared using a set of randomly generated instances. We show that the variant of Nesterov's smooth method [20] substantially outperforms the interior point method implemented in SDPT3 version 4.0 (beta) [19]. Moreover, the former method is much more memory efficient.
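    The trace norm penalty is amenable to first-order methods because its proximal mapping is singular value soft-thresholding. Below is a minimal sketch of an accelerated proximal gradient (FISTA-style) loop for the penalized problem min_B 0.5*||Y - XB||_F^2 + lam*||B||_*; it is a generic accelerated variant, not the exact smooth method of [20], and the data sizes, lam, and iteration count are illustrative assumptions.

```python
# A minimal sketch, assuming a FISTA-style accelerated proximal gradient
# method for  min_B 0.5*||Y - X B||_F^2 + lam*||B||_*  (not the exact
# algorithm of [20]); lam and the iteration count are illustrative.
import numpy as np

def svt(M, tau):
    # Singular value soft-thresholding: the prox of tau*||.||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_ls(X, Y, lam, iters=500):
    B = np.zeros((X.shape[1], Y.shape[1]))
    Z, t = B.copy(), 1.0
    L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the gradient
    for _ in range(iters):
        G = X.T @ (X @ Z - Y)                      # gradient of the smooth part at Z
        B_next = svt(Z - G / L, lam / L)           # prox step shrinks singular values
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = B_next + ((t - 1) / t_next) * (B_next - B)  # Nesterov extrapolation
        B, t = B_next, t_next
    return B

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
B_true = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 5))  # rank-3 truth
Y = X @ B_true + 0.1 * rng.standard_normal((100, 5))
B_hat = trace_norm_ls(X, Y, lam=5.0)
print("rank of estimate:", np.linalg.matrix_rank(B_hat, tol=1e-3))
```

    Each iteration costs one SVD of a p-by-q matrix plus two matrix multiplications, which is what makes first-order schemes far more memory efficient than interior point solvers on large instances.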

    Accelerated Inexact Composite Gradient Methods for Nonconvex Spectral Optimization Problems

    This paper presents two inexact composite gradient methods, one inner accelerated and another doubly accelerated, for solving a class of nonconvex spectral composite optimization problems. More specifically, the objective function for these problems is of the form $f_1 + f_2 + h$, where $f_1$ and $f_2$ are differentiable nonconvex matrix functions with Lipschitz continuous gradients, $h$ is a proper closed convex matrix function, and both $f_2$ and $h$ can be expressed as functions that operate on the singular values of their inputs. The methods essentially use an accelerated composite gradient method to solve a sequence of proximal subproblems involving the linear approximation of $f_1$ and the singular value functions underlying $f_2$ and $h$. Unlike other composite gradient-based methods, the proposed methods take advantage of both the composite and spectral structure underlying the objective function in order to efficiently generate their solutions. Numerical experiments are presented to demonstrate the practicality of these methods on a set of real-world and randomly generated spectral optimization problems.
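    The spectral structure of $f_2$ and $h$ is what keeps the proximal subproblems tractable: the prox of a function of singular values reduces to a vector prox applied to those singular values. The sketch below illustrates this reduction together with one prox-gradient update; the concrete choice of $h$ as a trace norm penalty, the toy smooth part, and the step size are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of the spectral prox reduction: for h(Z) = phi(sigma(Z))
# with phi an absolutely symmetric vector function, prox_h is obtained by
# applying the vector prox of phi to the singular values. The choice
# phi = lam*||.||_1 (so h is the trace norm) is an illustrative assumption.
import numpy as np

def spectral_prox(M, vector_prox):
    # Lift a prox on singular values to a prox on matrices via the SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(vector_prox(s)) @ Vt

lam, step = 0.1, 0.5
soft = lambda s: np.maximum(s - lam * step, 0.0)    # prox of step*lam*||.||_1

# One prox-gradient update with a toy smooth part f(Z) = 0.5*||Z - A||_F^2
# standing in for the linearized f_1 + f_2 term of the subproblem.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
Z = np.zeros((8, 5))
grad = Z - A                                        # gradient of the smooth part
Z = spectral_prox(Z - step * grad, soft)            # prox-gradient update
```

    This reduction means each subproblem iteration costs essentially one SVD, which is how the methods exploit the spectral structure rather than treating $h$ as a generic convex term.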

    A unified analysis of a class of proximal bundle methods for solving hybrid convex composite optimization problems

    This paper presents a proximal bundle (PB) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization (HCCO) problem and establishes a common iteration-complexity bound for any variant belonging to it. As a consequence, iteration-complexity bounds for three PB variants based on different bundle update schemes are obtained in the HCCO context for the first time and in a unified manner. While two of the PB variants are universal (i.e., their implementations do not require parameters associated with the HCCO instance), the third, newly proposed one (as far as the authors are aware) is not, but it has the advantage that it generates simple, namely one-cut, bundle models. The paper also presents a universal adaptive PB variant (which is not necessarily an instance of the framework) based on one-cut models and shows that its iteration complexity is the same as that of the two aforementioned universal PB variants.
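    To make the mechanics concrete, here is a minimal sketch of a proximal bundle loop for a nonsmooth convex function accessed through values and subgradients: each iteration minimizes a cutting-plane model plus a proximal term, then a descent test decides between a serious step (moving the prox center) and a null step (only enriching the bundle). It uses a plain multi-cut model rather than the paper's one-cut update scheme, and the test function, prox parameter t, and descent constant beta are assumptions.

```python
# A minimal sketch of a proximal bundle iteration for min_x f(x), f convex
# and nonsmooth. The prox subproblem (a small QP in epigraph form) is solved
# with scipy for illustration; f = ||.||_1, t, and beta are assumptions, and
# the multi-cut model is not the paper's one-cut bundle update.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.abs(x).sum()          # example nonsmooth convex objective

def subgrad(x):
    return np.sign(x)               # a subgradient of ||.||_1

def prox_bundle(x0, t=1.0, beta=0.5, iters=30):
    xc = np.asarray(x0, float)      # prox center (last serious iterate)
    cuts = [(subgrad(xc), f(xc) - subgrad(xc) @ xc)]   # affine minorants of f
    for _ in range(iters):
        # Solve min_{x,r} r + ||x - xc||^2/(2t)  s.t.  r >= a_i.x + b_i.
        obj = lambda z: z[-1] + np.sum((z[:-1] - xc) ** 2) / (2 * t)
        cons = [{'type': 'ineq',
                 'fun': lambda z, a=a, b=b: z[-1] - (a @ z[:-1] + b)}
                for a, b in cuts]
        z = minimize(obj, np.append(xc, f(xc)), constraints=cons).x
        x_new = z[:-1]
        model = max(a @ x_new + b for a, b in cuts)     # model value at x_new
        if f(x_new) <= f(xc) - beta * (f(xc) - model):  # sufficient descent?
            xc = x_new                                   # serious step
        cuts.append((subgrad(x_new), f(x_new) - subgrad(x_new) @ x_new))
    return xc

print(prox_bundle(np.array([3.0, -2.0])))   # approaches the minimizer at 0
```

    A one-cut scheme in the paper's sense would replace the growing list of cuts with a single aggregated affine minorant after each null step, keeping the subproblem size constant at the cost of a coarser model.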