
    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations, and nuclear norm regularization is the prevailing convexifying technique for this purpose. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems, there exist low-rank inducing norms that succeed in recovering the partially unknown matrix while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, they have cheaply computable proximal mappings, which makes it possible to solve large-scale problems using first-order methods.
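    For readers who want to see the baseline in action, here is a minimal sketch of the nuclear norm heuristic for matrix completion, solved by proximal gradient descent with singular value thresholding as the proximal mapping. The instance, parameter values, and function names are illustrative assumptions, not the paper's code; the paper's low-rank inducing norms would replace the nuclear norm prox in this scheme.

```python
import numpy as np

def svt(X, tau):
    """Proximal mapping of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_nuclear(M, mask, lam=0.5, step=1.0, iters=500):
    """Proximal gradient on  0.5*||mask*(X - M)||_F^2 + lam*||X||_* ."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)              # gradient of the smooth data term
        X = svt(X - step * grad, step * lam)
    return X

# Illustrative instance: a rank-2 matrix with roughly half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
X = complete_nuclear(M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```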

    Dynamic Tensor Clustering

    Dynamic tensor data are becoming prevalent in numerous applications. Existing tensor clustering methods either fail to account for the dynamic nature of the data or are inapplicable to a general-order tensor. Moreover, there is often a gap between statistical guarantees and computational efficiency in existing tensor clustering solutions. In this article, we aim to bridge this gap by proposing a new dynamic tensor clustering method, which takes into account both sparsity and fusion structures and enjoys strong statistical guarantees as well as high computational efficiency. Our proposal is based upon a new structured tensor factorization that encourages both sparsity and smoothness in the parameters along the specified tensor modes. Computationally, we develop a highly efficient optimization algorithm that benefits from substantial dimension reduction. In theory, we first establish a non-asymptotic error bound for the estimator from the structured tensor factorization. Built upon this error bound, we then derive the rate of convergence of the estimated cluster centers and show that the estimated clusters recover the true cluster structures with high probability. Moreover, our proposed method can be naturally extended to co-clustering of multiple modes of the tensor data. The efficacy of our approach is illustrated via simulations and a brain dynamic functional connectivity analysis from an autism spectrum disorder study. (Comment: accepted at the Journal of the American Statistical Association.)
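    To fix ideas, a toy sketch of the kind of sparsity-encouraging tensor factorization the article builds on: a rank-1 CP fit by alternating updates, with soft-thresholding as the sparsity-inducing step. The fusion (smoothness) penalty along the time mode and the full clustering procedure are omitted for brevity; all names and the synthetic instance are assumptions for illustration, not the article's method.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding: proximal mapping of tau * l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_rank1_cp(T, tau=0.05, iters=100):
    """Rank-1 CP fit T ~ a (x) b (x) c by alternating updates, with
    soft-thresholding on the factors (fusion penalty omitted here)."""
    I, J, K = T.shape
    a, b, c = np.ones(I), np.ones(J), np.ones(K)
    for _ in range(iters):
        a = soft(np.einsum('ijk,j,k->i', T, b, c), tau)
        a /= np.linalg.norm(a) + 1e-12
        b = soft(np.einsum('ijk,i,k->j', T, a, c), tau)
        b /= np.linalg.norm(b) + 1e-12
        c = soft(np.einsum('ijk,i,j->k', T, a, b), tau)
    return a, b, c

# Illustrative instance: two blocks of time points sharing one connectivity pattern.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(20), rng.standard_normal(20)
w = np.r_[np.ones(15), 3 * np.ones(15)]        # time-mode loadings in two clusters
T = np.einsum('i,j,k->ijk', u, v, w) + 0.1 * rng.standard_normal((20, 20, 30))
a, b, c = sparse_rank1_cp(T)
# Time points could then be clustered (e.g. by k-means) on the loadings in c.
print("time-mode loadings:", np.round(c[:5], 2), "...", np.round(c[-5:], 2))
```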

    Rank Reduction with Convex Constraints

    This thesis addresses problems which require low-rank solutions under convex constraints. In particular, the focus lies on model reduction of positive systems, as well as on finite-dimensional optimization problems that are convex apart from a low-rank constraint. Traditional model reduction techniques try to minimize the error between the original and the reduced system; however, the resulting reduced models typically no longer fulfill physically meaningful constraints. This thesis considers the problem of model reduction with internal and external positivity constraints. Both problems are solved by means of balanced truncation. While internal positivity is shown to be preserved through a symmetry property, external positivity preservation is accomplished by deriving a modified balancing approach based on ellipsoidal cone invariance.

    In essence, positivity-preserving model reduction attempts to find an infinite-dimensional low-rank approximation that preserves nonnegativity as well as Hankel structure. Due to the non-convexity of the low-rank constraint, this problem is challenging even in a finite-dimensional setting. In addition to model reduction, the present work therefore also considers such finite-dimensional low-rank optimization problems with convex constraints. These problems frequently appear in applications such as image compression, multivariate linear regression, matrix completion, and many more. The main idea of this thesis is to derive the largest convex minorizers of rank-constrained unitarily invariant norms. These minorizers can be used to construct optimal convex relaxations of the original non-convex problem. Unlike other methods, such as nuclear norm regularization, this approach benefits from verifiable a posteriori conditions under which a solution to the convex relaxation coincides with a solution to the corresponding non-convex problem. This is shown to apply in various numerical examples of well-known low-rank optimization problems, where the proposed convex relaxations perform significantly better than nuclear norm regularization. Moreover, a careful choice among the proposed convex relaxations can have a tremendous positive impact on matrix completion.

    Computational tractability of the proposed approach is accomplished in two ways. First, the considered relaxations are shown to be representable as semi-definite programs. Second, it is shown how to compute the proximal mappings of both the convex relaxations and the non-convex problem. This makes it possible to apply first-order methods such as so-called Douglas-Rachford splitting. In addition to the convex case, where global convergence of this algorithm is guaranteed, conditions for local convergence in the non-convex setting are presented. Finally, it is shown that the findings of this thesis extend to the general class of so-called atomic norms, which allows other non-convex constraints to be covered.
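    As a sketch of the splitting scheme the thesis applies, here is Douglas-Rachford splitting on a matrix completion problem, alternating the proximal mappings of the data-consistency constraint and of a regularizer. The nuclear norm prox (singular value thresholding) stands in for the thesis's low-rank inducing norms, whose proximal mappings it also derives; the instance and parameters are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Prox of tau*||.||_*: singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def project_obs(X, M, mask):
    """Prox of the indicator of {X : X agrees with M on the observed entries}."""
    Y = X.copy()
    Y[mask] = M[mask]
    return Y

def douglas_rachford(M, mask, lam=0.5, iters=300):
    """DR splitting on  min  I_obs(X) + lam*||X||_*  (nuclear norm used here
    as a stand-in regularizer; step size absorbed into lam)."""
    Z = np.zeros_like(M)
    for _ in range(iters):
        X = project_obs(Z, M, mask)        # prox of the constraint term
        Y = svt(2 * X - Z, lam)            # prox of the regularizer at the reflection
        Z = Z + Y - X                      # Douglas-Rachford update
    return project_obs(Z, M, mask)

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))   # rank-2 ground truth
mask = rng.random(M.shape) < 0.6
X = douglas_rachford(M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```

    Since both proximal mappings are cheap (a projection and one SVD per iteration), the same template scales to large problems, which is the point of the first-order approach described above.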