
    Non-negative Least Squares via Overparametrization

    In many applications, solutions of numerical problems are required to be non-negative, e.g., when retrieving pixel intensity values or physical densities of a substance. In this context, non-negative least squares (NNLS) is a ubiquitous tool, e.g., when seeking sparse solutions of high-dimensional statistical problems. Despite vast efforts since the seminal work of Lawson and Hanson in the '70s, the non-negativity assumption is still an obstacle for the theoretical analysis and scalability of many off-the-shelf solvers. In the different context of deep neural networks, we recently started to see that the training of overparametrized models via gradient descent leads to surprising generalization properties and the retrieval of regularized solutions. In this paper, we prove that, by using an overparametrized formulation, NNLS solutions can reliably be approximated via vanilla gradient flow. We furthermore establish stability of the method against negative perturbations of the ground truth. Our simulations confirm that this allows the use of vanilla gradient descent as a novel and scalable numerical solver for NNLS. From a conceptual point of view, our work proposes a novel approach to trading side constraints in optimization problems against complexity of the optimization landscape, which does not build upon the concept of Lagrange multipliers.
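    As a rough illustration of the idea described in the abstract, the sketch below uses the entrywise-square substitution x = u * u, which removes the non-negativity constraint, and runs plain gradient descent on the unconstrained variable u. The function name nnls_overparam, the initialization, the step size, and the iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nnls_overparam(A, b, iters=50000, step=None, u0=None):
    """Approximate argmin_{x >= 0} ||A x - b||^2 by writing x = u * u
    (entrywise) and running vanilla gradient descent on u.
    This is a sketch, not the authors' reference code."""
    n = A.shape[1]
    u = np.full(n, 1e-2) if u0 is None else np.asarray(u0, dtype=float).copy()
    if step is None:
        # conservative fixed step based on the spectral norm of A;
        # in practice a tuned step or line search may converge faster
        step = 0.05 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        r = A @ (u * u) - b                   # residual at the current x = u * u
        u -= step * 4.0 * u * (A.T @ r)       # chain rule: grad_u = 4 u * A^T r
    return u * u                              # non-negative by construction


if __name__ == "__main__":
    # small synthetic example (hypothetical data, for illustration only)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.maximum(rng.uniform(-1.0, 1.0, size=20), 0.0)  # sparse, non-negative
    b = A @ x_true
    x_hat = nnls_overparam(A, b)
    print("residual norm :", np.linalg.norm(A @ x_hat - b))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    Because x = u * u is non-negative for any real u, the side constraint is traded for a non-convex but unconstrained landscape, which is the trade-off the abstract alludes to.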