
    From Symmetry to Geometry: Tractable Nonconvex Problems

    As science and engineering have become increasingly data-driven, the role of optimization has expanded to touch almost every stage of the data analysis pipeline, from signal and data acquisition to modeling and prediction. The optimization problems encountered in practice are often nonconvex. While challenges vary from problem to problem, one common source of nonconvexity is nonlinearity in the data or measurement model. Nonlinear models often exhibit symmetries, creating complicated, nonconvex objective landscapes with multiple equivalent solutions. Nevertheless, simple methods (e.g., gradient descent) often perform surprisingly well in practice. The goal of this survey is to highlight a class of tractable nonconvex problems which can be understood through the lens of symmetries. These problems exhibit a characteristic geometric structure: local minimizers are symmetric copies of a single "ground truth" solution, while other critical points occur at balanced superpositions of symmetric copies of the ground truth and exhibit negative curvature in directions that break the symmetry. This structure enables efficient methods to obtain global minimizers. We discuss examples of this phenomenon arising from a wide range of problems in imaging, signal processing, and data analysis. We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries. This area is rich with observed phenomena and open problems; we close by highlighting directions for future research. Comment: review paper submitted to SIAM Review, 34 pages, 10 figures.
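
    To make the geometric picture concrete, the following minimal sketch (not from the survey; the objective, step size, and iteration count are illustrative assumptions) runs plain gradient descent on the rank-one factorization loss f(x) = (1/4)||x x^T - z z^T||_F^2, whose sign symmetry x -> -x yields two equivalent global minimizers +z and -z; from a small random initialization the iterates converge to one of the symmetric copies.

        import numpy as np

        # Illustrative sketch (assumed objective, not from the survey): gradient descent
        # on f(x) = 1/4 * ||x x^T - z z^T||_F^2, whose sign symmetry x -> -x gives
        # two equivalent global minimizers, +z and -z.
        rng = np.random.default_rng(0)
        n = 50
        z = rng.normal(size=n)
        z /= np.linalg.norm(z)            # unit-norm "ground truth" (assumed)
        x = 0.1 * rng.normal(size=n)      # small random initialization

        def grad(x):
            # gradient of f: (x x^T - z z^T) x = ||x||^2 x - (z . x) z
            return (x @ x) * x - (z @ x) * z

        for _ in range(3000):
            x -= 0.1 * grad(x)

        # distance to the nearest symmetric copy of the ground truth (close to 0)
        print(min(np.linalg.norm(x - z), np.linalg.norm(x + z)))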

    Sparse Coding and Autoencoders

    In "Dictionary Learning" one tries to recover incoherent matrices Aβˆ—βˆˆRnΓ—hA^* \in \mathbb{R}^{n \times h} (typically overcomplete and whose columns are assumed to be normalized) and sparse vectors xβˆ—βˆˆRhx^* \in \mathbb{R}^h with a small support of size hph^p for some 0<p<10 <p < 1 while having access to observations y∈Rny \in \mathbb{R}^n where y=Aβˆ—xβˆ—y = A^*x^*. In this work we undertake a rigorous analysis of whether gradient descent on the squared loss of an autoencoder can solve the dictionary learning problem. The "Autoencoder" architecture we consider is a Rnβ†’Rn\mathbb{R}^n \rightarrow \mathbb{R}^n mapping with a single ReLU activation layer of size hh. Under very mild distributional assumptions on xβˆ—x^*, we prove that the norm of the expected gradient of the standard squared loss function is asymptotically (in sparse code dimension) negligible for all points in a small neighborhood of Aβˆ—A^*. This is supported with experimental evidence using synthetic data. We also conduct experiments to suggest that Aβˆ—A^* is a local minimum. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. This property holds independent of the loss function. We believe that it could be of independent interest.Comment: In this new version of the paper with a small change in the distributional assumptions we are actually able to prove the asymptotic criticality of a neighbourhood of the ground truth dictionary for even just the standard squared loss of the ReLU autoencoder (unlike the regularized loss in the older version

    Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

    When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon: for the output features of the penultimate layer, the within-class features of each class converge to their mean, and the class means exhibit a certain tight frame structure that is also aligned with the last layer's classifier. As feature normalization in the last layer becomes a common practice in modern representation learning, in this work we theoretically justify the neural collapse phenomenon for normalized features. Based on an unconstrained feature model, we simplify the empirical loss function of a multi-class classification task into a nonconvex optimization problem over a Riemannian manifold by constraining all features and classifiers to the sphere. In this context, we analyze the nonconvex landscape of the Riemannian optimization problem over the product of spheres, showing a benign global landscape in the sense that the only global minimizers are the neural collapse solutions, while all other critical points are strict saddles with negative curvature. Experimental results on practical deep networks corroborate our theory and demonstrate that better representations can be learned faster via feature normalization. Comment: The first two authors contributed to this work equally; 38 pages, 13 figures. Accepted at NeurIPS'22.
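
    For concreteness, the following small sketch (an assumed construction for illustration, not code from the paper) writes down the neural-collapse configuration the abstract describes: K unit-norm class means forming a simplex equiangular tight frame on the sphere, with the classifier aligned to the means, and verifies that the pairwise cosines equal -1/(K-1).

        import numpy as np

        # Illustrative sketch (assumed construction): the neural-collapse configuration
        # with normalized features. For K classes, within-class features collapse to
        # their class mean, the K means form a simplex equiangular tight frame
        # (pairwise cosine -1/(K-1)), and the last-layer classifier aligns with them.
        K, d = 4, 16

        # Simplex ETF in K dimensions: center the identity, then normalize the rows.
        M = np.eye(K) - np.ones((K, K)) / K
        M /= np.linalg.norm(M, axis=1, keepdims=True)

        # Embed into feature dimension d (zero padding); classifier aligned with means.
        means = np.hstack([M, np.zeros((K, d - K))])   # K x d, unit-norm class means
        classifier = means.copy()                      # alignment: W_k = mu_k

        G = means @ means.T                            # Gram matrix of the class means
        print(np.allclose(np.diag(G), 1.0))            # unit norm: True
        off_diagonal = G[~np.eye(K, dtype=bool)]
        print(np.allclose(off_diagonal, -1.0 / (K - 1)))  # equiangular at -1/(K-1): True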