    Archetypal Analysis: Mining Weather and Climate Extremes

    Conventional analysis methods in weather and climate science (e.g., EOF analysis) exhibit a number of drawbacks, including scaling and mixing. These methods focus mostly on the bulk of the probability distribution of the system in state space and overlook its tail. This paper explores a different method, archetypal analysis (AA), which focuses precisely on the extremes. AA seeks to approximate the convex hull of the data in state space by finding “corners” that represent “pure” types, or archetypes, through computing mixture weight matrices. The method is quite new in climate science, although it has been around for about two decades in pattern recognition. It combines, in particular, the virtues of EOFs and clustering. The method is presented along with a new manifold-based optimization algorithm that optimizes for the weights simultaneously, unlike the conventional multistep algorithm based on alternating constrained least squares. The paper discusses the numerical solution and then applies it to monthly sea surface temperature (SST) from HadISST and to the Asian summer monsoon (ASM) using sea level pressure (SLP) from ERA-40 over the Asian monsoon region. The application to SST reveals three archetypes, namely El Niño, La Niña, and a third pattern representing the western boundary currents. The latter archetype shows a marked trend over the last few decades. The application to the ASM SLP anomalies yields archetypes that are consistent with the ASM regimes found in the literature. Merits and weaknesses of the method, along with possible future developments, are also discussed.
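
    As a concrete illustration of the decomposition AA computes, here is a minimal NumPy sketch that fits archetypes with projected gradient steps under the simplex constraints. It is a toy stand-in, not the paper's manifold-based algorithm: the function names, the fixed step size, and the update scheme are all illustrative assumptions.

    ```python
    import numpy as np

    def project_simplex(V):
        # Row-wise Euclidean projection onto the probability simplex
        # (algorithm of Duchi et al., 2008).
        U = np.sort(V, axis=1)[:, ::-1]
        css = np.cumsum(U, axis=1) - 1.0
        idx = np.arange(1, V.shape[1] + 1)
        rho = (U - css / idx > 0).sum(axis=1)
        theta = css[np.arange(V.shape[0]), rho - 1] / rho
        return np.maximum(V - theta[:, None], 0.0)

    def archetypal_analysis(X, k, n_iter=500, lr=1e-3, seed=0):
        # Fit ||X - A @ B @ X||_F^2 with rows of A (n, k) and B (k, n) on
        # the simplex, so archetypes Z = B @ X lie in the convex hull of
        # the data. Toy projected-gradient scheme with a fixed step size,
        # not the paper's manifold-based algorithm.
        rng = np.random.default_rng(seed)
        n, _ = X.shape
        A = project_simplex(rng.random((n, k)))
        B = project_simplex(rng.random((k, n)))
        for _ in range(n_iter):
            R = A @ B @ X - X                              # residual
            A = project_simplex(A - lr * R @ (B @ X).T)    # mixture weights
            R = A @ B @ X - X
            B = project_simplex(B - lr * (A.T @ R) @ X.T)  # archetype weights
        return A, B @ X                                    # weights, archetypes
    ```

    Each row of `A` mixes the k archetypes to reconstruct one data point, and each archetype in `B @ X` is itself a convex combination of data points, which is what pins the archetypes to the “corners” of the convex hull.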

    Second-order networks in PyTorch

    Classification of Symmetric Positive Definite (SPD) matrices is gaining momentum in a variety of machine learning application fields. In this work we propose a Python library which implements neural networks on SPD matrices, based on the popular deep learning framework PyTorch.
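
    To make the idea concrete, here is a minimal PyTorch sketch of the two layer types such SPD networks are typically built from (the BiMap and ReEig layers of SPDNet, Huang & Van Gool, 2017). This is not the proposed library's API; the class names and defaults are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class BiMap(nn.Module):
        # Bilinear map X -> W^T X W, reducing an SPD matrix to a smaller one.
        # In SPDNet, W has orthonormal columns (a point on the Stiefel
        # manifold); a faithful implementation re-orthonormalizes W after
        # each update with a Riemannian optimizer, which this sketch omits.
        def __init__(self, dim_in, dim_out):
            super().__init__()
            q = torch.linalg.qr(torch.randn(dim_in, dim_in))[0]
            self.W = nn.Parameter(q[:, :dim_out])  # orthonormal init

        def forward(self, X):            # X: (..., dim_in, dim_in), SPD
            return self.W.T @ X @ self.W

    class ReEig(nn.Module):
        # Eigenvalue rectification: floor the eigenvalues at eps,
        # the SPD analogue of ReLU.
        def __init__(self, eps=1e-4):
            super().__init__()
            self.eps = eps

        def forward(self, X):
            L, U = torch.linalg.eigh(X)  # batched eigendecomposition
            L = L.clamp(min=self.eps)
            return U @ torch.diag_embed(L) @ U.transpose(-1, -2)

    # Example: a two-block SPD feature extractor for 20x20 covariance inputs.
    net = nn.Sequential(BiMap(20, 10), ReEig())
    ```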

    Sparse Exploratory Factor Analysis

    Sparse principal component analysis has been a very active research area over the last decade. It produces component loadings with many zero entries, which facilitates their interpretation and helps avoid redundant variables. Classic factor analysis is another popular dimension reduction technique which shares similar interpretation problems and could greatly benefit from sparse solutions. Unfortunately, there are very few works considering sparse versions of classic factor analysis. Our goal is to contribute further in this direction. We revisit the most popular procedures for exploratory factor analysis: maximum likelihood and least squares. Sparse factor loadings are obtained for them by, first, adopting a special reparameterization and, second, introducing additional $\ell_1$-norm penalties into the standard factor analysis problems. As a result, we propose sparse versions of the major factor analysis procedures. We illustrate the developed algorithms on well-known psychometric problems. Our sparse solutions are critically compared with those obtained by other existing methods.
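
    Here is a minimal sketch of the second ingredient (the $\ell_1$ penalty) in the least-squares setting: proximal gradient descent on a penalized discrepancy between a correlation matrix and the factor model. The objective, step sizes, and function name are illustrative assumptions, not the paper's reparameterized algorithms.

    ```python
    import numpy as np

    def sparse_ls_fa(R, k, lam=0.1, lr=1e-2, n_iter=1000, seed=0):
        # Toy proximal-gradient solver for an l1-penalized least-squares
        # factor analysis objective (illustrative, not the paper's method):
        #   minimize  ||R - L L^T - diag(psi)||_F^2 + lam * sum_ij |L_ij|
        rng = np.random.default_rng(seed)
        p = R.shape[0]
        L = 0.1 * rng.standard_normal((p, k))   # factor loadings
        psi = np.ones(p)                        # uniquenesses
        for _ in range(n_iter):
            E = R - L @ L.T - np.diag(psi)      # symmetric residual
            L = L + lr * 4.0 * E @ L            # gradient step on loadings
            L = np.sign(L) * np.maximum(np.abs(L) - lr * lam, 0.0)  # soft threshold
            psi = np.maximum(psi + lr * 2.0 * np.diag(E), 1e-6)     # keep psi > 0
        return L, psi
    ```

    The soft-thresholding step is what zeroes out small loadings, giving the sparse, interpretable loading patterns the abstract describes.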

    Shonan Rotation Averaging: Global Optimality by Surfing SO(p)^n

    Shonan Rotation Averaging is a fast, simple, and elegant rotation averaging algorithm that is guaranteed to recover globally optimal solutions under mild assumptions on the measurement noise. Our method employs semidefinite relaxation in order to recover provably globally optimal solutions of the rotation averaging problem. In contrast to prior work, we show how to solve large-scale instances of these relaxations using manifold minimization on (only slightly) higher-dimensional rotation manifolds, re-using existing high-performance (but local) structure-from-motion pipelines. Our method thus preserves the speed and scalability of current SfM methods, while recovering globally optimal solutions. Comment: 30 pages (paper + supplementary material). To appear at the European Conference on Computer Vision (ECCV) 2020.
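
    Shonan's semidefinite machinery is beyond a short snippet, but the basic object it optimizes over is easy to illustrate. The sketch below computes the single-rotation chordal L2 mean by projecting the Euclidean average of rotation matrices onto SO(3), a standard building block of rotation averaging, not the Shonan algorithm itself; the function names are illustrative.

    ```python
    import numpy as np

    def project_to_so3(M):
        # Nearest rotation to M in Frobenius norm: SVD with a determinant
        # correction so the result lies in SO(3), not merely O(3).
        U, _, Vt = np.linalg.svd(M)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ D @ Vt

    def chordal_mean(rotations):
        # argmin_R sum_i ||R - R_i||_F^2 over SO(3) is the projection of
        # the Euclidean average of the R_i onto SO(3).
        return project_to_so3(np.mean(rotations, axis=0))
    ```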

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing/processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity: what's next?; sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Recipes for sparse LDA of horizontal data

    Many important modern applications require analyzing data with more variables than observations, called horizontal for short. In such situations the classical Fisher linear discriminant analysis (LDA) has no solution, because the within-group scatter matrix is singular. Moreover, the number of variables is usually huge, and the classical type of solutions (discriminant functions) are difficult to interpret as they involve all available variables. Nowadays, the aim is to develop fast and reliable algorithms for sparse LDA of horizontal data. The resulting discriminant functions depend on very few of the original variables, which facilitates their interpretation. The main theoretical and numerical challenge is how to cope with the singularity of the within-group scatter matrix. This work classifies the existing approaches according to the way they tackle this singularity issue, and suggests new ones.
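
    As a toy illustration of one way to handle the singularity, the sketch below regularizes the within-group scatter with a ridge term and then soft-thresholds the resulting two-class Fisher direction. It is an illustrative stand-in, not one of the specific approaches classified in the work; the names and constants are assumptions.

    ```python
    import numpy as np

    def sparse_lda_direction(X, y, ridge=1e-2, lam=0.1):
        # Two-class sparse discriminant for horizontal data (p >> n):
        # ridge-regularize the singular within-group scatter, solve for
        # the Fisher direction, then soft-threshold it to sparsity.
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
        Sw += ridge * np.eye(X.shape[1])           # cope with singularity
        w = np.linalg.solve(Sw, m1 - m0)           # Fisher direction
        t = lam * np.abs(w).max()                  # relative threshold
        w = np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
        nrm = np.linalg.norm(w)
        return w / nrm if nrm > 0 else w
    ```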

    Geometric methods on low-rank matrix and tensor manifolds

    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first are optimization problems, such as matrix and tensor completion, solving linear systems, and eigenvalue problems. Such problems can be solved by numerical optimization on manifolds, using so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type of problem is ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
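
    A minimal sketch of the first problem type: rank-r matrix completion by a gradient step on the observed entries followed by retraction to the rank-r set via truncated SVD. True Riemannian methods additionally project the gradient onto the tangent space of the fixed-rank manifold, which this toy omits; the function names and step size are assumptions.

    ```python
    import numpy as np

    def truncate_rank(M, r):
        # Best rank-r approximation via truncated SVD (used as a retraction).
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    def low_rank_completion(M_obs, mask, r, n_iter=300, lr=1.0):
        # Rank-r matrix completion: gradient step on the observed entries,
        # then retract to the rank-r set. A genuine Riemannian method would
        # first project the gradient onto the tangent space of the
        # fixed-rank manifold; this sketch retracts directly.
        X = truncate_rank(mask * M_obs, r)
        for _ in range(n_iter):
            G = mask * (X - M_obs)            # grad of 0.5 * ||P_Omega(X - M)||^2
            X = truncate_rank(X - lr * G, r)  # retraction via truncated SVD
        return X
    ```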

    Global rates of convergence for nonconvex optimization on manifolds

    We consider the minimization of a cost function $f$ on a manifold $\mathcal{M}$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $\mathcal{M}$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first general results for global rates of convergence to approximate first- and second-order KKT points on manifolds. They apply in particular to optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.
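
    To fix ideas, here is a minimal sketch of Riemannian gradient descent on the unit sphere, the kind of compact submanifold the $O(1/\varepsilon^2)$ first-order guarantee covers: project the Euclidean gradient onto the tangent space, take a step, and retract by normalization. The function names and constants are illustrative.

    ```python
    import numpy as np

    def riemannian_gd_sphere(egrad, x0, step=0.1, tol=1e-6, max_iter=10000):
        # Riemannian gradient descent on the unit sphere: project the
        # Euclidean gradient onto the tangent space at x, step, and retract
        # by normalization. Stops once the Riemannian gradient norm is below
        # tol (the epsilon-stationarity the rates are stated for).
        x = x0 / np.linalg.norm(x0)
        for _ in range(max_iter):
            g = egrad(x)
            rgrad = g - (x @ g) * x      # tangent-space projection
            if np.linalg.norm(rgrad) < tol:
                break
            x = x - step * rgrad
            x /= np.linalg.norm(x)       # retraction back to the sphere
        return x

    # Example: leading eigenvector of A as a minimizer of f(x) = -x^T A x.
    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    x_star = riemannian_gd_sphere(lambda v: -2.0 * A @ v, np.array([1.0, 1.0]))
    ```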

    Robust Low-Rank Matrix Completion by Riemannian Optimization


    Adaptive regularization with cubics on manifolds

    Adaptive regularization with cubics (ARC) is an algorithm for unconstrained, non-convex optimization. Akin to the trust-region method, its iterations can be thought of as approximate, safeguarded Newton steps. For cost functions with Lipschitz continuous Hessian, ARC has optimal iteration complexity, in the sense that it produces an iterate with gradient smaller than $\varepsilon$ in $O(1/\varepsilon^{1.5})$ iterations. For the same price, it can also guarantee a Hessian with smallest eigenvalue larger than $-\sqrt{\varepsilon}$. In this paper, we study a generalization of ARC to optimization on Riemannian manifolds. In particular, we generalize the iteration complexity results to this richer framework. Our central contribution lies in the identification of appropriate manifold-specific assumptions that allow us to secure these complexity guarantees both when using the exponential map and when using a general retraction. A substantial part of the paper is devoted to studying these assumptions (relevant beyond ARC) and providing user-friendly sufficient conditions for them. Numerical experiments are encouraging.
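
    For intuition, the sketch below approximately minimizes one Euclidean ARC subproblem, the cubic model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$, by plain gradient descent. Practical ARC solvers use Krylov methods, and the Riemannian version studied in the paper works in a tangent space via a retraction; the names and step sizes here are assumptions.

    ```python
    import numpy as np

    def arc_subproblem(g, H, sigma, lr=1e-2, n_iter=500):
        # Approximately minimize the ARC cubic model
        #   m(s) = g^T s + 0.5 * s^T H s + (sigma / 3) * ||s||^3
        # by gradient descent. The gradient of the cubic term is
        # sigma * ||s|| * s.
        s = np.zeros_like(g)
        for _ in range(n_iter):
            grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
            s = s - lr * grad_m
        return s
    ```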