
    Least-squares methods for nonnegative matrix factorization over rational functions

    Nonnegative Matrix Factorization (NMF) models are widely used to recover linearly mixed nonnegative data. When the data consists of samples of continuous signals, the factors in NMF can be constrained to be samples of nonnegative rational functions, which allow fairly general models; this is referred to as NMF using rational functions (R-NMF). We first show that, under mild assumptions, R-NMF has an essentially unique factorization, unlike NMF, which is crucial in applications where ground-truth factors must be recovered, such as blind source separation problems. We then present different approaches to solve R-NMF: the R-HANLS, R-ANLS, and R-NLS methods. In our tests, no method significantly outperforms the others, and a trade-off must be made between time and accuracy: R-HANLS is fast and accurate for large problems, while R-ANLS is more accurate but also more demanding in both time and memory; R-NLS is very accurate, but only for small problems. Moreover, we show that R-NMF outperforms NMF in various tasks, including the recovery of semi-synthetic continuous signals and a classification problem on real hyperspectral signals.
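
    To make the factorization models concrete, the following is a minimal sketch of plain NMF solved by alternating nonnegative least squares (ANLS), the classical building block that R-ANLS and R-HANLS adapt to rational-function-constrained factors. The dimensions, iteration count, and use of scipy.optimize.nnls are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of plain NMF via alternating nonnegative least squares (ANLS).
# Illustrative only; not the paper's R-HANLS / R-ANLS / R-NLS methods.
import numpy as np
from scipy.optimize import nnls

def anls_nmf(X, r, n_iter=50, seed=0):
    """Approximate nonnegative X (m x n) as W @ H with W (m x r), H (r x n) >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Fix W, solve one NNLS problem per column of X to update H.
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
        # Fix H, solve one NNLS problem per row of X to update W (via transposes).
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
    return W, H

# Tiny usage example on synthetic nonnegative data.
rng = np.random.default_rng(1)
X = rng.random((30, 5)) @ rng.random((5, 40))
W, H = anls_nmf(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```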

    Conversions between barycentric, RKFUN, and Newton representations of rational interpolants

    We derive explicit formulas for converting between rational interpolants in barycentric, rational Krylov (RKFUN), and Newton form. We show applications of these conversions when working with rational approximants produced by the AAA algorithm [Y. Nakatsukasa, O. Sète, L. N. Trefethen, arXiv preprint 1612.00337, 2016] within the Rational Krylov Toolbox and for the solution of nonlinear eigenvalue problems.
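
    As a point of reference for the representations being converted, here is a minimal sketch of evaluating a rational interpolant in barycentric form. The support points, values, and weights below are illustrative placeholders, not output of the AAA algorithm or the Rational Krylov Toolbox.

```python
# Evaluate r(z) = sum_j w_j f_j / (z - z_j)  /  sum_j w_j / (z - z_j).
import numpy as np

def bary_eval(z, support, values, weights):
    """Evaluate a barycentric rational interpolant at the points z."""
    z = np.asarray(z, dtype=complex)
    with np.errstate(divide="ignore", invalid="ignore"):
        C = 1.0 / (z[:, None] - support[None, :])      # Cauchy matrix
        r = (C @ (weights * values)) / (C @ weights)
    # At a support point the formula reads 0/0; return the stored value instead.
    hit_rows, hit_cols = np.nonzero(np.isclose(z[:, None], support[None, :]))
    r[hit_rows] = values[hit_cols]
    return r

# Illustrative data: interpolate exp(x) at three equispaced nodes
# (1, -2, 1 are the barycentric weights of those nodes).
support = np.array([0.0, 0.5, 1.0])
values = np.exp(support)
weights = np.array([1.0, -2.0, 1.0])
print(bary_eval([0.25, 0.5, 0.9], support, values, weights))
```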

    Rational Krylov methods for functions of matrices with applications to fractional partial differential equations

    In this paper, we propose a new choice of poles to define reliable rational Krylov methods. These methods are used for approximating functions of positive definite matrices. In particular, the fractional power and the fractional resolvent are considered because of their importance in the numerical solution of fractional partial differential equations. The results of the numerical experiments we have carried out on some fractional models confirm that the proposed approach is promising.
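
    For orientation, the sketch below shows a generic rational Krylov approximation of the fractional power f(A)b = A^(-1/2) b for a symmetric positive definite A, using ad-hoc negative real poles. The paper's contribution is precisely a more reliable pole choice, which is not reproduced here; the poles, matrix, and subspace size are assumptions for illustration.

```python
# Rational Krylov sketch: project f(A)b onto span built from shifted solves.
import numpy as np
from scipy.linalg import fractional_matrix_power

def rational_krylov_fun(A, b, poles, f):
    """Approximate f(A) b by projection onto a rational Krylov subspace."""
    n = len(b)
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for k, p in enumerate(poles):
        w = np.linalg.solve(A - p * np.eye(n), V[:, k])   # shifted linear solve
        for j in range(k + 1):                            # Gram-Schmidt step
            w -= (V[:, j] @ w) * V[:, j]
        V[:, k + 1] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                                      # small projected matrix
    return V @ (f(Am) @ (V.T @ b))

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(np.linspace(1e-2, 1e2, 200)) @ Q.T        # SPD test matrix
b = rng.standard_normal(200)
f = lambda M: fractional_matrix_power(M, -0.5)
approx = rational_krylov_fun(A, b, poles=[-1e-2, -1e-1, -1.0, -1e1, -1e2], f=f)
exact = f(A) @ b
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```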

    Adaptive rational Krylov methods for exponential Runge--Kutta integrators

    We consider the solution of large stiff systems of ordinary differential equations with explicit exponential Runge--Kutta integrators. These problems arise from semi-discretized semi-linear parabolic partial differential equations on continuous domains or on inherently discrete graph domains. A series of results reduces the requirement of computing linear combinations of φ-functions in exponential integrators to the approximation of the action of a smaller number of matrix exponentials on certain vectors. State-of-the-art computational methods use polynomial Krylov subspaces of adaptive size for this task. They have the drawback that the number of Krylov iterations required to reach a desired tolerance increases drastically with the spectral radius of the discrete linear differential operator, and hence with the problem size. We present an approach that leverages rational Krylov subspace methods, which promise superior approximation quality. We prove a novel a posteriori error estimate for rational Krylov approximations of the action of the matrix exponential on vectors at single time points, which allows for an adaptive approach similar to existing polynomial Krylov techniques. We discuss pole selection and the efficient solution of the arising sequences of shifted linear systems by direct and preconditioned iterative solvers. Numerical experiments show that our method outperforms the state of the art for sufficiently large spectral radii of the discrete linear differential operators. The key to this is an approximately constant number of rational Krylov iterations, which enables near-linear scaling of the runtime with respect to the problem size.
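
    The reduction mentioned above, from φ-function actions to a plain matrix exponential action, can be illustrated with the standard augmented-matrix trick: applying exp([[τA, v], [0, 0]]) to the last unit vector yields φ1(τA)v in its top block. The sketch below uses a 1D Laplacian, an assumed step size τ, and SciPy's expm_multiply as stand-ins; it is not the paper's adaptive rational Krylov method.

```python
# phi_1(tau*A) v obtained from the exponential of an augmented matrix.
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags, bmat, csc_matrix
from scipy.sparse.linalg import expm_multiply

n, tau = 100, 1e-3
# 1D Laplacian as a stand-in for a stiff discretized differential operator.
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
v = np.ones(n)

# Augmented matrix B = [[tau*A, v], [0, 0]]: exp(B) e_{n+1} = [phi_1(tau*A) v; 1].
B = bmat([[tau * A, csc_matrix(v[:, None])], [None, csc_matrix((1, 1))]], format="csc")
e_last = np.zeros(n + 1); e_last[-1] = 1.0
phi1_v = expm_multiply(B, e_last)[:n]          # top block of exp(B) e_{n+1}

# Check against the dense definition phi_1(M) v = M^{-1} (exp(M) - I) v.
M = tau * A.toarray()
ref = np.linalg.solve(M, (expm(M) - np.eye(n)) @ v)
print("relative error:", np.linalg.norm(phi1_v - ref) / np.linalg.norm(ref))
```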

    Randomized sketching of nonlinear eigenvalue problems

    Rational approximation is a powerful tool to obtain accurate surrogates for nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas--Anderson (AAA) method is one approach to construct such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions, called sketchAAA, that with high probability leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion, where only evaluations of the nonlinear function at sampling points are needed. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.
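
    The core sampling idea can be sketched as follows: instead of fitting every entry of a large matrix-valued function F(z), each sample F(z_k) is multiplied by a thin random matrix and only the sketched columns are fitted jointly. The toy delay-type function, sample points, and sketch width below are assumptions for illustration; the subsequent set-valued AAA fit of the sketched samples is not reproduced here.

```python
# Randomized sketching of samples of a matrix-valued function F(z).
import numpy as np

n, k = 500, 3                                   # problem size and sketch width
rng = np.random.default_rng(0)
A0 = rng.standard_normal((n, n)) / np.sqrt(n)
A1 = rng.standard_normal((n, n)) / np.sqrt(n)

def F(z):
    """Toy delay-type nonlinear matrix-valued function."""
    return -z * np.eye(n) + A0 + np.exp(-z) * A1

Omega = rng.standard_normal((n, k)) / np.sqrt(n)    # thin random sketching matrix
z_samples = 1j * np.linspace(-5, 5, 40)             # sampling points along a line

# Each sample is reduced from n*n to n*k numbers; a set-valued rational
# approximation would then be fitted to all sketched entries simultaneously.
sketched = np.array([F(z) @ Omega for z in z_samples])   # shape (40, n, k)
print(sketched.shape)
```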

    Rational minimax approximation via adaptive barycentric representations

    Computing rational minimax approximations can be very challenging when there are singularities on or near the interval of approximation - precisely the case where rational functions outperform polynomials by a landslide. We show that far more robust algorithms than previously available can be developed by making use of rational barycentric representations whose support points are chosen in an adaptive fashion as the approximant is computed. Three variants of this barycentric strategy are all shown to be powerful: (1) a classical Remez algorithm, (2) a "AAA-Lawson" method of iteratively reweighted least-squares, and (3) a differential correction algorithm. Our preferred combination, implemented in the Chebfun MINIMAX code, is to use (2) in an initial phase and then switch to (1) for generically quadratic convergence. By such methods we can calculate approximations up to type (80, 80) of |x| on [−1, 1] in standard 16-digit floating point arithmetic, a problem for which Varga, Ruttan, and Carpenter required 200-digit extended precision.

    An algorithm for real and complex rational minimax approximation

    Rational minimax approximation of real functions on real intervals is an established topic, but when it comes to complex functions or domains, there appear to be no algorithms currently in use. Such a method is introduced here, the AAA-Lawson algorithm, available in Chebfun. The new algorithm solves a wide range of problems on arbitrary domains in a fraction of a second of laptop time by a procedure consisting of two steps. First, the standard AAA algorithm is run to obtain a near-best approximation and a set of support points for a barycentric representation of the rational approximant. Then a "Lawson phase" of iteratively reweighted least-squares adjustment of the barycentric coefficients is carried out to improve the approximation to minimax.
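
    To illustrate what a "Lawson phase" of iteratively reweighted least-squares does, the sketch below applies Lawson reweighting to a simple polynomial (Chebyshev-basis) fit of |x|, driving a least-squares fit toward the uniform-norm fit. AAA-Lawson applies the same kind of reweighting to the barycentric coefficients of the rational approximant produced by AAA, which is not reproduced here; the basis, degree, and iteration count are illustrative assumptions.

```python
# Lawson iteratively reweighted least squares on a polynomial fit of |x|.
import numpy as np

x = np.linspace(-1, 1, 2000)
f = np.abs(x)                                      # target function
V = np.polynomial.chebyshev.chebvander(x, 10)      # degree-10 Chebyshev basis

w = np.ones_like(x)                                # Lawson weights
for _ in range(50):
    sw = np.sqrt(w)
    c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)
    err = np.abs(f - V @ c)
    w *= err                                       # emphasize points with large error
    w /= w.sum()

print("uniform error after Lawson reweighting:", np.abs(f - V @ c).max())
```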

    A convex dual programming for the rational minimax approximation and Lawson's iteration

    Computing the discrete rational minimax approximation in the complex plane is challenging. Apart from Ruttan's sufficient condition, there are few other sufficient conditions for global optimality. The state-of-the-art rational approximation algorithms, such as the adaptive Antoulas-Anderson (AAA) method, AAA-Lawson, and the rational Krylov fitting (RKFIT) method, perform highly efficiently, but the computed rational approximants may be only near-best. In this paper, we propose a convex programming approach whose solution is guaranteed to be the rational minimax approximation under Ruttan's sufficient condition. Furthermore, we present a new version of Lawson's iteration for solving this convex programming problem. The computed solution can be easily verified as the rational minimax approximant. Our numerical experiments demonstrate that this updated version of Lawson's iteration generally converges monotonically with respect to the objective function of the convex programming problem, and that it is an effective and competitive approach for the rational minimax problem compared to the highly efficient AAA, AAA-Lawson, and stabilized Sanathanan-Koerner iterations.