
    Fast algorithms for the Sylvester equation AX − XB^T = C

    For given matrices A ∈ F^{m×m}, B ∈ F^{n×n}, and C ∈ F^{m×n} over an arbitrary field F, the matrix equation AX − XB^T = C has a unique solution X ∈ F^{m×n} if and only if A and B have disjoint spectra. We describe an algorithm that computes the solution X for m, n ⩽ N with O(N^β · log N) arithmetic operations in F, where β > 2 is such that M×M matrices can be multiplied with O(M^β) arithmetic operations, e.g., β = 2.376. It seems that no better bound than O(m^3 · n^3) arithmetic operations was previously known. The state of the art in numerical analysis is O(n^3 + m^3) flops, but these algorithms (due to Bartels/Stewart and Golub/Nash/Van Loan) involve Schur decompositions, i.e., they compute the eigenvalues of at least one of A and B, and hence cannot be transferred to a general field F.
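    For orientation, the O(n^3 + m^3) numerical route mentioned in the abstract is available off the shelf; a minimal sketch in Python, assuming SciPy (whose solve_sylvester implements the Bartels-Stewart approach for AX + XB = Q, so we pass −B^T as the second argument):

        # Sketch: solve AX - XB^T = C numerically via scipy's Bartels-Stewart
        # solver for AX + XB = Q. This is the O(m^3 + n^3) numerical route the
        # abstract contrasts with, not the field-generic fast algorithm.
        import numpy as np
        from scipy.linalg import solve_sylvester

        rng = np.random.default_rng(0)
        m, n = 5, 4
        A = rng.standard_normal((m, m))
        B = rng.standard_normal((n, n))
        C = rng.standard_normal((m, n))

        X = solve_sylvester(A, -B.T, C)          # solves A X + X (-B^T) = C
        assert np.allclose(A @ X - X @ B.T, C)   # verify the residual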

    Simple and Nearly Optimal Polynomial Root-finding by Means of Root Radii Approximation

    We propose a new simple but nearly optimal algorithm for approximating all sufficiently well isolated complex roots and root clusters of a univariate polynomial. Quite typically, known root-finders first compute crude but reasonably good approximations to well-conditioned roots (that is, those isolated from the other roots) and then refine these approximations very fast, in Boolean time that is nearly optimal up to a polylogarithmic factor. By combining and extending some old root-finding techniques, the geometry of the complex plane, and randomized parametrization, we accelerate the initial stage of obtaining crude approximations to all well-conditioned simple and multiple roots as well as to isolated root clusters. Our algorithm performs this stage at a Boolean cost dominated by the nearly optimal cost of the subsequent refinement of these approximations, which can be performed concurrently, with minimal processor communication and synchronization. Our techniques are quite simple and elementary; their power and application range may increase when they are combined with known efficient root-finding methods. Comment: 12 pages, 1 figure
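    To illustrate the kind of coefficient-based root-radius information such root-finders start from (this is not the paper's algorithm, just the classical Cauchy bound on root moduli, checked against NumPy's root-finder):

        # Sketch: Cauchy's bound — every root of p lies in the disk of radius
        # 1 + max_k |a_k / a_n| around the origin. A crude radius estimate of
        # the sort that root-radii-based root-finders then sharpen.
        import numpy as np

        def cauchy_root_bound(coeffs):
            """coeffs[0] is the leading coefficient of p."""
            a = np.asarray(coeffs, dtype=complex)
            return 1.0 + np.max(np.abs(a[1:] / a[0]))

        p = [1.0, -6.0, 11.0, -6.0]          # p(x) = (x-1)(x-2)(x-3)
        print(cauchy_root_bound(p))          # 12.0, a valid (loose) bound
        print(np.max(np.abs(np.roots(p))))   # 3.0, true largest root modulus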

    Non-Negative Local Sparse Coding for Subspace Clustering

    Subspace sparse coding (SSC) algorithms have proven beneficial for clustering problems: they provide an alternative data representation in which the underlying structure of the clusters can be better captured. However, most research in this area focuses on enhancing the sparse-coding part of the problem. In contrast, we introduce a novel objective term in our proposed SSC framework which focuses on the separability of data points in the coding space. We also provide mathematical insight into how this local-separability term improves the clustering result of the SSC framework. Our proposed non-linear local SSC algorithm (NLSSC) also benefits from an efficient choice of its sparsity terms and constraints. The NLSSC algorithm is also formulated in a kernel-based framework (NLKSSC) which can represent the nonlinear structure of data. In addition, we address the possibility of redundancies in sparse-coding results and their negative effect on graph-based clustering problems. We introduce the link-restore post-processing step to improve the representation graph of non-negative SSC algorithms such as ours. Empirical evaluations on well-known clustering benchmarks show that our proposed NLSSC framework yields better clusterings than state-of-the-art baselines and demonstrate the effectiveness of link-restore post-processing in improving clustering accuracy by correcting broken links of the representation graph. Comment: 15 pages, IDA 2018 conference
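    For context, a minimal sketch of the plain self-expressive SSC pipeline that such methods build on, not NLSSC itself; the sparsity weight alpha and the toy data are illustrative assumptions:

        # Sketch of a baseline SSC pipeline: code each point sparsely over the
        # remaining points, symmetrize the coefficients into an affinity
        # graph, then spectrally cluster. Not the paper's NLSSC/NLKSSC.
        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.cluster import SpectralClustering

        def ssc_labels(X, n_clusters, alpha=0.01):   # alpha: illustrative choice
            n = X.shape[0]
            W = np.zeros((n, n))
            for i in range(n):
                D = np.delete(X, i, axis=0).T        # dictionary: all other points
                coef = Lasso(alpha=alpha, max_iter=5000).fit(D, X[i]).coef_
                W[i, np.arange(n) != i] = np.abs(coef)
            A = W + W.T                              # symmetric affinity graph
            return SpectralClustering(n_clusters=n_clusters,
                                      affinity="precomputed").fit_predict(A)

        # toy data: two noisy 1-D subspaces in R^3
        rng = np.random.default_rng(0)
        u, v = rng.standard_normal(3), rng.standard_normal(3)
        X = np.vstack([np.outer(rng.standard_normal(30), u),
                       np.outer(rng.standard_normal(30), v)])
        X += 0.01 * rng.standard_normal(X.shape)
        print(ssc_labels(X, 2))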

    Partial Fraction Decomposition in C(z) and Simultaneous Newton Iteration for Factorization in C[z]

    The subject of this paper is fast numerical algorithms for factoring univariate polynomials with complex coefficients and for computing partial fraction decompositions (PFDs) of rational functions in C(z). Numerically stable and computationally feasible versions of PFD are specified, first for the special case of rational functions with all singularities in the unit disk (the "bounded case") and then for rational functions with arbitrarily distributed singularities. Two major algorithms for computing PFDs are presented: the first is an extension of the "splitting circle method" of A. Schönhage ("The Fundamental Theorem of Algebra in Terms of Computational Complexity," Technical Report, Univ. Tübingen, 1982) for factoring polynomials in C[z] to an algorithm for PFD; the second is a Newton iteration for simultaneously improving the accuracy of all factors in an approximate factorization of a polynomial, or of all partial fractions in an approximate PFD of a rational function, respectively. Algorithmically useful starting-value conditions for the Newton algorithm are provided. Three subalgorithms are of independent interest. They compute the product of a sequence of polynomials, the su…
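    A toy illustration of the object being refined: for a rational function p/q with simple poles z_i of q and deg p < deg q, the PFD is f = Σ_i c_i/(z − z_i) with c_i = p(z_i)/q'(z_i). A sketch of that elementary residue formula (not the paper's numerically stable algorithm):

        # Sketch: PFD coefficients for simple poles via c_i = p(z_i)/q'(z_i).
        import numpy as np

        p = np.array([1.0, 2.0])            # p(z) = z + 2
        q = np.array([1.0, 0.0, -1.0])      # q(z) = z^2 - 1, roots +1 and -1
        poles = np.roots(q)
        residues = np.polyval(p, poles) / np.polyval(np.polyder(q), poles)

        z = 0.5 + 0.25j                     # check at an arbitrary point
        pfd = np.sum(residues / (z - poles))
        assert np.isclose(pfd, np.polyval(p, z) / np.polyval(q, z))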

    Nearly optimal computations with structured matrices

    We estimate the Boolean complexity of multiplying structured matrices by a vector and of solving nonsingular linear systems of equations with these matrices. We study the four basic and most popular classes, namely Toeplitz, Hankel, Cauchy and Vandermonde matrices, for which the cited computational problems are equivalent to polynomial multiplication and division and polynomial and rational multipoint evaluation and interpolation. The Boolean cost estimates for the latter problems were obtained by Kirrinnis in [10], except for rational interpolation. We supply these now, as well as Boolean complexity estimates for the important problems of multiplying a transposed Vandermonde matrix and its inverse by a vector. All known Boolean cost estimates from [10] for such problems rely on the Kronecker product, which implies a d-fold precision increase for degree-d output; we avoid this increase by relying on distinct techniques based on the FFT. Furthermore, we simplify the analysis and make it more transparent by combining representations of our tasks and algorithms both via structured matrices and via polynomials and rational functions. This also enables further extensions of our estimates to cover Trummer's important problem and computations with the popular classes of structured matrices that generalize the four cited basic matrix classes, as well as transposed Vandermonde matrices. It is known that the solution of Toeplitz, Hankel, Cauchy, Vandermonde, and transposed Vandermonde linear systems is generally prone to numerical stability problems, and numerical problems arise even when multiplying Cauchy, Vandermonde, and transposed Vandermonde matrices by a vector. Thus our FFT-based results on the Boolean complexity of these important computations could be quite interesting, because our estimates remain reasonable even for more general classes of structured matrices, showing rather moderate growth of the complexity as the input size increases.
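    A concrete instance of the FFT techniques in question is the classical Toeplitz matrix-vector product: embed the n×n Toeplitz matrix into a 2n×2n circulant and multiply with three FFTs in O(n log n) arithmetic operations. A sketch of the arithmetic algorithm only; the paper's estimates are for the Boolean (bit-complexity) model:

        # Sketch: y = T @ x for a Toeplitz T via circulant embedding and FFT.
        import numpy as np
        from scipy.linalg import toeplitz   # only used to check the result

        def toeplitz_matvec(c, r, x):
            """T has first column c and first row r (with r[0] == c[0])."""
            n = len(c)
            col = np.concatenate([c, [0.0], r[:0:-1]])  # circulant column, size 2n
            xp = np.concatenate([x, np.zeros(n)])
            y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
            return y[:n].real

        rng = np.random.default_rng(0)
        n = 8
        c, r, x = (rng.standard_normal(n) for _ in range(3))
        r[0] = c[0]                                     # Toeplitz consistency
        assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)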

    Polynomial Evaluation and Interpolation and Transformations of Matrix Structures

    Multipoint polynomial evaluation and interpolation are fundamental for modern numerical and symbolic computing. The known algorithms solve both problems over any field of constants in nearly linear arithmetic time, but the cost grows to quadratic for numerical solution. We decrease this cost dramatically and, for a large class of inputs, achieve nearly linear time as well. We first restate our tasks as multiplication of a Vandermonde matrix and its inverse by a vector, then transform this matrix into other structured matrices, and finally apply a variant of the celebrated Fast Multipole Method to achieve the desired speedup for computations with polynomials, Vandermonde matrices and their transposes. An important impact of our work is a new demonstration of the power of the method of transformation of matrix structures, which we proposed in [P90]. At the end we comment on further applications and extensions of this method to computations with structured matrices, polynomials, and rational functions.
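    The one classically fast case of Vandermonde matrix-vector multiplication is evaluation at the roots of unity, where it coincides with the FFT; the transformation techniques above extend such speedups to general nodes. A short check (NumPy's FFT convention places the evaluation nodes at exp(−2πik/n)):

        # Sketch: V @ coeffs equals the FFT when the Vandermonde nodes are
        # the points exp(-2*pi*i*k/n), i.e. multipoint evaluation of
        # p(x) = sum_j coeffs[j] x^j at NumPy's FFT nodes.
        import numpy as np

        n = 8
        coeffs = np.arange(1.0, n + 1)                  # p(x) = 1 + 2x + ... + 8x^7
        nodes = np.exp(-2j * np.pi * np.arange(n) / n)  # FFT's evaluation points
        V = np.vander(nodes, n, increasing=True)        # V[k, j] = nodes[k]**j
        assert np.allclose(V @ coeffs, np.fft.fft(coeffs))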