
    Complexity of Linear Operators

    Let A in {0,1}^{n x n} be a matrix with z zeroes and u ones, and let x be an n-dimensional vector of formal variables over a semigroup (S, o). How many semigroup operations are required to compute the linear operator Ax? As we observe in this paper, this problem contains the well-known range queries problem as a special case and has a rich variety of applications in areas such as graph algorithms, functional programming, and circuit complexity. It is easy to compute Ax using O(u) semigroup operations. The main question studied in this paper is: can Ax be computed using O(z) semigroup operations? We prove that in general this is not possible: there exists a matrix A in {0,1}^{n x n} with exactly two zeroes in every row (hence z = 2n) whose complexity is Theta(n alpha(n)), where alpha(n) is the inverse Ackermann function. However, for the case when the semigroup is commutative, we give a constructive proof of an O(z) upper bound. This implies that in commutative settings, complements of sparse matrices can be processed as efficiently as sparse matrices themselves (though the corresponding algorithms are more involved). Note that this covers the Boolean and tropical semirings, which have numerous applications, e.g., in graph theory. As a simple application of the presented linear-size construction, we show how to multiply two n x n matrices over an arbitrary semiring in O(n^2) time if one of them is a 0/1-matrix with O(n) zeroes (i.e., the complement of a sparse matrix).
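    To make the range-queries connection concrete, here is a small self-contained sketch (an illustration of ours, not the paper's O(z) construction): when a row of A is all ones except for a few zeroes, computing (Ax)_i amounts to combining a handful of contiguous ranges of x. For an idempotent commutative semigroup such as (integers, max), a sparse table answers each range query in O(1) after O(n log n) preprocessing; all names below are illustrative.

```python
# Sketch: evaluating one "complement of a sparse row" over the semigroup
# (ints, max) via range queries. Not the paper's construction, which achieves
# O(z) total operations for arbitrary commutative semigroups.

def build_sparse_table(x):
    """Precompute range-max answers for power-of-two lengths: O(n log n)."""
    n = len(x)
    table = [list(x)]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        half = 1 << (k - 1)
        table.append([max(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_max(table, lo, hi):
    """max(x[lo:hi]) in O(1); the two blocks may overlap (needs idempotence)."""
    k = (hi - lo).bit_length() - 1
    return max(table[k][lo], table[k][hi - (1 << k)])

def apply_complement_row(table, n, zeroes):
    """Combine x[j] over all j not in `zeroes` (sorted ascending):
    one range query per maximal run of ones, so O(z_i) queries."""
    best, lo = None, 0
    for z in list(zeroes) + [n]:
        if z > lo:
            v = range_max(table, lo, z)
            best = v if best is None else max(best, v)
        lo = z + 1
    return best

x = [3, 1, 4, 1, 5, 9, 2, 6]
table = build_sparse_table(x)
print(apply_complement_row(table, len(x), [2, 5]))  # max over all j != 2, 5 -> 6
```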

    Flexible Multi-layer Sparse Approximations of Matrices and Applications

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into a few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail, and then demonstrated experimentally on various problems, including dictionary learning for image denoising and the approximation of large matrices arising in inverse problems.
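    As a concrete illustration of why such factorizations pay off (a sketch of ours, not the paper's factorization algorithm), the Walsh-Hadamard matrix admits an exact multi-layer sparse factorization into log2(n) butterfly factors with 2n nonzeros each, so applying it to a vector costs O(n log n) operations instead of O(n^2):

```python
# Exact butterfly factorization of the Walsh-Hadamard matrix H_n (n = 2^k):
# H_n = F_0 @ F_1 @ ... @ F_{k-1}, each factor holding exactly 2n nonzeros.
import numpy as np
import scipy.sparse as sp

def hadamard_factors(n):
    H2 = sp.csr_matrix(np.array([[1, 1], [1, -1]]))
    k = n.bit_length() - 1
    return [sp.kron(sp.kron(sp.identity(2 ** i), H2),
                    sp.identity(n // 2 ** (i + 1)), format="csr")
            for i in range(k)]

def apply_factored(factors, x):
    """Cost = total nnz of the factors (O(n log n) here), not n^2."""
    for F in reversed(factors):
        x = F @ x
    return x

n = 8
factors = hadamard_factors(n)
x = np.random.randn(n)
dense_H = np.linalg.multi_dot([F.toarray() for F in factors])
assert np.allclose(apply_factored(factors, x), dense_H @ x)
print(sum(F.nnz for F in factors), "nonzeros in the factors vs", n * n, "dense")
```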

    Fast Computation of Common Left Multiples of Linear Ordinary Differential Operators

    We study tight bounds and fast algorithms for LCLMs (least common left multiples) of several linear differential operators with polynomial coefficients. We analyze the arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs. We propose a new algorithm that recasts the LCLM computation as a linear algebra problem on a polynomial matrix. This algorithm yields sharp bounds on the coefficient degrees of the LCLM, improving by one order of magnitude the best bounds obtained using previous algorithms. The complexity of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the output.
    Comment: The final version will appear in Proceedings of ISSAC 201
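    For readers new to the notion, a toy constant-coefficient example may help (ours, not from the paper; with polynomial coefficients the operators no longer commute, and the coefficient-degree growth bounded in the paper appears):

```latex
% L_1 annihilates e^x and L_2 annihilates e^{-x}; the LCLM is the minimal
% monic operator annihilating both solution spaces, hence it has order 2.
\[
  L_1 = \partial - 1, \qquad L_2 = \partial + 1,
\]
\[
  \operatorname{LCLM}(L_1, L_2) = \partial^2 - 1
    = (\partial + 1)(\partial - 1) = (\partial - 1)(\partial + 1).
\]
```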

    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question (correlation, predictability, predictive cost, observer synchronization, and the like) induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function's poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (the negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.
    Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
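    Since the Drazin inverse may be unfamiliar, here is a hedged numerical sketch (ours, not code from the paper) computing A^D through the classical pseudoinverse formula A^D = A^k (A^{2k+1})^+ A^k, where k is the index of A, i.e. the smallest k with rank(A^k) = rank(A^{k+1}). This is adequate for small, well-scaled matrices; serious use would favor a core-nilpotent decomposition.

```python
import numpy as np

def index_of(A, tol=1e-10):
    """Smallest k with rank(A^k) == rank(A^(k+1))."""
    P, k = np.eye(A.shape[0]), 0
    while np.linalg.matrix_rank(P, tol=tol) != np.linalg.matrix_rank(P @ A, tol=tol):
        P, k = P @ A, k + 1
    return k

def drazin(A, tol=1e-10):
    """Drazin inverse via A^D = A^k @ pinv(A^(2k+1)) @ A^k with k = index(A)."""
    k = index_of(A, tol)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Sanity check on a singular, index-1 matrix: A is idempotent, so A^D = A,
# and the defining identity A @ A^D == A^D @ A must hold.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
AD = drazin(A)
assert np.allclose(AD, A)
assert np.allclose(A @ AD, AD @ A)
```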

    A complexity analysis of statistical learning algorithms

    We apply information-based complexity analysis to support vector machine (SVM) algorithms, with the goal of a comprehensive, continuous algorithmic analysis of such methods. This involves complexity measures in which some higher-order operations (e.g., certain optimizations) are considered primitive for the purposes of measuring complexity. We consider classes of information operators and algorithms made up of scaled families, and investigate the utility of scaling the complexities to minimize error. We look at the division of statistical learning into information and algorithmic components, at the complexities of each, and at applications to SVMs and more general machine learning algorithms. We give applications to SVM algorithms graded into linear and higher-order components, and give an example in biomedical informatics.
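    Purely as an illustration (our example, not the paper's framework), the grading of SVM algorithms into linear and higher-order components can be pictured with scikit-learn by fitting SVMs of increasing kernel degree on data that no linear separator handles well:

```python
# Toy comparison: a linear SVM vs inhomogeneous polynomial-kernel SVMs of
# growing degree on concentric circles, where only the higher-order models
# can succeed. coef0=1 makes the poly kernels include lower-degree terms.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel, degree in [("linear", 1), ("poly", 2), ("poly", 3)]:
    clf = SVC(kernel=kernel, degree=degree, coef0=1).fit(X_tr, y_tr)
    print(f"{kernel} (degree {degree}): test accuracy {clf.score(X_te, y_te):.2f}")
```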