60,948 research outputs found

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy, or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds were presented on the amount of communication required for essentially all O(n^3)-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and analyze their convergence and communication costs. Comment: 43 pages, 11 figures
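
    For orientation, the lower bounds from \cite{BDHS10} referred to in this abstract have roughly the following form for O(n^3)-like dense algorithms. This is a paraphrase of that line of work rather than a statement taken from the abstract itself; the notation W (words moved, i.e. bandwidth cost) and S (messages, i.e. latency cost) is ours, and the parallel case assumes the usual Theta(n^2/P) words of memory per processor.

        % W = number of words moved (bandwidth cost), S = number of messages (latency cost)
        \text{sequential, fast memory of size } M:\quad
            W = \Omega\!\left(\frac{n^3}{\sqrt{M}}\right), \qquad
            S = \Omega\!\left(\frac{n^3}{M^{3/2}}\right)
        \text{parallel, } P \text{ processors, } \Theta(n^2/P) \text{ words of memory each}:\quad
            W = \Omega\!\left(\frac{n^2}{\sqrt{P}}\right), \qquad
            S = \Omega\!\left(\sqrt{P}\right)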

    Arithmetic circuits: the chasm at depth four gets wider

    In their paper on the "chasm at depth four", Agrawal and Vinay have shown that polynomials in m variables of degree O(m) which admit arithmetic circuits of size 2^o(m) also admit arithmetic circuits of depth four and size 2^o(m). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth four circuits is in a certain sense the general case. In this paper we show that smaller depth four circuits can be obtained if we start from polynomial size arithmetic circuits. For instance, we show that if the permanent of n x n matrices has circuits of size polynomial in n, then it also has depth four circuits of size n^O(sqrt(n)*log(n)). Our depth four circuits use integer constants of polynomial size. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also give an application to boolean circuit complexity, and a simple (but suboptimal) reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree.

    Relative Entropy Relaxations for Signomial Optimization

    Signomial programs (SPs) are optimization problems specified in terms of signomials, which are weighted sums of exponentials composed with linear functionals of a decision variable. SPs are non-convex optimization problems in general, and families of NP-hard problems can be reduced to SPs. In this paper we describe a hierarchy of convex relaxations that yields successively tighter lower bounds on the optimal value of SPs. This sequence of lower bounds is computed by solving relative entropy optimization problems of increasing size, which are convex programs specified in terms of linear and relative entropy functions. Our approach relies crucially on the observation that the relative entropy function -- by virtue of its joint convexity with respect to both arguments -- provides a convex parametrization of certain sets of globally nonnegative signomials with efficiently computable nonnegativity certificates via the arithmetic-geometric-mean inequality. By appealing to representation theorems from real algebraic geometry, we show that our sequences of lower bounds converge to the global optima for broad classes of SPs. Finally, we demonstrate the effectiveness of our methods via numerical experiments.
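
    To make the AM/GM-based nonnegativity certificate concrete, here is a minimal sketch (not the authors' code) that checks global nonnegativity of one small signomial by solving a relative entropy feasibility problem with cvxpy. The example signomial, the variable names, and the use of the ECOS exponential-cone solver are all illustrative assumptions.

        import numpy as np
        import cvxpy as cp

        # Signomial f(x) = exp(x1) + exp(x2) - 1.9*exp(0.5*x1 + 0.5*x2):
        # every term except the one at index k has a nonnegative coefficient.
        alpha = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [0.5, 0.5]])     # exponents (rows) of the three terms
        c = np.array([1.0, 1.0, -1.9])     # coefficients; only the last is negative
        k = 2                              # index of the possibly-negative term

        others = [j for j in range(len(c)) if j != k]
        nu = cp.Variable(len(others), nonneg=True)

        constraints = [
            # exponent balance: sum_j nu_j * (alpha_j - alpha_k) = 0
            (alpha[others] - alpha[k]).T @ nu == 0,
            # relative entropy certificate: sum_j nu_j*log(nu_j/(e*c_j)) <= c_k
            cp.sum(cp.rel_entr(nu, np.e * c[others])) <= c[k],
        ]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve(solver=cp.ECOS)

        # "optimal" status means a certificate of global nonnegativity was found;
        # here the certifying vector is nu ~ [1, 1], i.e. the plain AM/GM inequality.
        print(prob.status, nu.value)

    Changing the coefficient -1.9 to, say, -2.1 makes the problem infeasible, consistent with the fact that exp(x1) + exp(x2) - 2.1*exp(0.5*(x1 + x2)) is negative whenever x1 = x2.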

    Bounds on the arithmetic degree: a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Mathematics at Massey University

    In this thesis we study the arithmetic degree theory of polynomial ideals. The main objectives are: (i) to determine whether a lower bound on the arithmetic degree of monomial ideals can be generalized to the arithmetic degree of arbitrary homogeneous ideals; and (ii) to determine whether some known bounds for the geometric degree can be restated in terms of bounds on the arithmetic degree. We give a negative answer to both questions by constructing counterexamples, and in some cases we provide a general method for constructing such counterexamples. Concerning properties of the arithmetic degree, we give a new Bezout-type theorem. Finally, we take a brief look at open problems concerning the arithmetic degree under hypersurface sections.

    Bounds for the price of discrete arithmetic Asian options.

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas, Dhaene and Goovaerts (2000), together with the ideas of Rogers and Shi (1995) and of Nielsen and Sandmann (2003). Through these bounds we create a unifying framework for discrete Asian options that generalizes several approaches in the literature and improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to give advice on the appropriate choice of bounds given the parameters, to investigate the effect of different conditioning variables, and to compare their efficiency numerically. Several sets of numerical results are included. We also show that hedging using these bounds is possible. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables. Keywords: Asian option; Choice; Efficiency; Framework; Hedging; Methods; Options; Premium; Pricing; Problems; Random variables; Research; Stop-loss premium; Variables
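
    As a rough numerical illustration of the conditioning idea behind such lower bounds (in the spirit of Rogers and Shi), the sketch below prices a discrete arithmetic Asian call under Black-Scholes and compares the Jensen-type lower bound E[(E[A|W_T] - K)^+] with a crude Monte Carlo estimate of E[(A - K)^+]. The model, the parameter values, and the choice of W_T as conditioning variable are assumptions made for illustration; this is not the paper's actual set of bounds.

        import numpy as np

        S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 12   # illustrative inputs
        t = T * np.arange(1, n + 1) / n                            # monitoring dates

        def asian_call_lower_bound():
            """Lower bound e^{-rT} E[(E[A | W_T] - K)^+], with A the average of S_{t_i}."""
            # Gauss-Hermite quadrature over z = W_T ~ N(0, T).
            x, w = np.polynomial.hermite_e.hermegauss(100)
            z = np.sqrt(T) * x
            # E[S_{t_i} | W_T = z] under geometric Brownian motion (closed form).
            cond = S0 * np.exp((r - 0.5 * sigma**2) * t
                               + sigma * z[:, None] * t / T
                               + 0.5 * sigma**2 * t * (T - t) / T)
            payoff = np.maximum(cond.mean(axis=1) - K, 0.0)
            return np.exp(-r * T) * (w @ payoff) / np.sqrt(2 * np.pi)

        def asian_call_mc(n_paths=200_000, seed=0):
            """Crude Monte Carlo price of the same option, for comparison."""
            rng = np.random.default_rng(seed)
            dW = rng.standard_normal((n_paths, n)) * np.sqrt(T / n)
            logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * T / n + sigma * dW, axis=1)
            A = np.exp(logS).mean(axis=1)
            return np.exp(-r * T) * np.maximum(A - K, 0.0).mean()

        print("conditioning lower bound:", asian_call_lower_bound())
        print("Monte Carlo estimate    :", asian_call_mc())

    The conditional-expectation quantity is a true lower bound by Jensen's inequality; the contribution of the paper is to derive sharper, fully analytical versions of such lower bounds together with matching upper bounds.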

    Decimated generalized Prony systems

    We continue studying the robustness of solving algebraic systems of Prony type (also known as exponential fitting systems), which appear prominently in many areas of mathematics, in particular in modern "sub-Nyquist" sampling theories. We show that by considering these systems at arithmetic progressions (or "decimating" them), one can achieve better performance in the presence of noise. We also show that the corresponding lower bounds are closely related to well-known estimates obtained for similar problems in different contexts.
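
    The decimation idea can be illustrated with a toy computation: classical Prony recovery applied once to consecutive samples and once to samples taken along an arithmetic progression. This is only a sketch under assumed data (the two-term signal, the noise level, and the decimation step below are made up for illustration), not the generalized Prony systems or the estimates studied in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        a_true = np.array([1.0, -0.5])      # amplitudes
        x_true = np.array([0.9, 0.5])       # exponential nodes
        d = 3                               # decimation step (arithmetic progression)
        N = 4 * d                           # raw samples m_0, ..., m_{N-1}

        k = np.arange(N)
        m = (a_true * x_true ** k[:, None]).sum(axis=1)
        m += 1e-4 * rng.standard_normal(N)  # additive measurement noise

        def prony2(s):
            """Recover 2 nodes and amplitudes from 4 equally indexed samples."""
            # Linear recurrence m_{j+2} = c1*m_{j+1} + c0*m_j (Hankel system).
            H = np.array([[s[0], s[1]], [s[1], s[2]]])
            c0, c1 = np.linalg.solve(H, np.array([s[2], s[3]]))
            nodes = np.sort(np.roots([1.0, -c1, -c0]))
            # Amplitudes from the (transposed) Vandermonde system.
            amps, *_ = np.linalg.lstsq(np.vander(nodes, 4, increasing=True).T, s, rcond=None)
            return nodes, amps

        nodes_plain, _ = prony2(m[:4])        # samples m_0, m_1, m_2, m_3
        nodes_dec, _ = prony2(m[::d][:4])     # samples m_0, m_d, m_2d, m_3d
        print(nodes_plain)                    # estimates of the nodes x_j
        print(nodes_dec ** (1.0 / d))         # decimated run recovers x_j^d; undo the power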

    Minimizing Communication in Linear Algebra

    In 1981 Hong and Kung proved a lower bound on the amount of communication needed to perform dense matrix multiplication using the conventional O(n^3) algorithm, where the input matrices were too large to fit in the small, fast memory. In 2004 Irony, Toledo and Tiskin gave a new proof of this result and extended it to the parallel case. In both cases the lower bound may be expressed as Ω(#arithmetic operations / √M), where M is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization, and algorithms for eigenvalues and singular values, i.e., essentially all direct methods of linear algebra. The proof works for dense or sparse matrices, and for sequential or parallel algorithms. In addition to lower bounds on the amount of data moved (bandwidth) we get lower bounds on the number of messages required to move it (latency). We illustrate how to extend our lower bound technique to compositions of linear algebra operations (like computing powers of a matrix), to decide whether it is enough to call a sequence of simpler optimal algorithms (like matrix multiplication) to minimize communication, or whether we can do better. We give examples of both. We also show how to extend our lower bounds to certain graph-theoretic problems. We point out recently designed algorithms for dense LU, Cholesky, QR, eigenvalue and SVD problems that attain these lower bounds; implementations of LU and QR show large speedups over conventional linear algebra algorithms in standard libraries like LAPACK and ScaLAPACK. Many open problems remain. Comment: 27 pages, 2 tables
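
    To connect the Ω(#arithmetic operations / √M) bound to a familiar case, the following back-of-the-envelope sketch compares it, for plain dense matrix multiplication, with the word traffic of a standard cache-blocked algorithm. The matrix size, the fast-memory size, and the constant-factor accounting are illustrative assumptions, not figures from the paper.

        import math

        def bandwidth_lower_bound(n, M):
            """Omega(#flops / sqrt(M)) words for C = A*B, counting #flops ~ 2*n^3."""
            return 2 * n**3 / math.sqrt(M)

        def blocked_matmul_traffic(n, M):
            """Words moved by blocking with b ~ sqrt(M/3): each of the (n/b)^3
            block multiplies reads/writes three b-by-b blocks."""
            b = int(math.sqrt(M / 3))
            blocks = math.ceil(n / b)
            return 3 * blocks**3 * b * b

        n, M = 4096, 2**20            # matrix dimension and fast-memory size in words
        print("lower bound (words):", f"{bandwidth_lower_bound(n, M):.3e}")
        print("blocked alg (words):", f"{blocked_matmul_traffic(n, M):.3e}")

    The two quantities differ only by a constant factor, which is the sense in which blocked (and, more generally, communication-avoiding) algorithms attain the lower bound.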