45 research outputs found

    Proceedings of the 3rd International Workshop on Polyhedral Compilation Techniques

    Get PDF
    IMPACT 2013 in Berlin, Germany (in conjunction with HiPEAC 2013) is the third workshop in a series of international workshops on polyhedral compilation techniques. The previous workshops were held in Chamonix, France (2011) in conjunction with CGO 2011 and in Paris, France (2012) in conjunction with HiPEAC 2012.

    Communication lower bounds for nested bilinear algorithms

    Full text link
    We develop lower bounds on communication in the memory hierarchy or between processors for nested bilinear algorithms, such as Strassen's algorithm for matrix multiplication. We build on a previous framework that establishes communication lower bounds by use of the rank expansion, that is, the minimum rank of any fixed-size subset of columns of a matrix, for each of the three matrices encoding the bilinear algorithm. This framework provides lower bounds for any way of computing a bilinear algorithm, which encompasses a larger space of algorithms than would be obtained by fixing a particular dependency graph. Nested bilinear algorithms include fast recursive algorithms for convolution, matrix multiplication, and contraction of tensors with symmetry. Two bilinear algorithms can be nested by taking Kronecker products between their encoding matrices. Our main result is a lower bound on the rank expansion of a matrix constructed by a Kronecker product, derived from lower bounds on the rank expansion of the Kronecker product's operands. To prove this bound, we map a subset of columns from a submatrix to a 2D grid, collapse them into a dense grid, expand the grid, and use the size of the expanded grid to bound the number of linearly independent columns of the submatrix. We apply the rank expansion lower bounds to obtain novel communication lower bounds for nested Toom-Cook convolution, Strassen's algorithm, and fast algorithms for partially symmetric contractions. Comment: 37 pages, 5 figures, 1 table. Update includes log-log convex/concave functions to fix previous bug in v
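    The Kronecker nesting described above can be made concrete with a small numerical sketch. The code below is not from the paper; it assumes the common encoding convention c = W @ ((U.T @ a) * (V.T @ b)) for a bilinear algorithm with encoding matrices (U, V, W), uses Karatsuba's 3-multiplication algorithm for degree-1 polynomial products as the base case, and checks that nesting it with itself via np.kron yields a 9-multiplication algorithm for 4-coefficient polynomial multiplication (a two-level nested Karatsuba/Toom convolution).

```python
import numpy as np

# Karatsuba as a bilinear algorithm: c = W @ ((U.T @ a) * (V.T @ b))
# multiplies the linear polynomials a0 + a1*x and b0 + b1*x with 3 products.
U = np.array([[1, 0, 1],
              [0, 1, 1]])       # products use a0, a1, a0 + a1
V = U.copy()                    # and the same combinations of b
W = np.array([[ 1,  0, 0],
              [-1, -1, 1],
              [ 0,  1, 0]])     # c0 = p0, c1 = p2 - p0 - p1, c2 = p1

a, b = np.array([3, -2]), np.array([1, 4])
assert np.array_equal(W @ ((U.T @ a) * (V.T @ b)), np.convolve(a, b))

# Nesting two bilinear algorithms = Kronecker products of their encodings.
UU, VV, WW = np.kron(U, U), np.kron(V, V), np.kron(W, W)

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, (2, 2))  # coefficients of a(x) = sum A[i,j] * x**(2*i + j)
B = rng.integers(-5, 6, (2, 2))

# One application of the nested algorithm (9 elementwise products).
p = (UU.T @ A.ravel()) * (VV.T @ B.ravel())
C = (WW @ p).reshape(3, 3)       # two-level (bivariate) convolution of A and B

# Collapsing the two levels (substitute y = x**2) gives the product of the
# corresponding 4-coefficient polynomials.
c = np.zeros(7, dtype=A.dtype)
for k in range(3):
    for l in range(3):
        c[2 * k + l] += C[k, l]
assert np.array_equal(c, np.convolve(A.ravel(), B.ravel()))
```

    The rank-expansion lower bounds in the paper concern exactly the columns of Kronecker-product encodings such as UU, VV, and WW above.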

    Structural and Computational Existence Results for Multidimensional Subshifts

    Get PDF
    Symbolic dynamics is a branch of mathematics that studies the structure of infinite sequences of symbols, or in the multidimensional case, infinite grids of symbols. Classes of such sequences and grids defined by collections of forbidden patterns are called subshifts, and subshifts of finite type are defined by finitely many forbidden patterns. The simplest examples of multidimensional subshifts are sets of Wang tilings, infinite arrangements of square tiles with colored edges, where adjacent edges must have the same color. Multidimensional symbolic dynamics has strong connections to computability theory, since most of the basic properties of subshifts cannot be recognized by computer programs, but are instead characterized by some higher-level notion of computability. This dissertation focuses on the structure of multidimensional subshifts, and the ways in which it relates to their computational properties. In the first part, we study the subpattern posets and Cantor-Bendixson ranks of countable subshifts of finite type, which can be seen as measures of their structural complexity. We show, by explicitly constructing subshifts with the desired properties, that both notions are essentially restricted only by computability conditions. In the second part of the dissertation, we study different methods of defining (classes of) multidimensional subshifts, and how they relate to each other and to existing methods. We present definitions that use monadic second-order logic, a more restricted kind of logical quantification called quantifier extension, and multi-headed finite state machines. Two of the definitions give rise to hierarchies of subshift classes, which are a priori infinite, but which we show to collapse into finitely many levels. The quantifier extension provides insight into the somewhat mysterious class of multidimensional sofic subshifts, since we prove a characterization of the class of subshifts that can extend a sofic subshift into a nonsofic one.
    Symbolic dynamics is a branch of mathematics that studies the properties of infinitely long sequences of symbols, or in the multidimensional case, infinitely large grids of symbols. Subshifts are collections of such sequences or grids, defined by forbidding some set of finite patterns, and subshifts of finite type are obtained by forbidding only finitely many patterns. Wang tilings are the simplest example of multidimensional subshifts: they are tilings built from colored square tiles in which all adjacent edges must have the same color. Multidimensional symbolic dynamics is strongly connected to computability theory, since many basic properties of subshifts cannot be recognized by computer programs, but only by higher-level models of computation. In this dissertation I study the structure of multidimensional subshifts and how it relates to their computational properties. In the first part I focus on certain structural properties of subshifts of finite type: the order formed by their finite patterns and the Cantor-Bendixson rank. By constructing subshifts of the desired kind, I show that both properties are essentially restricted only by computability conditions. In the second part of the dissertation I study different ways of defining multidimensional subshifts, and how these compare to one another and to known classes of subshifts. I consider definitions based on second-order logic, on a restricted form of logical quantification called quantifier extension, and on multi-headed finite automata. Two of these three definitions come with associated hierarchies of subshifts, which I prove to collapse to finite height. The study of the quantifier extension also sheds light on the structure of the so-called sofic subshifts, which is not yet well understood: in the corresponding chapter I determine exactly which subshifts can extend a sofic subshift into a non-sofic one.
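    As a small illustration of the Wang-tile adjacency constraint mentioned in the abstract, the sketch below counts the valid n-by-n patches of a tile set by backtracking; the tile set itself is a made-up example, not one from the dissertation. Tiles are (north, east, south, west) color tuples, and a placement is valid when every shared edge carries the same color on both sides.

```python
# A hypothetical Wang tile set: (north, east, south, west) edge colors.
TILES = [
    ("r", "g", "r", "g"),
    ("g", "r", "g", "r"),
    ("r", "r", "g", "g"),
    ("g", "g", "r", "r"),
]

def count_valid_patches(tiles, n):
    """Count n-by-n arrangements in which every pair of adjacent tiles
    agrees on the color of the shared edge (no wraparound)."""
    grid = [[None] * n for _ in range(n)]

    def place(k):
        if k == n * n:
            return 1
        i, j = divmod(k, n)          # fill the grid in row-major order
        total = 0
        for t in tiles:
            # west edge must match the east edge of the left neighbor
            if j > 0 and grid[i][j - 1][1] != t[3]:
                continue
            # north edge must match the south edge of the tile above
            if i > 0 and grid[i - 1][j][2] != t[0]:
                continue
            grid[i][j] = t
            total += place(k + 1)
        grid[i][j] = None
        return total

    return place(0)

for n in range(1, 4):
    print(n, count_valid_patches(TILES, n))
```

    Whether a given tile set admits a tiling of the entire plane (the domino problem) is undecidable, which is one source of the interplay between multidimensional subshifts and computability theory described above; the sketch only inspects finite patches.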

    Beyond shared memory loop parallelism in the polyhedral model

    Get PDF
    With the introduction of multi-core processors, motivated by power and energy concerns, parallel processing has become mainstream. Parallel programming is much more difficult than sequential programming due to its non-deterministic nature and the bugs that arise from non-determinacy. One solution is automatic parallelization, where it is entirely up to the compiler to efficiently parallelize sequential programs. However, automatic parallelization is very difficult, and only a handful of successful techniques are available, even after decades of research. Automatic parallelization for distributed memory architectures is even more problematic, in that it requires explicit handling of data partitioning and communication. Since data must be partitioned among multiple nodes that do not share memory, the original memory allocation of sequential programs cannot be directly used. One of the main contributions of this dissertation is the development of techniques for generating distributed memory parallel code with parametric tiling. Our approach builds on important contributions to the polyhedral model, a mathematical framework for reasoning about program transformations. We show that many affine control programs can be uniformized using only simple techniques. Being able to assume uniform dependences significantly simplifies distributed memory code generation, and also enables parametric tiling. Our approach is implemented in the AlphaZ system, a system for prototyping analyses, transformations, and code generators in the polyhedral model. The key features of AlphaZ are memory re-allocation and explicit representation of reductions. We evaluate our approach on a collection of polyhedral kernels from the PolyBench suite, and show that it scales as well as PLuTo, a state-of-the-art shared memory automatic parallelizer using the polyhedral model.
    Automatic parallelization, which leaves the difficulty entirely to the compiler, is only one approach to dealing with the non-deterministic nature of parallel programming. Another approach is to develop novel parallel programming languages. These languages, such as X10, aim to provide a highly productive parallel programming environment by incorporating parallelism into the language design. However, even in these languages, parallel bugs remain an important issue that hinders programmer productivity. Another contribution of this dissertation is the extension of array dataflow analysis to handle a subset of X10 programs. We apply the results of the dataflow analysis to statically guarantee determinism. Providing static guarantees can significantly increase programmer productivity by catching questionable implementations at compile time, or even while programming.
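    As background, the sketch below illustrates what parametric tiling means on the simplest possible example; it is plain Python written for exposition, not code produced by AlphaZ or any polyhedral code generator. The loop nest is blocked by tile sizes ti, tj, tk that remain runtime parameters rather than constants fixed at code-generation time, which is the property that makes tiles a convenient unit for distributing work and data across nodes.

```python
import numpy as np

def matmul_tiled(A, B, ti=32, tj=32, tk=32):
    """i/j/k matrix multiply blocked with parametric tile sizes:
    ti, tj, tk stay runtime parameters instead of being baked into
    the generated code at compile time."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for ii in range(0, n, ti):
        for jj in range(0, p, tj):
            for kk in range(0, m, tk):
                # one ti x tj x tk tile of the iteration space
                for i in range(ii, min(ii + ti, n)):
                    for j in range(jj, min(jj + tj, p)):
                        acc = C[i, j]
                        for k in range(kk, min(kk + tk, m)):
                            acc += A[i, k] * B[k, j]
                        C[i, j] = acc
    return C

rng = np.random.default_rng(1)
A, B = rng.random((50, 40)), rng.random((40, 30))
assert np.allclose(matmul_tiled(A, B, ti=8, tj=16, tk=4), A @ B)
```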

    Tightening curves and graphs on surfaces

    Get PDF
    Any continuous deformation of closed curves on a surface can be decomposed into a finite sequence of local changes to the structure of the curves; we refer to such local operations as homotopy moves. Tightening is the process of deforming given curves into their minimum position, that is, a position with the minimum number of self-intersections. While such operations and the tightening process have been studied extensively, surprisingly little is known about quantitative bounds on the number of homotopy moves required to tighten an arbitrary curve. An unexpected connection exists between homotopy moves and a set of local operations on graphs called electrical transformations. Electrical transformations have been used to simplify electrical networks since the 19th century; they have since been used to solve various combinatorial problems on graphs, and have found applications in statistical mechanics, robotics, and quantum mechanics. Steinitz, in his study of 3-dimensional polytopes, looked at electrical transformations through the lens of the medial construction and implicitly established the connection to homotopy moves; the same observation was later rediscovered independently in the context of knots. In this thesis, we study the process of tightening curves on surfaces using homotopy moves, and its consequences for electrical transformations, from a quantitative perspective. To derive upper and lower bounds we use tools such as curve invariants, surface theory, combinatorial topology, and hyperbolic geometry. We develop several new tools for constructing efficient algorithms for tightening curves and graphs, as well as for presenting examples where no efficient algorithm exists. We then argue that, in order to study electrical transformations, it is most beneficial to work with monotonic homotopy moves, in which no new crossings are created throughout the process; ideas and proof techniques that work for monotonic homotopy moves should transfer to electrical transformations. We present conjectures and partial evidence supporting this argument.

    Large bichromatic point sets admit empty monochromatic 4-gons

    No full text
    We consider a variation of a problem stated by Erdős and Szekeres in 1935 about the existence of a number f_ES(k) such that any set S of at least f_ES(k) points in general position in the plane has a subset of k points that are the vertices of a convex k-gon. In our setting the points of S are colored, and we say that a (not necessarily convex) spanned polygon is monochromatic if all its vertices have the same color. Moreover, a polygon is called empty if it does not contain any points of S in its interior. We show that any bichromatic set of n ≥ 5044 points in R^2 in general position determines at least one empty, monochromatic quadrilateral (and thus linearly many).
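    To make the definitions concrete, the brute-force sketch below searches a small colored point set for an empty monochromatic quadrilateral; it is written for illustration only (the shapely library is used as a convenience and is not part of the paper). For every monochromatic 4-subset it tries the three cyclic orderings of the vertices, keeps those that form a simple (possibly non-convex) polygon, and returns one whose interior contains no other point of the set.

```python
from itertools import combinations
from shapely.geometry import Point, Polygon

def find_empty_monochromatic_quad(points, colors):
    """points: list of (x, y) in general position; colors: parallel list.
    Returns the vertices of an empty monochromatic 4-gon, or None."""
    pts = [tuple(p) for p in points]
    for quad in combinations(range(len(pts)), 4):
        if len({colors[i] for i in quad}) != 1:
            continue                      # not monochromatic
        others = [Point(pts[i]) for i in range(len(pts)) if i not in quad]
        a, b, c, d = (pts[i] for i in quad)
        # the three distinct cyclic orderings of four labelled vertices
        for order in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            poly = Polygon(order)
            if not poly.is_valid:         # self-intersecting, not a simple 4-gon
                continue
            if not any(poly.contains(q) for q in others):
                return order              # empty: no point of S in its interior
    return None

# Tiny example (far below the paper's n >= 5044 threshold, just to exercise it).
points = [(0, 0), (4, 0), (4, 3), (0, 3), (5, 5)]
colors = ["red", "red", "red", "red", "blue"]
print(find_empty_monochromatic_quad(points, colors))
```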

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    Get PDF
    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008).

    Trellis Decoding And Applications For Quantum Error Correction

    Get PDF
    Compact, graphical representations of error-correcting codes called trellises are a crucial tool in classical coding theory, establishing both theoretical properties and performance metrics for practical use. The idea was extended to quantum error-correcting codes by Ollivier and Tillich in 2005. Here, we use their foundation to establish a practical decoder able to compute the maximum-likelihood error for any stabilizer code over a finite field of prime dimension. We define a canonical form for the stabilizer group and use it to classify the internal structure of the graph. Similarities and differences between the classical and quantum theories are discussed throughout. Numerical results are presented which match or outperform current state-of-the-art decoding techniques. New construction techniques for large trellises are developed and practical implementations discussed. We then define a dual trellis and use algebraic graph theory to solve the maximum-likelihood coset problem for any stabilizer code over a finite field of prime dimension at minimum added cost.
    Classical trellis theory makes occasional theoretical use of a graph product called the trellis product. We establish the relationship between the trellis product and the standard graph products and use it to provide a closed-form expression for the resulting graph, allowing it to be used in practice. We explore its properties and classify all idempotents. The special structure of the trellis allows us to present a factorization procedure for the product, which is much simpler than that of the standard products.
    Finally, we turn to an algorithmic study of the trellis and explore what coding-theoretic information can be extracted assuming no other information about the code is available. In the process, we present a state-of-the-art algorithm for computing the minimum distance of any stabilizer code over a finite field of prime dimension. We also define a new weight enumerator for stabilizer codes over F_2 that incorporates the phases of each stabilizer, and provide a trellis-based algorithm to compute it.
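    For readers unfamiliar with trellis decoding, the sketch below shows the classical starting point rather than the thesis's quantum construction: a syndrome (Wolf) trellis decoder for a binary linear code. States at depth i are the partial syndromes reachable by length-i prefixes, every length-n path ending in the all-zero state spells out a codeword, and with a branch metric that counts disagreements with the received word the shortest such path is a nearest (maximum-likelihood) codeword.

```python
import numpy as np

def viterbi_ml_decode(H, r):
    """Maximum-likelihood decoding of a binary linear code on its
    syndrome (Wolf) trellis.  The state at depth i is the partial
    syndrome H[:, :i] @ c[:i] mod 2; a length-n path ending in the
    all-zero state spells out a codeword, and the branch metric
    (disagreement with the received bit) makes the shortest such
    path a nearest codeword."""
    m, n = H.shape
    best = {(0,) * m: (0, [])}            # state -> (cost, decoded prefix)
    for i in range(n):
        nxt = {}
        for state, (cost, prefix) in best.items():
            for bit in (0, 1):
                s2 = tuple(int(x) for x in (np.array(state) + bit * H[:, i]) % 2)
                c2 = cost + int(bit != r[i])
                if s2 not in nxt or c2 < nxt[s2][0]:
                    nxt[s2] = (c2, prefix + [bit])
        best = nxt
    return np.array(best[(0,) * m][1])    # best path with zero final syndrome

# Example: the [7,4] Hamming code corrects any single bit flip.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 1, 0, 1, 0, 0, 1])
assert not (H @ codeword % 2).any()       # it really is a codeword
received = codeword.copy()
received[2] ^= 1                          # flip one bit
assert np.array_equal(viterbi_ml_decode(H, received), codeword)
```

    The number of distinct states per depth (at most 2^m for an m-row check matrix) is what makes the trellis representation compact compared with enumerating all codewords.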