6 research outputs found

    Milepost GCC: Machine Learning Enabled Self-tuning Compiler

    Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically, but is currently limited to a few transformations and long training phases, and critically lacks publicly released, stable tools. Our approach is to develop a modular, extensible, self-tuning optimization infrastructure that automatically learns the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly available open-source machine-learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve the execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology, together with the low-level ICI-inspired plugin framework, is now included in mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size.
    On average we are able to reduce the execution time of the MiBench benchmark suite by 11% for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for the Berkeley DB library using Milepost GCC, improving execution time by approximately 17% while reducing compilation time.
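    The transductive idea — match a new program's feature vector against previously compiled programs and reuse the flag combination that worked for the closest match — can be sketched in a few lines. The feature values, flag lists, and 1-nearest-neighbour rule below are illustrative assumptions, not Milepost's actual models or training data:

```python
import math

def predict_flags(features, training_set):
    """Predict a promising combination of optimization flags for a new
    program by copying the flags of the nearest training program in
    feature space (a simple transductive 1-NN scheme; the feature names
    and flag sets are hypothetical, not Milepost's real data)."""
    best_flags, best_dist = None, math.inf
    for known_features, known_flags in training_set:
        dist = math.dist(features, known_features)
        if dist < best_dist:
            best_dist, best_flags = dist, known_flags
    return best_flags

# Hypothetical training data: (static program features,
# flag combination that performed well for that program).
training = [
    ((12.0, 3.0, 0.4), ["-O3", "-funroll-loops"]),
    ((2.0, 9.0, 0.9), ["-Os"]),
]
print(predict_flags((11.0, 4.0, 0.5), training))  # nearest: first entry
```

    In the real system the features are extracted by compiler plugins and the training data comes from the shared repository; the sketch only shows the prediction step.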

    Congressional districting using a TSP-based genetic algorithm

    Abstract. The drawing of congressional districts by legislative bodies in the United States creates a great deal of controversy each decade, as political parties and special interest groups attempt to divide states into districts beneficial to their candidates. The genetic algorithm presented in this paper attempts to find a set of compact and contiguous congressional districts of approximately equal population. It uses an encoding and genetic operators originally developed for the Traveling Salesman Problem (TSP). The encoding forces near equality of district population, while the fitness function promotes district contiguity and compactness. A post-processing step further refines district population equality. Results are provided for three states (North Carolina, South Carolina, and Iowa) using 2000 census data.

    1 Problem History. The United States Congress consists of two houses, the Senate (containing two members from each of the fifty states) and the House of Representatives. The House of Representatives has 435 members, and each state is apportioned a congressional delegation in proportion to its population as determined by a national, decennial census. Each state (usually the state’s legislative body) is responsible for partitioning itself into a number of districts (a districting plan) equal to its apportionment. Through years of case law, the courts have outlined several requirements for the drawing of districts [1]:
    – The districts must be contiguous.
    – The districts must be of equal population, following the “one-man one-vote” principle
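    The TSP-style encoding can be illustrated with a toy sketch: a chromosome orders the population units as a tour, and districts are contiguous segments cut from that tour so that their populations come out near-equal. The unit populations, cut rule, and function name below are hypothetical simplifications, not the paper's exact method:

```python
def split_tour(tour_pops, k):
    """Cut a tour (a TSP-style permutation of population units, given
    here by the population of each unit along the tour) into k
    consecutive segments of near-equal population. A district is a
    contiguous run of units along the chromosome's ordering."""
    total = sum(tour_pops)
    target = total / k
    districts, current, acc = [], [], 0.0
    for i, pop in enumerate(tour_pops):
        current.append(i)
        acc += pop
        remaining_units = len(tour_pops) - i - 1
        remaining_districts = k - len(districts) - 1
        # Close the district once it reaches the target population,
        # keeping enough units back to fill the remaining districts.
        if (acc >= target and remaining_units >= remaining_districts
                and len(districts) < k - 1):
            districts.append(current)
            current, acc = [], 0.0
    districts.append(current)
    return districts

pops = [30, 20, 25, 25, 30, 20, 25, 25]  # unit populations along the tour
print(split_tour(pops, 4))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

    In a full GA, TSP crossover and mutation operators would rearrange the tour, the fitness function would score contiguity and compactness of the resulting districts on the map, and a post-processing step would refine population equality.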

    A Study on the Evolution of Bayesian Network Graph Structures

    Abstract. Bayesian Networks (BN) are often sought as useful descriptive and predictive models for the available data. Learning algorithms that try to automatically ascertain the best BN model (graph structure) for some input data are of the greatest interest for practical reasons. In this paper we examine a number of evolutionary programming algorithms for this network induction problem. Our algorithms build on recent advances in the field and are based on selection and various kinds of mutation operators (working at both the directed acyclic graph and essential graph level). A review of related evolutionary work is also provided. We analyze and discuss the merit and computational toll of these EP algorithms on a couple of benchmark tasks. Some general conclusions about the most efficient algorithms and the most appropriate search landscapes are presented.
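    A minimal sketch of one EP-style structural mutation on a DAG — add, delete, or reverse a random arc while preserving acyclicity — might look like the following. Scoring and selection are omitted, and the operator mix is an assumption rather than the papers' exact design:

```python
import random

def is_acyclic(n, edges):
    """Check that a directed graph on vertices 0..n-1 is a DAG,
    using Kahn's algorithm (repeatedly remove zero-in-degree nodes)."""
    indeg = [0] * n
    for u, v in edges:
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n

def mutate(n, edges, rng):
    """One EP-style structural mutation: add, delete, or reverse a
    random arc, keeping the graph acyclic. A mutation that would
    create a cycle is simply rejected (the individual is returned
    unchanged)."""
    edges = set(edges)
    op = rng.choice(["add", "delete", "reverse"])
    if op == "delete" and edges:
        edges.remove(rng.choice(sorted(edges)))
    elif op == "reverse" and edges:
        u, v = rng.choice(sorted(edges))
        candidate = (edges - {(u, v)}) | {(v, u)}
        if is_acyclic(n, candidate):
            edges = candidate
    else:
        u, v = rng.sample(range(n), 2)
        candidate = edges | {(u, v)}
        if is_acyclic(n, candidate):
            edges = candidate
    return edges

rng = random.Random(0)
child = mutate(4, {(0, 1), (1, 2)}, rng)
print(is_acyclic(4, child))  # True: mutation preserves the DAG property
```

    A full EP algorithm would score each mutated structure against the data (e.g. with a decomposable scoring metric) and keep the fitter individuals across generations.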

    A review on distinct methods and approaches to perform triangulation for Bayesian networks

    Summary. Triangulation of a Bayesian network (BN) is a necessary step for performing inference efficiently, whether we use a secondary structure such as the join tree (JT) or, implicitly, other direct techniques on the network. If we focus on the first procedure, the quality of the triangulation affects the simplicity of the join tree, and therefore how quickly and easily inference proceeds. The task of obtaining an optimal triangulation (in terms of producing the minimum number of triangulation links, a.k.a. fill-ins) has been proved to be NP-hard. That is why many methods of quite distinct nature have been used to obtain triangulations as good as possible for any given network, which is especially important for large structures, i.e. networks with many variables and links. In this chapter, we introduce the problem of triangulation, situating it within the compilation process and first showing its relevance for inference, and consequently for working with Bayesian networks. After this introduction, the most popular and widely used strategies for coping with the triangulation problem are reviewed
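    One classic strategy such reviews cover is the greedy min-fill heuristic: repeatedly eliminate the vertex whose neighbourhood requires the fewest fill-in links, adding those links as you go. The adjacency-set representation and example graph below are ours, not taken from the chapter:

```python
def min_fill_order(adj):
    """Greedy min-fill triangulation: at each step eliminate the vertex
    whose neighbours need the fewest fill-in links to form a clique,
    and add those links. Returns the elimination order and the list of
    fill-ins; adding the fill-ins to the graph makes it chordal."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    order, fill_ins = [], []
    while adj:
        def fill_cost(v):
            nbrs = list(adj[v])
            # Count non-adjacent neighbour pairs (edges we would add).
            return sum(1 for i in range(len(nbrs))
                       for j in range(i + 1, len(nbrs))
                       if nbrs[j] not in adj[nbrs[i]])
        v = min(adj, key=fill_cost)
        nbrs = list(adj[v])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill_ins.append((a, b))
        for u in nbrs:
            adj[u].discard(v)
        del adj[v]
        order.append(v)
    return order, fill_ins

# A 4-cycle a-b-c-d needs exactly one chord to become triangulated.
graph = {"a": {"b", "d"}, "b": {"a", "c"},
         "c": {"b", "d"}, "d": {"c", "a"}}
order, fills = min_fill_order(graph)
print(len(fills))  # 1
```

    Min-fill is only a heuristic: on larger networks it is not guaranteed to produce the minimum number of fill-ins, which is exactly why the chapter surveys many alternative strategies.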

    Thermal Properties of Epoxy/Block-Copolymer Blends

    New ways to improve the thermal properties of epoxy systems have been an interesting topic for polymer researchers for several years. The block-copolymer-modified epoxy matrix has received a great deal of attention and is still being intensely studied. Differential scanning calorimetry (DSC) is the most commonly used technique to investigate the thermal properties of epoxy/block copolymer systems. It can generally provide information on phase behavior, miscibility, glass transition temperature, melting temperature, and related interactions between the block copolymer blocks and the epoxy matrix. In this chapter, we mainly focus on the changes in the glass transition properties of thermosets modified with block copolymers. The influence of the type of block copolymer and curing agent used, and the effects of cure time and temperature on the phase behavior and microphase separation of epoxy thermosets, are also discussed.