
    UMDA/S: An Effective Iterative Compilation Algorithm for Parameter Search

    The search process is critical for iterative compilation because the large size of the search space and the cost of evaluating candidate implementations make it infeasible to find the true optimum of the optimization parameters by brute force. Treating the problem as nonlinear global optimization, this paper introduces a new hybrid algorithm, UMDA/S: the Univariate Marginal Distribution Algorithm with Nelder-Mead simplex search, which exploits the structure of the optimization space and parameter dependencies to find near-optimal parameters. Elitist preservation, weighted estimation, and mutation are proposed to improve the performance of UMDA/S. Experimental results show that UMDA/S locates better parameters than existing static methods and search algorithms.
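    To make the idea concrete, here is a minimal, illustrative sketch of a UMDA-style estimation-of-distribution search hybridized with Nelder-Mead refinement. The objective function, parameter bounds, and all tuning constants are assumptions for demonstration, not the authors' implementation.

```python
# A hedged sketch of UMDA with Nelder-Mead refinement (UMDA/S-style).
# `evaluate`, the bounds, and all constants are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def evaluate(x):
    # Stand-in for compiling and timing a candidate; assumption only.
    return float(np.sum((x - 3.0) ** 2))

def umda_s(dim=4, pop=40, elite=10, gens=25, lo=0.0, hi=8.0, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.full(dim, (lo + hi) / 2)     # univariate marginal means
    sigma = np.full(dim, (hi - lo) / 4)  # univariate marginal spreads
    best_x, best_f = None, np.inf
    for _ in range(gens):
        X = rng.normal(mu, sigma, size=(pop, dim)).clip(lo, hi)
        if best_x is not None:
            X[0] = best_x                # elitist preservation
        f = np.array([evaluate(x) for x in X])
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_x, best_f = X[order[0]].copy(), f[order[0]]
        E = X[order[:elite]]
        w = np.linspace(1.0, 0.1, elite)  # weighted estimation:
        w /= w.sum()                      # better elites count more
        mu = w @ E                        # re-estimate marginal means
        sigma = np.sqrt(w @ (E - mu) ** 2) + 1e-3  # mutation floor
    # Nelder-Mead simplex refinement around the best sampled point.
    res = minimize(evaluate, best_x, method="Nelder-Mead")
    return res.x, res.fun

print(umda_s())
```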

    A methodology pruning the search space of six compiler transformations by addressing them together as one problem and by exploiting the hardware architecture details

    Today’s compilers offer a plethora of optimizations/transformations to choose from, and the choice, order, and parameters of transformations have a significant impact on performance. Choosing the correct order and parameters of optimizations has been a long-standing, still unsolved problem in compilation research: optimizing the sub-problems separately yields a different schedule/binary for each sub-problem, and these schedules cannot coexist, since refining one degrades the others. Researchers try to solve this problem with iterative compilation techniques, but the search space is so large that it cannot be explored even with modern supercomputers. Moreover, compiler transformations do not take hardware architecture details and data reuse into account in an efficient way. In this paper, a new iterative compilation methodology is presented which reduces the search space of six compiler transformations by addressing the above problems; the search space is reduced by many orders of magnitude, so that an efficient solution can now be found. The transformations are: loop tiling (including the number of tiling levels), loop unrolling, register allocation, scalar replacement, loop interchange, and data array layouts. The search space is reduced (a) by addressing the aforementioned transformations together as one problem rather than separately, and (b) by taking into account the hardware architecture details (e.g., cache size and associativity) and algorithm characteristics (e.g., data reuse). The proposed methodology has been evaluated against iterative compilation and the gcc/icc compilers, on both embedded and general-purpose processors; it achieves significant performance gains at orders of magnitude lower compilation time.
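    A hedged sketch of the hardware-aware pruning idea: rather than enumerating every tile size, keep only tilings whose working set fits in the cache. The cache parameters and the working-set model (a tiled matrix multiply) are illustrative assumptions, not the paper's actual cost model.

```python
# Prune the tile-size search space using cache size and associativity.
# All hardware numbers and the working-set formula are assumptions.
CACHE_BYTES = 32 * 1024   # assumed L1 data cache size
ASSOC = 8                 # assumed associativity
ELEM = 8                  # bytes per double

def working_set(ti, tj, tk):
    # Tiles of A (ti x tk), B (tk x tj), C (ti x tj) must coexist.
    return ELEM * (ti * tk + tk * tj + ti * tj)

def pruned_tile_space(n=1024, step=16):
    keep = []
    for ti in range(step, n + 1, step):
        for tj in range(step, n + 1, step):
            for tk in range(step, n + 1, step):
                # Reserve one cache way so the tiles of the three
                # arrays do not evict each other (a common heuristic).
                if working_set(ti, tj, tk) <= CACHE_BYTES * (ASSOC - 1) / ASSOC:
                    keep.append((ti, tj, tk))
    return keep

print(len(pruned_tile_space()), "candidate tilings survive the pruning")
```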

    Using Decision Tree Voting to Select a Polyhedral Model Loop Transformation

    Algorithms in fields like image manipulation, sound and signal processing, and statistics frequently employ tight loops. These loops are computationally intensive and CPU-bound, making their performance highly dependent on efficient utilization of the CPU pipeline and memory bus. Recent years have seen CPU pipelines become more and more complicated, with features such as branch prediction and speculative execution. At the same time, clock speeds have stopped their prior exponential growth due to heat dissipation issues, and multiple cores have become prevalent. These developments have made it more difficult for developers to reason about how their code executes on the CPU, which in turn makes it difficult to write performant code. An automated method to take code and optimize it for the most efficient execution would therefore be desirable. The Polyhedral Model allows the generation of alternative transformations for a loop nest that are semantically equivalent to the original. The transformations vary the degree of loop tiling, loop fusion, loop unrolling, parallelism, and vectorization. However, selecting the transformation that would most efficiently utilize the architecture remains challenging. Previous work utilizes regression models to select a transformation, using as features hardware performance counter values collected during a sample run of the program being optimized. Due to inaccuracies in the resulting regression model, the transformation selected by the model as the best often yields unsatisfactory performance. As a result, previous work resorts to a five-shot technique, which entails running the top five transformations suggested by the model and selecting the best one based on their actual runtime. However, for long-running benchmarks, five runs may take an excessive amount of time. I present a variation on the previous approach which does not need to resort to the five-shot selection process to achieve performance comparable to the best five-shot results reported in previous work. With the transformations in the search space ranked in reverse runtime order, the transformation selected by my classifier is, on average, in the 86th percentile. There are several key factors contributing to the performance improvements attained by my method: formulating the problem as a classification problem rather than a regression problem, using static features in addition to dynamic performance counter features, performing feature selection, and using ensemble methods to boost the performance of the classifier. Decision trees are constructed from pairs of features (performance counters and structural features that can be determined statically from the source code). The trees are then evaluated according to the number of benchmarks for which they select a transformation that performs better than two baseline variants: the original program and the expected runtime if a randomly selected transformation were applied. The top 20 trees vote to select a final transformation.
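    A minimal sketch of the pair-of-features decision-tree voting scheme, using scikit-learn. The feature matrix and labels below are synthetic placeholders, and training accuracy stands in for the paper's benchmark-based tree scoring; the real pipeline trains and scores on measured runtimes.

```python
# One shallow decision tree per feature pair; the top 20 trees vote.
# Data and the tree-scoring proxy are illustrative assumptions.
from itertools import combinations
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))    # 10 features per benchmark run
y = rng.integers(0, 5, size=200)  # label: best of 5 transformations

trees = []
for i, j in combinations(range(X.shape[1]), 2):
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X[:, [i, j]], y)
    # Proxy score: training accuracy. The paper instead counts the
    # benchmarks where the tree beats the original program and a
    # randomly chosen transformation.
    trees.append((clf.score(X[:, [i, j]], y), (i, j), clf))

top20 = sorted(trees, key=lambda t: t[0], reverse=True)[:20]

def vote(x):
    # Each of the top 20 trees casts a ballot for one transformation.
    ballots = [clf.predict(x[[i, j]].reshape(1, -1))[0]
               for _, (i, j), clf in top20]
    return np.bincount(ballots).argmax()

print("selected transformation:", vote(X[0]))
```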

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, where we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention, or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
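    One way to picture the proposed "toolbox" is as a queryable catalog mapping each transformation class to the objectives it can target and its effects on the resulting hardware. The sketch below is an illustrative assumption of that structure, not the paper's actual taxonomy or naming.

```python
# A hypothetical catalog of HLS transformations, queryable by objective.
# Names, objectives, and effects here are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transformation:
    name: str
    objectives: frozenset  # what the transformation can help achieve
    effects: frozenset     # its characteristic effects on the design

CATALOG = [
    Transformation("pipelining", frozenset({"throughput"}),
                   frozenset({"higher resource consumption"})),
    Transformation("memory banking", frozenset({"interface contention"}),
                   frozenset({"more on-chip memory"})),
    Transformation("streaming dataflow", frozenset({"parallelism"}),
                   frozenset({"FIFO buffers between stages"})),
]

def suggest(objective):
    # Return the transformations that target a given objective.
    return [t.name for t in CATALOG if objective in t.objectives]

print(suggest("interface contention"))  # -> ['memory banking']
```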

    Iterative Compilation and Performance Prediction for Numerical Applications

    As the current rate of improvement in processor performance far exceeds that of memory performance, memory latency is the dominant overhead in many performance-critical applications. In many cases, automatic compiler-based approaches to improving memory performance are limited, and programmers frequently resort to manual optimisation techniques. However, this process is tedious and time-consuming. Furthermore, a diverse range of rapidly evolving hardware makes the optimisation process even more complex. It is often hard to predict the potential benefits of different optimisations, and there are no simple criteria for stopping, i.e., for deciding when optimal memory performance has been achieved or sufficiently approached. This thesis presents a platform-independent optimisation approach for numerical applications based on iterative feedback-directed program restructuring, using a new, reasonably fast and accurate performance prediction technique to guide optimisations. New strategies for searching the optimisation space by means of profiling, to find the best possible program variant, have been developed. These strategies have been evaluated using a range of kernels and programs on different platforms and operating systems. A significant performance improvement has been achieved with the new approaches when compared to state-of-the-art native static and platform-specific feedback-directed compilers.
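    A minimal sketch of the overall loop: a cheap performance predictor ranks candidate program variants, only promising ones are actually profiled, and the search stops once the predicted benefit becomes negligible. The predictor, the variants, and the stopping threshold are illustrative assumptions, not the thesis's models.

```python
# Feedback-directed search guided by a cheap performance predictor.
# The cost model and "runtimes" below are made-up stand-ins.
def search(variants, predict, measure, tol=0.02):
    """Profile only the variants the model considers promising; stop
    once the predicted gain over the best measured time is small."""
    best_variant, best_time = None, float("inf")
    for v in sorted(variants, key=predict):  # best predicted first
        if best_variant is not None and predict(v) > (1 - tol) * best_time:
            break          # predicted benefit too small: stop optimising
        t = measure(v)     # expensive: compile and profile this variant
        if t < best_time:
            best_variant, best_time = v, t
    return best_variant, best_time

# Toy usage: variants are tiling factors, prediction is a rough model.
variants = [1, 2, 4, 8, 16, 32]
predict = lambda tile: 100 / tile + tile        # assumed cost model
measure = lambda tile: 100 / tile + 1.2 * tile  # assumed real runtime
print(search(variants, predict, measure))
```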

    Automatic scheduling of image processing pipelines


    Model-driven Code Optimization

    Although code optimizations have been applied by compilers for over 40 years, much of the research has been devoted to the development of particular optimizations. Certain problems with the application of optimizations have yet to be addressed, including when, where, and in what order to apply optimizations to get the most benefit. Several developments make these problems pressing; for example, cost-sensitive embedded systems are widely used, where any performance improvement from applying optimizations can help reduce cost. Although several approaches have been proposed for handling some of these issues, there is no systematic way to address the problems. This dissertation presents a novel model-based framework for effectively applying optimizations. The goal of the framework is to determine optimization properties and use these properties to drive the application of optimizations. This dissertation describes three framework instances: FPSO for predicting the profitability of scalar optimizations; FPLO for predicting the profitability of loop optimizations; and FIO for determining the interaction property. Based on the profitability and interaction properties, compilers can selectively apply only beneficial optimizations and determine code-specific optimization sequences to get the most benefit. We implemented the framework instances and performed experiments to demonstrate their effectiveness and efficiency. On average, FPSO and FPLO accurately predict profitability 90% of the time. Compared with a heuristic approach for selectively applying optimizations, our model-driven approach achieves similar or better performance improvement without tuning the parameters the heuristic approach requires. Compared with an empirical approach that experimentally chooses a good order in which to apply optimizations, our model-driven approach finds similarly good sequences with up to 43 times compile-time savings. This dissertation demonstrates that analytic models can be used to address the effective application of optimizations. Our model-driven approach is practical and scalable. With model-driven optimizations, compilers can produce higher quality code in less time than is possible with current approaches.
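    A hedged sketch of profitability-driven selection in the spirit of the framework: a model scores each optimization's predicted profit on a code region, and only predicted-beneficial optimizations are applied, in predicted-profit order. The profit numbers and optimization names are made up for illustration, not FPSO/FPLO's actual predictions.

```python
# Selective, model-driven optimization planning. The profit model is a
# hypothetical stand-in for an analytic profitability predictor.
def plan(region, optimizations, profit_model):
    """Return a code-specific optimization sequence: only optimizations
    the model predicts to be profitable, most profitable first."""
    scored = [(profit_model(region, o), o) for o in optimizations]
    return [o for p, o in sorted(scored, reverse=True) if p > 0]

# Toy usage with a made-up model: profit as percent predicted speedup.
model = lambda region, opt: {"PRE": 4.0, "LICM": 2.5,
                             "unrolling": -1.0}.get(opt, 0.0)
print(plan("hot_loop", ["unrolling", "PRE", "LICM"], model))
# -> ['PRE', 'LICM']  (unrolling predicted unprofitable, so skipped)
```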