
    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, a fine-grained classification of the different approaches and, finally, the influential papers of the field.
    Comment: version 5.0 (updated September 2018). Preprint version of our accepted journal article at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in the subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
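
    The phase-ordering problem the survey covers can be made concrete with a tiny iterative-search loop. The sketch below is not taken from the survey: it enumerates orderings of a handful of assumed LLVM passes with `opt`, times the resulting binary, and keeps the fastest ordering; ML-based approaches replace this exhaustive loop with a learned predictor. Pass names and file paths are placeholders.

```python
# Illustrative sketch only: exhaustive phase-ordering search over a small,
# assumed set of LLVM passes. "bench.bc" and the pass names are placeholders.
import itertools
import subprocess
import time

PASSES = ["mem2reg", "gvn", "licm", "loop-unroll", "instcombine"]

def compile_with(order, src_bc="bench.bc", out="bench"):
    # Apply the passes in the chosen order, then lower the bitcode to a native binary.
    subprocess.run(["opt", f"-passes={','.join(order)}", src_bc, "-o", "opt.bc"], check=True)
    subprocess.run(["clang", "opt.bc", "-o", out], check=True)

def measure(binary="./bench"):
    start = time.perf_counter()
    subprocess.run([binary], check=True)
    return time.perf_counter() - start

best = None
for order in itertools.permutations(PASSES):
    compile_with(order)
    elapsed = measure()
    if best is None or elapsed < best[1]:
        best = (order, elapsed)
print("fastest ordering found:", best)
```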

    Exploration of Compiler Optimization Sequences Using a Hybrid Approach

    Finding a program-specific compiler optimization sequence is a challenge, due to the large number of optimizations provided by optimizing compilers. As a result, researchers have proposed design-space exploration schemes. This paper also presents a design-space exploration scheme, which aims to search for a compiler optimization sequence. Our hybrid approach relies on sequences previously generated for a set of training programs, with the purpose of finding optimizations and their order of application. In the first step, a clustering algorithm chooses the optimizations, and in the second step, a metaheuristic algorithm discovers the sequence in which the compiler will apply each optimization. We evaluate our approach using the LLVM compiler and an Intel i7 processor. The results show that we can find optimization sequences that result in target codes that, when executed on the i7 processor, outperform the standard optimization level O3, with an average improvement of 8.01% and 6.07% on the Polybench and cBench benchmark suites, respectively. In addition, our approach outperforms the method proposed by Purini and Jain, Best10, by an average improvement of 24.22% and 38.81% on the two benchmark suites.
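
    A minimal sketch of this two-step idea, under the assumption that previously generated sequences are available as lists of pass names: the first function stands in for the clustering step by keeping the most frequently used passes, and the second stands in for the metaheuristic with a simple hill climb over orderings. Both are illustrative simplifications, not the paper's actual algorithms.

```python
# Illustrative two-step sketch: frequency counting stands in for the paper's
# clustering step, and a hill climb stands in for its metaheuristic ordering step.
import random
from collections import Counter

def choose_passes(training_sequences, k=8):
    # Step 1: keep the k passes used most often in sequences previously
    # generated for the training programs.
    counts = Counter(p for seq in training_sequences for p in seq)
    return [p for p, _ in counts.most_common(k)]

def order_passes(passes, evaluate, iterations=200):
    # Step 2: hill-climb over orderings; `evaluate` compiles with the given
    # pass order (e.g. via LLVM's opt) and returns the measured runtime.
    best, best_cost = passes[:], evaluate(passes)
    for _ in range(iterations):
        candidate = best[:]
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        cost = evaluate(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```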

    COBAYN: Compiler autotuning framework using Bayesian networks

    The variety of today's architectures forces programmers to spend a great deal of time porting and tuning application codes across different platforms. Compilers themselves need additional tuning, which has considerable complexity as the standard optimization levels, usually designed for the average case and the specific target architecture, often fail to bring the best results. This article proposes COBAYN: Compiler autotuning framework using Bayesian Networks, an approach for a compiler autotuning methodology using machine learning to speed up application performance and to reduce the cost of the compiler optimization phases. The proposed framework is based on the application characterization done dynamically by using independent microarchitecture features and Bayesian networks. The article also presents an evaluation based on static analysis and hybrid feature collection approaches. In addition, the article compares Bayesian networks with several state-of-the-art machine-learning models. Experiments were carried out on an ARM embedded platform and the GCC compiler, considering two benchmark suites with 39 applications. The set of compiler configurations selected by the model (less than 7% of the search space) demonstrated an application performance speedup of up to 4.6× on Polybench (1.85× on average) and 3.1× on cBench (1.54× on average) with respect to standard optimization levels. Moreover, the comparison of the proposed technique with (i) random iterative compilation, (ii) machine learning-based iterative compilation, and (iii) noniterative predictive modeling techniques shows, on average, 1.2×, 1.37×, and 1.48× speedup, respectively. Finally, the proposed method demonstrates 4× and 3× speedup, respectively, on cBench and Polybench in terms of exploration efficiency given the same quality of the solutions generated by the random iterative compilation model.
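
    The core idea, stripped down far below what COBAYN actually does, can be sketched as learning how likely each flag is to appear in the best configuration for programs with similar dynamic features, then sampling only a small, biased slice of the search space. The flag list, the "feature cluster" abstraction, and the frequency counting below are assumptions standing in for the paper's Bayesian network.

```python
# Heavily simplified stand-in for learning which flags tend to help programs
# with similar features, then sampling candidate configurations from it.
import random
from collections import defaultdict

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer", "-finline-functions"]

def learn_flag_probabilities(training_runs):
    # training_runs: iterable of (feature_cluster, {flag: enabled in the best
    # configuration found for that program}) pairs from prior explorations.
    counts = defaultdict(lambda: {f: [0, 0] for f in FLAGS})
    for cluster, best_flags in training_runs:
        for flag in FLAGS:
            counts[cluster][flag][0] += int(best_flags.get(flag, False))
            counts[cluster][flag][1] += 1
    return {c: {f: on / total for f, (on, total) in flags.items()}
            for c, flags in counts.items()}

def sample_configurations(probabilities, cluster, n=10):
    # Draw n flag settings biased toward what worked for similar programs.
    table = probabilities.get(cluster, {f: 0.5 for f in FLAGS})
    return [[f for f in FLAGS if random.random() < table[f]] for _ in range(n)]
```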

    Internet of Things and Neural Network Based Energy Optimization and Predictive Maintenance Techniques in Heterogeneous Data Centers

    Rapid growth of cloud-based systems is accelerating the growth of data centers. Private and public cloud service providers are increasingly deploying data centers all around the world. The need for edge locations by cloud computing providers has created large demand for leasing space and power from midsize data centers in smaller cities. Midsize data centers are typically modular and heterogeneous, demanding 100% availability along with high service level agreements. Data centers account for an increasingly troublesome share of electricity consumption. Growing energy costs and environmental responsibility have placed the data center industry, particularly midsize data centers, under increasing pressure to improve operational efficiency. Power consumption is mainly due to servers and networking devices on the computing side and cooling systems on the facility side. The facility-side systems have complex interactions with each other. Static control logic, the large number of configurations, and nonlinear interdependencies create challenges in understanding and optimizing energy efficiency. Determining the optimum configuration analytically or experimentally is very challenging; however, machine learning methodologies have proven to be effective for optimizing such complex systems. In this thesis, we utilize a learning engine that learns from operationally collected data to accurately predict Power Usage Effectiveness (PUE), and we create an intelligent method to validate and test the results. We explore new techniques for designing and implementing an Internet of Things (IoT) platform to collect, store, and analyze data. First, we study using a machine learning framework to predictively detect issues in facility-side systems in a modular midsize data center. We propose ways to recognize gaps between optimal and operational values in order to identify potential issues. Second, we study using machine learning techniques to optimize power usage in facility-side systems in a modular midsize data center. We experimented with neural network controllers to further optimize the data suite cooling system's energy consumption in real time. We designed, implemented, and deployed an Internet of Things framework to collect relevant information from facility-side infrastructure. We designed flexible configuration controllers to connect all facility-side infrastructure within the data center ecosystem. We addressed resiliency by creating a redundant controls network and mission-critical alerting via an edge device. The data collected was also used to enhance service processes, which improved operational service level metrics. We observed a high impact on service metrics, with response time improving by 77% and first-time resolution increasing by 32%. Further, our experimental results show that we can predictively identify issues in the cooling systems, and anomalies in these systems can be identified 30 to 60 days ahead. We also see the potential to improve power usage efficiency in the range of 3% to 6%. In the future, more samples of issues and corrective actions can be analyzed to create a practical implementation of a neural network based controller for real-time optimization.
    Ph.D. Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136074/1/Final Dissertation Vishal Singh.pdf
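
    As a rough illustration of the prediction task described above, the sketch below fits a small feed-forward regressor to synthetic facility-side readings and predicts PUE. The feature choices, data, and model size are assumptions for demonstration only; they are not taken from the thesis.

```python
# Illustrative sketch: a small neural network regressor predicting PUE from
# assumed sensor features (outside air temp, supply air temp, chiller load, IT load).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([0, 15, 0.2, 50], [40, 27, 1.0, 500], size=(1000, 4))
# Synthetic PUE target, loosely increasing with outside temperature and chiller load.
y = 1.2 + 0.01 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.02, 1000)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:800], y[:800])
print("held-out mean absolute error:", np.abs(model.predict(X[800:]) - y[800:]).mean())
```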

    Reducing the cost of heuristic generation with machine learning

    The space of compile-time transformations and/or run-time options which can improve the performance of a given code is usually so large as to be virtually impossible to search in any practical time-frame. Thus, heuristics are leveraged which can suggest good, but not necessarily the best, configurations. Unfortunately, since such heuristics are tightly coupled to processor architecture, performance is not portable; heuristics must be tuned, traditionally manually, for each device in turn. This is extremely laborious and the result is often outdated heuristics and less effective optimisation. Ideally, to keep up with changes in hardware and run-time environments, a fast and automated method to generate heuristics is needed. Recent works have shown that machine learning can be used to produce mathematical models or rules in their place, which is automated but not necessarily fast. This thesis proposes the use of active machine learning, sequential analysis, and active feature acquisition to accelerate the training process in an automatic way, thereby tackling this timely and substantive issue. First, a demonstration of the efficiency of active learning over the previously standard supervised machine learning technique is presented in the form of an ensemble algorithm. This algorithm learns a model capable of predicting the best processing device in a heterogeneous system to use per workload size, per kernel. Active machine learning is a methodology which is sensitive to the cost of training; specifically, it is able to reduce the time taken to construct a model by predicting how much is expected to be learnt from each new training instance and then only choosing to learn from the most profitable examples. The exemplar heuristic is constructed on average 4x faster than a baseline approach, whilst maintaining comparable quality. Next, a combination of active learning and sequential analysis is presented which reduces both the number of samples per training example and the number of training examples overall. This allows for the creation of models based on noisy information, sacrificing accuracy per training instance for speed, without having a significant effect on the quality of the final product. In particular, the runtime of high-performance compute kernels is predicted from code transformations one may want to apply, using a heuristic which was generated up to 26x faster than with active learning alone. Finally, preliminary work demonstrates that an automated system can be created which optimises both the number of training examples and the features to select during training, further substantially accelerating learning in cases where each feature value that is revealed comes at some cost.
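
    The active-learning loop at the heart of this work can be sketched as follows: rather than profiling every candidate, the model repeatedly asks for the label it is least certain about. Uncertainty sampling with a random forest and the synthetic CPU-vs-GPU labels below are illustrative stand-ins, not the thesis's ensemble algorithm.

```python
# Illustrative active-learning loop: label (profile) only the point the current
# model is least certain about. Labels here are synthetic, not real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
pool_X = rng.uniform(1, 1e6, size=(500, 1))            # candidate workload sizes
pool_y = (pool_X[:, 0] > 2e5).astype(int)              # synthetic label: 1 = GPU is faster

# Seed with the two extreme workloads so both classes are represented.
labelled = [int(pool_X[:, 0].argmin()), int(pool_X[:, 0].argmax())]
for _ in range(20):                                    # 20 "expensive" profiling runs
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(pool_X[labelled], pool_y[labelled])
    uncertainty = np.abs(model.predict_proba(pool_X)[:, 1] - 0.5)
    uncertainty[labelled] = np.inf                     # never re-query a labelled point
    labelled.append(int(np.argmin(uncertainty)))       # query the most informative point
print("model trained from only", len(labelled), "profiled examples")
```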

    Enabling aggressive compiler optimization for the mobile environment

    Aggressive code optimization in the mobile environment is a difficult endeavor. Billions of users rely on mobile devices for their daily computing tasks. Yet, they mostly run poorly optimized code, under-utilizing their already limited processing and energy resources. Existing optimization approaches, like iterative compilation, perform well in other domains but fall short in the mobile environment. They either rely on representative inputs that are hard to reconstruct, or expose users to slowdowns and crashes. An ideal solution must be able to perform an optimization search by repeatedly evaluating different optimization decisions on the same input. That input should be representative of actual user usage without requiring developers to artificially create it. Finally, users should never be exposed to slow or crashing evaluations, a quite common side-effect of iterative compilation. This thesis presents a novel approach with all of the above in mind, bringing aggressive code optimization to the mobile environment. With a transparent capture mechanism, real user inputs can be stored. This mechanism is infrequently invoked and remains unnoticeable to users. A single capture is enough to enable offline, input-driven code optimization. It supports C functions as well as code regions of interactive Android applications. It allows controlling the timing and frequency of captures, bails out on imminent high-impact runtime events, and excludes some immutable data from captures. A replay-based evaluation mechanism is able to repeatedly restore a captured input while changing the underlying code. For C programs, it employs compile- and link-time strategies to work consistently despite code transformations. For Android apps, a novel mechanism was developed that is able to replay using different code types: the original Android-compiled code, interpretation, and LLVM-generated code. Additionally, it works well even in the presence of memory-shuffling security mechanisms. Capture and replay are fused into an iterative compilation system that uses offline, replay-based evaluations. Initially, real inputs are captured online, without noticeably affecting the users. For C and interactive apps, captures required on average 2ms and 15ms, respectively. Then, an optimization search is performed by repeatedly replaying the inputs using different code transformations. As this happens offline, crashing or erroneous executions do not affect the users. C programs became 29% faster using a random search, while interactive apps became 44% faster using a genetic algorithm and a novel Android backend based on LLVM. Finally, with crowd-sourcing, the offline evaluation effort was significantly accelerated; specifically, for the user with the highest workload the search was accelerated by 7 times.
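
    A minimal sketch of the offline, replay-based search loop: the capture/replay mechanism itself is the thesis's contribution and is stubbed here as re-running a binary on a stored input, while the flag pool and random search are assumed simplifications (the thesis also uses a genetic algorithm for Android apps).

```python
# Sketch of offline replay-based random search. The real capture/replay
# mechanism is stubbed out; flags and file names are placeholders.
import random
import subprocess
import time

FLAG_POOL = ["-O2", "-O3", "-funroll-loops", "-fvectorize", "-finline-functions"]

def build(flags, src="region.c", out="region"):
    subprocess.run(["clang", *flags, src, "-o", out], check=True)

def replay(binary="./region", captured_input="capture.bin"):
    # Stand-in for the replay mechanism: re-run the code on the stored input
    # and treat crashes as infinitely slow so they are simply discarded.
    start = time.perf_counter()
    result = subprocess.run([binary, captured_input])
    return time.perf_counter() - start if result.returncode == 0 else float("inf")

best_flags, best_time = None, float("inf")
for _ in range(50):                                    # offline search: users never see it
    flags = random.sample(FLAG_POOL, random.randint(1, len(FLAG_POOL)))
    build(flags)
    elapsed = replay()
    if elapsed < best_time:
        best_flags, best_time = flags, elapsed
print("best flag set:", best_flags, best_time)
```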

    Iterative optimization for the data center

    Iterative optimization is a simple but powerful approach that searches for the best possible combination of compiler optimizations for a given workload. However, each program, if not each data set, potentially favors a different combination. As a result, iterative optimization is plagued by several practical issues that prevent it from being widely used in practice: a large number of runs are required to find the best combination; the process can be data set dependent; and the exploration process incurs significant overhead that needs to be compensated for by performance benefits. Therefore, while iterative optimization has been shown to have significant performance potential, it is seldom used in production compilers. In this paper, we propose Iterative Optimization for the Data Center (IODC): we show that servers and data centers offer a context in which all of the above hurdles can be overcome. The basic idea is to spawn different combinations across workers and collect performance statistics at the master, which then evolves toward the optimum combination of compiler optimizations. IODC carefully manages costs and benefits, and is transparent to the end user. We evaluate IODC using both MapReduce and throughput compute-intensive server applications. In order to reflect the large number of users interacting with the system, we gather a very large collection of data sets (at least 1,000 and up to several million unique data sets per program), for a total storage of 10.7TB and 568 days of CPU time. We report an average performance improvement of 1.48x, and up to 2.08x, for the MapReduce applications, and 1.14x, and up to 1.39x, for the throughput compute-intensive server applications.
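
    The master/worker scheme IODC describes can be sketched roughly as below: the master hands out flag combinations, workers report measured performance, and the best combinations are mutated into the next round. The flag names, population sizes, and the stubbed evaluation function are assumptions for illustration, not details from the paper.

```python
# Very rough master/worker sketch: each worker evaluates one flag combination,
# the master keeps the best and mutates them. evaluate() is a stub.
import random
from multiprocessing import Pool

FLAGS = ["-O3", "-funroll-loops", "-ftree-vectorize", "-fipa-pta", "-floop-interchange"]

def evaluate(combination):
    # Stub: in IODC a worker would compile and run the workload with this
    # combination and report the runtime observed while serving real requests.
    return sum(len(flag) for flag in combination) / 10.0

def evolve(population, scores, keep=4):
    # Keep the best combinations and toggle one random flag in each to form children.
    ranked = [c for _, c in sorted(zip(scores, population))][:keep]
    children = [sorted(set(c) ^ {random.choice(FLAGS)}) for c in ranked for _ in range(2)]
    return ranked + children

if __name__ == "__main__":
    population = [random.sample(FLAGS, 3) for _ in range(8)]
    with Pool(4) as workers:                           # workers stand in for data-center nodes
        for _ in range(5):
            scores = workers.map(evaluate, population)
            population = evolve(population, scores)
    print("surviving combinations:", population[:4])
```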