When applying optimizations, compilers make many decisions using fixed strategies: always applying an optimization whenever it is applicable, applying optimizations in a fixed order, and assuming a fixed configuration for parameters such as tile size and loop unrolling factor. In this paper, we present a framework that enables these decisions to be made by predicting the impact of an optimization, taking into account both resources and code context. The framework consists of optimization models, code models, and resource models, which are integrated to predict the impact of applying a set of optimizations. We focus on cache performance and present an instance of the framework for cache. Since most opportunities for cache improvement come from loop optimizations, we describe code, optimization, and cache models tailored to predict the benefit of optimizations for data locality. Experimentally, we demonstrate that always applying an optimization when it is safe can degrade performance. We then show the improvement gained by applying an optimization only when our framework predicts it will be beneficial. The accuracy of our framework ranges from 82% to 100%. We also show that our framework can choose the most beneficial optimization when several optimizations are applicable to a loop nest. Lastly, we show that the framework can be used to combine optimizations on a loop nest. The framework is general and can be applied to other problems, such as determining the best order of optimizations for a code segment, by providing an objective function for use with search techniques.
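To make the decision strategy concrete, the following is a minimal sketch of the "apply only when predicted beneficial" idea for loop tiling. The toy cache-miss model, the `LoopNest` representation, and all parameter values are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical sketch: apply a loop optimization only when a simple cache
# model predicts it reduces misses, instead of always applying it.
from dataclasses import dataclass

@dataclass
class LoopNest:
    iterations: int      # traversals of the data by the nest
    footprint: int       # bytes touched per traversal
    reuse_distance: int  # bytes accessed between reuses of the same datum

CACHE_SIZE = 32 * 1024   # assumed 32 KiB data cache
LINE_SIZE = 64           # assumed 64-byte cache lines

def predicted_misses(nest: LoopNest) -> float:
    """Toy model: data reused within the cache size hits after the first
    touch; otherwise every line misses on each traversal."""
    lines = nest.footprint / LINE_SIZE
    if nest.reuse_distance <= CACHE_SIZE:
        return lines                       # cold misses only
    return lines * nest.iterations         # miss on every reuse

def tile(nest: LoopNest, tile_bytes: int) -> LoopNest:
    """Model tiling's effect: the reuse distance shrinks to the tile size."""
    return LoopNest(nest.iterations, nest.footprint,
                    min(nest.reuse_distance, tile_bytes))

def apply_if_beneficial(nest: LoopNest, tile_bytes: int):
    """Apply tiling only when the model predicts strictly fewer misses."""
    tiled = tile(nest, tile_bytes)
    if predicted_misses(tiled) < predicted_misses(nest):
        return tiled, True
    return nest, False

# A nest whose reuse distance exceeds the cache is predicted to benefit...
hot = LoopNest(iterations=100, footprint=1 << 20, reuse_distance=1 << 20)
_, applied = apply_if_beneficial(hot, tile_bytes=16 * 1024)
print(applied)   # True: tiling is predicted to cut misses

# ...while one whose working set already fits sees no predicted benefit,
# so the (possibly harmful) transformation is skipped.
cold = LoopNest(iterations=100, footprint=8 * 1024, reuse_distance=8 * 1024)
_, applied = apply_if_beneficial(cold, tile_bytes=16 * 1024)
print(applied)   # False: always-apply would be wasted effort
```

The same predicate could serve as the objective function mentioned above: a search over optimization orders would compare `predicted_misses` across candidate sequences rather than making a single yes/no decision.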