A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine-learning-based
approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken so
far, the obtained results, the fine-grained classification among different
approaches and, finally, the influential papers of the field.
Comment: version 5.0 (updated September 2018) - Preprint version for our
accepted journal paper @ ACM CSUR 2018 (42 pages) - This survey will be updated
quarterly here (send me your newly published papers to be added in the
subsequent version). History: Received November 2016; Revised August 2017;
Revised February 2018; Accepted March 2018
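The optimization-selection problem the survey describes can be illustrated with a small sketch: a genetic algorithm searching over a bitvector of compiler passes. Everything here is a hypothetical stand-in, not any tool from the survey; in particular, the pass names and the fitness function (which a real autotuner would replace with compile-and-measure runs of a benchmark) are invented for illustration.

```python
import random

# Hypothetical sketch of GA-based optimization selection. PASSES and
# fitness() are illustrative stand-ins; a real autotuner would compile
# the program with the chosen flags and time it.
PASSES = ["inline", "unroll", "vectorize", "licm", "gvn", "dce"]

def fitness(flags):
    # Toy surrogate for measured speedup: rewards enabled passes, plus an
    # invented bonus when "unroll" and "vectorize" are enabled together,
    # mimicking the pass-interaction effects the survey discusses.
    score = sum(flags)
    if flags[1] and flags[2]:
        score += 2
    return score

def mutate(flags, rate=0.2):
    # flip each bit independently with probability `rate`
    return [f ^ (random.random() < rate) for f in flags]

def crossover(a, b):
    # single-point crossover of two flag vectors
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def search(pop_size=20, generations=30):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in PASSES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the best half
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = search()
print([name for name, on in zip(PASSES, best) if on])
```

The same chromosome could encode a pass *ordering* instead of a selection, which is how GA-style approaches attack the phase-ordering problem the survey also covers.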
Decoder-in-the-Loop: Genetic Optimization-based LDPC Code Design
LDPC code design tools typically rely on asymptotic code behavior and are
affected by an unavoidable performance degradation due to model imperfections
in the short length regime. We propose an LDPC code design scheme based on an
evolutionary algorithm, the Genetic Algorithm (GenAlg), implementing a
"decoder-in-the-loop" concept. It inherently takes into consideration the
channel, code length and the number of iterations while optimizing the
error-rate of the actual decoder hardware architecture. We construct short
length LDPC codes (i.e., the parity-check matrix) with error-rate performance
comparable to, or even outperforming, that of well-designed standardized short
length LDPC codes over both AWGN and Rayleigh fading channels. Our proposed
algorithm can be used to design LDPC codes with special graph structures (e.g.,
accumulator-based codes) to facilitate the encoding step, or to satisfy any
other practical requirement. Moreover, GenAlg can be used to design LDPC codes
with the aim of reducing decoding latency and complexity, leading to coding
gains at the target BLER over both AWGN and
Rayleigh fading channels when compared to state-of-the-art short
LDPC codes. Also, we analyze what can be learned from the resulting codes and,
as such, the GenAlg particularly highlights design paradigms of short length
LDPC codes (e.g., codes with degree-1 variable nodes obtain very good results).
Comment: in IEEE Access, 201
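The "decoder-in-the-loop" idea can be sketched as follows. This is not the authors' code: the code parameters, the simple bit-flipping decoder, and the BSC channel below are stand-ins chosen for brevity, but the structure is the same — each candidate parity-check matrix is scored by the Monte-Carlo error rate of an actual decoder, so the channel, code length, and iteration count are folded into the fitness directly.

```python
import numpy as np

# Illustrative "decoder-in-the-loop" GA sketch (not the paper's GenAlg):
# candidate parity-check matrices H are scored by the simulated block
# error rate of a real decoder, here a bit-flipping decoder on a BSC.
rng = np.random.default_rng(1)
N, M = 20, 10            # code length and number of checks (illustrative)
TRIALS, P_FLIP = 200, 0.03

def bitflip_decode(H, y, iters=10):
    x = y.copy()
    for _ in range(iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            break
        # flip the bit participating in the most unsatisfied checks
        counts = H.T @ syndrome
        x[np.argmax(counts)] ^= 1
    return x

def error_rate(H):
    # all-zero codeword is valid for any linear code, so transmit it
    # through a BSC and count decoding failures
    errors = 0
    for _ in range(TRIALS):
        noise = (rng.random(N) < P_FLIP).astype(int)
        errors += bool(bitflip_decode(H, noise).any())
    return errors / TRIALS

def mutate(H, flips=2):
    H = H.copy()
    for _ in range(flips):
        H[rng.integers(M), rng.integers(N)] ^= 1
    return H

# small population, truncation selection on the decoder's error rate
pop = [(rng.random((M, N)) < 0.2).astype(int) for _ in range(8)]
for gen in range(15):
    pop.sort(key=error_rate)
    pop = pop[:4] + [mutate(p) for p in pop[:4]]
best = min(pop, key=error_rate)
print("best block error rate:", error_rate(best))
```

Because the fitness is the decoder's own error rate, any practical constraint (e.g., an accumulator structure in H for easy encoding) can be enforced simply by restricting the mutation operator, which mirrors the flexibility claimed in the abstract.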
On the design of an ECOC-compliant genetic algorithm
Genetic Algorithms (GA) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix; as a result, the considered search space is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators to take into account the theoretical properties of the ECOC framework, reducing the search space and allowing the algorithm to converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. Furthermore, the analysis of the results, in terms of performance, code length and number of Support Vectors, shows that the optimization process is able to find very efficient codes in terms of the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification performance shows that the novel proposal is able to obtain similar or even better results while defining a more compact set of dichotomies and SVs compared to state-of-the-art approaches.
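The key idea — genetic operators that never leave the space of valid ECOC matrices — can be sketched with a property-checking mutation. The validity checks below encode standard ECOC requirements (distinct class codewords; no constant, duplicate, or complementary dichotomizer columns); the accept/reject repair loop is an illustrative stand-in for the paper's specialized operators, not their actual implementation.

```python
import numpy as np

# Hedged sketch of an ECOC-aware mutation operator: only mutations that
# preserve the validity properties of the coding matrix are accepted,
# so the GA searches a much smaller, feasible-only space.
rng = np.random.default_rng(0)

def valid_ecoc(M):
    # rows (class codewords) must be pairwise distinct
    if len({tuple(r) for r in M}) < M.shape[0]:
        return False
    cols = [tuple(c) for c in M.T]
    for i, c in enumerate(cols):
        # columns (dichotomizers) must be non-constant ...
        if len(set(c)) == 1:
            return False
        # ... and neither duplicated nor complemented elsewhere
        comp = tuple(1 - v for v in c)
        if c in cols[i + 1:] or comp in cols[i + 1:]:
            return False
    return True

def mutate_ecoc(M, max_tries=100):
    # flip one entry, but only accept flips that keep the matrix valid
    for _ in range(max_tries):
        cand = M.copy()
        cand[rng.integers(M.shape[0]), rng.integers(M.shape[1])] ^= 1
        if valid_ecoc(cand):
            return cand
    return M  # no valid single-flip found; return unchanged

# a small valid 4-class, 4-dichotomizer coding matrix (illustrative)
M = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 0, 0, 1],
              [1, 1, 1, 0]])
assert valid_ecoc(M)
M2 = mutate_ecoc(M)
```

A crossover operator constrained the same way (recombining columns and rejecting offspring that fail `valid_ecoc`) would complete the feasible-only GA the abstract describes.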
A Genetic Algorithm for Chromaticity Correction in Diffraction Limited Storage Rings
A multi-objective genetic algorithm is developed for optimizing
nonlinearities in diffraction limited storage rings. This algorithm determines
sextupole and octupole strengths for chromaticity correction that deliver
optimized dynamic aperture and beam lifetime. The algorithm makes use of
dominance constraints to breed desirable properties into the early generations.
The momentum aperture is optimized indirectly by constraining the chromatic
tune footprint and optimizing the off-energy dynamic aperture. The result is an
effective and computationally efficient technique for correcting chromaticity
in a storage ring while maintaining optimal dynamic aperture and beam lifetime.
This framework was developed for the Swiss Light Source (SLS) upgrade project.
Comment: 12 pages, 14 figures
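The dominance-constraint mechanism can be sketched with the standard constrained-dominance rule for multi-objective GAs: a feasible solution always dominates an infeasible one, a less-violating infeasible solution dominates a more-violating one, and ordinary Pareto dominance applies among feasible solutions. This is an illustrative sketch of the general rule, not the SLS codebase; the objective tuples are hypothetical stand-ins (e.g., negated dynamic aperture and negated lifetime, both minimized), and the constraint violation would come from the chromatic tune footprint.

```python
# Minimal sketch of constrained dominance for breeding desirable
# properties (e.g., a bounded chromatic tune footprint) into early
# generations of a multi-objective GA. Objectives are minimized;
# viol_* >= 0 measures constraint violation (0 means feasible).

def pareto_dominates(a, b):
    # a, b: tuples of objectives to minimize
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def constrained_dominates(a, b, viol_a, viol_b):
    if viol_a == 0 and viol_b > 0:
        return True                    # feasible beats infeasible
    if viol_a > 0 and viol_b > 0:
        return viol_a < viol_b         # smaller violation wins
    if viol_a > 0 and viol_b == 0:
        return False
    return pareto_dominates(a, b)      # both feasible: Pareto rule

# Example: A is feasible, B violates the tune-footprint constraint,
# so A dominates even though its objective values are worse.
print(constrained_dominates((-1.0, -2.0), (-3.0, -4.0),
                            viol_a=0.0, viol_b=0.5))
```

Selecting parents with this comparator steers early generations toward the feasible region before the usual Pareto sorting takes over, which is the effect the abstract attributes to its dominance constraints.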