5 research outputs found

    MLGOPerf: An ML Guided Inliner to Optimize Performance

    For the past 25 years, Machine Learning has been applied extensively to the compiler space, notably to the pass-selection and phase-ordering problems. However, few such works have been upstreamed into state-of-the-art compilers, i.e., LLVM, so that they integrate seamlessly into the compiler's optimization pipeline and can be readily deployed by users. MLGO was among the first such projects, and it strives only to reduce the code size of a binary with an ML-based inliner trained using Reinforcement Learning. This paper presents MLGOPerf, the first end-to-end framework capable of optimizing performance using LLVM's ML-Inliner. It employs a secondary ML model to generate the rewards used for training a retargeted Reinforcement Learning agent, previously used as the primary model by MLGO. The secondary model predicts the post-inlining speedup of the function under analysis, which enables a fast training framework for the primary model that would otherwise be impractical. Experimental results show that MLGOPerf gains up to 1.8% and 2.2% with respect to LLVM's optimization at -O3 when trained for performance on the SPEC CPU2006 and Cbench benchmarks, respectively. Furthermore, the proposed approach provides up to 26% more opportunities to autotune code regions in our benchmarks, which translates into an additional 3.7% speedup.
    Comment: Version 2: Added the missing Table 6. The short version of this work is accepted at ACM/IEEE CASES 2022.
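
    To make the mechanism concrete, below is a minimal, hypothetical sketch of the training loop the abstract describes: a secondary speedup-predictor model supplies the reward signal for the primary RL inlining agent, so no binary needs to be compiled and benchmarked per training step. The class names, feature vectors, and update rules are illustrative assumptions, not the actual MLGO/LLVM interfaces.

        import random

        class SpeedupPredictor:
            # Secondary model: predicts the post-inlining speedup of a function.
            # A trained regressor would go here; this stub is a stand-in.
            def predict(self, features):
                return 1.0 + random.uniform(-0.02, 0.05)

        class InlinePolicy:
            # Primary RL agent: decides inline / don't inline at each call site.
            def decide(self, features):
                return random.random() < 0.5  # untrained stand-in policy

            def update(self, features, action, reward):
                pass  # a policy-gradient or Q-learning step would go here

        def train(policy, predictor, call_sites, epochs=10):
            for _ in range(epochs):
                for features in call_sites:
                    action = policy.decide(features)
                    # The reward comes from the predictor rather than from
                    # compiling and running the binary; this substitution is
                    # what makes training fast enough to be practical.
                    reward = predictor.predict(features) if action else 1.0
                    policy.update(features, action, reward)

        train(InlinePolicy(), SpeedupPredictor(), call_sites=[[0.3, 1.2], [0.9, 0.1]])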

    Large Language Models for Compiler Optimization

    We explore the novel application of Large Language Models to code optimization. We present a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size. The model takes unoptimized assembly as input and outputs a list of compiler options that best optimize the program. Crucially, during training we ask the model to predict the instruction counts before and after optimization, as well as the optimized code itself. These auxiliary learning tasks significantly improve the model's optimization performance and deepen its understanding of the code. We evaluate on a large suite of test programs. Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler, outperforming two state-of-the-art baselines that require thousands of compilations. Furthermore, the model shows surprisingly strong code reasoning abilities, generating compilable code 91% of the time and perfectly emulating the output of the compiler 70% of the time.
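
    As a rough illustration of the setup described above, the sketch below lays out one plausible training-sample format: unoptimized IR as the model input, the pass list as the primary target, and the instruction counts plus optimized code serialized into the same sequence as auxiliary targets. The field names, tags, and make_prompt helper are assumptions for illustration, not the paper's actual data format.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Sample:
            unoptimized_ir: str   # model input: LLVM assembly before optimization
            pass_list: List[str]  # primary target: options that best shrink the code
            count_before: int     # auxiliary target: instruction count of the input
            count_after: int      # auxiliary target: instruction count of the output
            optimized_ir: str     # auxiliary target: the optimized code itself

        def make_prompt(s: Sample) -> str:
            # Serialize everything into one flat sequence so the model learns
            # the auxiliary predictions alongside the pass list.
            return (
                f"[ir]\n{s.unoptimized_ir}\n"
                f"[counts] before={s.count_before} after={s.count_after}\n"
                f"[passes] {' '.join(s.pass_list)}\n"
                f"[optimized]\n{s.optimized_ir}\n"
            )

        ir = "define i32 @f() { ret i32 0 }"
        print(make_prompt(Sample(ir, ["-Oz"], 2, 1, ir)))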

    A Survey on Approaches of Motion Mode Recognition Using Sensors


    Inertial Sensors and Their Applications

    Due to the universal presence of motion, vibration, and shock, inertial motion sensors can be applied in various contexts. The development of microelectromechanical systems (MEMS) technology has opened up many new consumer and industrial applications for accelerometers and gyroscopes. The diversity of applications imposes different requirements on inertial sensors in terms of accuracy, size, power consumption, and cost, which makes it challenging to choose the sensors best suited to a particular application. In addition, developing signal processing algorithms for inertial sensor data requires an understanding of the physical principles of both the motion being measured and the sensors' operation. This chapter aims to help the system designer understand and manage these challenges. The principles of operation of accelerometers and gyroscopes are explained, with examples of different applications that use inertial sensor data as input. In particular, detailed examples of signal processing algorithms for pedestrian navigation and motion classification are given.
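
    To give a flavor of the signal-processing examples the chapter discusses, here is a toy pedestrian step counter over 3-axis accelerometer samples. The threshold, debounce gap, and sample data are illustrative assumptions, not values taken from the chapter.

        import math

        def magnitude(sample):
            # Euclidean norm of one (ax, ay, az) accelerometer reading in m/s^2.
            ax, ay, az = sample
            return math.sqrt(ax * ax + ay * ay + az * az)

        def count_steps(samples, threshold=10.5, min_gap=15):
            # Count threshold crossings of the acceleration magnitude.
            # threshold: magnitude above which a step peak is assumed
            #            (roughly 1 g plus a margin; an assumed value).
            # min_gap:   minimum samples between steps, to debounce one peak.
            steps, last = 0, -min_gap
            for i, s in enumerate(samples):
                if magnitude(s) > threshold and i - last >= min_gap:
                    steps += 1
                    last = i
            return steps

        # Example: a flat ~1 g signal with two well-separated peaks -> 2 steps.
        data = [(0.0, 0.0, 9.8)] * 40
        data[5] = data[30] = (2.0, 2.0, 11.0)
        print(count_steps(data))  # -> 2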