
    Lost in translation: Exposing hidden compiler optimization opportunities

    Existing iterative compilation and machine-learning-based optimization techniques have proven very successful in achieving better optimizations than the standard optimization levels of a compiler. However, they were not engineered to support the tuning of a compiler's optimizer as part of the compiler's daily development cycle. In this paper, we first establish the required properties that a technique must exhibit to enable such tuning. We then introduce an enhancement to the classic nightly routine testing of compilers which exhibits all the required properties and is thus capable of driving the improvement and tuning of the compiler's common optimizer. This is achieved by leveraging resource usage and compilation information collected while systematically exploiting prefixes of the transformations applied at standard optimization levels. Experimental evaluation using the LLVM v6.0.1 compiler demonstrated that the new approach was able to reveal hidden cross-architecture and architecture-dependent potential optimizations on two popular processors: the Intel i5-6300U and the Arm Cortex-A53-based Broadcom BCM2837 used in the Raspberry Pi 3B+. As a case study, we demonstrate how the insights from our approach enabled us to identify and remove a significant shortcoming of the CFG simplification pass of the LLVM v6.0.1 compiler. Comment: 31 pages, 7 figures, 2 tables. arXiv admin note: text overlap with arXiv:1802.0984
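
    As a rough illustration of the prefix-exploration idea, the sketch below builds the program with successively longer prefixes of an optimization pipeline and records a simple resource metric (binary size). It assumes a hypothetical bitcode file prog.bc and uses only an illustrative excerpt of a pass list; under the legacy pass manager of LLVM 6, the actual pass sequence of an optimization level can be dumped with opt's -debug-pass=Arguments option.

        import os, subprocess

        # Illustrative excerpt of an optimization pipeline; not the full -O3 list.
        PASSES = ["-simplifycfg", "-sroa", "-instcombine", "-licm", "-gvn"]

        def build_with_prefix(bitcode, passes):
            # Apply only the first k passes of the pipeline, then compile and
            # measure the resulting binary. 'prog.bc' is a hypothetical bitcode
            # file for the program under test.
            subprocess.run(["opt"] + passes + [bitcode, "-o", "prefix.bc"], check=True)
            subprocess.run(["clang", "prefix.bc", "-o", "prog"], check=True)
            return os.path.getsize("prog")

        for k in range(len(PASSES) + 1):
            size = build_with_prefix("prog.bc", PASSES[:k])
            print(f"{k:2d} passes -> {size} bytes")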

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply optimizations). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for the compiler optimization field, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field. Comment: version 5.0 (updated September 2018). Preprint version of the article accepted at ACM CSUR 2018 (42 pages). This survey will be updated quarterly (send newly published papers to be added in subsequent versions). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
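
    A minimal sketch of the optimization-selection problem cast as supervised learning, assuming hypothetical static program features and labels recording which flag set was measured fastest for each program; scikit-learn is used purely for illustration and is not tied to any particular surveyed approach.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical training data: one row of static features per program
        # (e.g. number of loops, number of branches, average basic-block size);
        # the label is the flag set measured to be fastest for that program.
        X = np.array([[12, 40, 5.1],
                      [ 3,  9, 2.4],
                      [25, 80, 7.8],
                      [ 7, 15, 3.0]])
        y = ["-O3", "-O2", "-O3 -funroll-loops", "-Os"]

        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        # Predict a flag set for an unseen program from its features alone,
        # avoiding a full iterative search over the optimization space.
        print(model.predict([[10, 33, 4.5]]))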

    Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation

    Missing data is a widespread problem that can affect the ability to use data to construct effective prediction systems. We investigate a common machine learning technique that can tolerate missing values, namely C4.5, to predict cost using six real-world software project databases. We analyze the predictive performance after using the k-NN missing data imputation technique, to see whether it is better to tolerate missing data or to impute missing values first and then apply the C4.5 algorithm. For the investigation, we simulated three missingness mechanisms, three missing data patterns, and five missing data percentages. We found that k-NN imputation can improve the prediction accuracy of C4.5. We also found that both C4.5 and k-NN are little affected by the missingness mechanism, but that the missing data pattern and the missing data percentage have a strong negative impact upon prediction (or imputation) accuracy, particularly when the missing data percentage exceeds 40%.
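
    A minimal sketch of the imputation-then-learn pipeline described above, using scikit-learn's KNNImputer and a CART decision tree as a stand-in for C4.5 (which has no scikit-learn implementation); the project data and column meanings are purely illustrative.

        import numpy as np
        from sklearn.impute import KNNImputer
        from sklearn.tree import DecisionTreeRegressor

        # Toy project data with missing values (np.nan); columns might be
        # size, team experience, and duration -- purely illustrative.
        X = np.array([[110.0,    3.0, np.nan],
                      [ 45.0, np.nan,    6.0],
                      [np.nan,   5.0,   14.0],
                      [ 80.0,    2.0,    9.0]])
        effort = np.array([1200.0, 400.0, 1500.0, 700.0])

        # Step 1: impute each missing value from the k nearest rows.
        X_imputed = KNNImputer(n_neighbors=2).fit_transform(X)

        # Step 2: fit the tree learner on the imputed data and predict cost.
        tree = DecisionTreeRegressor(max_depth=3).fit(X_imputed, effort)
        print(tree.predict(X_imputed[:1]))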

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities which otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while reducing energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continue to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques on prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
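
    As a toy illustration of the proactive, prediction-driven DVFS idea mentioned above, the sketch below predicts the next interval's CPU load with an exponentially weighted moving average and selects a frequency level ahead of time; the levels and thresholds are hypothetical and not tied to any real governor.

        # Toy proactive DVFS policy: predict the next interval's load and set
        # the frequency before the load arrives, rather than reacting to it.
        LEVELS_MHZ = [600, 1200, 1800]   # hypothetical P-states
        ALPHA = 0.5                      # EWMA smoothing factor

        def choose_level(predicted_load):
            if predicted_load < 0.3:
                return LEVELS_MHZ[0]
            if predicted_load < 0.7:
                return LEVELS_MHZ[1]
            return LEVELS_MHZ[2]

        predicted = 0.0
        for observed_load in [0.10, 0.25, 0.60, 0.85, 0.40]:   # measured utilisation
            predicted = ALPHA * observed_load + (1 - ALPHA) * predicted
            print(f"observed {observed_load:.2f} -> predict {predicted:.2f} "
                  f"-> set {choose_level(predicted)} MHz for next interval")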

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. To ensure high-quality software, it is required to test the software. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed to take care of these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers. The Genetic Algorithm (GA) is one such form of evolutionary algorithm. In this research paper, we present a survey of GA approaches for addressing the various issues encountered during software testing. Comment: 13 pages
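
    A minimal sketch of GA-based test data generation, assuming a toy program under test and a branch-coverage fitness function; the selection, crossover, and mutation operators are deliberately simplistic and only illustrate the general shape of such approaches.

        import random

        def program_under_test(a, b):
            # Toy function with three branch conditions we want test inputs to reach.
            covered = set()
            if a > 100:
                covered.add("a_large")
            if b % 7 == 0:
                covered.add("b_mult7")
            if a > 100 and b % 7 == 0:
                covered.add("both")
            return covered

        def fitness(individual):
            return len(program_under_test(*individual))   # branches covered

        def evolve(pop_size=20, generations=30):
            pop = [(random.randint(0, 200), random.randint(0, 200)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]             # simple truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    p1, p2 = random.sample(parents, 2)
                    child = (p1[0], p2[1])                 # one-point crossover
                    if random.random() < 0.2:              # small mutation
                        child = (child[0] + random.randint(-5, 5), child[1])
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print("best test input:", best, "covers", program_under_test(*best))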

    Methods of Technical Prognostics Applicable to Embedded Systems

    The main aim of the thesis is to provide a comprehensive overview of technical prognostics, which is applied in condition-based maintenance built on continuous device monitoring and estimation of a system's level of degradation or its remaining useful life, especially in the field of complex equipment and machinery. Technical diagnostics is by now fairly well mapped and deployed in real systems; technical prognostics, by contrast, is still an evolving discipline that lacks a larger number of real applications, and not all of its methods are sufficiently accurate and applicable to embedded systems. The thesis provides an overview of the basic methods usable for predicting remaining useful life, and describes metrics by which the individual approaches can be compared, both in terms of accuracy and in terms of computational cost. One core contribution is a recommendation and procedure for selecting a suitable prognostic method with regard to the prognostic criteria. Another core contribution is the presentation of particle filtering, suitable for model-based prognostics, together with a verification of its implementation and a comparison of approaches. The main contribution is a case study on the very topical problem of Li-Ion battery prognostics under continuous monitoring. The case study demonstrates the model-based prognostic process and compares possible approaches for estimating both the time before the battery is discharged and the capacity fade, and it also examines possible influences on battery degradation. The work also includes a basic verification of the Li-Ion battery model and a design of the prognostic process. The proposed methodology is verified on real measured data.
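
    As a rough illustration of the particle-filtering approach to model-based prognostics discussed in the thesis, the sketch below tracks battery capacity under an assumed exponential fade model and extrapolates remaining useful life to an end-of-life threshold; all parameters, noise levels, and measurements are illustrative and not taken from the thesis.

        import numpy as np

        # Assumed fade model: C_k = C_{k-1} * (1 - r), with unknown per-cycle rate r.
        rng = np.random.default_rng(0)
        N = 1000
        capacity = np.full(N, 2.0)                      # particles: capacity in Ah
        rate = rng.uniform(0.001, 0.01, N)              # particles: fade rate per cycle

        measurements = [1.98, 1.96, 1.95, 1.92, 1.90]   # illustrative capacity readings
        for z in measurements:
            # Predict: propagate each particle one cycle with process noise.
            capacity *= (1 - rate)
            capacity += rng.normal(0, 0.002, N)
            # Update: weight particles by a Gaussian measurement likelihood.
            w = np.exp(-0.5 * ((z - capacity) / 0.01) ** 2)
            w /= w.sum()
            # Resample particles (and their fade rates) in proportion to the weights.
            idx = rng.choice(N, size=N, p=w)
            capacity, rate = capacity[idx], rate[idx]

        # Remaining useful life: cycles until each particle crosses the end-of-life
        # capacity, extrapolated with that particle's own fade rate.
        eol = 1.6
        cycles = np.log(eol / capacity) / np.log(1 - rate)
        print(f"estimated RUL: {np.median(cycles):.0f} cycles")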