
    The 'what' and 'how' of learning in design, invited paper

    Previous experiences hold a wealth of knowledge that we often take for granted and use unknowingly throughout our everyday working lives. In design, those experiences can play a crucial role in the success or failure of a design project, strongly influencing the quality, cost and development time of a product. But how can we empower computer-based design systems to acquire this knowledge? How would we use such systems to support design? This paper outlines some of the work that has been carried out in applying and developing Machine Learning techniques to support the design activity, particularly in utilising previous designs and learning the design process.

    Towards Automatic Learning of Heuristics for Mechanical Transformations of Procedural Code

    The current trend in next-generation exascale systems is towards integrating a wide range of specialized (co-)processors into traditional supercomputers. However, the integration of different specialized devices increases the degree of heterogeneity and the complexity of programming such systems. Given the efficiency of heterogeneous systems in terms of watts and FLOPS per unit of area, opening access to heterogeneous platforms to a wider range of users is an important problem to tackle. In order to bridge the gap between heterogeneous systems and programmers, in this paper we propose a machine learning-based approach to learn heuristics for defining the transformation strategies of a program transformation system. Our approach proposes a novel combination of reinforcement learning and classification methods to efficiently tackle the problems inherent in this type of system. Preliminary results demonstrate the suitability of the approach for easing the programmability of heterogeneous systems.
    Comment: Part of the Program Transformation for Programmability in Heterogeneous Architectures (PROHA) workshop, Barcelona, Spain, 12th March 2016, 9 pages, LaTeX
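    The reinforcement-learning side of an approach like this can be illustrated with a toy sketch. Everything below is invented for the example, not taken from the paper: the transformation names, the reward function (which pretends that applying "offload" after "fuse" suits a hypothetical heterogeneous target), and the tabular Q-learning loop that learns which short sequence of transformations to apply.

```python
import random

# Hypothetical transformation catalogue -- names are illustrative only.
TRANSFORMS = ["inline", "unroll", "fuse", "offload"]

# Toy reward: pretend the sequence ["fuse", "offload"] is the good strategy.
def reward(seq):
    return 1.0 if list(seq)[-2:] == ["fuse", "offload"] else 0.0

def q_learn(episodes=5000, alpha=0.5, eps=0.3, seq_len=2, seed=0):
    """Tabular Q-learning over short transformation sequences."""
    rng = random.Random(seed)
    Q = {}  # state (tuple of applied transforms) -> {action: value}
    for _ in range(episodes):
        state = ()
        for _ in range(seq_len):
            acts = Q.setdefault(state, {a: 0.0 for a in TRANSFORMS})
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(TRANSFORMS)
            else:
                a = max(acts, key=acts.get)
            nxt = state + (a,)
            terminal = len(nxt) == seq_len
            r = reward(nxt) if terminal else 0.0
            future = 0.0 if terminal else max(
                Q.get(nxt, {None: 0.0}).values())
            acts[a] += alpha * (r + future - acts[a])
            state = nxt
    # Greedy rollout of the learned strategy.
    state = ()
    for _ in range(seq_len):
        acts = Q.get(state, {a: 0.0 for a in TRANSFORMS})
        state = state + (max(acts, key=acts.get),)
    return list(state)
```

A classifier layered on top (as the abstract describes) would then decide, per program, whether the learned strategy is worth applying at all.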

    Portable compiler optimisation across embedded programs and microarchitectures using machine learning

    Building an optimising compiler is a difficult and time-consuming task which must be repeated for each generation of a microprocessor. As the underlying microarchitecture changes from one generation to the next, the compiler must be retuned to optimise specifically for that new system. It may take several releases of the compiler to effectively exploit a processor’s performance potential, by which time a new generation has appeared and the process starts again. We address this challenge by developing a portable optimising compiler. Our approach employs machine learning to automatically learn the best optimisations to apply to any new program on a new microarchitectural configuration. It achieves this by learning a model off-line which maps a microarchitecture description plus the hardware counters from a single run of the program to the best compiler optimisation passes. Our compiler gains 67% of the maximum speedup obtainable by an iterative compiler search using 1000 evaluations. We obtain, on average, a 1.16x speedup over the highest default optimisation level across an entire microarchitecture configuration space, achieving a 4.3x speedup in the best case. We demonstrate the robustness of this technique by applying it to an extended microarchitectural space, where we achieve comparable performance.
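    The core mapping the abstract describes (microarchitecture description plus profiling counters in, optimisation passes out) can be sketched as a nearest-neighbour predictor. This is not the paper's model; the feature layout, training rows, and pass strings below are all invented for illustration.

```python
import math

# Invented training data: (cache_kb, issue_width, l1_miss_rate,
# branch_miss_rate) -> best-known optimisation passes for that setting.
TRAINING = [
    ((32, 2, 0.20, 0.05), "-O2 -funroll-loops"),
    ((64, 4, 0.02, 0.01), "-O3 -fvectorize"),
    ((16, 1, 0.30, 0.10), "-Os"),
]

def predict_passes(features):
    """1-nearest-neighbour stand-in for the learned off-line model."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINING, key=lambda row: dist(row[0], features))[1]
```

A real model would be trained on many programs and configurations and would normalise the counter values; the point here is only the shape of the input/output mapping.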

    Automated Feature Engineering for Classification Problems

    The study of feature generation has grown over recent years and is one of the biggest challenges in Machine Learning. Entirely dependent on domain knowledge, it is an area that, if done manually, is time-consuming and not scalable. In turn, meta-learning helps to learn across different domains and can bring benefits to this area. We present an automated feature engineering approach that uses meta-learning to assist in the selection of features. Given that we generate a large number of features, we use knowledge from 100 data sets of different domains to answer the question of whether or not to create features for a data set, and also which features to use. Our experiment showed that it is possible to use meta-learning in the selection process: it can inform us whether or not we should generate the set of automatic features for a given data set, obtaining 66.96% accuracy against a baseline of 50%; statistically, our accuracy is better than the baseline in 88% of the cases. Unfortunately, we did not obtain an excellent result at the base level when using only the features that were selected individually, but at the meta level we obtain 65.52% accuracy when predicting which individual features would supposedly improve the model's performance. Given that our baseline is 39%, we statistically proved that our accuracy is better than the baseline in 93% of the cases. The results show that meta-learning can be used to aid the generation and selection of features. However, our approach can still be improved, with more precise predictions at the meta-level and better results at the base level. Our code is available at https://github.com/guifeliper/automated-feature-engineering.
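    The meta-level decision the abstract describes can be sketched in two steps: extract meta-features that characterise a data set, then feed them to a meta-model that predicts whether generated features are likely to help. The meta-features and the stand-in decision rule below are invented for illustration; the actual work trains a classifier over 100 data sets.

```python
import math
from collections import Counter

def meta_features(rows, labels):
    """Compute a few simple meta-features of a tabular data set."""
    n, d = len(rows), len(rows[0])
    counts = Counter(labels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"n_rows": n, "n_cols": d, "class_entropy": entropy}

def should_generate(meta):
    # Stand-in for the trained meta-model: a hand-written rule that
    # favours feature generation on low-dimensional, high-entropy tasks.
    return meta["n_cols"] < 20 and meta["class_entropy"] > 0.5
```

In the paper's setting, `should_generate` would be a classifier trained on (meta-features, did-generation-help) pairs collected across many data sets, rather than a fixed rule.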

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows.
    Comment: Accepted for publication at the ACM Computing Surveys (CSUR) journal, 201

    Evolutionary improvement of programs

    Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task, often achieved in part by optimizing compilers. However, modern compilers are in general not always able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist: this is usually due to the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version, in this case optimized to reduce execution time subject to a given probability distribution of inputs. We demonstrate, on eight example functions, that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate. We employ a coevolved population of test cases to encourage the preservation of the function's semantics. We exploit the original program both through seeding of the population in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ a rigorous experimental method to provide interesting and practical insights and to suggest how to address these issues.
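    Two of the ingredients above, seeding the search with the original program and using it as an oracle over test inputs, can be shown in a deliberately tiny sketch. This is not the paper's GP framework: the candidate pool is hand-written rather than evolved, and expression length stands in for measured execution time.

```python
# Test inputs act as the (coevolved, in the real framework) oracle suite.
TESTS = [0, 1, 2, 5, 10]

def reference(x):
    """Seed implementation: the program we want to improve."""
    return x * 2 + x * 2

# Hand-written stand-ins for an evolved population of variants.
CANDIDATES = ["x * 2 + x * 2", "x * 4", "x + x", "x << 2"]

def equivalent(expr):
    """Oracle check: variant must match the reference on all tests."""
    # eval() on untrusted input is unsafe; fine for this closed toy.
    return all(eval(expr, {"x": t}) == reference(t) for t in TESTS)

def best_variant():
    ok = [e for e in CANDIDATES if equivalent(e)]
    return min(ok, key=len)  # shortest equivalent variant wins
```

In the real system the candidates come from GP mutation and crossover, the test suite coevolves to catch semantic drift, and fitness is measured execution time rather than expression size.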

    Performance Improvement in Kernels by Guiding Compiler Auto-Vectorization Heuristics

    Vectorization support in hardware continues to expand and grow even as we still rely on superscalar architectures. Unfortunately, compilers are not always able to generate optimal code for the hardware; detecting and generating vectorized code is extremely complex. Programmers can use a number of tools to aid in development and tuning, but most of these tools require expert or domain-specific knowledge to use. In this work we aim to provide techniques for determining the best way to optimize certain codes, with the end goal of guiding the compiler to generate optimized code without requiring expert knowledge from the developer. Initially, we study how to combine vectorization reports with iterative compilation and code generation, and summarize our insights and patterns in how the compiler vectorizes code. Our utilities for iterative compilation and code generation can further be used by non-experts in the generation and analysis of programs. Finally, we leverage the obtained knowledge to design a Support Vector Machine classifier to predict the speedup of a program given a sequence of optimizations, with 82% of predictions accurate to within 15% in either direction.
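    The evaluation metric used at the end of that abstract (the share of speedup predictions landing within 15% of the measured value, in either direction) is easy to make concrete. The data points below are invented; only the metric's shape is taken from the text.

```python
def within_tolerance(predicted, measured, tol=0.15):
    """True if the prediction is within tol (relative) of the measurement."""
    return abs(predicted - measured) <= tol * measured

def accuracy(pairs, tol=0.15):
    """Fraction of (predicted, measured) speedup pairs within tolerance."""
    hits = sum(within_tolerance(p, m, tol) for p, m in pairs)
    return hits / len(pairs)
```

For example, with measured speedup 1.0x, predictions of 1.1x, 0.9x and 1.14x count as accurate under a 15% tolerance, while 2.0x does not.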