233,915 research outputs found

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and therefore cannot keep pace with the growing number of options. This survey summarizes and classifies recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the results obtained, a fine-grained classification of the different approaches and, finally, the influential papers of the field.

    Comment: version 5.0 (updated September 2018). Preprint version of our accepted journal paper at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in the subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
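    To make the optimization-selection problem concrete, here is a minimal sketch of the naive search loop that the surveyed machine-learning approaches aim to improve upon: randomly sampling subsets of optimization flags and keeping the fastest binary. The gcc-style command line and the particular flag list are illustrative assumptions, not part of the survey.

```python
import random
import subprocess
import time

# Illustrative candidate flags (all real gcc flags); a real autotuner
# would draw from the compiler's full optimization catalogue.
FLAGS = ["-funroll-loops", "-finline-functions", "-ftree-vectorize",
         "-fomit-frame-pointer", "-floop-interchange"]

def compile_and_time(source, flags):
    """Compile `source` with the given flag subset and time the binary."""
    subprocess.run(["gcc", "-O1", *flags, source, "-o", "a.out"], check=True)
    start = time.perf_counter()
    subprocess.run(["./a.out"], check=True)
    return time.perf_counter() - start

def random_search(source, trials=50):
    """Optimization selection by random sampling of flag subsets."""
    best_flags, best_time = [], compile_and_time(source, [])
    for _ in range(trials):
        flags = [f for f in FLAGS if random.random() < 0.5]  # random subset
        t = compile_and_time(source, flags)
        if t < best_time:
            best_flags, best_time = flags, t
    return best_flags, best_time
```

    The ML techniques covered by the survey replace this blind sampling with learned models that predict promising flag subsets (or pass orderings) from program features, cutting the number of compile-and-run evaluations.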

    Latin and Romance Elements in the Reichenau Glosses

    In today’s complex networks, timely identification and resolution of performance problems is extremely challenging. Current diagnostic practices for identifying the root causes of such problems rely primarily on human intervention and investigation. Fully automated and scalable systems capable of identifying complex problems are needed to provide rapid and accurate diagnosis. The study presented in this thesis creates the necessary scientific basis for the automatic diagnosis of network performance faults using novel intelligent inference techniques based on machine learning. We propose three new techniques for the characterisation of network soft failures and, using them, create the Intelligent Automated Network Diagnostic (IAND) system.

    First, we propose Transmission Control Protocol (TCP) trace characterisation techniques that use aggregated TCP statistics. Faulty network components embed unique artefacts in TCP packet streams by altering the normal protocol behaviour. Our technique captures such artefacts and generates a set of unique fault signatures. We first introduce Normalised Statistical Signatures (NSSs) with 460 features, a novel representation of network soft failures that provides the basis for diagnosis. Since not all 460 features contribute equally to the identification of a particular fault, we then introduce improved forms of NSSs, called EigenNSS and FisherNSS, with reduced complexity and greater class separability. Evaluations show that these signatures achieve dimensionality reduction of over 95% and detection accuracies of up to 95%, with microsecond diagnosis times.

    Second, given that NSS features depend on link properties, we introduce a technique called Link Adaptive Signature Estimation (LASE) that uses regression-based predictors to artificially generate NSSs for a large number of link parameter combinations. Using LASE, the system can be trained to suit the exact networking environment, however dynamic, with a minimal set of sample data. For extensive performance evaluation, we collected 1.2 million sample traces for 17 types of device failures on 8 TCP variants over various types of networks, using a combination of fault injection and link emulation techniques.

    Third, to automate fault identification, we propose a modular inference technique that learns from the patterns embedded in the signatures, and create Fault Classifier Modules (FCMs). FCMs use support vector machines to uniquely identify individual faults and are designed with soft class boundaries to provide generalised fault detection capability. The use of a modular design and a generic algorithm that can be trained and tuned for specific faults offers scalability, and is a key differentiator from existing systems that use a specific algorithm to detect each fault. Experimental evaluations show that FCMs can achieve detection accuracies of between 90% and 98%.

    The signatures and classifiers are used as building blocks to create the IAND system, with its two main sub-systems: IAND-k and IAND-h. IAND-k is a modular diagnostic system for the automatic detection of previously known problems using FCMs. The IAND-k system is applied to accurately detecting faulty links and diagnosing problems in end-user devices across a wide range of network types (IAND-kUD, IAND-kCC). Extensive evaluation of the systems demonstrated high overall detection accuracies of up to 96.6%, with low false positives and over 90% accuracy even in the most difficult scenarios. Here, the FCMs use supervised machine learning methods and can only detect previously known problems. To extend the diagnostic capability to previously unknown problems, we propose IAND-h, a hybrid classifier system that combines unsupervised machine learning-based clustering with supervised machine learning-based classification. Evaluation of the system shows that previously unknown faults can be detected with over 92% accuracy. The IAND-h system also offers real-time detection capability, with diagnosis times between 4 μs and 66 μs. The techniques and systems proposed during this research contribute to the state of the art of network diagnostics, focusing on scalability, automation and modularity, with evaluation results demonstrating a high degree of accuracy.
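    As an illustration of the signature-classification pipeline described above, the sketch below pairs principal-component projection (in the spirit of EigenNSS) with a soft-margin SVM classifier (in the spirit of an FCM), using scikit-learn. The data is synthetic; only the feature count (460) and class count (17) are taken from the abstract, and all parameter choices are assumptions.

```python
# Minimal sketch of an EigenNSS-style projection feeding an FCM-style
# SVM, with synthetic stand-in data (not the thesis dataset).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 460))    # 1000 signatures, 460 NSS features
y = rng.integers(0, 17, size=1000)  # 17 fault classes (synthetic labels)

# PCA keeps a handful of components (>95% dimensionality reduction);
# the SVM's C parameter controls how soft the class boundaries are.
fcm = make_pipeline(StandardScaler(),
                    PCA(n_components=20),
                    SVC(kernel="rbf", C=1.0))
fcm.fit(X[:800], y[:800])
print("held-out accuracy:", fcm.score(X[800:], y[800:]))
```

    On real signatures, the projected features would carry fault-specific structure, which is what lets the low-dimensional classifier reach the accuracies reported above; on this random data the score is only chance level.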

    Low computational complexity model reduction of power systems with preservation of physical characteristics

    A data-driven algorithm recently proposed to solve the problem of model reduction by moment matching is extended to multi-input, multi-output systems. The algorithm is exploited for the model reduction of large-scale interconnected power systems and offers, simultaneously, a low-computational-complexity approximation of the moments and the possibility of easily enforcing constraints on the reduced-order model. This advantage is used to preserve selected slow and poorly damped modes. The preservation of these modes has been shown to be important from a physical point of view and for obtaining a good overall approximation. The problem of the choice of the so-called tangential directions is also analyzed. The algorithm and the resulting reduced-order model are validated through the study of the dynamic response of the NETS-NYPS benchmark system (68-Bus, 16-Machine, 5-Area) to multiple fault scenarios.
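    For readers unfamiliar with moment matching, the sketch below shows the classical model-based construction that a data-driven algorithm of this kind approximates: projecting a state-space model onto a rational Krylov subspace so that the reduced model matches the first ν moments of the transfer function at an interpolation point s0. The random single-input, single-output test system is an illustrative assumption; the paper works with measured data, MIMO systems and tangential directions.

```python
# Classical one-sided moment matching by Krylov projection (a textbook
# construction, not the paper's data-driven algorithm).
import numpy as np

def reduce_by_moment_matching(A, B, C, s0, nu):
    """Galerkin projection onto span{(s0*I - A)^-1 B, ..., (s0*I - A)^-nu B};
    the reduced model matches the first nu moments C (s0*I - A)^-(k+1) B."""
    n = A.shape[0]
    M = np.linalg.inv(s0 * np.eye(n) - A)
    cols, v = [], B
    for _ in range(nu):
        v = M @ v
        cols.append(v)
    V, _ = np.linalg.qr(np.hstack(cols))  # orthonormal basis of the subspace
    return V.T @ A @ V, V.T @ B, C @ V    # reduced (Ar, Br, Cr)

# Example: reduce a random stable 100-state system to order 5 at s0 = 1.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 100)) - 100 * np.eye(100)  # eigenvalues in LHP
B = rng.normal(size=(100, 1))
C = rng.normal(size=(1, 100))
Ar, Br, Cr = reduce_by_moment_matching(A, B, C, s0=1.0, nu=5)
```

    The appeal of the data-driven variant described above is that it estimates the moments from time-domain responses instead of requiring (A, B, C) and the matrix inverse, and that constraints such as the preservation of selected modes can be imposed on the reduced-order model directly.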

    Noncomputable functions in the Blum-Shub-Smale model

    Working in the Blum-Shub-Smale model of computation on the real numbers, we answer several questions of Meer and Ziegler. First, we show that, for each natural number d, an oracle for the set of algebraic real numbers of degree at most d is insufficient to allow an oracle BSS machine to decide membership in the set of algebraic numbers of degree d + 1. We add a number of further results on the relative computability of these sets and their unions. Then we show that the halting problem for BSS computation is not decidable below any countable oracle set, and give a more specific condition, related to the cardinalities of the sets, necessary for relative BSS computability. Most of our results involve the technique of using as input a tuple of real numbers which is algebraically independent over both the parameters and the oracle of the machine.
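    A formal restatement of the first result may help fix the quantifiers; the notation below is ours, not the paper's.

```latex
% Notation (ours): $\mathbb{A}_{\le d}$ is the set of real algebraic
% numbers of degree at most $d$, and $\mathbb{A}_{d+1}$ the set of
% real algebraic numbers of degree $d+1$.
\begin{theorem}
  For every $d \in \mathbb{N}$, no BSS machine with oracle
  $\mathbb{A}_{\le d}$ decides membership in $\mathbb{A}_{d+1}$;
  that is, $\mathbb{A}_{d+1} \not\le_{\mathrm{BSS}} \mathbb{A}_{\le d}$.
\end{theorem}
```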

    The Parameterized Complexity of Domination-type Problems and Application to Linear Codes

    We study the parameterized complexity of domination-type problems. (σ,ρ)-domination is a general and unifying framework introduced by Telle: a set D of vertices of a graph G is (σ,ρ)-dominating if for every v ∈ D, |N(v) ∩ D| ∈ σ, and for every v ∉ D, |N(v) ∩ D| ∈ ρ. We mainly show that, for any σ and ρ, the problem of (σ,ρ)-domination is in W[2] when parameterized by the size of the dominating set. This general statement is optimal in the sense that several particular instances of (σ,ρ)-domination are W[2]-complete (e.g. Dominating Set). We also prove that (σ,ρ)-domination is in W[2] for the dual parameterization, i.e. when parameterized by the size of the dominated set. We extend this result to a class of domination-type problems which do not fall into the (σ,ρ)-domination framework, including Connected Dominating Set. We also consider problems of coding theory which are related to domination-type problems with parity constraints. In particular, we prove that the problem of the minimal distance of a linear code over F_q is in W[2] for both standard and dual parameterizations, and W[1]-hard for the dual parameterization. To prove W[2]-membership of the domination-type problems, we extend the Turing way to parameterized complexity by introducing a new kind of nondeterministic Turing machine with the ability to perform 'blind' transitions, i.e. transitions which do not depend on the content of the tapes. We prove that the corresponding problem, Short Blind Multi-Tape Non-Deterministic Turing Machine, is W[2]-complete. We believe that this new machine can be used to prove W[2]-membership of other problems, not necessarily related to domination.

    Comment: 19 pages, 2 figures
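    To make the (σ,ρ)-domination condition concrete, here is a small verifier that checks whether a given vertex set satisfies it, with σ and ρ supplied as membership predicates. It uses networkx; deciding whether such a set of bounded size exists is the hard (W[2]) problem, whereas verifying a candidate set, as done here, is easy.

```python
# Checker for the (sigma, rho)-domination condition defined above.
import networkx as nx

def is_sigma_rho_dominating(G, D, sigma, rho):
    """True iff D is (sigma, rho)-dominating in G: every v in D has
    |N(v) & D| in sigma, and every v not in D has |N(v) & D| in rho."""
    D = set(D)
    for v in G.nodes:
        k = sum(1 for u in G.neighbors(v) if u in D)  # |N(v) & D|
        if v in D:
            if not sigma(k):
                return False
        elif not rho(k):
            return False
    return True

# Classical Dominating Set is the instance sigma = N, rho = N \ {0}.
G = nx.path_graph(5)  # vertices 0-1-2-3-4
print(is_sigma_rho_dominating(G, {1, 3},
                              sigma=lambda k: True,
                              rho=lambda k: k >= 1))  # True
```

    Other choices of the predicates recover further instances from Telle's framework, e.g. σ = {0} with ρ = {0, 1, 2, ...} gives Independent Set as the dominating part.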