
    A general guide to applying machine learning to computer architecture

    The resurgence of machine learning since the late 1990s has been enabled by significant advances in computing performance and the growth of big data. The ability of these algorithms to detect complex patterns in data that are extremely difficult to identify manually helps to produce effective predictive models. Whilst computer architects have been accelerating the performance of machine learning algorithms with GPUs and custom hardware, there have been few implementations leveraging these algorithms to improve computer system performance. The work that has been conducted, however, has produced considerably promising results. The purpose of this paper is to serve as a foundation and guide for future computer architecture research seeking to make use of machine learning models to improve system efficiency. We describe a method that highlights when, why, and how to utilize machine learning models to improve system performance, and provide a relevant example showcasing the effectiveness of applying machine learning in computer architecture. We describe a process of data generation at every execution quantum and of parameter engineering. This is followed by a survey of a set of popular machine learning models. We discuss their strengths and weaknesses and evaluate implementations for the purpose of creating a workload performance predictor for different core types in an x86 processor. The predictions can then be exploited by a scheduler for heterogeneous processors to improve system throughput. The algorithms of focus are stochastic gradient descent based linear regression, decision trees, random forests, artificial neural networks, and k-nearest neighbors. This work has been supported by the European Research Council (ERC) Advanced Grant RoMoL (Grant Agreement 321253) and by the Spanish Ministry of Science and Innovation (contract TIN 2015-65316P).
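
    To make the evaluation concrete, here is a minimal, hypothetical sketch of the kind of experiment described above: fitting the five surveyed model families to predict workload performance from per-quantum execution statistics. The feature and target definitions, the synthetic data, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: train the five surveyed model families to predict a
# workload's performance on a different core type from per-quantum statistics.
# Features, target, data, and hyperparameters are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Assumed per-quantum features, e.g. IPC, cache miss rate, branch MPKI, ILP.
X = rng.random((5000, 4))
# Assumed target: performance (IPC) of the same quantum on the other core type.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SGD linear regression": make_pipeline(StandardScaler(), SGDRegressor()),
    "decision tree": DecisionTreeRegressor(max_depth=8),
    "random forest": RandomForestRegressor(n_estimators=100),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor((32, 32), max_iter=2000)),
    "k-nearest neighbors": make_pipeline(StandardScaler(),
                                         KNeighborsRegressor(n_neighbors=5)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {model.score(X_te, y_te):.3f}")
```

    A heterogeneous-processor scheduler could query such a predictor every quantum and migrate threads to the core type with the highest predicted throughput.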

    Fully supervised training of Gaussian radial basis function networks in WEKA

    Radial basis function networks are a type of feedforward network with a long history in machine learning. In spite of this, there is relatively little literature on how to train them so that accurate predictions are obtained. A common strategy is to train the hidden layer of the network using k-means clustering and the output layer using supervised learning. However, Wettschereck and Dietterich found that supervised training of hidden-layer parameters can improve predictive performance. They investigated learning center locations, local variances of the basis functions, and attribute weights in a supervised manner. This document discusses supervised training of Gaussian radial basis function networks in the WEKA machine learning software. More specifically, we discuss the RBFClassifier and RBFRegressor classes available as part of the RBFNetwork package for WEKA 3.7 and consider (a) learning of center locations and one global variance parameter, (b) learning of center locations and one local variance parameter per basis function, and (c) learning of center locations with per-attribute local variance parameters. We also consider learning attribute weights jointly with the other parameters.
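
    As a rough illustration of what fully supervised training means here, the sketch below fits center locations, one local variance per basis function (variant (b) above), and the output-layer weights jointly by gradient descent on a toy regression task. It is a minimal NumPy sketch under assumed hyperparameters, not the WEKA implementation.

```python
# Minimal NumPy sketch of a fully supervised Gaussian RBF regressor: centers,
# per-basis-function variances, and output weights are trained jointly by
# gradient descent on 0.5 * MSE. All names and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)
n = len(X)

k, lr = 10, 0.05
C = X[rng.choice(n, k), :]        # center locations, initialized from the data
log_s2 = np.zeros(k)              # per-basis-function log-variances
w, b = np.zeros(k), 0.0           # output-layer weights and bias

for _ in range(2000):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    phi = np.exp(-d2 / (2 * np.exp(log_s2)))             # Gaussian activations
    err = phi @ w + b - y                                # prediction residuals
    # Gradients of 0.5 * mean squared error w.r.t. all parameters.
    g_w = phi.T @ err / n
    g_b = err.mean()
    g_phi = err[:, None] * w[None, :]
    g_d2 = -g_phi * phi / (2 * np.exp(log_s2))
    g_C = (g_d2[:, :, None] * -2 * (X[:, None, :] - C[None, :, :])).sum(0) / n
    g_ls2 = (g_phi * phi * d2 / (2 * np.exp(log_s2))).sum(0) / n
    w -= lr * g_w; b -= lr * g_b; C -= lr * g_C; log_s2 -= lr * g_ls2

d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
phi = np.exp(-d2 / (2 * np.exp(log_s2)))
print("training MSE:", np.mean((phi @ w + b - y) ** 2))
```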

    A Simple Likelihood Method for Quasar Target Selection

    We present a new method for quasar target selection using photometric fluxes and a Bayesian probabilistic approach. For our purposes, we target quasars using Sloan Digital Sky Survey (SDSS) photometry to a magnitude limit of g=22. The efficiency and completeness of this technique are measured using the Baryon Oscillation Spectroscopic Survey (BOSS) data taken in 2010. This technique was used for the uniformly selected (CORE) sample of targets in BOSS year-one spectroscopy, to be realized in the 9th SDSS data release. When targeting at a density of 40 objects per square degree (the BOSS quasar targeting density), the efficiency of this technique in recovering z>2.2 quasars is 40%. The completeness compared to all quasars identified in BOSS data is 65%. This paper also describes possible extensions and improvements for this technique.
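
    A hedged sketch of the underlying likelihood idea follows: compare the likelihood of an object's observed fluxes under a quasar population against a star population, and convert the ratio to a posterior probability. The Gaussian error model, toy flux vectors, and prior are assumptions for illustration; the paper's construction of the likelihoods from SDSS photometry differs in detail.

```python
# Hedged sketch of likelihood-ratio target selection: sum Gaussian flux
# likelihoods over each population's model objects, then form the posterior
# P(QSO | fluxes). All numbers and names here are illustrative assumptions.
import numpy as np

def log_gauss(fluxes, model_fluxes, sigma):
    """Log-likelihood of observed fluxes given one model object's fluxes."""
    return -0.5 * np.sum(((fluxes - model_fluxes) / sigma) ** 2
                         + np.log(2 * np.pi * sigma ** 2))

def quasar_probability(fluxes, sigma, qso_models, star_models, prior_qso=0.01):
    """P(QSO | fluxes): average the likelihood over each population."""
    L_qso = np.mean([np.exp(log_gauss(fluxes, m, sigma)) for m in qso_models])
    L_star = np.mean([np.exp(log_gauss(fluxes, m, sigma)) for m in star_models])
    return prior_qso * L_qso / (prior_qso * L_qso + (1 - prior_qso) * L_star)

# Toy demonstration with made-up ugriz-like flux vectors.
rng = np.random.default_rng(1)
qso_models = rng.normal([1.0, 1.2, 1.1, 1.0, 0.9], 0.1, (50, 5))
star_models = rng.normal([0.5, 1.5, 1.8, 1.9, 2.0], 0.1, (50, 5))
obs = np.array([1.05, 1.15, 1.1, 1.0, 0.95])
print(quasar_probability(obs, sigma=0.1, qso_models=qso_models,
                         star_models=star_models))
```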

    A new approach to measure reduction intensity on cores and tools on cobbles: the Volumetric Reconstruction Method

    Knowing to what extent lithic cores have been reduced through knapping is an important step toward understanding the technological variability of lithic assemblages and disentangling the formation processes of archaeological assemblages. In addition, it is a good complement to the more developed studies of reduction intensity in retouched tools, and can provide information on raw material management or site occupation dynamics. This paper presents a new methodology for estimating the intensity of reduction in cores and tools on cobbles: the Volumetric Reconstruction Method (VRM). The method is based on a correction of the dimensions (length, width, and thickness) of each core in an assemblage. After diacritic analysis of each core, the mean flake thickness and mean platform thickness of the assemblage's flakes are used to correct the core's original dimensions. Then, based on these corrected dimensions, the volume or mass of the original blank is reconstructed using the ellipsoid volume formula. The accuracy of this method was tested experimentally, reproducing a variety of possible archaeological scenarios. The experimental results demonstrate the high inferential potential of the VRM, both in estimating the original volume or mass of the original blanks and in inferring the individual percentage of reduction for each core. The results of random resampling demonstrate the applicability of the VRM to non-size-biased archaeological contexts.
    Outline: Introduction; Methods (The Volumetric Reconstruction Method, Experimental design, Statistical procedures, Resamples); Results (Geometric formulas, Reduction strategy and size, Resampling (randomly biased record), Resampling (size bias), Measuring the effect of number of generations); Discussion and conclusion.
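
    The arithmetic at the heart of the VRM can be illustrated with a short sketch: add flake-derived corrections to the measured core dimensions, then apply the ellipsoid volume formula V = (pi/6) x length x width x thickness. Which correction is applied to which dimension follows from the diacritic analysis of each core; the mapping and the numbers below are assumptions for illustration only.

```python
# Illustrative sketch of the VRM arithmetic: correct a core's measured
# dimensions with assemblage-level flake means, then estimate the original
# blank's volume with the ellipsoid formula. The dimension-to-correction
# mapping and all measurements are hypothetical.
import math

def ellipsoid_volume(length, width, thickness):
    """Volume of an ellipsoid whose axes equal the given dimensions (mm^3)."""
    return (math.pi / 6) * length * width * thickness

def reconstruct_blank_volume(core_dims_mm, mean_flake_thickness_mm,
                             mean_platform_thickness_mm):
    """Add flake-derived corrections to each measured core dimension."""
    length, width, thickness = core_dims_mm
    corrected = (length + mean_platform_thickness_mm,
                 width + mean_flake_thickness_mm,
                 thickness + mean_flake_thickness_mm)
    return ellipsoid_volume(*corrected)

core = (60.0, 45.0, 30.0)                 # measured core dimensions in mm
v_now = ellipsoid_volume(*core)
v_orig = reconstruct_blank_volume(core, mean_flake_thickness_mm=8.0,
                                  mean_platform_thickness_mm=6.0)
print(f"estimated reduction: {100 * (1 - v_now / v_orig):.1f}%")
```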

    Data centric trust evaluation and prediction framework for IOT

    Application of trust principles in the Internet of Things (IoT) has made it possible to provide more trustworthy services among the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data centric and operate in dynamic environments, which need immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience, and knowledge, to trust estimation for data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which could be developed into a standard for the future data-driven society. The modules of the proposed framework, including data trust metric extraction, data trust aggregation, evaluation, and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
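
    As a speculative illustration of the hybrid idea, the sketch below combines an entity-centric score built from reputation, experience, and knowledge with data-centric metrics into a single data trust value. All weights, metric names, and the aggregation rule are assumptions, not the framework's actual definitions.

```python
# Speculative sketch: blend entity trust (reputation, experience, knowledge)
# with data-centric metrics into one data trust score. Weights, metrics, and
# the aggregation rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class EntityTrust:
    reputation: float   # community feedback about the entity, in [0, 1]
    experience: float   # trustor's own history with the entity, in [0, 1]
    knowledge: float    # direct observation / credentials, in [0, 1]

def entity_trust_score(e: EntityTrust, w=(0.3, 0.4, 0.3)) -> float:
    """Weighted combination of the three entity-centric components."""
    return w[0] * e.reputation + w[1] * e.experience + w[2] * e.knowledge

def data_trust_score(source: EntityTrust, freshness: float,
                     consistency: float, alpha=0.5) -> float:
    """Blend source (entity) trust with data-centric metrics.

    freshness: decays with the data item's age, in [0, 1].
    consistency: agreement with co-located sensor readings, in [0, 1].
    alpha: weight given to the entity-centric component.
    """
    data_metrics = 0.5 * freshness + 0.5 * consistency
    return alpha * entity_trust_score(source) + (1 - alpha) * data_metrics

sensor = EntityTrust(reputation=0.8, experience=0.9, knowledge=0.7)
print(f"data trust: {data_trust_score(sensor, freshness=0.95, consistency=0.6):.2f}")
```

    A data-centric consumer could act on such a score immediately, without waiting for a trust report from the end entities, which is the motivating scenario above.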