4 research outputs found

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that could not otherwise be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that would guarantee minimum levels of desired performance while reducing energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help readers interested in employing prediction in the optimization of multicore processor systems.
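    As a concrete illustration of the proactive approach the survey describes, the sketch below predicts the next interval's CPU utilisation with a simple exponentially weighted moving average and maps the prediction onto a DVFS frequency level. The frequency levels, smoothing factor, and headroom margin are hypothetical choices for illustration, not taken from any system discussed in the survey.

```python
# Hypothetical sketch: a proactive DVFS policy that predicts next-interval
# CPU utilisation (0.0 .. 1.0) and picks a frequency level ahead of time.
# Levels, smoothing factor and headroom are invented for illustration.

FREQ_LEVELS_MHZ = [600, 1000, 1400, 1800]

class ProactiveDvfs:
    def __init__(self, alpha=0.5):
        self.alpha = alpha          # smoothing factor of the predictor
        self.predicted_util = 0.5   # start from a neutral guess

    def observe(self, measured_util):
        # Learn from the interval that just finished.
        self.predicted_util = (self.alpha * measured_util
                               + (1 - self.alpha) * self.predicted_util)

    def next_frequency(self):
        # Pick the lowest level whose relative capacity covers the predicted
        # demand plus ~20% headroom, so the performance target is still met.
        target = min(1.0, self.predicted_util * 1.2)
        top = FREQ_LEVELS_MHZ[-1]
        for f in FREQ_LEVELS_MHZ:
            if f / top >= target:
                return f
        return top

gov = ProactiveDvfs()
gov.observe(0.65)                  # utilisation measured in the last interval
print(gov.next_frequency())        # frequency chosen for the next interval
```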

    Use of neural networks for branch prediction in superscalar architectures (Uso de redes neurais na previsão de desvios em arquiteturas superescalares)

    Advisor: Maurício F. Figueiredo. Master's dissertation, Universidade Federal do Paraná.
    ABSTRACT. Current commercial processors use aggressive techniques to extract instruction-level parallelism and improve performance. One of these techniques, branch prediction, is used to anticipate instruction fetch, keep a continuous instruction stream in the pipeline, and increase the chances of finding instructions that can be executed in parallel. Most branch predictors use trivial algorithms applied to behavioral information about branches stored in dynamically updated tables. A new approach has recently been investigated that replaces these trivial algorithms with neural networks, with the aim of adding intelligence to the predictors. Work on this kind of predictor is still introductory, so deeper studies are needed. This work analyzes the performance of Perceptron-based branch prediction for five proposed predictor models. The UNI model makes predictions with a single Perceptron shared by all branch instructions of a program. The TIP and END models use several Perceptrons arranged in tables indexed by branch type or branch address, respectively. The DNT and DNE models have a two-level prediction mechanism and are extensions of TIP and END, respectively. All models were evaluated under different branch history sizes (2 to 64), numbers of table lines (64 to 1024), and degrees of associativity (1 to 16) of the Perceptron table, as well as different predictor organizations: LOCAL and GLOBAL, which define where the branch history used by the Perceptrons comes from, and LG_AND and LG_OR, which combine the LOCAL and GLOBAL outputs with the corresponding logic function. The results show that the two-level predictors outperform their one-level counterparts, that increasing the number of table lines at fixed associativity improves performance, and that increasing associativity for a fixed number of lines also improves performance. The best results were obtained for floating-point programs and for forward branches. The LG_AND and LG_OR organizations do not contribute significantly to prediction accuracy; the best results come from LOCAL and GLOBAL. Overall, this work showed that using the Perceptron for branch prediction is attractive and that the results are equivalent to those obtained in related work.
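    For readers unfamiliar with the mechanism, the sketch below shows the general shape of an address-indexed Perceptron predictor with a global history register, in the spirit of the END/GLOBAL configuration described above. Table size, history length, and training threshold are arbitrary illustrative values; this is not the dissertation's exact implementation.

```python
# Illustrative address-indexed Perceptron branch predictor with a global
# history register. Parameters below are arbitrary choices for the sketch.

class PerceptronPredictor:
    def __init__(self, num_entries=1024, history_len=16):
        self.num_entries = num_entries
        self.history_len = history_len
        # One weight vector per table entry: bias + one weight per history bit.
        self.table = [[0] * (history_len + 1) for _ in range(num_entries)]
        # Global history register: +1 for taken, -1 for not taken.
        self.history = [1] * history_len
        # Training threshold (a commonly used heuristic for perceptron predictors).
        self.theta = int(1.93 * history_len + 14)

    def _index(self, pc):
        return pc % self.num_entries          # index the table by branch address

    def _output(self, pc):
        w = self.table[self._index(pc)]
        return w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))

    def predict(self, pc):
        return self._output(pc) >= 0          # True means "predict taken"

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        w = self.table[self._index(pc)]
        # Train only on a misprediction or when confidence is below the threshold.
        if (y >= 0) != taken or abs(y) <= self.theta:
            w[0] += t
            for i in range(self.history_len):
                w[i + 1] += t * self.history[i]
        # Shift the actual outcome into the global history register.
        self.history = self.history[1:] + [t]

pred = PerceptronPredictor()
print(pred.predict(0x400a1c))      # speculative direction for this branch PC
pred.update(0x400a1c, taken=True)  # train once the real outcome is known
```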

    Dynamic branch prediction using neural networks

    Dynamic branch prediction in high-performance processors is a specific instance of a general time series prediction problem that occurs in many areas of science. In contrast, most branch prediction research focuses on two-level adaptive branch prediction techniques, a very specific solution to the branch prediction problem. An alternative approach is to look to other application areas and fields for novel solutions to the problem. In this paper, we examine the application of neural networks to dynamic branch prediction. Two neural networks are considered: a learning vector quantisation (LVQ) network and a backpropagation network. We demonstrate that a neural predictor can achieve misprediction rates comparable to conventional two-level adaptive predictors and suggest that neural predictors merit further investigation.
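    To make the LVQ idea concrete, here is a minimal LVQ1-style sketch that classifies the recent global branch history by its nearest prototype and nudges that prototype after each resolved branch. Prototype counts, history length, and learning rate are assumptions for illustration and do not reproduce the predictor evaluated in the paper.

```python
# Illustrative LVQ1-style branch predictor over the global branch history.
# All parameters are invented for the sketch.

import random

class LvqBranchPredictor:
    def __init__(self, history_len=8, prototypes_per_class=4, lr=0.05):
        self.history = [1.0] * history_len     # +1 = taken, -1 = not taken
        self.lr = lr
        # Each prototype is (vector, label), where label True means "taken".
        self.prototypes = [
            ([random.uniform(-1, 1) for _ in range(history_len)], label)
            for label in (True, False)
            for _ in range(prototypes_per_class)
        ]

    def _nearest(self):
        def dist(vec):
            return sum((v - h) ** 2 for v, h in zip(vec, self.history))
        return min(self.prototypes, key=lambda p: dist(p[0]))

    def predict(self):
        return self._nearest()[1]

    def update(self, taken):
        vec, label = self._nearest()
        # LVQ1 rule: attract the winning prototype if it agrees with the
        # actual outcome, repel it otherwise.
        sign = 1.0 if label == taken else -1.0
        for i, h in enumerate(self.history):
            vec[i] += sign * self.lr * (h - vec[i])
        # Shift the actual outcome into the history register.
        self.history = self.history[1:] + [1.0 if taken else -1.0]

lvq = LvqBranchPredictor()
print(lvq.predict())    # prediction for the next branch, given current history
lvq.update(taken=True)  # train with the resolved outcome
```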

    STATIC AND DYNAMIC BRANCH PREDICTION USING NEURAL NETWORKS

    Abstract: In this short paper we investigate a new static branch prediction technique. The main idea of this technique is to use a large body of different programs (benchmarks) to identify and infer common C program behaviour. This knowledge is then used to predict new “unseen” branches belonging to new programs. The common behaviour is represented as a set of static features of branches that a neural network maps to the probability that the branch will be taken. In this way the predictor does not predict a program's behaviour based on previous executions of the same program or on program profiles, but uses knowledge gathered from other programs. We also combine the static predictor with a dynamic neural branch predictor in order to investigate how much the static predictor influences the dynamic one.
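    A minimal sketch of the static idea, assuming a logistic model over invented static features (e.g., whether the branch is backward or exits a loop): the model is trained on feature/outcome statistics gathered from other programs and then queried for unseen branches. The features, training data, and model here are hypothetical; the paper's actual feature set and network are not reproduced.

```python
# Hypothetical static branch model: static features -> probability of "taken",
# trained on data from other programs so it generalises to unseen branches.

import math

class StaticBranchModel:
    def __init__(self, num_features, lr=0.1):
        self.w = [0.0] * (num_features + 1)    # bias + one weight per feature
        self.lr = lr

    def prob_taken(self, features):
        z = self.w[0] + sum(w * f for w, f in zip(self.w[1:], features))
        return 1.0 / (1.0 + math.exp(-z))      # logistic output in (0, 1)

    def train(self, dataset, epochs=50):
        # dataset: (features, observed taken fraction) pairs collected from a
        # body of *other* programs, not from the program being predicted.
        for _ in range(epochs):
            for features, target in dataset:
                p = self.prob_taken(features)
                err = target - p
                self.w[0] += self.lr * err
                for i, f in enumerate(features):
                    self.w[i + 1] += self.lr * err * f

# Invented example features: [is_backward, is_loop_exit, compares_against_zero].
model = StaticBranchModel(num_features=3)
model.train([([1, 0, 1], 0.9), ([0, 1, 0], 0.2)])
print(round(model.prob_taken([1, 0, 1]), 2))   # estimated taken probability
```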