
    Control speculation for energy-efficient next-generation superscalar processors

    Conventional front-end designs attempt to maximize the number of "in-flight" instructions in the pipeline. However, branch mispredictions cause the processor to fetch useless instructions that are eventually squashed, increasing front-end energy and issue-queue utilization and thus wasting around 30 percent of the power dissipated by the processor. Furthermore, processor design trends lead to increasing clock frequencies by lengthening the pipeline, which puts more pressure on the branch prediction engine, since branches take longer to resolve. As next-generation high-performance processors become deeply pipelined, the amount of energy wasted on misspeculated instructions will grow. The aim of this work is to reduce the energy consumption of misspeculated instructions. We propose selective throttling, which triggers different power-aware techniques (fetch throttling, decode throttling, or disabling the selection logic) depending on the branch prediction confidence level. Results show that combining fetch-bandwidth reduction with select-logic disabling provides the best overall energy reduction and energy-delay product improvement (14 percent and 10 percent, respectively, for a processor with a 22-stage pipeline, and 16 percent and 13 percent, respectively, for a processor with a 42-stage pipeline). Peer reviewed. Postprint (published version).
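    A minimal sketch of the selective-throttling decision described above, assuming a three-level confidence estimate; the action labels and the escalation order are illustrative placeholders, not the paper's exact parameters:

        from enum import Enum

        class Confidence(Enum):
            HIGH = 2    # prediction very likely correct: run at full speed
            MEDIUM = 1  # somewhat uncertain
            LOW = 0     # prediction likely wrong: spend as little energy as possible

        def throttle_actions(conf: Confidence) -> set[str]:
            """Map a branch-confidence level to the power-aware techniques to trigger.

            The escalation below mirrors the combination the abstract reports as
            best: reduce fetch bandwidth first, then also disable the select logic.
            """
            if conf is Confidence.HIGH:
                return set()                        # no throttling
            if conf is Confidence.MEDIUM:
                return {"fetch_throttle"}           # reduce fetch bandwidth
            return {"fetch_throttle", "disable_select_logic"}

        # Example: a low-confidence branch triggers both techniques.
        print(throttle_actions(Confidence.LOW))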

    Reducing Branch Misprediction Penalty through Confidence Estimation

    The goal of this thesis is to reduce the global penalty associated with branch mispredictions, in terms of both performance degradation and energy consumption, through the use of confidence estimation. This global penalty is reduced, first, by increasing the accuracy of branch predictors; next, by reducing the time needed to restore the processor after a mispredicted branch; and finally, by reducing the energy consumed executing incorrect instructions. All of these proposals rely on confidence estimation, a mechanism that assesses the quality of branch predictions by estimating the probability that a dynamic branch prediction is correct. Abstract of the thesis presented by the author at the Universidad de Murcia (2003), Facultad de InformĂĄtica.
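    A minimal sketch of one common confidence-estimation scheme, a table of resetting counters in the style of Jacobsen, Rotenberg, and Smith; the table size and threshold below are illustrative choices, not values from the thesis:

        class ConfidenceEstimator:
            """Table of saturating 'resetting counters' indexed by branch PC.

            A counter is incremented each time the predictor is right and reset
            to zero on a misprediction, so a high count means the prediction for
            this branch has recently been reliable.
            """

            def __init__(self, entries: int = 1024, max_count: int = 15, threshold: int = 8):
                self.table = [0] * entries
                self.max_count = max_count
                self.threshold = threshold

            def _index(self, pc: int) -> int:
                return pc % len(self.table)

            def is_high_confidence(self, pc: int) -> bool:
                return self.table[self._index(pc)] >= self.threshold

            def update(self, pc: int, prediction_correct: bool) -> None:
                i = self._index(pc)
                if prediction_correct:
                    self.table[i] = min(self.table[i] + 1, self.max_count)
                else:
                    self.table[i] = 0  # resetting counter: one miss zeroes confidence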

    SCORE performance in Central and Eastern Europe and former Soviet Union: MONICA and HAPIEE results

    Aims: The Systematic COronary Risk Evaluation (SCORE) scale assesses the 10-year risk of fatal atherosclerotic cardiovascular disease (CVD) from conventional risk factors. The high-risk SCORE version is recommended for Central and Eastern Europe and the former Soviet Union (CEE/FSU), but its performance has never been systematically assessed in the region. We evaluated SCORE performance in two sets of population-based CEE/FSU cohorts. Methods and results: The cohorts based on the World Health Organization MONitoring of trends and determinants in CArdiovascular disease (MONICA) surveys in the Czech Republic, Poland (Warsaw and Tarnobrzeg), Lithuania (Kaunas), and Russia (Novosibirsk) were followed from the mid-1980s. The Health, Alcohol, and Psychosocial factors in Eastern Europe (HAPIEE) study has followed Czech, Polish (Krakow), and Russian (Novosibirsk) cohorts since 2002–05. In Cox regression analyses, a high-risk SCORE ≄5% at baseline significantly predicted CVD mortality in both the MONICA [n = 15 027; hazard ratios (HR), 1.7–6.3] and HAPIEE (n = 20 517; HR, 2.6–10.5) samples. While SCORE calibration was good in most MONICA samples (predicted and observed mortality were close), the risk was underestimated in Russia. In HAPIEE, the high-risk SCORE overpredicted the estimated 10-year mortality for the Czech and Polish samples and adequately predicted it for Russia. SCORE discrimination was satisfactory in both MONICA and HAPIEE. Conclusion: The high-risk SCORE underestimated fatal CVD risk in Russian MONICA but performed well in most MONICA samples and in Russian HAPIEE. This SCORE version might overestimate risk in contemporary Czech and Polish populations.
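    A minimal sketch of the kind of calibration check described above, comparing mean predicted 10-year fatal-CVD risk with observed mortality in a cohort; the data and the over-prediction factor are synthetic placeholders, not SCORE equations or MONICA/HAPIEE data:

        import numpy as np

        def calibration_ratio(predicted_risk: np.ndarray, died_cvd_10y: np.ndarray) -> float:
            """Mean predicted 10-year risk divided by observed 10-year mortality.

            Values near 1 indicate good calibration; >1 means the score
            over-predicts risk, <1 that it under-predicts (as reported for
            SCORE in Russian MONICA).
            """
            return predicted_risk.mean() / died_cvd_10y.mean()

        # Hypothetical cohort: per-person predicted risks and binary outcomes.
        rng = np.random.default_rng(0)
        pred = rng.uniform(0.01, 0.15, size=5000)    # predicted 10-year risk
        outcome = rng.random(5000) < pred * 0.7      # true risk lower than predicted
        print(f"calibration ratio: {calibration_ratio(pred, outcome):.2f}")  # ~1.4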

    A General Approach for Predicting the Behavior of the Supreme Court of the United States

    Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier which leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816–2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case-outcome level and 71.9% at the justice-vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on, the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications. Comment: version 2.02; 18 pages, 5 figures. This paper is related to but distinct from arXiv:1407.6333, and the results herein supersede arXiv:1407.6333. Source code available at https://github.com/mjbommar/scotus-predict-v
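    A minimal sketch of the walk-forward ("time-evolving") evaluation the abstract describes, retraining a random forest on all cases decided before each term and scoring only that term; the DataFrame layout and feature columns are hypothetical placeholders, not the paper's feature engineering:

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        def walk_forward_accuracy(df: pd.DataFrame, feature_cols: list[str]) -> float:
            """Train on decisions strictly before each term, predict that term.

            `df` is assumed to hold one row per case with a numeric `term`
            column, the given feature columns, and a binary `outcome`
            (e.g. reverse vs. affirm), so only pre-decision data is ever used.
            """
            correct = total = 0
            for term in sorted(df["term"].unique())[1:]:  # need at least one prior term
                train = df[df["term"] < term]
                test = df[df["term"] == term]
                model = RandomForestClassifier(n_estimators=300, random_state=0)
                model.fit(train[feature_cols], train["outcome"])
                correct += (model.predict(test[feature_cols]) == test["outcome"]).sum()
                total += len(test)
            return correct / total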

    Rigorous statistical detection and characterization of a deviation from the Gutenberg-Richter distribution above magnitude 8 in subduction zones

    We present a quantitative statistical test for the presence of a crossover c0 in the Gutenberg-Richter distribution of earthquake seismic moments, separating the usual power-law regime for seismic moments below c0 from another, faster-decaying regime beyond c0. Our method is based on transforming the ordered sample of seismic moments into a series that is uniformly distributed under the condition of no crossover. The bootstrap method allows us to estimate the statistical significance of the null hypothesis H0 of an absence of crossover (c0 = infinity). When H0 is rejected, we estimate the crossover c0 using two competing models for the second regime beyond c0, together with the bootstrap method. For the catalog obtained by aggregating 14 subduction zones of the Circum-Pacific Seismic Belt, our estimate of the crossover point is log(c0) = 28.14 ± 0.40 (c0 in dyne-cm), corresponding to a crossover magnitude mW = 8.1 ± 0.3. For separate subduction zones, the corresponding estimates are much more uncertain, so the null hypothesis of an identical crossover for all subduction zones cannot be rejected. Such a large crossover magnitude makes it difficult to associate the crossover directly with a seismogenic thickness, as many authors have proposed in the past. Our measure of c0 may substantiate the concept that the localization of strong shear deformation could propagate significantly into the lower crust and upper mantle, thus increasing the effective size beyond which one should expect a change of regime. Comment: pdf document of 40 pages including 5 tables and 19 figures
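    A minimal sketch of the style of bootstrap test the abstract describes, with a tapered (Kagan-style) Pareto standing in as one plausible model for the regime beyond c0; the parameterization, optimizer settings, and function names are our assumptions, not the paper's exact models:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        def loglik_tapered(params, m, m_min):
            """Log-likelihood of a tapered Pareto,
            f(m) = (beta/m + 1/c0) * (m_min/m)**beta * exp((m_min - m)/c0);
            setting 1/c0 = 0 recovers the pure Gutenberg-Richter power law."""
            beta, inv_c0 = params
            if beta <= 0 or inv_c0 < 0:
                return -np.inf
            return np.sum(np.log(beta / m + inv_c0)
                          + beta * (np.log(m_min) - np.log(m))
                          + (m_min - m) * inv_c0)

        def llr_statistic(m, m_min):
            """Log-likelihood ratio of the tapered model against H0 (no crossover)."""
            beta0 = len(m) / np.log(m / m_min).sum()  # power-law MLE under H0
            fit = minimize(lambda p: -loglik_tapered(p, m, m_min),
                           x0=[beta0, 1.0 / m.max()], method="Nelder-Mead")
            llr = -fit.fun - loglik_tapered([beta0, 0.0], m, m_min)
            return max(llr, 0.0), beta0  # clamp tiny negative optimizer noise

        def bootstrap_pvalue(m, m_min, n_boot=200):
            """P-value of H0 by parametric bootstrap from the fitted pure power law."""
            stat, beta0 = llr_statistic(m, m_min)
            null = [llr_statistic(m_min * (1 + rng.pareto(beta0, len(m))), m_min)[0]
                    for _ in range(n_boot)]
            return np.mean(np.array(null) >= stat)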

    UTILIZATION AND INTERPRETATION OF HYDROLOGIC DATA: WITH SELECTED EXAMPLES FROM NEW HAMPSHIRE

