440 research outputs found

    Amdahl's law for predicting the future of multicores considered harmful

    Several recent works predict the future of multicore systems or identify scalability bottlenecks based on Amdahl's law. Amdahl's law implicitly assumes, however, that the problem size stays constant, whereas in most cases more cores are used to solve larger and more complex problems. A related law, Gustafson's law, assumes that the runtime, not the problem size, is constant: the runtime on p cores is assumed to be the same as the runtime on one core, and the parallel part of an application is assumed to scale linearly with the number of cores. We apply Gustafson's law to symmetric, asymmetric, and dynamic multicores and show that this leads to fundamentally different results than when Amdahl's law is applied. We also generalize Amdahl's and Gustafson's laws and study how this quantitatively affects the dimensioning of future multicore systems.
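
    For reference, the two laws compared above are commonly stated as follows (a standard formulation, not reproduced from the paper; f denotes the parallelizable fraction and p the number of cores):

    \begin{align*}
      S_{\mathrm{Amdahl}}(f, p)    &= \frac{1}{(1 - f) + f/p}  \qquad \text{(fixed problem size)}\\
      S_{\mathrm{Gustafson}}(f, p) &= (1 - f) + f\,p           \qquad \text{(fixed runtime, problem scaled with $p$)}
    \end{align*}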

    Bayesian methods for analysing pesticide contamination with uncertain covariates

    Two chemical properties of pesticides are thought to control their environmental fate: the adsorption coefficient k_oc and the soil half-life t_1/2^soil. This study aims to demonstrate the use of Bayesian methods in exploring whether it is possible to discriminate pesticides that leach from those that do not on the basis of these chemical properties, when the monitored values of the properties are uncertain, in the sense that a range of values is reported for both k_oc and t_1/2^soil. The study was limited to 43 pesticides extracted from the UK Environment Agency (EA) data for which complete information was available. In addition, analysis of data from a separate study, known as "Gustafson's data", with a single reported value for k_oc and t_1/2^soil, was used as prior information for the EA data. Bayesian methods to analyse the EA data are proposed in this thesis. These methods use logistic regression with random covariates, with prior information derived from (i) the available United States Department of Agriculture (USDA) database values of k_oc and t_1/2^soil for the covariates and (ii) Gustafson's data for the regression parameters. They are analysed by means of Markov chain Monte Carlo (MCMC) simulation techniques via the freely available WinBUGS software and R. These methods succeeded in providing a complete, or at least good, separation between leaching and non-leaching pesticides.
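
    One plausible formalisation of the errors-in-covariates logistic model described above (the specific priors here are illustrative assumptions, not taken from the thesis):

    \begin{align*}
      \text{leach}_i \mid x_i &\sim \mathrm{Bernoulli}(\pi_i),
        \qquad \operatorname{logit}(\pi_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2}\\
      x_{ij} &\sim \mathrm{Normal}(\mu_{ij}, \sigma_{ij}^2)
        \qquad \text{(uncertain covariates $\log k_{oc}$ and $\log t_{1/2}^{soil}$; priors informed by the USDA values)}\\
      \beta_j &\sim \mathrm{Normal}(m_j, s_j^2)
        \qquad \text{(priors centred on estimates from Gustafson's data)}
    \end{align*}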

    Getting to Know the Poor

    The theme of the World Bank's principal publication in 2000/2001, the World Development Report, was Attacking Poverty. Such a focus is not surprising given the centrality of fighting poverty to the mission of the World Bank. However, what was noteworthy about the 2000/2001 Report was the prominent way the Bank featured what it called the voices of the poor. The perspectives of poor people informed both the Report's descriptions of the challenges of poverty and the Report's proposed policy responses. By incorporating the lives, language, and experiences of the poor directly into the Report, the Bank participated in a larger trend in changing how poverty is understood: from international aid and development organizations to academics and journalists, portrayals of poor people have taken center stage in the discussion of poverty and development policy.

    Failure analysis and reliability -aware resource allocation of parallel applications in High Performance Computing systems

    The demand for more computational power to solve complex scientific problems has been driving the physical size of High Performance Computing (HPC) systems to hundreds and thousands of nodes. Uninterrupted execution of large-scale parallel applications naturally becomes a major challenge because a single node failure interrupts the entire application, and the reliability of job completion decreases as the number of nodes increases. Accurate reliability knowledge of an HPC system enables runtime systems, such as resource management, and applications to minimize performance loss due to random failures while also providing better Quality of Service (QoS) for computational users. This dissertation makes three major contributions to reliability evaluation and resource management in HPC systems. First, we study the failure properties of HPC systems and observe that the Times To Failure (TTFs) of individual compute nodes follow a time-varying failure rate distribution such as the Weibull distribution. We then propose a model for the TTF distribution of a system of k independent nodes when individual nodes exhibit time-varying failure rates. Based on the reliability of the proposed TTF model, we develop reliability-aware resource allocation algorithms and evaluate them on actual parallel workloads and failure data of an HPC system. Our observations indicate that applying a time-varying failure rate-based reliability function combined with some heuristics reduces the performance loss due to unexpected failures by as much as 30 to 53 percent. Finally, we also study the effect of reliability with respect to the number of nodes and propose a reliability-aware optimal k-node allocation algorithm for large-scale parallel applications. Our simulation results comparing the optimal k-node algorithm indicate that choosing the number of nodes for large-scale parallel applications based on the reliability of compute nodes can reduce the overall completion time and wasted time, since the chosen k may be smaller than the total number of nodes in the system.
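
    A minimal sketch of the kind of reliability calculation involved, assuming independent, identically distributed Weibull node failures (the parameters below are illustrative assumptions, not values from the dissertation):

    import math

    def node_reliability(t, shape, scale):
        """P(a single node survives past time t) under a Weibull TTF distribution."""
        return math.exp(-((t / scale) ** shape))

    def system_reliability(t, k, shape, scale):
        """P(all k independent, identical nodes survive past time t).
        The k-node system TTF is itself Weibull with scale = scale * k**(-1/shape)."""
        return node_reliability(t, shape, scale) ** k

    # Illustrative parameters: shape < 1 gives a decreasing (time-varying) failure rate.
    shape, scale = 0.7, 5000.0      # scale in hours (assumed)
    runtime = 24.0                  # job length in hours
    for k in (16, 64, 256, 1024):
        print(f"k={k:5d}  P(a {runtime:.0f}-hour job completes) = "
              f"{system_reliability(runtime, k, shape, scale):.4f}")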

    The Effects of Microprocessor Architecture on Speedup in Distributed Memory Supercomputers

    Amdahl's Law states that the speedup in moving from one processor to N identical processors can never be greater than N, and in fact is usually lower than N because of operations that must be done sequentially. Amdahl's Law gives us the following formula for speedup: Speedup <= (S + P) / (S + P/N), where N is the number of processors, S is the percentage of the code that is serial (i.e., cannot be parallelized), and P is the percentage of the code that is parallelizable. We can substitute 1 - S for P in this formula and see that as S approaches zero the speedup approaches N. It can also be shown that seemingly small values of S can severely limit the maximum speedup. Researchers at the University of Maine saw speedups that seemed to contradict Amdahl's Law, and identified an assumption made by the law that is not always true. When this assumption does not hold, it is possible to achieve speedups larger than the theoretical maximum of N given by Amdahl's Law. The assumption in question is that computer performance scales linearly as the size of the problem is reduced by dividing it over a larger number of processors. This assumption is not valid for computers with tiered memory. In this thesis we investigate superlinear speedup through a series of test programs specifically designed to exhibit it. After demonstrating that these programs show superlinear speedup, we suggest methods for detecting the potential for superlinear speedup in a variety of algorithms.
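
    A quick numerical illustration of the limit described above (a generic sketch of Amdahl's formula, not code from the thesis):

    def amdahl_speedup(serial_fraction, n_procs):
        """Amdahl's law upper bound: speedup <= 1 / (S + (1 - S) / N)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    # Even a small serial fraction S caps the achievable speedup well below N.
    for s in (0.01, 0.05, 0.10):
        for n in (8, 64, 1024):
            print(f"S={s:.2f}  N={n:5d}  speedup <= {amdahl_speedup(s, n):7.2f}")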

    On Not Being Ashamed of the Gospel: Particularity, Pluralism, and Validation
