
    Replicative Use of an External Model in Simulation Variance Reduction

    The use of control variates is a well-known variance reduction technique for discrete event simulation experiments. At present, practitioners and researchers who apply control variates rely almost exclusively on internal control variates. The primary objective of this study is to explore the variance reduction achieved by the replicative use of an external, analytical model to generate control variates. The performance of the analytical control variates is compared with that of typical internal and external control variates for both an open and a closed queueing network. The performance measures used are confidence interval width reduction, realized coverage, and estimated mean square error. The results indicate that analytical control variates achieve confidence interval width reductions comparable to those of internal and external control variates; however, the analytical control variates exhibit greater estimated bias. Possible causes of and remedies for the observed bias are discussed, and areas for future research on and use of analytical control variates conclude the study.
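    The control variate adjustment itself is simple to state: the simulation estimate is corrected using a quantity whose expectation is known exactly. The sketch below is illustrative only and is not the study's replicative analytical procedure; it simulates customer delays in an M/M/1 queue and uses the service times as an internal control variate, with all parameter values chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: estimate the mean customer delay in an M/M/1 queue by
# simulation, and use the service times (whose mean is known exactly) as an
# internal control variate. All parameter values here are hypothetical.
def simulate_mm1_delays(n_customers, arrival_rate=0.9, service_rate=1.0):
    """Lindley recursion for customer delays (waiting times in queue)."""
    inter_arrivals = rng.exponential(1.0 / arrival_rate, n_customers)
    services = rng.exponential(1.0 / service_rate, n_customers)
    delays = np.zeros(n_customers)
    for i in range(1, n_customers):
        delays[i] = max(0.0, delays[i - 1] + services[i - 1] - inter_arrivals[i])
    return delays, services

delays, services = simulate_mm1_delays(20_000)
known_service_mean = 1.0   # E[S] = 1 / service_rate, known analytically

# Control variate adjustment: Y_cv = mean(Y) - b * (mean(C) - E[C]),
# with b = Cov(Y, C) / Var(C) minimizing the variance of the adjusted estimator.
cov = np.cov(delays, services)
b = cov[0, 1] / cov[1, 1]
naive_estimate = delays.mean()
cv_estimate = naive_estimate - b * (services.mean() - known_service_mean)
print(f"naive: {naive_estimate:.3f}  control-variate adjusted: {cv_estimate:.3f}")
```

    Estimating b from the same replication, as done here, is standard practice; the choice b = Cov(Y, C) / Var(C) minimizes the variance of the adjusted estimator.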

    Modular Architectures And Optimization Techniques For Power And Reliability In Future Many Core Microprocessors

    Power and reliability issues are expected to worsen in future multicore systems as the degree of component integration increases. As transistor feature sizes continue to shrink, more resources can be incorporated into microprocessors to address a broader spectrum of application requirements. However, power constraints will limit the amount of resources that can be powered on at any given time. Recent studies have shown that future multicore systems will be able to power on less than 80% of their transistors in the near term, and less than 50% in the long term. The central challenge is deciding which transistors should be powered on at any given time to deliver high performance under strict power constraints. At the same time, device reliability issues, namely the proliferation of devices that are either defective at manufacturing time or fail in the field with usage, are projected to be exacerbated by the continued scaling of device sizes. We present a modular, dynamically reconfigurable architecture as a promising unified solution to the problems of dark silicon (the inability to power all available computing resources) and reliability. Our modular architecture implements deconfigurable lanes within the decoupled sections of a superscalar pipeline that can be easily powered on or off to isolate faults or to create an energy-efficient hardware configuration tailored to the needs of the running software. At the system level, we propose a novel framework that uses surrogate response surfaces and heuristic global optimization algorithms to characterize application behavior at runtime and dynamically redistribute the available chip-wide power, yielding hardware configurations customized to software diversity and system goals. The reconfigurable architecture can provide high performance under a strict power budget, maintain a given performance level at a reduced power cost, and, in the presence of hard faults, restore the system's performance to pre-fault levels.
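    For a concrete sense of the system-level idea, the following sketch shows one way a surrogate response surface could be combined with a simple search over power configurations. It is a hypothetical illustration under assumed lane counts, frequency levels, power model, and budget, not the framework proposed in the work.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical sketch: fit a quadratic response surface to a few measured
# (lanes, frequency) samples, then search the small configuration space for
# the best predicted performance under an assumed chip-wide power budget.

lanes = np.array([1, 2, 3, 4])       # number of pipeline lanes powered on
freqs = np.array([1.0, 1.5, 2.0])    # clock frequency levels in GHz (assumed)

def measure_performance(l, f):
    # Stand-in for a runtime measurement (e.g., instructions per second);
    # a real system would read hardware performance counters instead.
    return l * f / (1.0 + 0.1 * l) + rng.normal(0.0, 0.02)

def power_estimate(l, f):
    # Assumed simple power model: static term plus dynamic term ~ lanes * f^2.
    return 0.5 + 0.8 * l * f**2

def features(l, f):
    # Quadratic response-surface basis in the two configuration knobs.
    return np.array([1.0, l, f, l * f, l**2, f**2])

# Sample a subset of configurations and fit the surrogate by least squares.
sampled = [(l, f) for l, f in product(lanes, freqs[::2])]   # 8 samples
X = np.array([features(l, f) for l, f in sampled])
y = np.array([measure_performance(l, f) for l, f in sampled])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

budget = 6.0                         # assumed chip-wide power budget in watts
feasible = [(l, f) for l, f in product(lanes, freqs)
            if power_estimate(l, f) <= budget]
best = max(feasible, key=lambda cfg: features(*cfg) @ coef)
print("chosen configuration (lanes, GHz):", best)
```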

    A comparison of some performance evaluation techniques

    In this thesis we examine three approaches to modelling interactive computer systems: simulation, operational analysis, and performance-oriented design. The simulation approach, presented first, is applied to a general purpose, multiprogrammed, machine-independent, virtual memory computer system. The model is used to study the effects of different performance parameters on important performance indices; it is also used to compare with, or validate, the results produced by the other two methods. The major drawback of the simulation model (its relatively high cost) has been overcome by combining regression techniques with simulation in simple experimental case studies. Next, operational analysis is reviewed in a hierarchical way, starting with the analysis of a single-resource queue and ending with a multi-class-customer general interactive system, to build a performance model of general interactive systems. The results of this model are compared with the performance indices obtained from the simulation. The performance-oriented design technique is the third method used for building system performance models. Here, several optimization design problems are reviewed that minimize response time or maximize system throughput subject to a cost constraint. Again, the model results are compared with the simulation results under different cost constraints. We suggest, finally, that the above methods should be used together to assist the designer in building computer performance models.
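    The operational-analysis layer of such a comparison rests on a handful of operational laws. The snippet below is a minimal illustration applying the throughput, utilization, and interactive response time laws to hypothetical measurements; the numbers are invented for the example.

```python
# Illustrative operational-analysis calculation for an interactive system,
# applying the standard operational laws to hypothetical measured quantities.

N = 40          # interactive terminals (users)
Z = 15.0        # think time per user, seconds
C = 1200        # jobs completed during the observation interval
T = 600.0       # length of the observation interval, seconds
B_cpu = 420.0   # CPU busy time during the interval, seconds

X = C / T                   # throughput law: X = C / T
U_cpu = B_cpu / T           # utilization law: U = B / T
S_cpu = B_cpu / C           # mean service demand per job at the CPU
R = N / X - Z               # interactive response time law: R = N / X - Z

print(f"throughput X = {X:.2f} jobs/s")
print(f"CPU utilization U = {U_cpu:.2%}, service demand {S_cpu:.3f} s/job")
print(f"mean response time R = {R:.2f} s")
```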

    Confidence interval methods in discrete event computer simulation: Theoretical properties and practical recommendations.

    Most steady-state simulation outputs are characterized by some degree of dependence between successive observations at different lags, as measured by the autocorrelation function. In such cases, classical statistical techniques based on independent, identically distributed normal random variables are not recommended for constructing confidence intervals for steady-state means: such intervals would cover the steady-state mean with a probability different from the nominal confidence level. Over the last two decades, alternative confidence interval methods have been proposed for stationary simulation output processes. These methods offer different ways of estimating the variance of the sample mean, with the final objective of achieving coverage equal to the nominal confidence level. Each sample mean variance estimator depends on a number of parameters and on the sample size. In assessing the performance of the confidence interval methods, emphasis is necessarily placed on studying the actual properties of the methods empirically rather than proving their mathematical properties. The testing takes place in an environment where statistical criteria measuring these actual properties are estimated through Monte Carlo methods on output processes from different types of simulation models. Over the past years, however, different testing environments have been used: different methods have been tested on different output processes, under different sample sizes and parameter values for the sample mean variance estimators. The diversity of the testing environments has made it difficult to select the most appropriate confidence interval method for particular types of output processes, and a catalogue of the properties of the confidence interval methods offers limited direct support to a simulation practitioner seeking to apply them to particular processes. Five confidence interval methods are considered in this thesis. Two of them were proposed in the last decade; the other three appeared in the literature in 1983 and 1984 and remain objects of active research among statisticians working in simulation output analysis. First, for the case of small samples, theoretical properties are investigated for the bias of the corresponding sample mean variance estimators on AR(1) and AR(2) time series models and on the delay in queue in the M/M/1 queueing system. Then an asymptotic comparison of the five methods is carried out. The special characteristic of these three processes is that the 5th-lag autocorrelation coefficient is given by known difference equations. Based on the asymptotic results and the small-sample properties of the sample mean variance estimators, several recommendations are given for the following decisions: (i) the selection of the most appropriate confidence interval method for certain types of simulation outputs; (ii) the determination of the best parameter values for the sample mean variance estimators so that the corresponding confidence interval methods achieve acceptable performance; and (iii) the orientation of future research in confidence interval estimation for steady-state autocorrelated simulation outputs.
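    As a concrete point of reference, the non-overlapping batch means method is one widely used way to build such a confidence interval for an autocorrelated output process; whether it is among the five methods considered is not stated in the abstract, so the sketch below is purely illustrative. It is applied to an AR(1) process, one of the test processes mentioned above, with parameters chosen for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Generate an AR(1) process as a stand-in for autocorrelated simulation output;
# its steady-state mean is 0, so coverage of the interval can be checked.
phi, n = 0.8, 20_000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

def batch_means_ci(data, n_batches=20, alpha=0.05):
    """Confidence interval for the mean using non-overlapping batch means."""
    m = len(data) // n_batches                            # batch size
    batches = data[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    grand_mean = batches.mean()
    se = batches.std(ddof=1) / np.sqrt(n_batches)         # std error of the mean
    h = stats.t.ppf(1 - alpha / 2, n_batches - 1) * se    # t-based half-width
    return grand_mean - h, grand_mean + h

print(batch_means_ci(x))
```

    The batch size trades off the number of batches (degrees of freedom) against the residual correlation between batch means, which is exactly the kind of parameter choice the recommendations above address.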

    A Survey of Phase Classification Techniques for Characterizing Variable Application Behavior

    Adaptable computing is an increasingly important paradigm that specializes system resources to variable application requirements, environmental conditions, or user requirements. Adapting computing resources to variable application requirements (or application phases) is known as phase-based optimization. Phase-based optimization takes advantage of application phases, that is, execution intervals of an application that behave similarly, to enable effective and beneficial adaptability. For phase-based optimization to be effective, the phases must first be classified to determine when application phases begin and end and to ensure that system resources are accurately specialized. In this paper, we present a survey of phase classification techniques that have been proposed to exploit the advantages of adaptable computing through phase-based optimization. We focus on recent techniques and classify them with respect to several factors in order to highlight their similarities and differences. We divide the techniques by their major defining characteristics, online/offline and serial/parallel, and also discuss other characteristics such as prediction and detection techniques, the characteristics used for prediction, and interval type. We also identify gaps in the state of the art and discuss future research directions to enable and fully exploit the benefits of adaptable computing. To appear in IEEE Transactions on Parallel and Distributed Systems (TPDS).
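    To make the core idea concrete, the sketch below shows a generic interval-based classifier: each execution interval is summarized by a feature vector and assigned to an existing phase when it is sufficiently similar to that phase's representative vector, otherwise it starts a new phase. The features, distance metric, and threshold are assumptions for illustration and do not correspond to any specific surveyed technique.

```python
import numpy as np

# Generic sketch of interval-based phase classification: intervals whose
# normalized feature vectors (e.g., per-interval event counts) are close to an
# existing phase's representative are given that phase's label; otherwise a
# new phase is created. Threshold and features are hypothetical.

def classify_phases(interval_vectors, threshold=0.15):
    phases = []          # representative vector per discovered phase
    labels = []          # phase id assigned to each interval
    for v in interval_vectors:
        v = v / v.sum()  # normalize so intervals of different lengths compare
        # Manhattan distance to each existing phase representative
        dists = [np.abs(v - rep).sum() for rep in phases]
        if dists and min(dists) < threshold:
            labels.append(int(np.argmin(dists)))
        else:
            phases.append(v)               # start a new phase
            labels.append(len(phases) - 1)
    return labels

rng = np.random.default_rng(3)
# Synthetic trace: intervals drawn from two alternating behaviors
a = rng.poisson([50, 5, 5], size=(4, 3))
b = rng.poisson([5, 50, 5], size=(4, 3))
trace = np.vstack([a, b, a])
print(classify_phases([row.astype(float) for row in trace]))
```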

    Semiannual final report, 1 October 1991 - 31 March 1992

    A summary is presented of research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period 1 October 1991 through 31 March 1992.

    Performance of Computer Systems; Proceedings of the 4th International Symposium on Modelling and Performance Evaluation of Computer Systems, Vienna, Austria, February 6-8, 1979

    These proceedings are a collection of contributions on computer system performance, selected by the usual refereeing process from papers submitted to the symposium, together with a few invited papers representing significant novel contributions made during the last year. They represent the thrust and vitality of the subject as well as its capacity to identify important basic problems and major application areas. The main methodological problems appear in the underlying queueing-theoretic aspects, in the deterministic analysis of waiting time phenomena, in workload characterization and representation, in the algorithmic aspects of model processing, and in the analysis of measurement data. Major application areas are computer architectures, databases, computer networks, and capacity planning. The international importance of the area of computer system performance was well reflected at the symposium by participants from 19 countries. The mix of participants was also evident in the institutions they represented: 35% from universities, 25% from governmental research organizations, 30% from industry, and 10% from non-research government bodies. This shows that the area is reaching a stage of maturity where it can contribute directly to progress on practical problems.

    Graduate School: Course Descriptions, 1972-73

    Official publication of Cornell University, v. 64, 1972/73.