
    Surrogate modeling of computer experiments with sequential experimental design


    Use of orthogonal arrays, quasi-Monte Carlo sampling and kriging response models for reservoir simulation with many varying factors

    Asset development teams may adjust simulation model parameters using experimental design to reveal which factors have the greatest impact on reservoir performance. Response surfaces and experimental design make sensitivity analysis less expensive and more accurate, helping to optimize recovery under geological and economic uncertainties. In this thesis, experimental designs including orthogonal arrays, factorial designs, Latin hypercubes and Hammersley sequences are compared and analyzed. These methods are demonstrated on a gas well with a water coning problem to illustrate the efficiency of orthogonal arrays. Eleven geologic factors are varied while optimizing three engineering factors (fourteen factors in total). The objective is to optimize completion length, tubing head pressure, and tubing diameter for a partially penetrating well with uncertain reservoir properties. A nearly orthogonal array was specified with three levels for eight factors and four levels for the remaining six geologic and engineering factors. This design requires only 36 simulations, compared to 26,873,856 runs for a full factorial design. Hyperkriging surfaces are an alternative model form for large numbers of factors. Hyperkriging uses the maximum likelihood variogram model parameters to minimize prediction errors. Kriging is compared to conventional polynomial response models. The robustness of the response surfaces generated by kriging and polynomial regression is compared using jackknifing and bootstrapping. Sensitivity analysis and uncertainty analysis can be performed inexpensively and efficiently using response surfaces. The proposed design approach requires fewer simulations and provides accurate response models, efficient optimization, and flexible sensitivity and uncertainty assessment.
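    A minimal sketch of the workflow this abstract describes: draw a space-filling design, run a cheap stand-in "simulator" at those points, and fit a kriging (Gaussian process) response surface. The toy_simulator function, the kernel length scale, and the two-factor setup below are hypothetical illustrations, not the thesis's reservoir model:

```python
# Sketch: Latin hypercube design + kriging response surface.
# A toy analytic function stands in for one reservoir simulation run.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_simulator(x):
    # Hypothetical stand-in for an expensive simulator evaluation.
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = sampler.random(n=36)            # 36 runs, echoing the thesis design size
y_train = toy_simulator(X_train)

# Kriging: GP regression; the RBF kernel hyperparameters are refit by
# maximum likelihood inside sklearn during fit().
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

X_test = sampler.random(n=500)
y_pred, y_std = gp.predict(X_test, return_std=True)
print("max predictive std over test points:", y_std.max())
```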

    Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits

    Process variability is of increasing concern in modern nanometer-scale CMOS. The suitability of Monte Carlo based algorithms for efficient analysis and optimization of digital circuits under variability is explored in this work. Random sampling based Monte Carlo techniques incur a high computational cost, due to the large sample size required to achieve target accuracy. This motivates the need for intelligent sample selection techniques to reduce the number of samples. As these techniques depend on information about the system under analysis, they must be tailored to fit the specific application context. We propose efficient smart sampling based techniques for timing and leakage power consumption analysis of digital circuits. For timing analysis, we show that the proposed method requires 23.8X fewer samples on average to achieve accuracy comparable to a random sampling approach, for the benchmark circuits studied. It is further illustrated that the parallelism available in such techniques can be exploited using parallel machines, especially graphics processing units (GPUs). Here, we show that SH-QMC implemented on multiple GPUs runs twice as fast as a single static timing analysis (STA) run on a CPU for the benchmark circuits considered. Next, we study the possibility of using such information from statistical analysis to optimize digital circuits under variability, for example to achieve minimum silicon area through gate sizing while meeting a timing constraint. Though several techniques to optimize circuits have been proposed in the literature, it is not clear how much of the gain in these approaches comes specifically from the use of statistical information. Therefore, an effective lower bound computation technique is proposed to enable efficient comparison of statistical design optimization techniques. It is shown that even techniques which use only limited statistical information can achieve results within 10% of the proposed lower bound. We conclude that future optimization research should shift focus from using more statistical information to achieving more efficiency and parallelism to obtain speed-ups. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
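    To illustrate why smarter (here, quasi-random) sample selection can cut sample counts relative to plain random Monte Carlo, the sketch below estimates the mean of a made-up gate-delay model under both sampling schemes. The toy_delay function and every constant in it are assumptions for illustration, not the dissertation's circuit model or its SH-QMC method:

```python
# Sketch: random Monte Carlo vs. scrambled Sobol' sampling for a toy
# statistical-timing estimate.
import numpy as np
from scipy.stats import qmc, norm

def toy_delay(u):
    # Map uniform samples to Gaussian parameter variations, then to a
    # made-up nonlinear gate delay (true mean is 1.0 + 0.05*8 = 1.4).
    x = norm.ppf(u)
    return 1.0 + 0.1 * x.sum(axis=1) + 0.05 * (x ** 2).sum(axis=1)

d = 8                                           # 8 varying process parameters
rng = np.random.default_rng(0)
u_rand = rng.random((1024, d))                  # plain Monte Carlo
# Scrambling avoids the degenerate all-zeros first Sobol' point.
u_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=10)  # 1024 points

print("MC estimate :", toy_delay(u_rand).mean())
print("QMC estimate:", toy_delay(u_qmc).mean())
print("exact mean  :", 1.4)
```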

    Architectural level delay and leakage power modelling of manufacturing process variation

    PhD Thesis. The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain more importance in the future as device scaling continues in order to satisfy marketplace demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present, most investigations into statistical models have been limited to small circuits such as a logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error prone. This thesis proposes a methodology to bring the effects of process variation from transistor level up to architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only basic gate cell models but also more complex functional blocks such as registers, FIFOs, counters and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases and transition types, are also taken into account for each cell in the library, which makes it adaptive to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5% error and at least 50,000 times faster computation compared to a 5,000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outwith specification, design modifications can be readily incorporated to rectify the situation. Second, knowing the acceptable limits of process variation for keeping design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
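    The first-order canonical model mentioned above represents a delay as a nominal value plus linear sensitivities to shared standard-normal variations plus an independent residual, d = a0 + sum_i a_i*X_i + a_r*X_r. A minimal sketch of how such forms compose along a path; the coefficient values are made up, and the real library's cell data and correlation structure are not reproduced here:

```python
# Sketch: composing first-order canonical delay models.
import numpy as np

class CanonicalDelay:
    """d = a0 + sum_i a[i]*X_i + ar*X_r, with X_i shared and X_r independent."""
    def __init__(self, a0, a, ar):
        self.a0, self.a, self.ar = a0, np.asarray(a, float), ar

    def __add__(self, other):
        # Series connection: shared sensitivities add linearly,
        # independent residuals add in quadrature.
        return CanonicalDelay(self.a0 + other.a0,
                              self.a + other.a,
                              np.hypot(self.ar, other.ar))

    def mean_std(self):
        return self.a0, np.sqrt((self.a ** 2).sum() + self.ar ** 2)

inv = CanonicalDelay(10.0, [0.5, 0.2], 0.3)   # ps; hypothetical cell data
nand = CanonicalDelay(15.0, [0.7, 0.1], 0.4)
path = inv + nand
print("path delay mean/std (ps):", path.mean_std())
```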

    Optimization in Quasi-Monte Carlo Methods for Derivative Valuation

    Computational complexity in financial theory and practice has risen immensely in recent years. Monte Carlo simulation has proved to be a robust and adaptable approach, well suited to supplying numerical solutions to a large class of complex problems. Although Monte Carlo simulation has been widely applied in the pricing of financial derivatives, it has been argued that the need to sample the relevant region as uniformly as possible is very important. This led to the development of quasi-Monte Carlo methods that use deterministic points to minimize the integration error. A major disadvantage of low-discrepancy number generators is that they tend to lose their ability to cover the region homogeneously as the dimensionality increases. This thesis develops a novel approach to quasi-Monte Carlo methods to evaluate complex financial derivatives more accurately by optimizing the sample coordinates so as to minimize the discrepancies that appear when using low-discrepancy sequences. The main focus is to develop new methods to optimize the sample coordinate vector and to test their performance against existing quasi-Monte Carlo methods in pricing complicated multidimensional derivatives. Three new methods are developed: the Gear, the Simulated Annealing and the Stochastic Tunneling methods. These methods are used to evaluate complex multi-asset financial derivatives (geometric average and rainbow options) for dimensions up to 2000. It is shown that the two stochastic methods, Simulated Annealing and Stochastic Tunneling, perform better than the existing quasi-Monte Carlo methods of Faure and Sobol'. This difference in performance is more evident in higher dimensions, particularly when a low number of points is used in the Monte Carlo simulations. Overall, the Stochastic Tunneling method yields the smallest percentage root mean square relative error and requires less computational time to converge to a global solution, proving to be the most promising method for pricing complex derivatives.
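    The core idea, treating the point-set coordinates as optimization variables and annealing them to reduce discrepancy, can be sketched as follows. This toy version uses scipy's centered-L2 discrepancy as the objective and a simple made-up cooling schedule; it is not the thesis's Gear or Stochastic Tunneling implementation:

```python
# Sketch: simulated annealing on point coordinates to lower discrepancy.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
pts = rng.random((64, 4))                  # 64 points in 4 dimensions
cur = qmc.discrepancy(pts)                 # centered-L2 discrepancy
T = 0.01                                   # made-up initial temperature
for step in range(2000):
    cand = np.clip(pts + rng.normal(scale=0.02, size=pts.shape), 0.0, 1.0)
    d = qmc.discrepancy(cand)
    # Accept improvements always; accept uphill moves with Boltzmann probability.
    if d < cur or rng.random() < np.exp((cur - d) / T):
        pts, cur = cand, d
    T *= 0.999                             # simple geometric cooling schedule
print("optimized discrepancy:", cur)
```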

    Stochastic Testing Method for Transistor-Level Uncertainty Quantification Based on Generalized Polynomial Chaos

    Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called the stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size control than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and existing SC methods, and more suitable for time-domain circuit simulation. Simulation results for several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
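    For a single Gaussian parameter, the testing idea reduces to choosing as many testing nodes as basis functions and solving a small square system for the generalized polynomial chaos coefficients. A sketch under that assumption, with a toy nonlinear map standing in for the transistor circuit and Gauss-Hermite points standing in for the paper's node-selection scheme:

```python
# Sketch: Hermite polynomial chaos coefficients from a square set of
# testing nodes, for y = f(xi) with xi ~ N(0, 1).
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def toy_circuit(xi):
    # Hypothetical smooth output vs. one Gaussian parameter.
    return np.tanh(1.0 + 0.5 * xi)

order = 4
nodes, _ = He.hermegauss(order + 1)        # 5 testing nodes for 5 basis terms
# V[i, k] = He_k(nodes[i]) for probabilists' Hermite polynomials He_k.
V = np.stack([He.hermeval(nodes, np.eye(order + 1)[k])
              for k in range(order + 1)], axis=1)
coef = np.linalg.solve(V, toy_circuit(nodes))   # small square system

# Orthogonality of He_k under N(0,1): E[He_k] = 0 (k > 0), E[He_k^2] = k!.
mean = coef[0]
var = sum(factorial(k) * coef[k] ** 2 for k in range(1, order + 1))
print("gPC mean/std:", mean, np.sqrt(var))
```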

    Stochastic Testing Simulator for Integrated Circuits and MEMS: Hierarchical and Sparse Techniques

    Process variations are a major concern in today's chip design since they can significantly degrade chip performance. To predict such degradation, existing circuit and MEMS simulators rely on Monte Carlo algorithms, which are typically too slow. Therefore, novel fast stochastic simulators are highly desired. This paper first reviews our recently developed stochastic testing simulator, which can achieve speedup factors of hundreds to thousands over Monte Carlo. Then, we develop a fast hierarchical stochastic spectral simulator to simulate a complex circuit or system consisting of several blocks. We further present a fast simulation approach based on anchored ANOVA (analysis of variance) for design problems with many process variations. This approach can reduce the simulation cost and can identify which variation sources have strong impacts on the circuit's performance. The simulation results of some circuit and MEMS examples are reported to show the effectiveness of our simulator. (Accepted to IEEE Custom Integrated Circuits Conference, June 2014.)
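    The anchored-ANOVA screening step can be sketched as fixing all variations at an anchor point and sweeping one parameter at a time; the variance of each one-dimensional sweep then ranks the variation sources. Everything below (the performance function, the anchor, the sweep range) is a hypothetical illustration of that first-order decomposition, not the paper's circuit or MEMS models:

```python
# Sketch: first-order anchored-ANOVA screening of variation sources.
import numpy as np

def toy_performance(x):
    # Hypothetical performance metric vs. normalized process variations.
    return x[0] + 0.1 * x[1] ** 2 + 0.01 * x[2] - 0.001 * x[3]

d = 4
anchor = np.zeros(d)                        # nominal (anchor) process point
grid = np.linspace(-3.0, 3.0, 21)           # +/-3 sigma sweep per parameter
f0 = toy_performance(anchor)

scores = []
for i in range(d):
    sweep = np.tile(anchor, (grid.size, 1))
    sweep[:, i] = grid                      # vary only parameter i
    f_i = np.array([toy_performance(row) for row in sweep]) - f0
    scores.append(f_i.var())                # variance of the 1-D component

print("first-order importance per parameter:", np.round(scores, 5))
```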