
    Response-surface-model-based system sizing for nearly/net zero energy buildings under uncertainty

    Properly treating uncertainty is critical for robust system sizing of nearly/net zero energy buildings (ZEBs). To treat uncertainty, the conventional method conducts Monte Carlo simulations for thousands of possible design options, which inevitably leads to a computation load that is heavy or even impossible to handle. To reduce the number of Monte Carlo simulations, this study proposes a response-surface-model-based system sizing method. The response surface models of the design criteria (i.e., the annual energy match ratio, self-consumption ratio and initial investment) are established from Monte Carlo simulations at 29 specific design points determined by Box-Behnken design. With the response surface models, the overall performances (i.e., the weighted performance of the design criteria) of all design options (i.e., sizing combinations of photovoltaic, wind turbine and electric storage) are evaluated, and the design option with the maximal overall performance is selected. Case studies with 1331 design options validated the proposed method for 10,000 randomly produced decision scenarios (i.e., users' preferences regarding the design criteria). The results show that the established response surface models predict the design criteria with errors no greater than 3.5% at a cumulative probability of 95%. The proposed method reduces the number of Monte Carlo simulations by 97.8% and robustly identifies the top 1.1% of design options in expectation. With the largely reduced number of Monte Carlo simulations and the high overall performance of the selected design option, the proposed method provides a practical and efficient means for system sizing of nearly/net ZEBs under uncertainty.
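    The surrogate-based workflow this abstract describes (run expensive Monte Carlo simulations at only a few design points, fit cheap response surface models, then rank every candidate design option on the surrogate) can be sketched as follows. This is an illustrative toy, not the paper's method: the criterion function, the number of sample points and the design grid are all invented, and a generic least-squares quadratic fit stands in for the Box-Behnken-based models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def true_criterion(x, y):
        # stand-in for an expensive Monte Carlo estimate of one design criterion
        return 1.0 - (x - 0.6) ** 2 - (y - 0.4) ** 2

    def quad_features(x, y):
        # second-order polynomial basis, as in a quadratic response surface
        return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

    # a handful of "design points" playing the role of the Box-Behnken samples
    xs = rng.uniform(0, 1, 15)
    ys = rng.uniform(0, 1, 15)
    coef, *_ = np.linalg.lstsq(quad_features(xs, ys),
                               true_criterion(xs, ys), rcond=None)

    # evaluate the cheap surrogate on every candidate design option
    gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
    pred = quad_features(gx.ravel(), gy.ravel()) @ coef
    best = np.argmax(pred)
    print(gx.ravel()[best], gy.ravel()[best])   # ~ (0.6, 0.4)
    ```

    The point of the sketch is the cost structure: the expensive model is called 15 times, while the 121 candidate options are ranked on the fitted polynomial alone.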

    Identifying the important factors in simulation models with many factors

    Simulation models may have many parameters and input variables (together called factors), while only a few factors are really important (the parsimony principle). For such models, this paper presents an effective and efficient screening technique to identify and estimate those important factors. The technique extends the classical binary search technique to situations with more than a single important factor. It uses a low-order polynomial approximation to the input/output behavior of the simulation model; this approximation may account for interactions among factors. The technique is demonstrated by applying it to a complicated ecological simulation that models the increase of temperatures worldwide.
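    The group-screening idea behind this extended binary search can be sketched with a toy model. This is a simplified illustration under strong assumptions the paper relaxes (additive, nonnegative factor effects, no noise), not the paper's actual technique: a group of factors whose aggregate effect is negligible is discarded whole; otherwise the group is bisected and each half is screened recursively.

    ```python
    # toy "simulation": output is the sum of effects of switched-on factors;
    # only factors 2 and 6 actually matter (parsimony principle)
    EFFECTS = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 3.0, 0.0]

    def model(on):
        return sum(EFFECTS[i] for i in on)

    def screen(factors, threshold=1e-9):
        group_effect = model(factors)
        if group_effect <= threshold:
            return []                      # whole group unimportant: one run kills it
        if len(factors) == 1:
            return list(factors)           # isolated an important factor
        mid = len(factors) // 2
        return screen(factors[:mid], threshold) + screen(factors[mid:], threshold)

    important = screen(list(range(len(EFFECTS))))
    print(important)  # -> [2, 6]
    ```

    With many factors and few important ones, the number of model runs grows roughly logarithmically per important factor rather than linearly in the total number of factors.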

    Statistical Analog Circuit Simulation: Motivation and Implementation


    Design methodology for low-jitter differential clock recovery circuits in high performance ADCs

    This paper presents a design methodology for the simultaneous optimization of jitter and power consumption in ultra-low-jitter clock recovery circuits (<100 fs rms) for high-performance ADCs. The key ideas of the design methodology are: a) a smart parameterization of transistor sizes that yields a smooth dependence of specifications on the design variables; b) based on this parameterization, a design space sub-sampling that captures the whole circuit performance while reducing computation resources and time during optimization. The proposed methodology, which can easily incorporate process, voltage and temperature (PVT) variations, has been used to perform a systematic design space exploration that provides sub-100 fs jitter clock recovery circuits in two commercial CMOS processes at different technology nodes (1.8 V 0.18 μm and 1.2 V 90 nm). Post-layout simulation results for a case study with a typical jitter of 68 fs for a 1.8 V 80 dB-SNDR 100 Msps pipeline ADC application are also shown as a demonstrator. Funding: Gobierno de España TEC2015-68448-R; European Space Agency 4000108445-13-NL-R.

    Self learning strategies for experimental design and response surface optimization

    Most preset RSM designs offer ease of implementation and good performance over a wide range of process and design optimization applications. However, these designs often lack the ability to adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments necessary. Hence, they are not cost-effective for applications where the cost of experimentation is high or where experimentation resources are limited. In this dissertation, we present a number of self-learning strategies for the optimization of different types of response surfaces for industrial experiments with noise, high experimentation cost, and demanding design optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach that combines concepts from nonlinear optimization, non-parametric regression, statistical analysis, and response surface optimization. The proposed strategies use the information gained from previous experiments to design the subsequent experiment by simultaneously reducing the region of interest and identifying factor combinations for new experiments. Their major advantage is experimentation efficiency: for a given response target, they identify the input factor combination (or a region containing it) in fewer experiments than the classical designs. Through extensive simulated experiments and real-world case studies, we show that the proposed adaptive sequential RSM (ASRSM) method clearly outperforms the classical CCD and BBD methods, performs better than A-, D- and V-optimal designs on average, and compares favorably with global optimization methods including Gaussian process and RBF approaches.
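    The shrink-and-refine loop at the core of sequential adaptive experimentation can be sketched in one dimension. This is a hypothetical, deliberately minimal illustration (noiseless response, fixed shrink factor, no regression step), not the dissertation's ASRSM algorithm: each round spends a small experiment budget inside the current region of interest, then re-centres a smaller region on the best observation.

    ```python
    def response(x):
        # stand-in for an expensive experiment; optimum at x = 0.3
        return -(x - 0.3) ** 2

    def adaptive_search(lo=0.0, hi=1.0, rounds=8, points=5):
        best = lo
        for _ in range(rounds):
            # a tiny "design" of equally spaced runs in the current region
            grid = [lo + (hi - lo) * i / (points - 1) for i in range(points)]
            best = max(grid, key=response)
            half = (hi - lo) / 4          # shrink the region around the incumbent
            lo, hi = max(0.0, best - half), min(1.0, best + half)
        return best

    x_star = adaptive_search()
    print(round(x_star, 3))   # close to the true optimum 0.3
    ```

    With 8 rounds of 5 runs each, 40 experiments localize the optimum to within about 1%, whereas a single preset grid of 40 runs would fix its resolution in advance.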

    Registration of 3D Face Scans with Average Face Models

    The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a costly one-to-all registration approach, which requires the registration of each facial surface to all faces in the gallery. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. We propose a new algorithm for constructing an AFM and show that it works better than a recent approach. Extending the single-AFM approach, we propose to employ category-specific alternative AFMs for registration, and evaluate the effect on subsequent classification. We perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender- and morphology-based groupings. We show that the automatic clustering approach separates the faces into gender and morphology groups, consistent with the other-race effect reported in the psychology literature. We inspect thin-plate spline and iterative closest point based registration schemes under manual or automatic landmark detection prior to registration. Finally, we describe and analyse a regular re-sampling method that significantly increases the accuracy of registration.
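    The rigid-alignment step at the heart of iterative closest point registration can be sketched as follows, assuming correspondences between scan points and model points are already known (ICP alternates this solve with a nearest-neighbour correspondence search). The data here are synthetic and the Kabsch/SVD solver is a generic textbook formulation, not the paper's registration pipeline.

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Return R, t minimising ||R @ src_i + t - dst_i|| over rigid motions."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        return R, t

    # synthetic "scan" and "average face model" related by a known rigid motion
    rng = np.random.default_rng(1)
    scan = rng.normal(size=(50, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    model = scan @ R_true.T + np.array([0.1, -0.2, 0.5])

    R, t = rigid_align(scan, model)
    aligned = scan @ R.T + t
    print(np.abs(aligned - model).max())   # essentially zero
    ```

    In a full ICP loop this closed-form solve is cheap; the cost lies in the repeated correspondence search, which is why registering once to an AFM is so much cheaper than one-to-all registration.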

    Design guidelines for assessing and controlling spacecraft charging effects

    The need for uniform criteria, or guidelines, to be used in all phases of spacecraft design is discussed. Guidelines were developed for the control of absolute and differential charging of spacecraft surfaces by the lower-energy space charged-particle environment; interior charging due to higher-energy particles is not considered. A guide to good design practices for assessing and controlling charging effects is presented, and uniform design practices for all space vehicles are outlined.

    Combined parametric and worst case circuit analysis via Taylor models

    This paper proposes a novel paradigm to generate a parameterized model of the response of linear circuits with the inclusion of worst case bounds. The methodology leverages the so-called Taylor models and represents parameter-dependent responses in terms of a multivariate Taylor polynomial, in conjunction with an interval remainder accounting for the approximation error. The Taylor model representation is propagated from input parameters to circuit responses through a suitable redefinition of the basic operations, such as addition, multiplication or matrix inversion, that are involved in the circuit solution. Specifically, the remainder is propagated in a conservative way based on the theory of interval analysis. While the polynomial part provides an accurate, analytical and parametric representation of the response as a function of the selected design parameters, the complementary information on the remainder error yields a conservative, yet tight, estimation of the worst case bounds. Specific and novel solutions are proposed to implement complex-valued matrix operations and to overcome well-known issues in state-of-the-art Taylor model theory, such as the determination of the upper and lower bounds of the multivariate polynomial part. The proposed framework is applied to the frequency-domain analysis of linear circuits. An in-depth discussion of the fundamental theory is complemented by a selection of relevant examples aimed at illustrating the technique and demonstrating its feasibility and strength.
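    The basic Taylor-model mechanics (polynomial part plus conservatively propagated interval remainder) can be sketched with a first-order, single-parameter toy. This is an invented minimal illustration, not the paper's complex-valued multivariate machinery: a quantity is c0 + c1*p for a parameter p in [-1, 1], and whatever a product cannot represent at first order (the p**2 term, cross terms with remainders) is absorbed into the interval remainder.

    ```python
    def iadd(a, b):                     # interval addition
        return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):                     # interval multiplication
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    class TM:
        """First-order Taylor model c0 + c1*p + [rem], p in [-1, 1]."""
        def __init__(self, c0, c1, rem=(0.0, 0.0)):
            self.c0, self.c1, self.rem = c0, c1, rem

        def __add__(self, other):
            return TM(self.c0 + other.c0, self.c1 + other.c1,
                      iadd(self.rem, other.rem))

        def __mul__(self, other):
            c0 = self.c0 * other.c0
            c1 = self.c0 * other.c1 + self.c1 * other.c0
            q = self.c1 * other.c1          # p**2 term; p**2 ranges over [0, 1]
            rem = (min(0.0, q), max(0.0, q))
            # cross terms involving the remainders, bounded by interval arithmetic
            a_rng = (self.c0 - abs(self.c1), self.c0 + abs(self.c1))
            b_rng = (other.c0 - abs(other.c1), other.c0 + abs(other.c1))
            rem = iadd(rem, imul(self.rem, iadd(b_rng, other.rem)))
            rem = iadd(rem, imul(other.rem, a_rng))
            return TM(c0, c1, rem)

        def bounds(self):                   # conservative worst-case bounds
            return (self.c0 - abs(self.c1) + self.rem[0],
                    self.c0 + abs(self.c1) + self.rem[1])

    x = TM(1.0, 0.5)        # the response 1 + 0.5*p
    z = x * x               # (1 + 0.5*p)**2; true range is [0.25, 2.25]
    print(z.bounds())       # conservative enclosure, e.g. (0.0, 2.25)
    ```

    The polynomial part of `z` stays analytic in `p`, while the remainder guarantees enclosure: the computed bounds always contain the true worst case, at the price of some pessimism (here the lower bound 0.0 versus the true 0.25).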