
    A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching

    Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes the impact of both process variations and multiple input switching. The proposed model uses an orthogonal-polynomial-based probabilistic collocation method to construct an analytical delay equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation. Comment: Submitted on behalf of EDAA (http://www.edaa.com/)
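    To make the collocation idea concrete, here is a minimal sketch of an orthogonal-polynomial probabilistic collocation fit for a gate delay that depends on one normalized Gaussian process parameter. The `simulate_delay` function is a hypothetical stand-in for the circuit timing evaluation, and the degree-2 expansion and its coefficients are illustrative choices, not the paper's actual setup.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

# Hypothetical stand-in for a circuit timing run: gate delay as a smooth
# function of one normalized process parameter x ~ N(0, 1).
def simulate_delay(x):
    return 10.0 + 1.5 * x + 0.3 * x**2   # picoseconds, illustrative only

order = 2                                 # expansion degree n
# Collocation points: roots of the degree-(n+1) Hermite polynomial He_{n+1}.
pts = He.hermeroots([0] * (order + 1) + [1])

# Solve for coefficients c_k of delay(x) ~= sum_k c_k He_k(x) by matching
# the model at the collocation points (a square linear system).
V = np.stack([He.hermeval(pts, [0] * k + [1]) for k in range(order + 1)], axis=1)
c = np.linalg.solve(V, simulate_delay(pts))

# Orthogonality of He_k under the Gaussian weight gives moments in closed form:
# E[delay] = c_0,  Var[delay] = sum_{k>=1} c_k^2 * k!
mean = c[0]
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
print(f"mean ~ {mean:.3f} ps, std ~ {np.sqrt(var):.3f} ps")
```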

    Statistical Static Timing Analysis for Performance of Logic Gates

    In recent nanometre technologies, variation in gate propagation delay is a major concern. This paper proposes a new model for gate delay propagation using Statistical Static Timing Analysis, and its results are compared with Monte-Carlo analysis. The proposed model uses statistical analysis to find the propagation delay of logic gates accurately, with reduced simulation time, for 16 nm technology. DOI: 10.17762/ijritcc2321-8169.15057
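    As a rough illustration of why SSTA is benchmarked against Monte-Carlo analysis, the sketch below propagates a first-order (canonical) linear delay model analytically and checks the resulting moments against Monte-Carlo sampling. The sensitivity coefficients are invented values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order canonical delay model common in parameterized SSTA:
# d = d0 + a1*dL + a2*dVth, with dL, dVth ~ N(0, 1) (normalized variations).
d0, a = 12.0, np.array([0.8, 0.5])   # nominal delay (ps) and sensitivities

# Analytic moments follow directly from the linear form.
mean_ssta = d0
std_ssta = np.linalg.norm(a)          # sqrt(a1^2 + a2^2) for independent sources

# Monte-Carlo reference: sample the variations and evaluate the model.
samples = d0 + rng.standard_normal((100_000, 2)) @ a
print(f"SSTA : mean={mean_ssta:.3f}, std={std_ssta:.3f}")
print(f"MC   : mean={samples.mean():.3f}, std={samples.std():.3f}")
```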

    Statistical Static Timing Analysis for Digital Circuitry

    This paper studies the impact of process variations on delay in 32 nm CMOS technology. As the magnitude of process variations has grown, there has been an increasing realization that traditional design methodologies, both for analysis and optimization, are no longer acceptable. The main finding is that Statistical Static Timing Analysis gives results close to the best available method while consuming far less time, so we consider Statistical Static Timing Analysis the most acceptable method for timing analysis of digital circuits. The variation in propagation delay is a major concern; the proposed system accounts for variations in the design process and finds the propagation delay. This is compared with another method, the Monte Carlo method, and the simulation time required by both methods is considered.
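    The core operation that lets SSTA avoid Monte Carlo sampling is the statistical max of Gaussian arrival times. Below is a small sketch of C. E. Clark's classic moment-matching approximation, widely used for this purpose in the SSTA literature; the arrival-time numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def clark_max(m1, s1, m2, s2, rho=0.0):
    """First two moments of max(X, Y) for jointly Gaussian arrival times,
    via Clark's moment-matching approximation."""
    theta = np.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
    alpha = (m1 - m2) / theta
    t = norm.cdf(alpha)                  # probability that X exceeds Y
    mean = m1 * t + m2 * (1 - t) + theta * norm.pdf(alpha)
    second = ((m1**2 + s1**2) * t + (m2**2 + s2**2) * (1 - t)
              + (m1 + m2) * theta * norm.pdf(alpha))
    return mean, np.sqrt(second - mean**2)

# Two path arrival times meeting at a gate input (illustrative numbers):
mean, std = clark_max(100.0, 5.0, 98.0, 7.0, rho=0.3)
print(f"max arrival: mean ~ {mean:.2f} ps, std ~ {std:.2f} ps")
```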

    Surrogate-Assisted Optimisation-Based Verification & Validation

    This thesis deals with the application of optimisation-based Validation and Verification (V&V) analysis to aerospace vehicles in order to determine their worst-case performance metrics. To this end, three aerospace models relating to satellite and launcher vehicles, provided by the European Space Agency (ESA) on various projects, are utilised. As a means to quicken the process of optimisation-based V&V analysis, surrogate models are developed using the polynomial chaos method. Surrogate models provide a quick way to ascertain the worst-case directions, as the computation time required to evaluate them is very small: a single evaluation of a surrogate model takes less than a second. Another contribution of this thesis is the evaluation of the operational safety margin metric with the help of surrogate models. The operational safety margin is a metric defined in the uncertain parameter space and is related to the distance between the nominal parameter value and the first instance of performance criteria violation. This metric can help to gauge the robustness of the controller, but it requires evaluation of the model in the constraint function and hence can be computationally intensive. As surrogate models are computationally very cheap, they are utilised to rapidly compute the operational safety margin metric. However, this metric focuses only on finding a safe region around the nominal parameter value, and the possibility of other disjoint safe regions is not explored. In order to find other safe or failure regions in the parameter space, the Bernstein expansion method is applied to the surrogate polynomial models to help characterise the uncertain parameter space into safe and failure regions. Furthermore, binomial failure analysis is used to assign failure probabilities to failure regions, which can help the designer determine whether a re-design of the controller is required. The methodologies of optimisation-based V&V, surrogate modelling, the operational safety margin, the Bernstein expansion method and risk assessment have been combined to form the WCAT-II MATLAB toolbox.
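    As a sketch of the surrogate idea, the snippet below fits a polynomial chaos surrogate by least-squares regression over one uniformly distributed uncertain parameter and then sweeps it densely for a worst-case search. The `expensive_model` function and all numbers are hypothetical placeholders for the ESA models; the thesis's actual construction may differ.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

# Hypothetical expensive simulation: a performance metric of a vehicle model
# over one uncertain parameter u in [-1, 1] (e.g., a normalized mass offset).
def expensive_model(u):
    return np.sin(2.5 * u) + 0.1 * u**2

degree, n_train = 6, 40
u_train = rng.uniform(-1.0, 1.0, n_train)

# Legendre polynomials are orthogonal for uniform uncertainty on [-1, 1];
# fit the chaos coefficients by regression on the training evaluations.
coeffs = legendre.legfit(u_train, expensive_model(u_train), degree)

# The surrogate is a cheap polynomial evaluation -- fast enough for
# optimisation-based worst-case search over the uncertain parameter space.
u_search = np.linspace(-1.0, 1.0, 10_001)
y_hat = legendre.legval(u_search, coeffs)
i_worst = np.argmax(np.abs(y_hat))
print(f"predicted worst case at u ~ {u_search[i_worst]:.3f}, "
      f"metric ~ {y_hat[i_worst]:.3f}")
```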

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research in the several fields associated with RL, a body of work that has been growing rapidly and producing a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a very broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while other chapters focus mostly on the applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine and Industrial Logistics.

    Cellular Automata

    Modelling and simulation are disciplines of major importance for science and engineering. There is no science without models, and simulation has nowadays become a very useful, sometimes indispensable, tool for the development of both science and engineering. The main attractive feature of cellular automata is that, in spite of their conceptual simplicity, which allows ease of implementation for computer simulation and, in principle, detailed and complete mathematical analysis, they are able to exhibit a wide variety of amazingly complex behaviour. This feature of cellular automata has attracted the attention of researchers from a wide variety of divergent fields of the exact sciences and engineering, but also of the social sciences, and sometimes beyond. The collective complex behaviour of numerous systems, which emerges from the interaction of a multitude of simple individuals, is being conveniently modelled and simulated with cellular automata for very different purposes. In this book, a number of innovative applications of cellular automata models in the fields of Quantum Computing, Materials Science, Cryptography and Coding, and Robotics and Image Processing are presented.
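    A tiny example of the "conceptual simplicity, complex behaviour" point: the elementary cellular automaton Rule 110, whose one-line local update rule is known to be Turing-complete, can be simulated in a few lines. This is a minimal sketch with periodic boundaries, not code from the book.

```python
import numpy as np

def step(cells, rule=110):
    """One synchronous update of an elementary (binary, radius-1) CA.
    Each new cell depends only on its left/self/right neighbours."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)   # periodic boundary
    idx = 4 * left + 2 * cells + right                    # neighbourhood code 0..7
    table = (rule >> np.arange(8)) & 1                    # rule number as lookup table
    return table[idx]

cells = np.zeros(64, dtype=int)
cells[32] = 1                                             # single seed cell
for _ in range(24):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```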

    H2 Control for Improved Stability of Multi-area Electric Power System with High Levels of Inverter-Based Generation

    Increased generation capacity from non-dispatchable energy resources such as wind and solar has created challenges to ensuring the reliable delivery of electric power. This research develops a systematic three-step method of assessing the reliability of electric power systems under a variety of possible fault conditions, to ensure that overall system stability is preserved in a manner that meets regulatory requirements. The first step is a risk-based reliability method (RBRM) that weighs the probability of a line outage against the severity of its impact, allowing system planners to judiciously allocate expenses for reliability improvements based on the greatest economic benefit. The second step is the synchrophasor validation method (SVM), which allows system planners and analysts to develop accurate models of electric power system behavior, improving decision-making when implementing new system designs and equipment choices. The third step is the development of norm-based wide-area control methods that optimize system stability and reliability based on the statistical characteristics found in the first two steps. This norm-based approach includes calculating optimal values for parameters of flexible AC transmission system (FACTS) devices and high-voltage direct current (HVDC) links so that results remain within the regulatory requirements of the North American Electric Reliability Corporation (NERC). Power flow and frequency criteria are used to verify conformance with the regulations. These criteria are evaluated under N-1-1 conditions in two reduced-order models to demonstrate the ability of the norm-based wide-area controller to maintain the performance of these systems within acceptable ranges. The simulation results confirm the benefits of the proposed technique in meeting regulatory requirements under N-1-1 contingencies in electric power systems with large amounts of renewable energy resources.
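    For a flavour of what a norm-based controller optimizes, the sketch below computes the H2 norm of a small LTI system from the controllability Gramian obtained by solving a Lyapunov equation. The two-state system is an invented toy, not one of the thesis's reduced-order power system models.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative two-state swing-like system: x' = A x + B w,  y = C x.
A = np.array([[0.0, 1.0],
              [-1.2, -0.4]])     # stable (Hurwitz) by construction
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# H2 norm via the controllability Gramian P, which solves
# A P + P A^T + B B^T = 0; then ||G||_H2 = sqrt(trace(C P C^T)).
P = solve_continuous_lyapunov(A, -B @ B.T)
h2 = np.sqrt(np.trace(C @ P @ C.T))
print(f"||G||_H2 ~ {h2:.4f}")
```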

    Forecasting Models for Integration of Large-Scale Renewable Energy Generation to Electric Power Systems

    Amid growing concerns about climate change and the depletion of non-renewable energy sources, variable renewable energy sources (VRESs) are considered a feasible substitute for conventional, environment-polluting fossil-fuel-based power plants. Furthermore, the transition towards clean power systems requires additional transmission capacity. Dynamic thermal line rating (DTLR) is being considered as a potential solution to enhance existing transmission line capacity and omit or postpone transmission system expansion planning, but DTLR is highly dependent on weather variations. With increasing accommodation of VRESs and application of DTLR, the resulting fluctuations and variations impose severe and unprecedented challenges on power system operation. Therefore, short-term forecasting of large-scale VRESs and DTLR plays a crucial role in electric power system operation problems. To this end, this thesis is devoted to developing forecasting models for two types of large-scale VRESs (i.e., wind and tidal) and for DTLR.

    Deterministic prediction can be employed for a variety of power system operation problems solved by deterministic optimization. The outcomes of deterministic prediction can also be employed for conditional probabilistic prediction, which can be used for modeling uncertainty in power system operation problems with robust optimization, chance-constrained optimization, etc. By virtue of the importance of deterministic prediction, deterministic prediction models are developed. Prevalently, time-frequency decomposition approaches are adopted to decompose the wind power time series (TS) into several less non-stationary and less non-linear components, which can be predicted more precisely. However, in addition to non-stationarity and nonlinearity, wind power TS demonstrate chaotic characteristics, which reduce their predictability. In this regard, a wind power generation prediction model that accounts for the chaosity of the wind power generation TS is proposed. The model consists of a novel TS decomposition approach, named multi-scale singular spectrum analysis (MSSSA), and least squares support vector machines (LSSVMs). Furthermore, a deterministic tidal TS prediction model is developed, employing a variant of empirical mode decomposition (EMD) that alleviates the issues associated with EMD. To further improve prediction accuracy, the impact of the different components of the wind power TS, with their different frequencies (scales), on the spatiotemporal modeling of the wind farm is assessed. Consequently, a multiscale spatiotemporal wind power prediction model is developed, using information theory-based feature selection, wavelet decomposition, and LSSVM.

    Power system operation problems with robust optimization and interval optimization require prediction intervals (PIs) to model the uncertainty of renewables. Advanced PI models are mainly based on non-differentiable and non-convex cost functions, which make the use of heuristic optimization for tuning the large number of unknown parameters of the prediction models inevitable. However, heuristic optimization suffers from several issues (e.g., being trapped in local optima, irreproducibility, etc.). To this end, a new wind power PI (WPPI) model, based on a bi-level optimization structure, is put forward; in the proposed WPPI, the main unknown parameters of the prediction model are globally tuned by optimizing a convex and differentiable cost function.

    In line with resolving the non-differentiability and non-convexity of the PI formulation, an asymmetrically adaptive quantile regression (AAQR) that benefits from a linear formulation is proposed for tidal uncertainty modeling. In prevalent QR-based PI models, for a specified reliability level, the probabilities of the quantiles are selected symmetrically with respect to the median probability. However, it is found that asymmetrical and adaptive selection of quantiles with respect to the median can provide more efficient PIs. To make the formulation of AAQR linear, an extreme learning machine (ELM) is adopted as the prediction engine. Prevalently, the parameters of the activation functions in ELM are selected randomly, while different sets of random values might result in dissimilar prediction accuracy; to this end, a heuristic optimization is devised to tune the parameters of the activation functions. Also, to enhance the accuracy of probabilistic DTLR, the consideration of latent variables in DTLR prediction is assessed. It is observed that the convective cooling rate can provide informative features for DTLR prediction. To address the high-dimensional feature space in DTLR, a DTLR prediction model based on deep learning and the consideration of latent variables is put forward.

    Numerical results of this thesis are provided based on realistic data. The simulations confirm the superiority of the proposed models in comparison to traditional benchmark models, as well as state-of-the-art models.
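    As an illustration of quantile-regression-based prediction intervals with an ELM feature map, the sketch below fits an asymmetric pair of quantiles around the median by subgradient descent on the pinball loss. The toy signal, quantile levels, and hyperparameters are invented, and the gradient scheme is a stand-in: the thesis's AAQR relies on a linear formulation rather than this iterative fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy tidal-like signal with heteroscedastic noise.
x = np.linspace(0, 4 * np.pi, 400)[:, None]
y = np.sin(x[:, 0]) + (0.1 + 0.1 * (1 + np.sin(x[:, 0]))) * rng.standard_normal(400)

# ELM feature map: a random, untrained hidden layer; only the output
# weights are learned, which keeps the quantile problem linear in them.
n_hidden = 50
W, b = rng.standard_normal((1, n_hidden)), rng.standard_normal(n_hidden)
H = np.tanh(x @ W + b)

def fit_quantile(H, y, tau, lr=0.05, epochs=2000):
    """Subgradient descent on the pinball loss for quantile level tau."""
    beta = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = y - H @ beta
        grad = -H.T @ np.where(err > 0, tau, tau - 1.0) / len(y)
        beta -= lr * grad
    return beta

# Asymmetric pair of quantiles around the median, e.g. (0.10, 0.85),
# rather than the symmetric (0.075, 0.925) for a nominal 85% interval.
lo, hi = H @ fit_quantile(H, y, 0.10), H @ fit_quantile(H, y, 0.85)
coverage = np.mean((y >= lo) & (y <= hi))
print(f"empirical coverage ~ {coverage:.3f}")
```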