
    Techniques for the Fast Simulation of Models of Highly dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
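    As a concrete illustration of the core idea, the following sketch estimates a rare failure probability for a single component with an exponential lifetime, once by crude Monte Carlo and once by importance sampling under an inflated failure rate; the rates, mission time, and sample size are illustrative assumptions, not values from the paper.

```python
# Minimal importance-sampling sketch for a rare dependability measure,
# assuming a single component with exponential lifetime (illustrative).
import numpy as np

rng = np.random.default_rng(0)

lam = 1e-5          # true failure rate; failures are rare
t_mission = 10.0    # mission time; we estimate p = P(T_fail <= t_mission)
p_exact = 1.0 - np.exp(-lam * t_mission)

n = 100_000

# Crude Monte Carlo: almost never observes a failure at this sample size.
t_crude = rng.exponential(1.0 / lam, size=n)
p_crude = np.mean(t_crude <= t_mission)

# Importance sampling: simulate under a much larger rate so failures are
# common, then reweight by the likelihood ratio f(x)/q(x).
lam_q = 1.0 / t_mission                          # biased sampling rate
x = rng.exponential(1.0 / lam_q, size=n)
w = (lam / lam_q) * np.exp(-(lam - lam_q) * x)   # likelihood ratio
p_is = np.mean((x <= t_mission) * w)

print(f"exact {p_exact:.3e}  crude {p_crude:.3e}  IS {p_is:.3e}")
```

    With the nominal rate, the crude estimator typically returns zero here, while the reweighted estimator recovers the correct order of magnitude; that efficiency gap is precisely the run-length reduction the survey discusses.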

    Bayesian improved cross entropy method with categorical mixture models

    We employ the Bayesian improved cross entropy (BiCE) method for rare event estimation in static networks and choose the categorical mixture as the parametric family to capture the dependence among network components. At each iteration of the BiCE method, the mixture parameters are updated through the weighted maximum a posteriori (MAP) estimate, which mitigates the overfitting issue of the standard improved cross entropy (iCE) method through a novel balanced prior, and we propose a generalized version of the expectation-maximization (EM) algorithm to approximate this weighted MAP estimate. The resulting importance sampling estimator is proven to be unbiased. For choosing a proper number of components K in the mixture, we compute the Bayesian information criterion (BIC) of each candidate K as a by-product of the generalized EM algorithm. The performance of the proposed method is investigated through a simple illustration, a benchmark study, and a practical application. In all these numerical examples, the BiCE method results in an efficient and accurate estimator that significantly outperforms the standard iCE method and the BiCE method with the independent categorical distribution.
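    The paper's categorical mixtures and weighted MAP updates do not fit a short sketch, but the baseline they improve upon, a cross-entropy update of an independent categorical (Bernoulli) model for static network reliability, does; the 5-edge bridge network, the nominal failure probabilities, and the proposal below are all illustrative assumptions.

```python
# One cross-entropy (CE) update for an independent Bernoulli model of
# edge failures: the baseline the BiCE paper improves on (illustrative).
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # bridge net, s=0, t=3
p_fail = np.full(5, 1e-3)            # nominal edge failure probabilities

def disconnected(up):
    """True if no s-t path exists over the working ('up') edges."""
    reach, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for k, (a, b) in enumerate(edges):
            if up[k] and u in (a, b):
                v = b if u == a else a
                if v not in reach:
                    reach.add(v)
                    stack.append(v)
    return 3 not in reach

rng = np.random.default_rng(1)
n = 50_000

# Stage 1: sample from a heavily biased proposal, then apply the CE
# update: the weighted maximum-likelihood estimate of each failure
# probability given the rare event (BiCE instead uses a weighted MAP
# estimate with a balanced prior to curb overfitting).
q = np.full(5, 0.5)
fail = rng.random((n, 5)) < q
hit = np.array([disconnected(~f) for f in fail])
lr = np.prod(np.where(fail, p_fail / q, (1 - p_fail) / (1 - q)), axis=1)
w = hit * lr
q_ce = np.clip((w[:, None] * fail).sum(0) / w.sum(), 1e-4, 1 - 1e-4)

# Stage 2: unbiased importance-sampling estimate under the CE parameters.
fail = rng.random((n, 5)) < q_ce
hit = np.array([disconnected(~f) for f in fail])
lr = np.prod(np.where(fail, p_fail / q_ce, (1 - p_fail) / (1 - q_ce)), axis=1)
print(f"network unreliability ~ {np.mean(hit * lr):.3e}")
```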

    Short- and long-term wind turbine power output prediction

    In the wind energy industry, it is of great importance to develop models that accurately forecast the power output of a wind turbine, as such predictions are used for wind farm location assessment, power pricing and bidding, monitoring, and preventive maintenance. As a first step, and following the guidelines of the existing literature, we use the supervisory control and data acquisition (SCADA) data to model the wind turbine power curve (WTPC). We explore various parametric and non-parametric approaches for the modeling of the WTPC, such as parametric logistic functions, and non-parametric piecewise linear, polynomial, or cubic spline interpolation functions. We demonstrate that all aforementioned classes of models are rich enough (with respect to their relative complexity) to accurately model the WTPC, as their mean squared error (MSE) is close to the MSE lower bound calculated from the historical data. We further enhance the accuracy of our proposed model by incorporating additional environmental factors that affect the power output, such as the ambient temperature and the wind direction. However, all aforementioned models, when it comes to forecasting, have an intrinsic limitation: they cannot capture the inherent auto-correlation of the data. To overcome this limitation, we show that adding a properly scaled ARMA modeling layer increases short-term prediction performance, while keeping the long-term prediction capability of the model.
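    A minimal sketch of this two-stage structure, assuming synthetic SCADA-like data, a 3-parameter logistic power curve, and an ARMA(1,1) residual layer; the model orders and all constants are illustrative, not the paper's fitted values.

```python
# Two-stage sketch: static logistic power curve + ARMA residual layer.
import numpy as np
from scipy.optimize import curve_fit
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)

def logistic_wtpc(v, p_max, k, v_mid):
    """3-parameter logistic wind turbine power curve (power vs speed)."""
    return p_max / (1.0 + np.exp(-k * (v - v_mid)))

# Synthetic SCADA-like series: wind speed plus auto-correlated residuals.
n = 2000
v = np.clip(8 + 2 * rng.standard_normal(n), 0, 25)
eps = np.zeros(n)
for t in range(1, n):                      # AR(1) residual process
    eps[t] = 0.7 * eps[t - 1] + 20 * rng.standard_normal()
power = logistic_wtpc(v, 2000, 0.9, 9.0) + eps

# Stage 1: fit the static power curve to (speed, power) pairs.
theta, _ = curve_fit(logistic_wtpc, v, power, p0=[1500, 1.0, 8.0])
resid = power - logistic_wtpc(v, *theta)

# Stage 2: an ARMA(1,1) layer on the residuals captures the
# auto-correlation the static curve cannot, aiding short-term forecasts.
arma = ARIMA(resid, order=(1, 0, 1)).fit()
print(theta, arma.params[:3])
```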

    On the Reliability Estimation of Stochastic Binary System

    A stochastic binary system is a multi-component on-off system subject to random independent failures of its components. After potential failures, the state of the system is determined by a logical function (called the structure function) that establishes whether the system is operational or not. Stochastic binary systems (SBS) serve as a natural generalization of network reliability analysis, where the goal is to find the probability of correct operation of the system (in terms of connectivity, network diameter, or different measures of success). A particular subclass of interest is stochastic monotone binary systems (SMBS), which are characterized by a non-decreasing structure function. We explore the combinatorics of SBS, which provide building blocks for system reliability estimation, looking at minimal non-operational subsystems, called mincuts. One key concept for understanding the underlying combinatorics of SBS is duality. As methods for exact evaluation take exponential time, we discuss the use of Monte Carlo algorithms. In particular, we discuss the F-Monte Carlo method for estimating the reliability polynomial for homogeneous SBS; the Recursive Variance Reduction (RVR) method for SMBS, which builds upon the efficient determination of mincuts; and three additional methods that combine in different ways the well-known techniques of Permutation Monte Carlo and Splitting. These last three methods are based on a stochastic process called the Creation Process, a temporal evolution of the SBS, which is otherwise static by definition. All the methods are compared using different topologies, showing large efficiency gains over the basic Monte Carlo scheme.
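    For reference, the basic Monte Carlo scheme that all the listed methods are measured against reduces to sampling independent component states and evaluating the structure function; the 2-out-of-3 monotone system and component probabilities below are illustrative assumptions, not an example from the paper.

```python
# Basic Monte Carlo for a stochastic binary system: sample component
# states, evaluate a monotone structure function (illustrative system).
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
p_up = np.array([0.99, 0.95, 0.90])   # per-component operation probs

def structure(x):
    """Monotone structure function: system works if >= 2 components do."""
    return x.sum(axis=-1) >= 2

n = 1_000_000
states = rng.random((n, 3)) < p_up
r_hat = structure(states).mean()

# Exact reliability for comparison: sum over all operational states.
r_exact = sum(
    np.prod([p if b else 1 - p for p, b in zip(p_up, bits)])
    for bits in product([0, 1], repeat=3) if sum(bits) >= 2
)
print(f"MC {r_hat:.5f}  exact {r_exact:.5f}")
```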

    Two-Layered Superposition of Broadcast/Multicast and Unicast Signals in Multiuser OFDMA Systems

    We study optimal delivery strategies of one common and K independent messages from a source to multiple users in wireless environments. In particular, two-layered superposition of broadcast/multicast and unicast signals is considered in a downlink multiuser OFDMA system. In the literature and industry, two-layer superposition is often considered a pragmatic compromise between the simple but suboptimal orthogonal multiplexing (OM) and the optimal but complex fully-layered non-orthogonal multiplexing. In this work, we show that only two layers are necessary to achieve the maximum sum-rate when the common message has higher priority than the K individual unicast messages, and that OM cannot be sum-rate optimal in general. We develop an algorithm that finds the optimal power allocation over the two layers and across the OFDMA radio resources in static channels and a class of fading channels. Two main use-cases are considered: i) multicast and unicast multiplexing, when K users with uplink capabilities request both common and independent messages, and ii) broadcast and unicast multiplexing, when the common message targets receive-only devices and K users with uplink capabilities additionally request independent messages. Finally, we develop a transceiver design for broadcast/multicast and unicast superposition transmission based on the LTE-A-Pro physical layer and show with numerical evaluations in mobile environments with multipath propagation that the capacity improvements can be translated into significant practical performance gains compared to the orthogonal schemes in the 3GPP specifications. We also analyze the impact of real channel estimation and show that significant gains in terms of spectral efficiency or coverage area remain available even with estimation errors and imperfect interference cancellation for the two-layered superposition system.
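    A minimal sketch of the rate computation behind the superposition-versus-OM comparison, assuming one subcarrier, two users, and successive interference cancellation at the receivers; the channel gains, power split, and noise level are illustrative.

```python
# Achievable rates for two-layer superposition (common + private layer)
# with successive interference cancellation, versus orthogonal
# multiplexing. All channel parameters are illustrative assumptions.
import numpy as np

g = np.array([1.0, 0.2])   # channel gains of the two users
P, n0 = 1.0, 0.1           # total power and noise power
a = 0.6                    # fraction of power on the common layer

# Superposition: every user decodes the common layer first, treating the
# unicast layer as noise; the multicast rate is set by the worst user.
r_common = np.min(np.log2(1 + a * P * g / (n0 + (1 - a) * P * g)))
# The unicast user (here user 0) cancels the common layer, then decodes.
r_private = np.log2(1 + (1 - a) * P * g[0] / n0)

# Orthogonal multiplexing: split the resource (fraction b for multicast).
b = 0.6
r_common_om = b * np.min(np.log2(1 + P * g / n0))
r_private_om = (1 - b) * np.log2(1 + P * g[0] / n0)

print(f"superposition sum {r_common + r_private:.3f} bit/s/Hz, "
      f"OM sum {r_common_om + r_private_om:.3f} bit/s/Hz")
```

    With these parameters the superposition sum-rate exceeds the OM sum-rate, which is the qualitative point the paper proves in general: OM cannot be sum-rate optimal when the common message has priority.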

    Operational Reliability and Risk Evaluation Frameworks for Sustainable Electric Power Systems

    Driven by a confluence of environmental, social, technical, and economic factors, traditional electric power systems are undergoing a momentous transition toward sustainable electric power systems. One of the important facets of this transformation is the inclusion of a high penetration of variable renewable energy sources, chief among them wind power. The new source of uncertainty that stems from imperfect wind power forecasts, coupled with the traditional uncertainties in electric power systems, such as unplanned component outages, introduces new challenges for power system operators. In particular, the short-term or operational reliability of sustainable electric power systems could be at increased risk, as limited remedial resources are available to the operators to handle uncertainties and outages during system operation. Furthermore, as sustainable electric power systems and natural gas networks become increasingly coupled, the impacts of outages in one network can quickly propagate into the other, thereby reducing the operational reliability of integrated electric power-gas networks (IEPGNs). In light of the above discussion, a successful transition to sustainable electric power systems necessitates a new set of tools to assist power system operators in making risk-informed decisions amid multiple sources of uncertainty. Such tools should be able to realistically evaluate the hour- and day-ahead operational reliability and risk indices of sustainable electric power systems in a computationally efficient manner, while giving full attention to the uncertainties of wind power and IEPGNs. To this end, research is conducted on five related topics. First, a simulation-based framework is proposed to evaluate the operational reliability indices of generating systems using the fixed-effort generalized splitting approach. Simulations show improvement in computational performance when compared to traditional Monte Carlo simulation (MCS). Second, a hybrid analytical-simulation framework is proposed for the short-term risk assessment of wind-integrated power systems. The area risk method, an analytical technique, is combined with importance sampling (IS)-based MCS to integrate the proposed reliability models of wind speed and calculate the risk indices with a low computational burden. Case studies validate the efficacy of the proposed framework. Third, the IS-based MCS framework is extended to include the proposed data-driven probabilistic models of wind power, avoiding the drawbacks of wind speed models. Fourth, a comprehensive framework for the operational reliability evaluation of IEPGNs is developed. This framework includes new reliability models for natural gas pipelines and natural gas-fired generators with dual-fuel capabilities. Simulations show the importance of considering the coupling between the two networks while evaluating operational reliability indices. Finally, a new chance-constrained optimization model is proposed that considers operational reliability constraints while determining the optimal operational schedule for microgrids. Case studies show the tradeoff between reliability and operating costs when scheduling the microgrids.
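    As a sketch of the IS-based MCS used in the second and third topics, the following estimates a loss-of-load probability (LOLP) for a small generating system by inflating unit outage probabilities and correcting with likelihood ratios; the unit data, outage rates, and load level are illustrative assumptions, not the dissertation's test systems.

```python
# Importance-sampling Monte Carlo for a short-term risk index (LOLP)
# of a small generating system. All system data are illustrative.
import numpy as np

rng = np.random.default_rng(4)

cap = np.array([200.0] * 10 + [100.0] * 10)   # unit capacities (MW)
p_out = np.full(20, 0.02)                     # forced outage rates
load = 2700.0                                 # demand (MW); total cap 3000

n = 200_000

# Bias: inflate outage probabilities so capacity shortfalls are common,
# then correct with the per-unit likelihood ratios.
q_out = np.full(20, 0.2)
out = rng.random((n, 20)) < q_out
avail = ((~out) * cap).sum(axis=1)
lr = np.prod(np.where(out, p_out / q_out, (1 - p_out) / (1 - q_out)), axis=1)
lolp = np.mean((avail < load) * lr)

print(f"LOLP ~ {lolp:.3e}")
```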

    Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation

    Modern communication and identification products impose demanding constraints on the reliability of components. Because of this, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques to obtain uncertainty quantification, also at the tails of the distributions. These sampling techniques should outperform standard Monte Carlo techniques, since the latter are normally not efficient enough to deal with tail probabilities. One such technique, importance sampling, has been applied successfully to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even going beyond 6-sigma variations of the parameters involved. Apart from this, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface facility for statistics and optimization. To efficiently derive the coefficients in the expansions, one either has to solve a large number of problems or one huge combined problem. Here, parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only in a minor way. These parameters can simply be set to a fixed value. The remaining parameters can be viewed as dominant. Preservation of the variation also allows one to make statements about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
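    A minimal sketch of the mean-shift importance sampling idea behind the SRAM yield example, assuming a scalar Gaussian performance model as a stand-in for the circuit simulation; the 6-sigma threshold and the shift are illustrative.

```python
# Mean-shift importance sampling for a tail probability beyond 6 sigma,
# the kind of estimate used in SRAM failure analysis. The scalar
# Gaussian "performance" model is an illustrative stand-in.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

gamma = 6.0                       # failure threshold, in sigma units
n = 100_000

# Shift the sampling distribution into the failure region, then reweight
# by the density ratio f(x)/q(x) of the nominal vs shifted Gaussian.
x = rng.normal(loc=gamma, scale=1.0, size=n)
w = norm.pdf(x) / norm.pdf(x, loc=gamma)
p_is = np.mean((x > gamma) * w)

print(f"IS {p_is:.3e}  exact {norm.sf(gamma):.3e}")
```

    Crude Monte Carlo would need on the order of 10^10 samples to see a single 6-sigma failure, whereas the shifted sampler resolves the probability with a modest sample size.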