1,598 research outputs found

    Weighted Sum of Correlated Lognormals: Convolution Integral Solution

    Get PDF
    The probability density function (pdf) for a sum of n correlated lognormal variables is derived as a special convolution integral. The pdf for weighted sums (where the weights can be any real numbers) is also presented. The result in four dimensions was checked by Monte Carlo simulation.
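    The abstract gives no code, but a Monte Carlo check of this kind is easy to sketch. The snippet below (with made-up means, volatilities, correlations and weights, not the paper's four-dimensional test case) draws correlated lognormals via a Cholesky factor and builds an empirical density of the weighted sum against which a convolution-integral pdf could be compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional example: parameters of the underlying
# correlated Gaussian components and arbitrary real-valued weights.
mu = np.array([0.0, 0.1, -0.2, 0.05])
sigma = np.array([0.5, 0.4, 0.6, 0.3])
corr = np.array([[1.0, 0.3, 0.2, 0.1],
                 [0.3, 1.0, 0.4, 0.2],
                 [0.2, 0.4, 1.0, 0.3],
                 [0.1, 0.2, 0.3, 1.0]])
weights = np.array([1.0, -0.5, 2.0, 0.7])

# Draw correlated normals via the Cholesky factor of the covariance.
cov = np.outer(sigma, sigma) * corr
L = np.linalg.cholesky(cov)
z = rng.standard_normal((500_000, 4))
x = mu + z @ L.T                        # correlated normal draws
s = (weights * np.exp(x)).sum(axis=1)   # weighted sum of correlated lognormals

# Empirical density for comparison against a convolution-integral pdf.
pdf_est, edges = np.histogram(s, bins=400, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
```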

    Computing Tails of Compound Distributions Using Direct Numerical Integration

    Full text link
    An efficient adaptive direct numerical integration (DNI) algorithm is developed for computing high quantiles and conditional Value at Risk (CVaR) of compound distributions using characteristic functions. A key innovation of the numerical scheme is an effective tail integration approximation that reduces the truncation errors significantly with little extra effort. High-precision results for the 0.999 quantile and CVaR were obtained for compound losses with heavy tails and a very wide range of loss frequencies using the DNI, Fast Fourier Transform (FFT) and Monte Carlo (MC) methods. These results, particularly relevant to operational risk modelling, can serve as benchmarks for comparing different numerical methods. We found that the adaptive DNI can achieve high accuracy with relatively coarse grids. It is much faster than MC and competitive with FFT in computing high quantiles and CVaR of compound distributions in the case of moderate to high frequencies and heavy tails.
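    As a rough illustration of the benchmark quantities involved (not the paper's DNI or FFT schemes), the sketch below estimates the 0.999 quantile and CVaR of a hypothetical compound Poisson-lognormal loss by plain Monte Carlo; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compound Poisson-lognormal loss: Poisson(lam) frequency,
# Lognormal(mu, sigma) severities. Parameter values are arbitrary.
lam, mu, sigma = 10.0, 0.0, 2.0
n_sims = 500_000

counts = rng.poisson(lam, size=n_sims)
severities = rng.lognormal(mu, sigma, size=counts.sum())
idx = np.repeat(np.arange(n_sims), counts)

total = np.zeros(n_sims)
np.add.at(total, idx, severities)          # aggregate loss per simulation

q = 0.999
var = np.quantile(total, q)                # 0.999 quantile (VaR)
cvar = total[total >= var].mean()          # conditional VaR / expected shortfall
print(f"0.999 quantile ~ {var:.1f}, CVaR ~ {cvar:.1f}")
```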

    Filtering and Forecasting Spot Electricity Prices in the Increasingly Deregulated Australian Electricity Market

    Get PDF
    Modelling and forecasting the volatile spot pricing process for electricity presents a number of challenges. For increasingly deregulated electricity markets, like that in the Australian state of New South Wales, there is a need to price a range of derivative securities used for hedging. Any derivative pricing model that hopes to capture the pricing dynamics within this market must be able to cope with the extreme volatility of the observed spot prices. By applying wavelet analysis, we examine both the price and demand series at different time locations and levels of resolution to reveal and differentiate what is signal and what is noise. Further, we cleanse the data of leakage from the high-frequency, mean-reverting price spikes into the more fundamental levels of frequency resolution. As the reconstruction of our filtered series is based on these levels, we need to ensure they are least contaminated by noise. Using the filtered data, we explore time series models as possible candidates for explaining the pricing process and evaluate their forecasting ability. These models include ones from the threshold autoregressive (TAR) class. We find that models from the TAR class produce forecasts that best capture the mean and variance components of the actual data. Keywords: electricity; wavelets; time series models; forecasting
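    A minimal sketch of wavelet-based filtering in this spirit, assuming the PyWavelets package and a synthetic price series with artificial spikes; it simply zeroes the finest detail levels before reconstruction and is not the leakage-cleansing procedure described in the abstract.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)

# Synthetic half-hourly "spot price" series with a daily cycle, noise and
# a handful of extreme spikes; a real study would use observed NSW data.
n = 2048
prices = 35 + 6 * np.sin(2 * np.pi * np.arange(n) / 48) + rng.normal(0, 2, n)
prices[rng.choice(n, size=10, replace=False)] += rng.uniform(100, 300, 10)

# Multi-resolution decomposition: coarse approximation plus detail levels.
wavelet, level = "db4", 5
coeffs = pywt.wavedec(prices, wavelet, level=level)

# Crude filtering: zero the two finest detail levels, where the
# mean-reverting spikes mostly live, then reconstruct the series.
for j in (-1, -2):
    coeffs[j] = np.zeros_like(coeffs[j])
filtered = pywt.waverec(coeffs, wavelet)[:n]
```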

    CMB Likelihood Functions for Beginners and Experts

    Full text link
    Although the broad outlines of the appropriate pipeline for cosmological likelihood analysis with CMB data have been known for several years, only recently have we had to contend with the full, large-scale, computationally challenging problem involving both highly correlated noise and extremely large datasets (N > 1000). In this talk we concentrate on the beginning and end of this process. First, we discuss estimating the noise covariance from the data itself in a rigorous and unbiased way; this is essentially an iterated minimum-variance mapmaking approach. We also discuss the unbiased determination of cosmological parameters from estimates of the power spectrum or experimental bandpowers. Comment: Long-delayed submission. In AIP Conference Proceedings "3K Cosmology", held in Rome, Oct 5-10, 1998, edited by Luciano Maiani, Francesco Melchiorri and Nicola Vittorio, 343-347, New York, American Institute of Physics, 1999
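    The minimum-variance mapmaking step can be written as m_hat = (A^T N^-1 A)^-1 A^T N^-1 d, where A is the pointing matrix, N the time-domain noise covariance and d the time-ordered data. The toy sketch below (arbitrary scan pattern and noise covariance, not the authors' pipeline) illustrates that estimator on a tiny simulated dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny toy scan: n_time samples observing n_pix sky pixels with
# correlated (exponential-kernel) noise. A is the pointing matrix.
n_time, n_pix = 600, 20
A = np.zeros((n_time, n_pix))
A[np.arange(n_time), np.arange(n_time) % n_pix] = 1.0   # every pixel is hit

t = np.arange(n_time)
N = 0.1 * np.eye(n_time) + 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]) / 50.0)

true_map = rng.normal(0.0, 1.0, n_pix)
d = A @ true_map + rng.multivariate_normal(np.zeros(n_time), N)

# Minimum-variance (GLS) map: m_hat = (A^T N^-1 A)^-1 A^T N^-1 d.
Ninv = np.linalg.inv(N)
map_cov = np.linalg.inv(A.T @ Ninv @ A)    # pixel-pixel noise covariance
m_hat = map_cov @ (A.T @ Ninv @ d)
```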

    Quantification of uncertainty in probabilistic safety analysis

    Get PDF
    This thesis develops methods for quantification and interpretation of uncertainty in probabilistic safety analysis, focussing on fault trees. The output of a fault tree analysis is, usually, the probability of occurrence of an undesirable event (top event) calculated using the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is by Monte Carlo simulation, but this is a computationally intensive approach to uncertainty estimation and does not readily reveal the dominant reasons for the uncertainty. A closed-form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for model inputs, is developed in this thesis. Its output is compared with the output from two sampling-based approximation methods: standard Monte Carlo analysis, and Wilks’ method, which is based on order statistics using small sample sizes. Wilks’ method can be used to provide an upper bound for the percentiles of the top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks’ method can be used to give, respectively, the overall shape and high confidence on particular percentiles of interest. This is an attractive, practical option for evaluation of uncertainty in fault trees and, more generally, uncertainty in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models, based on cutset uncertainty, is developed which can be evaluated in closed form. The method is demonstrated via examples, including a simple fault tree model and a model which is the size of a commercial PSA model for a nuclear power plant. Finally, quantification of “hidden uncertainties” is considered; hidden uncertainties are those which are not typically considered in PSA models, but may contribute considerable uncertainty to the overall results if included. A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Lastly, suggestions are made for the identification and inclusion of further hidden uncertainties.
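    To illustrate the Wilks' method ingredient, the sketch below computes the first-order (one-sided) Wilks sample size and applies it to a made-up two-cutset lognormal top-event model; the fault tree and its parameters are hypothetical, not taken from the thesis.

```python
import math
import numpy as np

def wilks_sample_size(quantile=0.95, confidence=0.95):
    """Smallest n such that the largest of n samples is an upper bound on
    the given quantile with the given confidence (first-order, one-sided)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(quantile))

n = wilks_sample_size(0.95, 0.95)   # 59 runs suffice for a 95%/95% bound

# Hypothetical two-cutset top-event model with lognormal basic events,
# using the rare-event (sum of cutset probabilities) approximation.
rng = np.random.default_rng(4)
p1 = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
p2 = rng.lognormal(mean=np.log(5e-4), sigma=0.7, size=n)
top = p1 * p2 + p1 * 2e-3

upper_bound = top.max()             # 95%/95% upper bound on the 95th percentile
print(n, upper_bound)
```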

    Statistical Characterisation of Speckle in Clinical Echocardiographic Images with Pearson Family of Distributions

    Get PDF
    The statistical characterisation of the gray level distribution of echocardiographic images is commonly done in terms of unimodal probability densities such as the Rayleigh, Gamma, Weibull, Nakagami, and Lognormal. Amongst these distributions, the Gamma density is found to provide a better empirical model fit to real data sets. We propose to extend the class of probability distributions by exploring the Pearson family to characterise blood and tissue in echocardiographic images. It is found that Pearson Type I characterises the tissue regions, whereas Types I, IV and VI classify blood regions. The statistical measures, viz. the Jensen-Shannon (JS) divergence and the Kolmogorov-Smirnov (KS) statistic, reveal that the Pearson family of curves outperforms the Gamma distribution. Defence Science Journal, 2011, 61(5), pp. 473-478, DOI: http://dx.doi.org/10.14429/dsj.62.116
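    A rough sketch of how a Pearson-type classification can be made from sample moments, using the classical kappa criterion; the synthetic beta-distributed "region" stands in for real echocardiographic gray levels, and the boundary handling is deliberately coarse rather than the paper's procedure.

```python
import numpy as np
from scipy import stats

def pearson_type(x):
    """Coarse Pearson-type classification from sample skewness and
    kurtosis via the classical kappa criterion."""
    b1 = stats.skew(x) ** 2                  # beta_1: squared skewness
    b2 = stats.kurtosis(x, fisher=False)     # beta_2: non-excess kurtosis
    denom = 4.0 * (4.0 * b2 - 3.0 * b1) * (2.0 * b2 - 3.0 * b1 - 6.0)
    if np.isclose(denom, 0.0):
        return "boundary case (e.g. Type III)"
    kappa = b1 * (b2 + 3.0) ** 2 / denom
    if kappa < 0.0:
        return "Type I"
    if kappa < 1.0:
        return "Type IV"
    return "Type VI"

# Synthetic stand-in for gray levels in a tissue region: a scaled Beta
# sample, which belongs to Pearson Type I.
rng = np.random.default_rng(5)
region = 255.0 * rng.beta(2.0, 5.0, size=5000)
print(pearson_type(region))
```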