Advances in the Modeling of Heavy-tailed Distributions
Several advances are proposed in connection with the approximation and estimation of heavy-tailed distributions, some of which also apply to other types of distributions. It is first explained that, upon applying the Esscher transform to heavy-tailed density functions such as the Pareto, Student-t and Cauchy densities, one can utilize a moment-based technique whereby the tilted density functions are expressed as the product of a base density function and a polynomial adjustment. Alternatively, density approximants can be secured by appropriately truncating the distributions or mapping them onto compact supports. The validity of these approaches is corroborated by simulation studies. Extensions to the context of density estimation, in which case sample moments are employed in lieu of exact moments, are discussed, and illustrative applications involving actuarial data sets are presented. Novel approaches making use of the Box-Cox transform in conjunction with empirical saddlepoint density estimates and generalized beta density functions are introduced for determining the endpoints of empirical distributions. Additionally, an iterative algorithm and a technique relying on approximating a function by means of Bernstein polynomials are proposed for obtaining smooth bona fide density functions. Finally, a polynomial adjustment is applied to a bivariate empirical saddlepoint estimate which is obtained from a sample estimate of the bivariate cumulant generating function. A significant contribution of this dissertation resides in the implementation of the proposed methodologies, such as the constrained estimation of the four parameters of the generalized beta distribution and the adjusted bivariate empirical saddlepoint density estimation technique, in the symbolic computing package Mathematica.
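The moment-based polynomial adjustment described above admits a compact illustration. The sketch below is in Python rather than the dissertation's Mathematica, and the gamma target with a normal base density is an assumption chosen purely for the example: the coefficients of the polynomial adjustment solve a linear moment-matching system whose matrix collects raw moments of the base density.

```python
import numpy as np
from scipy import stats

def poly_adjusted_density(base, target_moments, d):
    """Moment-based polynomial adjustment: f(x) = base.pdf(x) * sum_j xi_j x^j,
    with coefficients xi chosen so that the first d raw moments of the
    approximant match target_moments[0..d]."""
    # M[i, j] = (i+j)-th raw moment of the base density (a Hankel system)
    M = np.array([[base.moment(i + j) for j in range(d + 1)]
                  for i in range(d + 1)])
    xi = np.linalg.solve(M, np.asarray(target_moments[:d + 1], dtype=float))
    return lambda x: base.pdf(x) * np.polyval(xi[::-1], x)

# Target: a gamma(5) density, whose raw moments are available in closed form.
target = stats.gamma(5)
mu = [target.moment(k) for k in range(7)]                  # moments 0..6
base = stats.norm(loc=target.mean(), scale=target.std())   # base density
f_hat = poly_adjusted_density(base, mu, d=6)

print(f_hat(4.0), target.pdf(4.0))   # approximant vs exact density at x = 4
```

Because the zeroth moment is matched exactly, the approximant integrates to one by construction; for heavy-tailed targets the same construction is applied to the Esscher-tilted density, whose moments are finite.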
STK/WST 795 Research Reports
These documents contain the honours research reports for each year for the Department of Statistics. Honours Research Reports - University of Pretoria, 20XX. Statistics: BSc (Hons) Mathematical Statistics, BCom (Hons) Statistics, BCom (Hons) Mathematical Statistics. Unrestricted.
A statistical model for contamination due to long-range atmospheric transport of radionuclides
Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error
Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modelling and practice. We mainly focus on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline nominal model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically.
The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean-reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio. These results are based on strong approximation results for jump-diffusion processes.
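The worst-case model-error bound described in the first part can be sketched in a few lines of Monte Carlo. Under a relative-entropy constraint, the worst-case alternative model is an exponential change of measure of the baseline, so bounds are obtained simply by reweighting baseline samples. This is a minimal Python sketch; the standard-normal loss model and the tilt values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline (nominal) model: a portfolio loss V, here standard normal.
V = rng.normal(loc=0.0, scale=1.0, size=100_000)

def worst_case_mean(V, theta):
    """Worst-case expected loss under a relative-entropy constraint.
    The extremal alternative model is the exponential change of measure
    m* proportional to exp(theta * V); reweighting baseline samples by m*
    gives the worst-case mean and the entropy 'budget' it consumes."""
    w = np.exp(theta * (V - V.max()))   # likelihood ratios, stabilized
    w = w / w.mean()                    # normalize so E[w] = 1
    mean = float(np.mean(w * V))                 # loss mean under worst case
    entropy = float(np.mean(w * np.log(w)))      # relative entropy E[m log m]
    return mean, entropy

# Sweeping theta traces out the bound: worst-case loss vs model distance.
for theta in (0.1, 0.3, 0.5):
    m, eta = worst_case_mean(V, theta)
    print(f"theta={theta}: worst-case mean={m:.3f}, relative entropy={eta:.4f}")
```

No new sampling is needed beyond the baseline run: the same simulated losses are reweighted for every value of the tilt parameter, which is what makes the bound nearly free to compute.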
Reliable Statistical Methods and their Applications for Testing Incomplete Multidisciplinary Data
Recently, left-truncated distributions have proved to be of use in modelling a range of phenomena in fields as diverse as finance, insurance, medicine, earthquake prediction and wind power. In this thesis, we present a comprehensive analysis of the left-truncated Weibull, loglogistic, lognormal and Pareto distributions in cases where the scale, shape or both parameters are unknown and estimated from the data with the maximum likelihood estimator. We define criteria which ensure that the maximum likelihood equations have a unique solution. We determine the critical values of the Kolmogorov-Smirnov, Kuiper, Cramér-von Mises and Anderson-Darling goodness-of-fit tests when the parameters are unknown for all of the left-truncated distributions via quantile analysis. In this work, these critical values are coupled with a rigorous point estimation and uncertainty analysis, and compared to the critical values of the complete (untruncated) distributions in the literature. We find strong agreement between our results and the most recent additions to the literature. Analytically, we provide evidence that the critical values are parameter independent for all of the left-truncated distributions and goodness-of-fit tests. This result is verified by determining the critical values via Monte Carlo simulations for a range of parameter values. We find that the critical values are dependent upon sample size and truncation level (as a percentage of the complete distribution), and determine suitable models to describe this behaviour. We successfully model these critical values for each of the three fitting scenarios: (i) truncation level dependence, (ii) sample size dependence and (iii) truncation level and sample size dependence, which describe the behaviour of the critical values for all goodness-of-fit tests, left-truncated distributions and significance levels.
The fact that one functional form describes the critical values for all different goodness-of-fit tests and distributions is a very useful and interesting result. The models are validated through an exhaustive power testing procedure, which also serves to compare the discriminatory power of the four tests. We find that the Anderson-Darling test has marginally better statistical power than the others in every situation and that the discriminatory power of all tests is weak for small sample sizes. We conclude the work by applying all these statistical methods to analysing the interarrival times of market orders on the London Stock Exchange for a range of truncation values and sample sizes. We find that the left-truncated Weibull distribution most accurately describes this data and that increasing the truncation level significantly increases the pass rates. Thesis (MPhil) -- University of Adelaide, School of Physical Sciences, 201
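The Monte Carlo determination of critical values with estimated parameters can be sketched as follows. To keep the maximum-likelihood step closed form, the sketch uses a left-truncated exponential rather than the four distributions studied in the thesis (an illustrative assumption); the procedure (simulate, refit, recompute the statistic) is the same Lilliefors-type scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_stat(x, cdf):
    """One-sample Kolmogorov-Smirnov statistic against a fitted CDF."""
    x = np.sort(x)
    n = len(x)
    F = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(0, n) / n))

def mc_critical_value(n, T=1.0, lam=2.0, B=2000, alpha=0.05):
    """Monte Carlo critical value of the KS test for a left-truncated
    exponential whose rate is estimated by maximum likelihood from the
    same sample (a Lilliefors-type correction)."""
    D = np.empty(B)
    for b in range(B):
        x = T + rng.exponential(1.0 / lam, size=n)  # left-truncated sample
        lam_hat = 1.0 / np.mean(x - T)              # closed-form MLE
        D[b] = ks_stat(x, lambda t: 1.0 - np.exp(-lam_hat * (t - T)))
    return float(np.quantile(D, 1.0 - alpha))

# Estimating the parameter shrinks the critical value well below the
# known-parameter asymptotic value 1.358/sqrt(n) at alpha = 0.05.
n = 100
print(mc_critical_value(n), 1.358 / np.sqrt(n))
```

Consistent with the parameter-independence result above, the simulated critical value in this sketch does not depend on the true rate used to generate the samples, only on the sample size.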
Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain
The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units, located in Portugal, is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions concerning efficiency improvement are put forward for each hotel studied.
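The Stochastic Frontier Analysis underlying such a ranking can be sketched as follows. In the half-normal specification of Aigner, Lovell and Schmidt, log-output equals a frontier minus a one-sided inefficiency term plus symmetric noise, and the composed-error likelihood separates the two. The Python sketch below fits this model to simulated data; the production function, parameter values and simulated sample are assumptions for illustration, not the paper's hotel data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated production data: log-output = b0 + b1 * log-input + v - u,
# with noise v ~ N(0, sv^2) and one-sided inefficiency u ~ |N(0, su^2)|.
n, b0, b1, sv, su = 500, 1.0, 0.6, 0.2, 0.4
x = rng.uniform(1.0, 4.0, size=n)
y = b0 + b1 * x + rng.normal(0.0, sv, n) - np.abs(rng.normal(0.0, su, n))

def neg_loglik(p):
    """Negative log-likelihood of the half-normal stochastic frontier model."""
    a0, a1, lsv, lsu = p
    s_v, s_u = np.exp(lsv), np.exp(lsu)   # log-parameters keep scales positive
    sigma = np.hypot(s_v, s_u)            # sqrt(sv^2 + su^2)
    lam = s_u / s_v
    eps = y - a0 - a1 * x                 # composed error v - u
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

slope, intercept = np.polyfit(x, y, 1)    # OLS starting values
res = minimize(neg_loglik, x0=[intercept, slope, np.log(0.2), np.log(0.2)],
               method="Nelder-Mead",
               options={"maxiter": 10000, "maxfev": 10000})
b0_hat, b1_hat = res.x[0], res.x[1]
print(b0_hat, b1_hat)   # frontier estimates; true values are 1.0 and 0.6
```

The negative skew of the composed error is what identifies the inefficiency scale separately from the noise scale, which is precisely the discrimination between measurement error and systematic inefficiency that the paper relies on.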