4 research outputs found

    Modelling expected shortfall using tail entropy

    Given the recent replacement of value-at-risk by expected shortfall (ES) as the regulatory standard measure of risk, undertaken by the Basel Committee on Banking Supervision, it is imperative that ES provide accurate estimates of expected losses in crisis situations. However, the measurement of ES is hampered by the scarcity of observations in the tail of the distribution. While kernel-based smoothing techniques can partially circumvent this problem, in this paper we propose a simple nonparametric tail measure of risk based on information entropy and compare its backtesting performance with that of other standard ES models.
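    The abstract leaves the exact estimator to the paper, so the sketch below shows only a minimal version of the two ingredients it names: a nonparametric ES estimate (the average loss beyond the empirical value-at-risk) and a naive Shannon-entropy measure over the tail observations. The function names, the confidence level, and the Student-t toy data are illustrative assumptions, not the paper's.

```python
# Minimal sketch, assuming a simple histogram-based tail entropy; this is
# NOT the estimator proposed in the paper, only an illustration of the idea.
import numpy as np

def expected_shortfall(losses, alpha=0.975):
    """Nonparametric ES: average loss beyond the empirical alpha-quantile (VaR)."""
    var = np.quantile(losses, alpha)       # value-at-risk at level alpha
    return losses[losses >= var].mean()    # mean of the tail observations

def tail_entropy(losses, alpha=0.975, bins=10):
    """Shannon entropy of a histogram of the tail losses (illustrative)."""
    var = np.quantile(losses, alpha)
    counts, _ = np.histogram(losses[losses >= var], bins=bins)
    p = counts[counts > 0] / counts.sum()  # probabilities of nonempty bins
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=10_000)  # fat-tailed toy loss sample
print(expected_shortfall(losses), tail_entropy(losses))
```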

    Modeling Naturally Occurring Wildfires Across the US Using Niche Modeling

    Wildfires can cause significant damage by destroying forested and agricultural areas, homes, and businesses, and can lead to loss of life. Climate change may further increase the frequency of wildfires. Developing a quick, simple, and accurate method for identifying the key drivers of wildfires and for modeling and predicting their occurrence is therefore important and urgent. Various modeling methods have been developed and applied for this purpose. The objective of this study was to identify key drivers and to find an appropriate method for modeling and predicting natural wildfire occurrence in the United States. In this thesis, various vegetation, topographic, and climate variables were examined, and key drivers were identified based on their spatial distributions and their correlations with natural wildfire occurrence. Five models, namely Generalized Linear Models (GLM) with binomial and Poisson distributions, MaxEnt, Random Forests, Artificial Neural Networks, and Multivariate Adaptive Regression Splines, were compared for predicting natural wildfire occurrence across seven climate regions of the United States. The comparisons were conducted using three datasets: LANDFIRE, consisting of thirteen variables describing vegetation, topography, and disturbance; BIOCLIM, containing climate variables such as temperature and precipitation; and a composite dataset combining the most important variables from LANDFIRE and BIOCLIM after a multicollinearity test using the variance inflation factor (VIF). The results of this study showed that niche modeling techniques such as MaxEnt and binomial GLMs with logistic regression (LR) were appropriate choices for modeling natural wildfire occurrence. MaxEnt provided highly accurate predictions of natural wildfire occurrence for most of the seven climate regions, implying that it offers a powerful solution for modeling natural wildfire occurrence in complex and highly specialized systems. The study also showed that although MaxEnt and the GLM performed similarly overall, the two models produced very different spatial probability distributions of natural wildfire occurrence in some regions. Moreover, natural wildfire occurrence in the western regions was more influenced by precipitation and drought conditions, while in the eastern regions it was more affected by extreme temperatures.
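    As a concrete illustration of the screening-and-fitting pipeline described above, the sketch below drops multicollinear predictors with the VIF and then fits a binomial GLM (logistic regression) to wildfire presence/absence. The file name, column names, and the VIF cutoff of 10 are placeholder assumptions, not the thesis's actual data or settings.

```python
# Hypothetical sketch: VIF-based variable screening followed by a binomial GLM.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("wildfire_predictors.csv")  # placeholder dataset
X = df[["temperature", "precipitation", "elevation", "slope"]]  # placeholder columns
y = df["fire_occurred"]                      # 1 = natural wildfire observed

# Iteratively drop the predictor with the highest VIF until all VIFs fall
# below 10, a common (though arbitrary) multicollinearity cutoff.
while True:
    exog = sm.add_constant(X)                # VIF is computed against an intercept
    vifs = pd.Series(
        [variance_inflation_factor(exog.values, i + 1) for i in range(X.shape[1])],
        index=X.columns,
    )
    if vifs.max() < 10:
        break
    X = X.drop(columns=vifs.idxmax())

# Binomial GLM with the default logit link, i.e. logistic regression.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(model.summary())
```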

    Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications

    The monograph investigates the misapplication of conventional statistical techniques to fat-tailed distributions and looks for remedies where possible. Switching from thin-tailed to fat-tailed distributions requires more than "changing the color of the dress". Traditional asymptotics deal mainly with either n = 1 or n = ∞, and the real world is in between, under the "laws of the medium numbers", which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Lévy-stable basins of convergence. A few examples:
    + The sample mean is rarely in line with the population mean, with consequences for "naive empiricism", but can sometimes be estimated via parametric methods (see the sketch after this abstract).
    + The "empirical distribution" is rarely empirical.
    + Parameter uncertainty has compounding effects on statistical metrics.
    + Dimension reduction (principal components) fails.
    + Inequality estimators (Gini or quantile contributions) are not additive and produce wrong results.
    + Many "biases" found in psychology become entirely rational under more sophisticated probability distributions.
    + Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions.
    This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.
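    The first example above, that the sample mean is rarely in line with the population mean, is easy to demonstrate numerically. The sketch below, with illustrative parameters not taken from the book, compares the sample mean of a Pareto variable with tail exponent α = 1.5 (finite mean, infinite variance) against a Gaussian with the same population mean.

```python
# Minimal sketch of slow, erratic convergence of the sample mean under fat tails.
import numpy as np

rng = np.random.default_rng(42)
alpha = 1.5                          # tail exponent: mean exists, variance does not
pop_mean = alpha / (alpha - 1)       # population mean of a Pareto with x_min = 1

for n in (100, 10_000, 1_000_000):
    pareto_mean = (rng.pareto(alpha, n) + 1).mean()   # classical Pareto, x_min = 1
    gauss_mean = rng.normal(pop_mean, 1.0, n).mean()  # thin-tailed comparison
    print(f"n={n:>9,}  Pareto mean={pareto_mean:7.3f}  "
          f"Gaussian mean={gauss_mean:7.3f}  true mean={pop_mean:.3f}")
```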

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies