
    Using Conservative Estimation for Conditional Probability instead of Ignoring Infrequent Case

    There are several estimators of conditional probability from observed frequencies of features. In this paper, we propose using the lower limit of the confidence interval on the posterior distribution determined by the observed frequencies to ascertain conditional probability. In our experiments, this method outperformed other popular estimators. Comment: The 2016 International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA 2016).
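
    A minimal sketch of the estimator described above, assuming a Beta prior over a Bernoulli conditional probability and a one-sided lower credible bound (the prior, confidence level, and feature handling here are illustrative assumptions, not necessarily the paper's exact choices):

        from scipy.stats import beta

        def conservative_cond_prob(successes, trials, conf=0.95, prior_a=1.0, prior_b=1.0):
            """Lower limit of a credible interval on P(y | x) from observed counts.

            With a Beta(prior_a, prior_b) prior, the posterior after observing
            `successes` out of `trials` is Beta(prior_a + successes,
            prior_b + trials - successes).  Returning its lower (1 - conf)
            quantile penalises estimates backed by few observations.
            """
            a = prior_a + successes
            b = prior_b + trials - successes
            return beta.ppf(1.0 - conf, a, b)

        # An infrequent case (1 of 2) is pulled far below the raw relative
        # frequency of 0.5; a well-supported case (50 of 100) barely moves.
        print(conservative_cond_prob(1, 2))
        print(conservative_cond_prob(50, 100))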

    Markov-Switching GARCH Modelling of Value-at-Risk

    This paper proposes an asymmetric Markov regime-switching (MS) GARCH model to estimate value-at-risk (VaR) for both long and short positions. This model improves on existing VaR methods by taking into account both regime change and skewness or leverage effects. The performance of our MS model and of single-regime models is compared through an innovative backtesting procedure using daily data for UK and US stock market indices. The findings from exception-based and regulatory-based tests indicate that the MS-GARCH specifications clearly outperform other models in estimating the VaR for both long and short FTSE positions and also do quite well for S&P positions. We conclude that ignoring skewness and regime changes has the effect of imposing larger-than-necessary conservative capital requirements.
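
    The exception-counting side of such a backtest can be illustrated with a Kupiec-style unconditional coverage test; this is a standard regulatory-flavoured check and only a stand-in for the paper's full backtesting procedure (the return and VaR series below are simulated):

        import numpy as np
        from scipy.stats import chi2

        def kupiec_pof(returns, var_forecasts, coverage=0.01):
            """Kupiec proportion-of-failures test for a long-position VaR series.

            An exception occurs when the realised return falls below the
            negative of the VaR forecast.  Under correct coverage the
            likelihood-ratio statistic is chi-squared with one degree of freedom.
            """
            returns = np.asarray(returns)
            var_forecasts = np.asarray(var_forecasts)
            n = len(returns)
            x = int(np.sum(returns < -var_forecasts))
            p_hat = max(x / n, 1e-12)  # avoid log(0) when there are no exceptions
            ll_null = (n - x) * np.log(1 - coverage) + x * np.log(coverage)
            ll_alt = (n - x) * np.log(max(1 - p_hat, 1e-12)) + x * np.log(p_hat)
            lr = -2.0 * (ll_null - ll_alt)
            return x, lr, chi2.sf(lr, df=1)

        rng = np.random.default_rng(0)
        simulated_returns = 0.01 * rng.standard_t(df=5, size=1000)
        exceptions, lr, p_value = kupiec_pof(simulated_returns, np.full(1000, 0.03))
        print(exceptions, lr, p_value)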

    Infrequent Shocks and Rating Revenue Insurance: A Contingent Claims Approach

    Revenue insurance represents an important new risk management tool for agricultural producers. While there are many farm-level products, Group Risk Income Protection (GRIP) is an area-based alternative. Insurers set premium rates for GRIP on the assumption of a continuous revenue distribution, but discrete events may cause the actual value of insurance to differ by a significant amount. This study develops a contingent claims approach to determining the error inherent in ignoring these infrequent events in rating GRIP insurance. An empirical example from the California grape industry demonstrates the significance of this error and suggests an alternative method of determining revenue insurance premiums. Keywords: Black-Scholes, contingent claim, grapes, insurance, jump-diffusion, option pricing, Risk and Uncertainty.
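
    A rough illustration of the rating error in question: Monte Carlo computation of an actuarially fair premium for a put-style revenue guarantee with and without a rare multiplicative shock. This is only a crude stand-in for the paper's contingent claims model, and every parameter value below is hypothetical:

        import numpy as np

        def fair_premium(base_revenue=100.0, guarantee=90.0, mu=0.02, sigma=0.15,
                         jump_prob=0.05, jump_mean=-0.30, jump_sd=0.10,
                         with_jumps=True, n=200_000, seed=0):
            """Monte Carlo actuarially fair premium for an area-revenue guarantee.

            Terminal revenue follows a one-period lognormal diffusion, optionally
            hit by a rare multiplicative shock (a crude jump-diffusion).  The
            indemnity is max(guarantee - revenue, 0), i.e. a put-style payoff.
            """
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n)
            revenue = base_revenue * np.exp(mu - 0.5 * sigma**2 + sigma * z)
            if with_jumps:
                hit = rng.random(n) < jump_prob
                shock = rng.normal(jump_mean, jump_sd, size=n)
                revenue = np.where(hit, revenue * np.exp(shock), revenue)
            return float(np.mean(np.maximum(guarantee - revenue, 0.0)))

        print(fair_premium(with_jumps=False))  # premium when infrequent shocks are ignored
        print(fair_premium(with_jumps=True))   # premium when they are priced in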

    Data Sketches for Disaggregated Subset Sum and Frequent Item Estimation

    We introduce and study a new data sketch for processing massive datasets. It addresses two common problems: 1) computing a sum given arbitrary filter conditions and 2) identifying the frequent items or heavy hitters in a dataset. For the former, the sketch provides unbiased estimates with state-of-the-art accuracy. It handles the challenging scenario when the data is disaggregated, so that computing the per-unit metric of interest requires an expensive aggregation. For example, the metric of interest may be total clicks per user while the raw data is a click stream with multiple rows per user. Thus the sketch is suitable for use in a wide range of applications, including computing historical click-through rates for ad prediction, reporting user metrics from event streams, and measuring network traffic for IP flows. We prove and empirically show that the sketch has good properties for both the disaggregated subset sum estimation and frequent item problems. On i.i.d. data, it not only picks out the frequent items but also gives strongly consistent estimates for the proportion of each frequent item. The resulting sketch asymptotically draws a probability-proportional-to-size sample that is optimal for estimating sums over the data. For non-i.i.d. data, we show that it typically does much better than random sampling for the frequent item problem and never does worse. For subset sum estimation, we show that even for pathological sequences, the variance is close to that of an optimal sampling design. Empirically, despite the disadvantage of operating on disaggregated data, our method matches or bests priority sampling, a state-of-the-art method for pre-aggregated data, and performs orders of magnitude better on skewed data compared to uniform sampling. We propose extensions to the sketch that allow it to be used for combining multiple datasets, in distributed systems, and for time-decayed aggregation.
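
    The paper's own sketch is not reproduced here, but the priority-sampling baseline it is benchmarked against is compact enough to sketch for pre-aggregated (key, weight) pairs; the keys, weights, and predicate below are illustrative:

        import heapq
        import random

        def priority_sample(items, k, seed=0):
            """Priority sampling over pre-aggregated (key, weight) pairs.

            Keeps the k items with the largest priorities w / u, u ~ Uniform(0, 1),
            and records tau, the (k+1)-th largest priority.  Each kept item gets
            the unbiased weight estimate max(w, tau).
            """
            rng = random.Random(seed)
            prioritised = [(w / rng.random(), key, w) for key, w in items]
            top = heapq.nlargest(k + 1, prioritised)
            tau = top[-1][0] if len(top) > k else 0.0
            return [(key, max(w, tau)) for _, key, w in top[:k]]

        def subset_sum(sample, predicate):
            """Estimate the total weight of keys satisfying `predicate`."""
            return sum(w_hat for key, w_hat in sample if predicate(key))

        data = [(f"user{i}", (i % 7) + 1) for i in range(10_000)]
        sample = priority_sample(data, k=500)
        print(subset_sum(sample, lambda key: key.endswith("3")))
        print(sum(w for key, w in data if key.endswith("3")))  # exact answer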

    An analysis of recent studies of the effect of foreign exchange intervention

    Two recent strands of research have contributed to our understanding of the effects of foreign exchange intervention: (i) the use of high-frequency data and (ii) the use of event studies to evaluate the effects of intervention. This article surveys recent empirical studies of the effect of foreign exchange intervention and analyzes the implicit assumptions and limitations of such work. After explicitly detailing such drawbacks, the paper suggests ways to better investigate the effects of intervention. Keywords: Foreign exchange; Time-series analysis.

    Long memory or shifting means? A new approach and application to realised volatility

    It is now recognised that long memory and structural change can be confused because the statistical properties of time series of lengths typical of financial and econometric series are similar for both models. We propose a new set of methods aimed at distinguishing between long memory and structural change. The approach, which utilises computationally efficient methods based upon Atheoretical Regression Trees (ART), establishes through simulation the bivariate distribution of the fractional integration parameter, d, with regime length for simulated fractionally integrated series. This bivariate distribution is then compared with the data for the time series. We also combine ART with the established goodness-of-fit test for long-memory series due to Beran. We apply these methods to the realised volatility series of 16 stocks in the Dow Jones Industrial Average. We show that in these series the value of the fractional integration parameter is not constant over time. The mathematical consequence of this is that the definition of H-self-similarity is violated. We present evidence that these series have structural breaks. Keywords: Long-range dependence; Strong dependence; Global dependence; Hurst phenomena.
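
    The fractional integration parameter d that the bivariate distribution is built over can be estimated in several ways; a generic Geweke-Porter-Hudak log-periodogram regression is sketched below (this is not necessarily the estimator paired with ART in the paper, and the bandwidth choice is an assumption):

        import numpy as np

        def gph_estimate(x, bandwidth_power=0.5):
            """Geweke-Porter-Hudak estimate of the fractional integration parameter d.

            Regresses the log periodogram at the first m = n**bandwidth_power
            Fourier frequencies on log(4 sin^2(lambda_j / 2)); d is minus the slope.
            """
            x = np.asarray(x, dtype=float)
            n = len(x)
            m = int(n ** bandwidth_power)
            freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
            dft = np.fft.fft(x - x.mean())
            periodogram = (np.abs(dft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
            regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
            slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
            return -slope

        # White noise has no long memory, so the estimate should be near 0.
        rng = np.random.default_rng(0)
        print(gph_estimate(rng.standard_normal(4096)))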

    Measuring the Relative Performance of Providers of a Health Service

    A methodology is developed and applied to compare the performance of publicly funded agencies providing treatment for alcohol abuse in Maine. The methodology estimates a Wiener process that determines the duration of completed treatments, while allowing for agency differences in the effectiveness of treatment, standards for completion of treatment, patient attrition, and the characteristics of patient populations. Notably, the Wiener process model separately identifies agency fixed effects that describe differences in the effectiveness of treatment ('treatment effects'), and effects that describe differences in the unobservable characteristics of patients ('population effects'). The estimated model enables hypothetical comparisons of how different agencies would treat the same populations. The policy experiment of transferring the treatment practices of more cost-effective agencies suggests that Maine could have significantly reduced treatment costs without compromising health outcomes by identifying and transferring best practices.
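
    A minimal sketch of the duration side of such a model: the first passage time of a Wiener process with drift to a completion boundary is inverse-Gaussian distributed, so completed durations can be scored against agency-specific drifts ('treatment effects'). This ignores attrition, population effects, and covariates, and all numbers below are toy values:

        import numpy as np
        from scipy.stats import invgauss

        def completed_duration_loglik(durations, drift, boundary=1.0, sigma=1.0):
            """Log-likelihood of completed-treatment durations under a Wiener model.

            The first time a Wiener process with positive drift reaches the
            completion boundary is inverse-Gaussian with mean boundary / drift and
            shape (boundary / sigma)**2; scipy's invgauss takes mu = mean / shape
            and scale = shape.
            """
            mean = boundary / drift
            shape = (boundary / sigma) ** 2
            return float(np.sum(invgauss.logpdf(durations, mu=mean / shape, scale=shape)))

        # Toy durations consistent with drift 2.0, boundary 1.0, sigma 0.5
        # (mean 0.5, shape 4.0); the matching drift should score higher.
        rng = np.random.default_rng(0)
        durations = invgauss.rvs(mu=0.125, scale=4.0, size=200, random_state=rng)
        print(completed_duration_loglik(durations, drift=2.0, boundary=1.0, sigma=0.5))
        print(completed_duration_loglik(durations, drift=1.0, boundary=1.0, sigma=0.5))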