    Maximum Lq-likelihood estimation

    In this paper, the maximum Lq-likelihood estimator (MLqE), a new parameter estimator based on nonextensive entropy [Kibernetika 3 (1967) 30--35], is introduced. The properties of the MLqE are studied via asymptotic analysis and computer simulations. The behavior of the MLqE is characterized by the degree of distortion q applied to the assumed model. When q is properly chosen for small and moderate sample sizes, the MLqE can successfully trade bias for precision, resulting in a substantial reduction of the mean squared error. When the sample size is large and q tends to 1, a necessary and sufficient condition to ensure proper asymptotic normality and efficiency of the MLqE is established.
    Comment: Published at http://dx.doi.org/10.1214/09-AOS687 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
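    The distortion idea in the abstract can be made concrete for a normal model. Below is a minimal numpy sketch of the weighted estimating-equation form of the Lq-likelihood equations, in which each observation is weighted by its model density raised to the power 1 - q, so q < 1 downweights points the current fit deems unlikely; the function name `mlqe_normal`, the fixed-point iteration, and the choice q = 0.8 are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def mlqe_normal(x, q, n_iter=100):
        """Maximum Lq-likelihood fit of a normal model, written in the
        weighted fixed-point form of the Lq-likelihood equations: each
        observation gets weight f(x_i; mu, sigma)^(1-q), so for q < 1
        unlikely points (e.g. outliers) are downweighted.
        q = 1 recovers the ordinary MLE (sample mean and std)."""
        mu, sigma = np.median(x), x.std()
        for _ in range(n_iter):
            f = np.exp(-0.5*((x - mu)/sigma)**2) / (sigma*np.sqrt(2.0*np.pi))
            w = f**(1.0 - q)
            mu = np.sum(w*x) / np.sum(w)
            sigma = np.sqrt(np.sum(w*(x - mu)**2) / np.sum(w))
        return mu, sigma

    # Two gross outliers drag the sample mean, but barely move the MLqE.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(5.0, 2.0, 100), [60.0, 70.0]])
    mu_hat, sigma_hat = mlqe_normal(x, q=0.8)
    ```

    Note the bias-precision trade-off the abstract describes: for fixed q < 1 the estimates are slightly biased even on clean data, which is the price paid for the variance reduction and outlier resistance.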

    Reliable inference for complex models by discriminative composite likelihood estimation

    Composite likelihood estimation has an important role in the analysis of multivariate data for which the full likelihood function is intractable. An important issue in composite likelihood inference is the choice of the weights associated with lower-dimensional data subsets, since the presence of incompatible sub-models can deteriorate the accuracy of the resulting estimator. In this paper, we introduce a new approach for simultaneous parameter estimation by tilting, or re-weighting, each sub-likelihood component, called discriminative composite likelihood estimation (D-McLE). The data-adaptive weights maximize the composite likelihood function, subject to moving a given distance from uniform weights; the resulting weights can then be used to rank lower-dimensional likelihoods in terms of their influence in the composite likelihood function. Our analytical findings and numerical examples support the stability of the resulting estimator compared to estimators constructed using standard composition strategies based on uniform weights. The properties of the new method are illustrated through simulated data and real spatial data on multivariate precipitation extremes.
    Comment: 29 pages, 4 figures
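    For concreteness, the weighted composite likelihood that D-McLE re-weights can be sketched with pairwise Gaussian sub-likelihoods. The toy numpy example below estimates a common (exchangeable) correlation by grid search under uniform weights; the setup, the grid search, and all names are illustrative assumptions, not the paper's algorithm — in particular the paper's contribution, the data-adaptive tilting of the weights w, is not implemented here.

    ```python
    import numpy as np
    from itertools import combinations

    def pair_loglik(xa, xb, rho):
        """Log-likelihood of one bivariate standard-normal pair margin."""
        z = (xa**2 - 2.0*rho*xa*xb + xb**2) / (1.0 - rho**2)
        return np.sum(-np.log(2.0*np.pi) - 0.5*np.log(1.0 - rho**2) - 0.5*z)

    def composite_loglik(X, rho, w):
        """Weighted composite log-likelihood sum_k w_k * l_k(rho) over all
        coordinate pairs; w is fixed here, whereas D-McLE tilts it."""
        pairs = list(combinations(range(X.shape[1]), 2))
        return sum(wk * pair_loglik(X[:, a], X[:, b], rho)
                   for wk, (a, b) in zip(w, pairs))

    # Simulate 3-dimensional equicorrelated data (rho = 0.5) and recover
    # the common correlation by maximizing the composite likelihood.
    rng = np.random.default_rng(1)
    cov = 0.5*np.ones((3, 3)) + 0.5*np.eye(3)
    X = rng.multivariate_normal(np.zeros(3), cov, size=500)
    grid = np.arange(0.05, 0.95, 0.01)
    w = np.ones(3) / 3.0
    rho_hat = grid[np.argmax([composite_loglik(X, r, w) for r in grid])]
    ```

    The ranking idea in the abstract corresponds to inspecting the fitted weights: a pair whose weight is tilted far below uniform contributes a sub-likelihood that is incompatible with the rest.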

    Efficient and robust estimation for financial returns: an approach based on q-entropy

    We consider a new robust parametric estimation procedure, which minimizes an empirical version of the Havrda-Charvát-Tsallis entropy. The resulting estimator adapts according to the discrepancy between the data and the assumed model by tuning a single constant q, which controls the trade-off between robustness and efficiency. The method is applied to expected return and volatility estimation of financial asset returns under multivariate normality. Theoretical properties, ease of implementability and empirical results on simulated and financial data make it a valid alternative to classic robust estimators and semi-parametric minimum divergence methods based on kernel smoothing.
    Keywords: q-entropy; robust estimation; power-divergence; financial returns

    The Maximum Lq-Likelihood Method: an Application to Extreme Quantile Estimation in Finance

    Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on Extreme Value Theory (EVT) has found a successful domain of application in such a context, outperforming other approaches. Given a parametric model provided by EVT, a natural approach is Maximum Likelihood estimation. Although the resulting estimator is asymptotically efficient, the number of observations available to estimate the parameters of the EVT models is often too small to make the large-sample property trustworthy. In this paper, we study a new estimator of the parameters, the Maximum Lq-Likelihood estimator (MLqE), introduced by Ferrari and Yang (2007). We show that the MLqE can outperform the standard MLE when estimating tail probabilities and quantiles of the Generalized Extreme Value (GEV) and the Generalized Pareto (GP) distributions. First, we assess the relative efficiency between the MLqE and the MLE for various sample sizes, using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data. The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q → 1, the new estimator approaches the traditional Maximum Likelihood Estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in terms of accuracy (Mean Squared Error).
    Keywords: Maximum Likelihood, Extreme Value Theory, q-Entropy, Tail-related Risk Measures
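    The quantile-estimation pipeline described above can be sketched in the simplest EVT-flavored case: exponential exceedances, the ξ = 0 boundary of the Generalized Pareto family, fitted by MLqE and then inverted for a tail quantile. The weighted fixed-point form, the function names, and q = 0.95 are illustrative assumptions, not the paper's implementation (which fits the full GEV and GP models).

    ```python
    import numpy as np

    def mlqe_exponential(y, q, n_iter=200):
        """MLqE of the scale beta of an exponential exceedance model
        (the xi = 0 boundary of the Generalized Pareto family), via the
        weighted form of the Lq-likelihood equation:
        beta = sum(w*y)/sum(w) with w_i = f(y_i; beta)^(1-q).
        q = 1 recovers the MLE, i.e. the sample mean."""
        beta = y.mean()
        for _ in range(n_iter):
            f = np.exp(-y/beta) / beta
            w = f**(1.0 - q)
            beta = np.sum(w*y) / np.sum(w)
        return beta

    def tail_quantile(beta, p):
        """p-quantile of the fitted exponential exceedance distribution."""
        return -beta * np.log(1.0 - p)

    # Small-sample fit of simulated exceedances with mild distortion (q < 1),
    # then a 99% quantile of the exceedance distribution.
    rng = np.random.default_rng(2)
    y = rng.exponential(2.0, 200)
    beta_hat = mlqe_exponential(y, q=0.95)
    q99 = tail_quantile(beta_hat, 0.99)
    ```

    Taking q slightly below 1 shrinks the scale estimate a little (the bias side of the trade) in exchange for reduced sensitivity to the largest observations, which is exactly where small-sample EVT fits are most fragile.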
