    Parameter Estimation in Semi-Linear Models Using a Maximal Invariant Likelihood Function

    In this paper, we consider the problem of estimating semi-linear regression models. Using invariance arguments, Bhowmik and King (2001) derived the probability density functions of the maximal invariant statistic for the nonlinear component of these models. Using these density functions as likelihood functions allows us to estimate these models in a two-step process. First, the nonlinear component parameters are estimated by maximising the maximal invariant likelihood function. Then the nonlinear component, with its parameter values replaced by these estimates, is treated as a regressor, and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood estimation. We find that maximising the maximal invariant likelihood function typically yields less biased and lower-variance estimates than full maximum likelihood.
    Keywords: maximum likelihood estimation, nonlinear modelling, simulation experiment, two-step estimation.
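
    A rough Python sketch of the two-step idea (the model, data, and exponential nonlinear component are hypothetical, and the maximal invariant likelihood is replaced by a concentrated least-squares criterion as a stand-in, so this is illustrative rather than the authors' estimator):

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)
        n = 200
        x = np.linspace(0.1, 5.0, n)
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = np.exp(-0.7 * x) + X @ np.array([1.0, 0.5]) + 0.1 * rng.normal(size=n)

        def step_two_ols(gamma):
            # Step two: treat the fitted nonlinear component as a regressor
            # alongside X and estimate the remaining parameters by OLS.
            Z = np.column_stack([np.exp(-gamma * x), X])
            beta, rss, *_ = np.linalg.lstsq(Z, y, rcond=None)
            if rss.size == 0:
                rss = np.array([np.sum((y - Z @ beta) ** 2)])
            return beta, float(rss[0])

        # Step one: estimate the nonlinear parameter by minimising the
        # concentrated criterion over gamma.
        opt = minimize_scalar(lambda g: step_two_ols(g)[1], bounds=(0.01, 5.0), method="bounded")
        gamma_hat = opt.x
        beta_hat, _ = step_two_ols(gamma_hat)
        print(gamma_hat, beta_hat)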

    Influence Diagnostics in GARCH Processes

    Influence diagnostics have become an important tool for statistical analysis since the seminal work of Cook (1986). In this paper we present a curvature-based diagnostic to assess the local influence of minor perturbations on the modified likelihood displacement in a regression model. Using the proposed diagnostic, we study local influence in the GARCH model under two perturbation schemes: model perturbation and data perturbation. We find that the curvature-based diagnostic often provides more information about the local influence being examined than the slope-based diagnostic, especially when the GARCH model is under investigation. An empirical study involving GARCH modelling of the percentage daily returns of the NYSE composite index illustrates the effectiveness of the proposed diagnostic and shows that it may uncover information inaccessible to the slope-based diagnostic. We also find that the influence of each observation is not invariant across perturbation schemes, so it is advisable to study local influence under several perturbation schemes through curvature-based diagnostics.
    Keywords: normal curvature, modified likelihood displacement, GARCH models.
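
    A schematic Python sketch of the slope-versus-curvature distinction, using a stand-in Gaussian likelihood with a case-weight perturbation rather than the paper's GARCH machinery (the model, the perturbation direction d, and the step size h are illustrative assumptions):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        y = rng.normal(loc=0.0, scale=1.5, size=300)

        def negloglik(theta, w):
            # Case-weighted Gaussian negative log-likelihood.
            mu, logsig = theta
            sig2 = np.exp(2 * logsig)
            return 0.5 * np.sum(w * ((y - mu) ** 2 / sig2 + np.log(2 * np.pi * sig2)))

        def fit(w):
            return minimize(negloglik, x0=[0.0, 0.0], args=(w,), method="BFGS").x

        w0 = np.ones_like(y)
        theta_hat = fit(w0)
        loglik_hat = -negloglik(theta_hat, w0)

        d = rng.normal(size=y.size)
        d /= np.linalg.norm(d)  # a unit perturbation direction

        def LD(t):
            # Likelihood displacement under the perturbed weights w0 + t * d,
            # evaluated against the unperturbed likelihood.
            return 2.0 * (loglik_hat + negloglik(fit(w0 + t * d), w0))

        h = 1e-2
        slope = (LD(h) - LD(-h)) / (2 * h)                  # first-order diagnostic
        curvature = (LD(h) - 2 * LD(0.0) + LD(-h)) / h**2   # second-order diagnostic
        print(slope, curvature)

    Because the displacement is minimised at the null perturbation, its slope there is essentially zero; the curvature is what carries usable information, which matches the abstract's point that curvature can reveal what slope-based measures miss.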

    Bayesian semiparametric GARCH models

    This paper investigates a Bayesian sampling approach to parameter estimation in the semiparametric GARCH model with an unknown conditional error density, which we approximate by a mixture of Gaussian densities centered at the individual errors and scaled by a common standard deviation. This mixture density has the form of a kernel density estimator of the errors, with its bandwidth playing the role of that standard deviation. The investigation is motivated by the lack of robustness of GARCH models under any parametric assumption on the error density when the inference of interest is error-density based, such as value-at-risk (VaR) estimation. The contribution of the paper is to construct the likelihood and posterior of the model and bandwidth parameters under the proposed mixture error density, and to forecast the one-step out-of-sample density of asset returns; the resulting VaR measure is therefore distribution-free. Applying the semiparametric GARCH(1,1) model to daily stock-index returns in eight stock markets, we find that the semiparametric model is favoured over the GARCH(1,1) model with Student t errors for five indices, and that the parametric GARCH model underestimates VaR relative to its semiparametric counterpart. We also investigate the use and benefit of localized bandwidths in the proposed mixture density of the errors.
    Keywords: Bayes factors, kernel-form error density, localized bandwidths, Markov chain Monte Carlo, value-at-risk.
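
    The kernel-form error density can be written down directly. The sketch below computes its log-likelihood for standardized errors at a given bandwidth h (the leave-one-out evaluation is our assumption, and the GARCH recursion that produces the errors from returns and model parameters is omitted):

        import numpy as np

        def kernel_form_loglik(eps, h):
            # Log-likelihood of the errors under a kernel-form density: each
            # eps[i] is evaluated against a Gaussian mixture centered at the
            # other errors, with common standard deviation (bandwidth) h.
            n = eps.size
            z = (eps[:, None] - eps[None, :]) / h
            dens = np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi))
            np.fill_diagonal(dens, 0.0)  # leave-one-out to avoid degeneracy as h -> 0
            return float(np.sum(np.log(dens.sum(axis=1) / (n - 1))))

    In an MCMC scheme, this term, combined with the GARCH recursion and priors, defines the joint posterior of the model and bandwidth parameters.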

    Bandwidth Selection for Multivariate Kernel Density Estimation Using MCMC

    Kernel density estimation for multivariate data is an important technique with a wide range of applications in econometrics and finance. It has, however, received significantly less attention than its univariate counterpart, mainly because deriving an optimal data-driven bandwidth becomes increasingly difficult as the dimension of the data grows. We provide Markov chain Monte Carlo (MCMC) algorithms for estimating optimal bandwidth matrices for multivariate kernel density estimation. Our approach treats the elements of the bandwidth matrix as parameters whose posterior density can be obtained through the likelihood cross-validation criterion. Numerical studies for bivariate data show that the MCMC algorithm generally performs better than the plug-in algorithm under the Kullback-Leibler information criterion, and is as good as the plug-in algorithm under the mean integrated squared error (MISE) criterion. Numerical studies for five-dimensional data show that our algorithm is superior to the normal reference rule. Our MCMC algorithm is the first data-driven bandwidth selector for kernel density estimation with more than two variables, and the sampling algorithm involves no increased difficulty as the dimension of the data increases.
    Keywords: bandwidth matrices, cross-validation, Kullback-Leibler information, mean integrated squared error, sampling algorithms.
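
    A minimal sketch of such a sampler, assuming a diagonal bandwidth matrix, a Gaussian product kernel, and a flat prior on the log-bandwidths (the paper treats full bandwidth matrices; these choices are simplifications):

        import numpy as np

        def loo_cv_loglik(X, h):
            # Leave-one-out likelihood cross-validation for a Gaussian product
            # kernel with bandwidth matrix diag(h**2).
            n, d = X.shape
            z = (X[:, None, :] - X[None, :, :]) / h
            k = np.exp(-0.5 * np.sum(z**2, axis=2)) / ((2 * np.pi) ** (d / 2) * np.prod(h))
            np.fill_diagonal(k, 0.0)
            return float(np.sum(np.log(k.sum(axis=1) / (n - 1))))

        def mcmc_bandwidth(X, n_iter=2000, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            n, d = X.shape
            log_h = np.log(X.std(axis=0) * n ** (-1.0 / (d + 4)))  # rule-of-thumb start
            cur = loo_cv_loglik(X, np.exp(log_h))
            draws = []
            for _ in range(n_iter):
                prop = log_h + step * rng.normal(size=d)  # random walk on the log scale
                new = loo_cv_loglik(X, np.exp(prop))
                if np.log(rng.uniform()) < new - cur:     # Metropolis acceptance
                    log_h, cur = prop, new
                draws.append(np.exp(log_h))
            return np.array(draws)

    A posterior summary of the draws, such as the mean after a burn-in period, then serves as the bandwidth estimate.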

    A New Procedure For Multiple Testing Of Econometric Models

    A significant role for hypothesis testing in econometrics is diagnostic checking. When checking the adequacy of a chosen model, researchers typically employ a range of diagnostic tests, each designed to detect a particular form of model inadequacy. A major problem is how best to control the overall probability of rejecting the model when it is true and multiple test statistics are used. This paper presents a new multiple testing procedure that checks whether the calculated values of the diagnostic statistics are consistent with the postulated model being true. This is done by combining bootstrapping, to obtain a multivariate kernel density estimate of the joint density of the test statistics under the null hypothesis, with Monte Carlo simulation, to obtain a p-value from this kernel density. We prove that under some regularity conditions, the estimated p-value of our test procedure is a consistent estimate of the true p-value. The proposed procedure is applied to tests for autocorrelation in an observed time series, for normality, and for model misspecification through the information matrix. We find that it has correct or nearly correct size and good power, particularly for more complicated testing problems. We believe it is the first practical method, based on simulation, for calculating the overall p-value for a vector of test statistics.
    Keywords: bootstrapping, consistency, information matrix test, Markov chain Monte Carlo simulation, multivariate kernel density, normality, serial correlation, test vector.
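
    A condensed sketch of the p-value computation, assuming draws of the statistic vector under the null are already available (scipy's default KDE bandwidth is a stand-in for the paper's choice, and the same null draws are reused for the Monte Carlo step for brevity):

        import numpy as np
        from scipy.stats import gaussian_kde

        def joint_pvalue(t_obs, t_null):
            # t_null: (B, k) bootstrap draws of the k test statistics under the
            # null; t_obs: the observed k-vector. The p-value is the estimated
            # probability of landing in a region of lower joint density than t_obs.
            kde = gaussian_kde(t_null.T)       # multivariate KDE of the null joint density
            f_obs = kde(t_obs.reshape(-1, 1))[0]
            f_null = kde(t_null.T)
            return float(np.mean(f_null <= f_obs))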

    Bandwidth Selection for Multivariate Kernel Density Estimation Using MCMC

    We provide Markov chain Monte Carlo (MCMC) algorithms for computing the bandwidth matrix for multivariate kernel density estimation. Our approach treats the elements of the bandwidth matrix as parameters to be estimated, which we do by optimizing the likelihood cross-validation criterion. Numerical results show that the resulting bandwidths are superior to those from all existing methods; for dimensions greater than two, our algorithm is the first practical method for estimating the optimal bandwidth matrix. Moreover, the MCMC algorithm for bandwidth selection presents no additional difficulty as the dimension of the data increases.
    Keywords: bandwidth selection, cross-validation, multivariate kernel density estimation, sampling algorithms.

    A Bayesian approach to bandwidth selection for multivariate kernel regression with an application to state-price density estimation

    Multivariate kernel regression is an important tool for investigating the relationship between a response and a set of explanatory variables. It is generally accepted that the performance of a kernel regression estimator depends largely on the choice of bandwidth rather than on the kernel function. The technique has been employed in a number of empirical studies, including the state-price density estimation pioneered by Aït-Sahalia and Lo (1998). However, the widespread use of multivariate kernel regression has been limited by the difficulty of computing a data-driven bandwidth. In this paper, we present a Bayesian approach to bandwidth selection for multivariate kernel regression. A Markov chain Monte Carlo algorithm is presented to sample the bandwidth vector and other parameters in a multivariate kernel regression model. A Monte Carlo study shows that the proposed bandwidth selector is more accurate than the rule-of-thumb selector known as the normal reference rule (Scott, 1992; Bowman and Azzalini, 1997). The proposed algorithm is then applied to a multivariate kernel regression model often used to estimate the state-price density of Arrow-Debreu securities. For the S&P 500 index options and the DAX index options, we find that for short-maturity options the proposed Bayesian bandwidth selector produces a noticeably different state-price density from the one obtained with the subjective bandwidth selector discussed in Aït-Sahalia and Lo (1998).
    Keywords: Black-Scholes formula, likelihood, Markov chain Monte Carlo, posterior density.
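
    For reference, a short sketch of the multivariate kernel regression (Nadaraya-Watson) estimator whose bandwidth vector is being selected, assuming a Gaussian product kernel:

        import numpy as np

        def nw_regression(x0, X, y, h):
            # Nadaraya-Watson estimate of E[y | x = x0] with one bandwidth per
            # explanatory variable.
            z = (X - x0) / h
            w = np.exp(-0.5 * np.sum(z**2, axis=1))
            return float(np.sum(w * y) / np.sum(w))

    Roughly speaking, the Bayesian scheme feeds leave-one-out residuals from such an estimator into a likelihood whose posterior over h is then sampled by MCMC, much as in the density-estimation sampler sketched earlier.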

    Local Linear Forecasts Using Cubic Smoothing Splines

    We show how cubic smoothing splines fitted to univariate time series data can be used to obtain local linear forecasts. Our approach is based on a stochastic state space model which allows the use of a likelihood approach for estimating the smoothing parameter, and which enables easy construction of prediction intervals. We show that our model is a special case of an ARIMA(0,2,2) model and we provide a simple upper bound for the smoothing parameter to ensure an invertible model. We also show that the spline model is not a special case of Holt's local linear trend method. Finally we compare the spline forecasts with Holt's forecasts and those obtained from the full ARIMA(0,2,2) model, showing that the restricted parameter space does not impair forecast performance.
    Keywords: ARIMA models, exponential smoothing, Holt's local linear forecasts, maximum likelihood estimation, nonparametric regression, smoothing splines, state space models, stochastic trends.
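
    The two benchmark forecasts are easy to reproduce with statsmodels on simulated data (the data-generating process below is hypothetical, and the spline model itself, being a restricted ARIMA(0,2,2), is not re-implemented here):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.holtwinters import Holt

        rng = np.random.default_rng(2)
        # A series with a stochastic local linear trend (twice-integrated noise).
        y = np.cumsum(np.cumsum(0.05 * rng.normal(size=200))) + rng.normal(size=200)

        arima_fit = ARIMA(y, order=(0, 2, 2)).fit()
        holt_fit = Holt(y).fit()

        print(arima_fit.forecast(10))  # unrestricted ARIMA(0,2,2) forecasts
        print(holt_fit.forecast(10))   # Holt's local linear forecasts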

    Exponential Smoothing Model Selection for Forecasting

    Applications of exponential smoothing to time series forecasting usually rely on three basic methods: simple exponential smoothing, trend-corrected exponential smoothing, and a seasonal variation thereof. A common approach to selecting the method appropriate to a particular time series is prediction validation on a withheld part of the sample, using criteria such as the mean absolute percentage error. A second approach is to rely on the most general of the three methods: for annual series this is trend-corrected exponential smoothing; for sub-annual series it is the seasonal adaptation of trend-corrected exponential smoothing. The rationale is that a general method automatically collapses to its nested counterparts when the relevant conditions hold in the data. A third approach is to use an information criterion when maximum likelihood methods are used in conjunction with exponential smoothing to estimate the smoothing parameters. In this paper, these approaches to selecting the forecasting method are compared in a simulation study and on real time series from the M3 forecasting competition. The results indicate that the information criterion approach provides the best basis for automated method selection, provided it is based on Akaike's information criterion.
    Keywords: model selection, exponential smoothing, information criteria, prediction, forecast validation.
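
    A minimal sketch of the information-criterion approach using statsmodels (the helper select_by_aic is ours, and statsmodels' Gaussian likelihood and AIC stand in for the exact criterion used in the paper):

        from statsmodels.tsa.holtwinters import (
            ExponentialSmoothing, Holt, SimpleExpSmoothing,
        )

        def select_by_aic(y, seasonal_periods=None):
            # Fit the basic exponential smoothing methods and keep the one with
            # the smallest AIC; the seasonal variant applies only to sub-annual
            # data, where seasonal_periods is supplied.
            fits = {
                "simple": SimpleExpSmoothing(y).fit(),
                "trend": Holt(y).fit(),
            }
            if seasonal_periods:
                fits["seasonal"] = ExponentialSmoothing(
                    y, trend="add", seasonal="add",
                    seasonal_periods=seasonal_periods,
                ).fit()
            name, best = min(fits.items(), key=lambda kv: kv[1].aic)
            return name, best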