
    Almost Unbiased Ridge Estimator in the Inverse Gaussian Regression Model

    The inverse Gaussian regression (IGR) model is commonly used when the response variable is positively skewed. The IGR model parameters are traditionally estimated with the maximum likelihood estimator (MLE). However, when multicollinearity exists among the explanatory variables, the MLE becomes inefficient because its mean squared error (MSE) is inflated. The ridge estimator (RE) is used to remedy this problem. In this paper, we present an almost unbiased ridge estimator for the IGR model to overcome the multicollinearity problem. We investigate the performance of the almost unbiased ridge estimator using a Monte Carlo simulation and compare its results with those of the MLE and the RE in terms of the MSE. In addition, a real dataset is analyzed, and the results show that the suggested estimator performs best when multicollinearity is present among the explanatory variables in the IGR model.
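    The paper's IGR derivation is not reproduced in the abstract, but the underlying idea can be sketched in the ordinary linear-model analogue. The sketch below assumes the common linear almost-unbiased-ridge form beta_AUR = (I - k^2 (X'X + kI)^-2) beta_hat; the design matrix, true coefficients, and the biasing parameter k are all invented for illustration, and this is not the authors' IGR implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 100, 4
    # Deliberately collinear design: last column nearly duplicates the first.
    X = rng.standard_normal((n, p))
    X[:, -1] = X[:, 0] + 0.01 * rng.standard_normal(n)
    beta_true = np.array([1.0, 0.5, -0.5, 1.0])
    y = X @ beta_true + 0.1 * rng.standard_normal(n)

    S = X.T @ X
    k = 1.0           # illustrative biasing parameter (not a tuned value)
    I = np.eye(p)
    A_inv = np.linalg.inv(S + k * I)

    beta_ols = np.linalg.solve(S, X.T @ y)             # least squares (MLE under normal errors)
    beta_ridge = A_inv @ X.T @ y                       # ridge estimator
    beta_aure = (I - k**2 * A_inv @ A_inv) @ beta_ols  # almost unbiased ridge (linear form)

    # Ridge shrinks the coefficient vector relative to least squares;
    # the almost unbiased correction pulls the estimate back toward beta_ols.
    print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge), np.linalg.norm(beta_aure))
    ```

    The collinear column inflates the variance of the least-squares estimate; the ridge shrinkage factor lambda/(lambda + k) on each eigen-direction stabilizes it, and the almost unbiased correction reduces the bias that shrinkage introduces.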

    A comparison of different methods for building Bayesian kriging models

    Kriging is a statistical approach for analyzing computer experiments. Kriging models can serve as fast-running surrogates for computationally expensive computer codes, and they can be built with different methods, such as maximum likelihood estimation and leave-one-out cross-validation. The objective of this paper is to evaluate and compare these methods for building kriging models. The evaluation and comparison are carried out via measures that test the assumptions used in building kriging models. We apply kriging models built with the two methods to a real high-dimensional computer code and demonstrate our evaluation and comparison through these measures on that code.
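    The paper's high-dimensional application cannot be reproduced from the abstract, but the two model-building criteria it compares can be sketched on a toy 1-D problem. Everything here is invented for illustration (the data, the RBF kernel, the nugget, and the lengthscale grid); it is not the paper's code or measures, only a minimal contrast of the two fitting criteria.

    ```python
    import numpy as np

    # Toy 1-D data (assumed for illustration only).
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

    def rbf_kernel(x, lengthscale, nugget=0.0025):
        # Squared-exponential kernel plus a nugget for numerical stability / noise.
        d = x[:, None] - x[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2) + nugget * np.eye(x.size)

    def log_marginal_likelihood(x, y, lengthscale):
        # Criterion 1: Gaussian-process log marginal likelihood.
        K = rbf_kernel(x, lengthscale)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return (-0.5 * y @ alpha
                - np.log(np.diag(L)).sum()
                - 0.5 * x.size * np.log(2 * np.pi))

    def loo_cv_error(x, y, lengthscale):
        # Criterion 2: leave-one-out residuals via the closed form
        # e_i = (K^{-1} y)_i / (K^{-1})_{ii}, avoiding n refits.
        K_inv = np.linalg.inv(rbf_kernel(x, lengthscale))
        e = (K_inv @ y) / np.diag(K_inv)
        return np.sum(e ** 2)

    # Grid search over the lengthscale under each criterion.
    grid = np.linspace(0.05, 0.5, 46)
    ls_mle = grid[np.argmax([log_marginal_likelihood(x, y, l) for l in grid])]
    ls_loo = grid[np.argmin([loo_cv_error(x, y, l) for l in grid])]
    print(f"MLE lengthscale: {ls_mle:.3f}, LOO-CV lengthscale: {ls_loo:.3f}")
    ```

    The two criteria need not select the same hyperparameters, which is exactly the kind of disagreement a comparison study of kriging model-building methods examines.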