
    RePEc and S-WoPEc: Internet access to electronic preprints in Economics

    The first electronic Economics preprint appeared in 1993. Since then, growth has been dramatic as use of the World Wide Web has exploded. RePEc has been instrumental in facilitating access to Economics preprints and in bringing order to the chaos that the WWW frequently represents. In a related effort, S-WoPEc provides user-friendly tools for adding data to the RePEc system. While this is significant in itself, it has also been instrumental in fulfilling S-WoPEc's second goal: to provide increased exposure to Swedish Economics research.

    Maximum-Likelihood Based Inference in the Two-Way Random Effects Model with Serially Correlated Time Effects

    This paper considers maximum likelihood estimation and inference in the two-way random effects model with serial correlation. We derive a straightforward maximum likelihood estimator for the case where the time-specific component follows an AR(1) or MA(1) process. The estimator is easily generalized to arbitrary stationary and strictly invertible ARMA processes. Furthermore, we derive tests of the null hypothesis of no serial correlation as well as tests for discriminating between the AR(1) and MA(1) specifications. A Monte Carlo experiment evaluates the finite-sample properties of the estimators and test statistics.
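    The error structure described above can be illustrated with a small simulation. The sketch below (all parameter values are hypothetical choices, and pooled OLS stands in for the paper's maximum likelihood estimator) generates a two-way random effects panel in which the time-specific component follows an AR(1) process:

    ```python
    import numpy as np

    # Illustrative simulation of a two-way random effects model
    #   y_it = x_it * beta + mu_i + lambda_t + eps_it,
    # where the time effect lambda_t follows an AR(1) process.
    # Parameter values below are hypothetical, chosen for the sketch.
    rng = np.random.default_rng(0)
    N, T = 200, 50            # individuals, time periods
    beta, rho = 1.5, 0.7      # slope and AR(1) coefficient of the time effect
    sigma_mu, sigma_lam, sigma_eps = 1.0, 1.0, 1.0

    x = rng.normal(size=(N, T))
    mu = sigma_mu * rng.normal(size=(N, 1))    # individual effects
    lam = np.zeros(T)                          # AR(1) time effects
    lam[0] = sigma_lam * rng.normal() / np.sqrt(1 - rho**2)  # stationary start
    for t in range(1, T):
        lam[t] = rho * lam[t - 1] + sigma_lam * rng.normal()
    eps = sigma_eps * rng.normal(size=(N, T))  # idiosyncratic errors
    y = beta * x + mu + lam + eps

    # Pooled OLS is consistent for beta here because the regressor is
    # independent of all three error components, even though it ignores
    # the covariance structure that ML estimation would exploit.
    beta_hat = (x.ravel() @ y.ravel()) / (x.ravel() @ x.ravel())
    print(round(beta_hat, 2))
    ```

    A maximum likelihood estimator, by contrast, would model the full covariance of mu_i + lambda_t + eps_it and thereby gain efficiency over this naive fit.
    
    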

    Computational Efficiency in Bayesian Model and Variable Selection

    This paper is concerned with the efficient implementation of Bayesian model averaging (BMA) and Bayesian variable selection when the number of candidate variables and models is large, and estimation of posterior model probabilities must be based on a subset of the models. Efficient implementation is concerned with two issues: the efficiency of the MCMC algorithm itself and efficient computation of the quantities needed to obtain a draw from the MCMC algorithm. For the first aspect, it is desirable that the chain moves well and quickly through the model space and takes draws from regions with high probabilities. In this context there is a natural trade-off between local moves, which make use of the current parameter values to propose plausible values for model parameters, and more global transitions, which potentially allow exploration of the distribution of interest in fewer steps, but where each step is more computationally intensive. We assess the convergence properties of simple samplers based on local moves and some recently proposed algorithms intended to improve on the basic samplers. For the second aspect, efficient computation within the sampler, we focus on the important case of linear models where the computations essentially reduce to least squares calculations. When the chain makes local moves, adding or dropping a variable, substantial gains in efficiency can be made by updating the previous least squares solution.
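    A minimal sketch of a sampler with local moves (not the paper's exact algorithm) looks like the following: each step proposes flipping a single variable in or out of the model and accepts with a Metropolis ratio, with the marginal likelihood of each model approximated here by BIC. The data and true model are simulated stand-ins:

    ```python
    import numpy as np

    # MCMC variable selection with local add/drop moves: flip one
    # coordinate of the inclusion vector per step (a hedged sketch;
    # BIC stands in for an exact marginal likelihood).
    rng = np.random.default_rng(1)
    n, p = 100, 6
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # true model {0, 2}

    def log_marglik(gamma):
        """BIC-style approximation to the log marginal likelihood."""
        if gamma.sum() == 0:
            rss = y @ y
        else:
            Xg = X[:, gamma]
            beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
            resid = y - Xg @ beta
            rss = resid @ resid
        return -0.5 * n * np.log(rss / n) - 0.5 * gamma.sum() * np.log(n)

    gamma = np.zeros(p, dtype=bool)    # start from the null model
    visits = np.zeros(p)
    steps = 2000
    for _ in range(steps):
        j = rng.integers(p)            # local move: flip one variable
        prop = gamma.copy()
        prop[j] = ~prop[j]
        if np.log(rng.uniform()) < log_marglik(prop) - log_marglik(gamma):
            gamma = prop
        visits += gamma                # accumulate inclusion frequencies

    incl_prob = visits / steps         # estimated inclusion probabilities
    print(np.round(incl_prob, 2))
    ```

    The efficiency gain discussed in the abstract comes from the fact that each proposal differs from the current model by one column, so the least squares solve inside `log_marglik` can be updated rather than recomputed from scratch.
    
    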

    Specification and estimation of random effects models with serial correlation of general form

    This paper is concerned with maximum likelihood based inference in random effects models with serial correlation. Allowing for individual effects, we introduce serial correlation of general form in the time effects as well as the idiosyncratic errors. A straightforward maximum likelihood estimator is derived and a coherent model selection strategy is suggested for determining the orders of serial correlation as well as the importance of time and individual effects. The methods are applied to the estimation of a production function for the Japanese chemical industry using a sample of 72 firms observed during 1968-1987. Empirically, our focus is on measuring the returns to scale and technical change for the industry.
    Keywords: Panel data; serial correlation; random effects

    Choosing Factors in a Multifactor Asset Pricing Model: A Bayesian Approach

    We use Bayesian techniques to select factors in a general multifactor asset pricing model. From a given set of 15 factors we evaluate all possible pricing models by the extent to which they describe the data, as given by the posterior model probabilities. Interest rates, premiums, returns on broad-based portfolios and macroeconomic variables are included in the set of considered factors. Using different portfolios as the investment universe, we find strong evidence that a general multifactor pricing model should include the market excess return, the size premium, the value premium and the momentum factor. There is some evidence that the yearly growth rate in industrial production and the term spread are also important factors.
    Keywords: asset pricing; factor models; Bayesian model selection
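    With a modest number of candidate factors, the "evaluate all possible pricing models" step can be done by exhaustive enumeration. The sketch below illustrates the idea on simulated stand-in data (not the paper's factors), ranking every factor subset by a BIC-style approximation to its posterior model probability:

    ```python
    import itertools
    import numpy as np

    # Enumerate every subset of candidate factors and score each model
    # by an approximate posterior probability (hedged sketch: BIC in
    # place of an exact marginal likelihood, simulated data throughout).
    rng = np.random.default_rng(2)
    n, p = 120, 4
    F = rng.normal(size=(n, p))             # candidate factor returns
    r = 1.0 * F[:, 0] + 0.5 * F[:, 1] + 0.3 * rng.normal(size=n)  # test asset

    models, logmls = [], []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            if subset:
                Fk = F[:, subset]
                beta, *_ = np.linalg.lstsq(Fk, r, rcond=None)
                rss = np.sum((r - Fk @ beta) ** 2)
            else:
                rss = r @ r
            logmls.append(-0.5 * n * np.log(rss / n)
                          - 0.5 * len(subset) * np.log(n))
            models.append(subset)

    logmls = np.array(logmls)
    post = np.exp(logmls - logmls.max())
    post /= post.sum()                      # posterior model probabilities
    best = models[int(post.argmax())]
    print(best)
    ```

    With 15 factors this means 2^15 = 32,768 models, which is still small enough to enumerate; the MCMC machinery of the companion papers becomes necessary only when the factor set grows much larger.
    
    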

    Asymptotics for random effects models with serial correlation

    This paper considers the large sample behavior of the maximum likelihood estimator of random effects models. Consistent estimation and asymptotic normality as N and/or T grows large is established for a comprehensive specification which allows for serial correlation in the form of AR(1) for the idiosyncratic or time-specific error component. The consistency and asymptotic normality properties of all commonly used random effects models are obtained as special cases of the comprehensive model. When only N or only T grows large, only a subset of the parameters is consistent, and asymptotic normality is established for the consistent subsets.
    Keywords: Panel data; error components; consistency; asymptotic normality; maximum likelihood

    An Embarrassment of Riches: Forecasting Using Large Panels

    The increasing availability of data and potential predictor variables poses new challenges to forecasters. The task of formulating a single forecasting model that can extract all the relevant information is becoming increasingly difficult in the face of this abundance of data. The two leading approaches to addressing this "embarrassment of riches" are philosophically distinct. One approach builds forecast models based on summaries of the predictor variables, such as principal components, and the second approach is analogous to forecast combination, where the forecasts from a multitude of possible models are averaged. Using several data sets we compare the performance of the two approaches in the guise of the diffusion index or factor models popularized by Stock and Watson and forecast combination as an application of Bayesian model averaging. We find that none of the methods is uniformly superior and that no method performs better than, or is outperformed by, a simple AR(p) process.
    Keywords: Bayesian model averaging; Diffusion indexes; GDP growth rate; Inflation rate
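    The first of the two approaches, the diffusion index idea, can be sketched in a few lines: summarize a large predictor panel by its leading principal components and use those as regressors in the forecasting equation. All data below are simulated stand-ins with a known two-factor structure:

    ```python
    import numpy as np

    # Diffusion-index sketch: extract principal components from a wide
    # predictor panel and regress the target on them (illustrative
    # simulated data; not the paper's data sets).
    rng = np.random.default_rng(3)
    T, Npred, k = 200, 80, 2
    f = rng.normal(size=(T, k))                  # latent factors
    lam = rng.normal(size=(k, Npred))            # factor loadings
    X = f @ lam + rng.normal(size=(T, Npred))    # large predictor panel
    y = f @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=T)  # target

    Xc = X - X.mean(axis=0)                      # center, then extract PCs
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = U[:, :k] * s[:k]                       # first k principal components

    # Forecasting regression of the target on the estimated factors
    Z = np.column_stack([np.ones(T), pcs])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ coef
    r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    print(round(r2, 2))
    ```

    The competing approach, forecast combination via Bayesian model averaging, instead keeps the individual predictors and averages over the many small models they generate.
    
    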

    An Embarrassment of Riches: Forecasting Using Large Panels

    The problem of having to select a small subset of predictors from a large number of useful variables can now be circumvented in forecasting. One possibility is to efficiently and systematically evaluate all predictors and almost all possible models that these predictors in combination can give rise to. The idea of combining forecasts from various indicator models by using Bayesian model averaging is explored and compared to diffusion indexes, another method that uses a large number of predictors to forecast. In addition, forecasts based on the median model are considered.

    Asymptotic properties of the maximum likelihood estimator of random effects models with serial correlation

    This paper considers the large sample behavior of the maximum likelihood estimator of random effects models with serial correlation in the form of AR(1) for the idiosyncratic or time-specific error component. Consistent estimation and asymptotic normality as N and/or T grows large is established for a comprehensive specification which nests these models as well as all commonly used random effects models. When only N or T grows large, only a subset of the parameters is consistent, and asymptotic normality is established for the consistent subsets.
    Keywords: Panel data; serial correlation; random effects

    Computational Efficiency in Bayesian Model and Variable Selection

    Large scale Bayesian model averaging and variable selection exercises present, despite the great increase in desktop computing power, considerable computational challenges. Due to the large scale, it is impossible to evaluate all possible models, and estimates of posterior probabilities are instead obtained from stochastic (MCMC) schemes designed to converge on the posterior distribution over the model space. While this frees us from the requirement of evaluating all possible models, the computational effort is still substantial and efficient implementation is vital. Efficient implementation is concerned with two issues: the efficiency of the MCMC algorithm itself and efficient computation of the quantities needed to obtain a draw from the MCMC algorithm. We evaluate several different MCMC algorithms and find that relatively simple algorithms with local moves perform competitively, except possibly when the data are highly collinear. For the second aspect, efficient computation within the sampler, we focus on the important case of linear models where the computations essentially reduce to least squares calculations. Least squares solvers that update a previous model estimate are appealing when the MCMC algorithm makes local moves, and we find that the Cholesky update is both fast and accurate.
    Keywords: Bayesian Model Averaging; Sweep operator; Cholesky decomposition; QR decomposition; Swendsen-Wang algorithm
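    The Cholesky update mentioned above can be sketched for the "add one variable" move: if L is the Cholesky factor of X'X, the factor of the Gram matrix after appending a column z follows from one triangular solve and a square root, instead of a full refactorization. This is a generic bordering identity, not necessarily the paper's exact implementation:

    ```python
    import numpy as np

    # Bordered Cholesky update: given L with L @ L.T == X.T @ X, obtain
    # the factor of [X z].T @ [X z] without refactorizing from scratch.
    rng = np.random.default_rng(4)
    n, p = 50, 5
    X = rng.normal(size=(n, p))
    z = rng.normal(size=n)                   # variable being added

    L = np.linalg.cholesky(X.T @ X)          # factor of the current Gram matrix
    l12 = np.linalg.solve(L, X.T @ z)        # triangular solve: L @ l12 = X'z
    l22 = np.sqrt(z @ z - l12 @ l12)         # new diagonal entry

    # Assemble the bordered factor and check it against a full refactorization
    L_new = np.zeros((p + 1, p + 1))
    L_new[:p, :p] = L
    L_new[p, :p] = l12
    L_new[p, p] = l22

    G = np.column_stack([X, z])
    L_ref = np.linalg.cholesky(G.T @ G)
    print(np.allclose(L_new, L_ref))
    ```

    The update costs O(p^2) per move instead of the O(p^3) of a fresh factorization, which is where the efficiency gain inside the sampler comes from; dropping a variable admits a similar cheap downdate.
    
    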