
    Data Augmentation in the Bayesian Multivariate Probit Model

    This paper is concerned with the Bayesian estimation of a Multivariate Probit model. In particular, it provides an algorithm that obtains draws with low correlation much faster than a pure Gibbs sampling algorithm. The algorithm consists of sampling some characteristics of the slope and variance parameters marginally on the latent data. Estimations with simulated datasets illustrate that the proposed algorithm can be much faster than a pure Gibbs sampling algorithm. For some datasets, it is also much faster than the efficient algorithm proposed by Liu and Wu (1999) in the context of the univariate Probit model.
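    As a point of reference, the data-augmentation idea this paper builds on can be sketched for the univariate probit case in the style of Albert and Chib (1993): draw latent utilities given the data, then draw the slopes given the latents. This is a generic illustration under a flat prior on the slopes, not the paper's multivariate algorithm; all variable names are our own.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=400, seed=0):
    """Gibbs sampler for probit slopes via latent-data augmentation."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)              # flat prior on beta
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # Latent utilities: z_i ~ N(mu_i, 1) truncated to z_i > 0 when
        # y_i = 1 and z_i <= 0 when y_i = 0 (bounds standardised by mu).
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
        # Conditional on z, beta has the usual Gaussian "OLS" posterior.
        beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
        draws[it] = beta
    return draws

# Synthetic demo: intercept 0.3, slope 1.0
demo_rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), demo_rng.normal(size=300)])
y = (X @ np.array([0.3, 1.0]) + demo_rng.normal(size=300) > 0).astype(int)
draws = probit_gibbs(y, X)
post_mean = draws[200:].mean(axis=0)              # discard burn-in
```

    Draws of beta from exactly this kind of pure Gibbs scheme can be highly autocorrelated, which is the mixing problem that sampling some parameters marginally on the latent data is meant to alleviate.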

    Bayesian model averaging in the instrumental variable regression model

    This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.

    Robust Benefit Function Transfer: A Bayesian Model Averaging Approach

    A Benefit Function Transfer obtains estimates of Willingness-to-Pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the Benefit Transfer model. A more expensive alternative for estimating WTP is to analyse only data from the policy site in question, ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem, and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian Model Averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small dataset is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample roughly eightfold when the forest is 'poolable'.
    Keywords: Benefit Transfer; Bayesian Model Averaging; Exchangeability; Non-market Valuation; Panel Data

    Bayesian Estimation and Model Selection in the Generalised Stochastic Unit Root Model

    We develop Bayesian techniques for estimation and model comparison in a novel Generalised Stochastic Unit Root (GSTUR) model. This allows us to investigate the presence of a deterministic time trend in economic series while allowing the degree of persistence to change over time. In particular, the model allows for shifts from stationarity I(0) to nonstationarity I(1) or vice versa. The empirical analysis demonstrates that the GSTUR model provides new insights into the properties of some macroeconomic time series such as stock market indices, inflation and exchange rates.
    Keywords: Stochastic Unit Root, MCMC, Bayesian
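    A stochastic unit root process of the broad type described here can be simulated in a few lines. The exp-AR(1) parameterisation of the time-varying root below is a common STUR form and an assumption on our part, not necessarily the paper's exact GSTUR specification; all parameter values are illustrative.

```python
import numpy as np

def simulate_stur(T=500, phi=0.99, sigma_a=0.01, sigma_e=1.0,
                  trend=0.1, seed=0):
    """Deterministic trend plus a component with root exp(alpha_t)."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros(T)
    u = np.zeros(T)
    for t in range(1, T):
        # alpha_t is a persistent AR(1); the period-t root is exp(alpha_t),
        # so the series drifts between I(0)-like behaviour (root < 1) and
        # explosive / I(1)-like behaviour (root >= 1) over time.
        alpha[t] = phi * alpha[t - 1] + sigma_a * rng.normal()
        u[t] = np.exp(alpha[t]) * u[t - 1] + sigma_e * rng.normal()
    y = trend * np.arange(T) + u          # deterministic time trend
    return y, np.exp(alpha)

y, roots = simulate_stur()
```

    Inspecting the path of `roots` shows stretches above and below one, which is exactly the kind of regime movement the model is designed to detect.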

    Lévy model of cancer

    A small portion of a tissue defines a microstate in gene expression space. Mutations, epigenetic events or external factors cause microstate displacements, which are modeled by combining small independent gene expression variations and large Lévy jumps resulting from the collective variations of a set of genes. The risk of cancer in a tissue is estimated as the probability that the microstate transits from the normal to the tumor region in gene expression space. The formula coming from the contribution of large Lévy jumps seems to provide a qualitatively correct description of the lifetime risk of cancer, and reveals an interesting connection between the risk and the way the tissue is protected against infections.
    Comment: arXiv admin note: text overlap with arXiv:1507.0692
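    The mechanism described can be mimicked with a small Monte Carlo experiment: trajectories take small Gaussian steps plus rare heavy-tailed jumps (a Cauchy draw stands in for a Lévy jump), and risk is estimated as the fraction of trajectories that ever cross an illustrative "tumor region" threshold. All rates, scales and the barrier below are invented for illustration, not the paper's calibration.

```python
import numpy as np

def transit_probability(n_cells=20000, n_steps=200, jump_rate=0.01,
                        small_sd=0.05, jump_scale=0.5, barrier=5.0, seed=0):
    """Fraction of trajectories that ever reach the 'tumor' region."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_cells)                  # displacement from normal state
    hit = np.zeros(n_cells, dtype=bool)
    for _ in range(n_steps):
        step = rng.normal(0.0, small_sd, n_cells)   # small variations
        jumps = rng.random(n_cells) < jump_rate     # rare Levy-type events
        step[jumps] += jump_scale * rng.standard_cauchy(jumps.sum())
        x += step
        hit |= np.abs(x) >= barrier        # crossed into the tumor region?
    return hit.mean()

p_with_jumps = transit_probability()
p_no_jumps = transit_probability(jump_rate=0.0)
```

    With these numbers the Gaussian component alone essentially never reaches the barrier, while the heavy-tailed jumps make the transit probability non-negligible, echoing the paper's claim that the large-jump contribution dominates the risk.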

    Socioeconomic Determinants of Mortality in Taiwan: Combining Individual Data and Aggregate Data

    There is a very large literature that examines the relationship between health and income. Two main hypotheses have been investigated: the relative income hypothesis and the absolute income hypothesis. Most previous studies that used mortality data have been criticized for estimating an aggregate model that does not account for non-linear links between health and income at the individual level. In this paper we follow a novel approach to avoid this bias, combining aggregate mortality data with individual-level data on socio-economic characteristics. We test the relative and absolute income hypotheses using county-level mortality data from the Life Statistics of the Department of Health and individual-level data from the Taiwan census FIES for 1976-2003. We find that there is no strong evidence supporting either hypothesis in the case of the general population. In contrast, we find strong evidence that education does have significant effects on individuals' health, and the estimates are not sensitive to income equivalence scales.
    Keywords: mortality, relative income hypothesis, aggregation bias

    Growth, Convergence and Public Investment. A Bayesian Model Averaging Approach

    The aim of this paper is twofold. First, we study the determinants of economic growth among a wide set of potential variables for the Spanish provinces (NUTS3). Among others, we include various types of private, public and human capital in the group of growth factors. We also analyse whether Spanish provinces have converged in economic terms in recent decades. The second objective is to obtain cross-section and panel data parameter estimates that are robust to model specification. For this purpose, we use a Bayesian Model Averaging (BMA) approach. Bayesian methodology constructs parameter estimates as a weighted average of linear regression estimates for every possible combination of included variables. The weight of each regression estimate is given by the posterior probability of each model.
    Keywords: convergence, public investment, growth, Bayesian Model Averaging
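    The weighting scheme described in the last two sentences can be sketched directly: enumerate every subset of candidate regressors, fit each by OLS, and average the coefficients using posterior model probabilities. The BIC approximation to those probabilities is a common shortcut and an assumption here (the paper's actual priors and weights may differ), and the data are synthetic, not the provincial dataset.

```python
import itertools
import numpy as np

def bma_ols(y, X):
    """BMA point estimates over all regressor subsets, BIC-weighted."""
    n, k = X.shape
    models = []                            # (bic, subset, coef dict)
    for r in range(k + 1):
        for subset in itertools.combinations(range(k), r):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
            models.append((bic, subset, dict(zip(subset, beta[1:]))))
    # Convert BICs to normalised model weights (subtract min for stability).
    bmin = min(b for b, _, _ in models)
    weights = [np.exp(-0.5 * (b - bmin)) for b, _, _ in models]
    total = sum(weights)
    avg = np.zeros(k)                      # a coefficient is 0 in models
    for w, (_, _, coefs) in zip(weights, models):  # that exclude it
        for j, b in coefs.items():
            avg[j] += (w / total) * b
    return avg

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] + rng.normal(size=300)   # only the first regressor matters
est = bma_ols(y, X)
```

    The averaged estimate stays close to the true coefficient on the relevant regressor while the weights shrink the spurious ones toward zero, which is the robustness-to-specification property the paper exploits.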

    Efficient posterior simulation in cointegration models with priors on the cointegration space

    A message coming out of the recent Bayesian literature on cointegration is that it is important to elicit a prior on the space spanned by the cointegrating vectors (as opposed to a particular identified choice for these vectors). In this note, we discuss a sensible way of eliciting such a prior. Furthermore, we develop a collapsed Gibbs sampling algorithm to carry out efficient posterior simulation in cointegration models. The computational advantages of our algorithm are most pronounced with our model, since the form of our prior precludes simple posterior simulation using conventional methods (e.g. a Gibbs sampler involves non-standard posterior conditionals). However, the theory we draw upon implies that our algorithm will be more efficient even than the posterior simulation methods which are used with identified versions of cointegration models.
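    For intuition about what a prior on the cointegrating *space* (rather than on identified vectors) is a prior over: a subspace can be drawn uniformly, in the sense of the invariant measure, by orthonormalising a Gaussian matrix via QR. This generic construction illustrates the object the prior lives on; it is not the note's specific elicitation.

```python
import numpy as np

def draw_cointegration_space(n_vars, rank, rng):
    """Return an n_vars x rank orthonormal basis, uniform over subspaces."""
    A = rng.normal(size=(n_vars, rank))
    Q, R = np.linalg.qr(A)
    # Fix signs so the diagonal of R is positive, making the map A -> Q
    # unique and Q uniformly distributed on the Stiefel manifold.
    Q = Q * np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(0)
B = draw_cointegration_space(n_vars=5, rank=2, rng=rng)
```

    Any basis `B @ C` with `C` invertible spans the same space, which is precisely why the literature prefers priors on the space itself over priors on one arbitrary normalisation.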

    Bayesian inference in a cointegrating panel data model

    This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction representation. It is flexible in the sense that different cross-sectional units can have different cointegration ranks and cointegration spaces. Furthermore, the parameters which characterize short-run dynamics and deterministic components are allowed to vary over cross-sectional units. In addition to a noninformative prior, we introduce an informative prior which allows for information about the likely location of the cointegration space and about the degree of similarity in coefficients in different cross-sectional units. A collapsed Gibbs sampling algorithm is developed which allows for efficient posterior inference. Our methods are illustrated using real and artificial data.

    Bayesian inference in the time varying cointegration model

    There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
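    The core idea of letting the cointegrating space itself evolve like a random walk can be illustrated, for a single cointegrating vector, by applying random-walk increments to a working vector and renormalising each period so that only the spanned direction matters. This is a deliberately minimal sketch under our own parameter choices, not the paper's exact specification.

```python
import numpy as np

def evolve_cointegration_direction(T=200, n_vars=3, sd=0.05, seed=0):
    """Path of a unit vector whose direction drifts like a random walk."""
    rng = np.random.default_rng(seed)
    b = np.zeros((T, n_vars))
    b[0] = np.eye(n_vars)[0]               # start at the first axis
    for t in range(1, T):
        step = b[t - 1] + sd * rng.normal(size=n_vars)  # random-walk move
        b[t] = step / np.linalg.norm(step)  # keep only the direction
    return b

path = evolve_cointegration_direction()
```

    Because each period's vector is rescaled to unit length, the state is effectively the one-dimensional subspace it spans, evolving smoothly over time, which is the analogue of random-walk parameter variation in TVP-VARs.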