522 research outputs found
Bayes: Radical, liberal, or conservative?
This study analyses the effect of pollution on optimal taxation and public provision. Environmental deterioration is modelled as arising from a harmful consumption externality. It is assumed that there are two types of households, with high and low productivity, endogenous wages, and a government using a mixed taxation scheme. Because of the asymmetric information, the government must take the self-selection constraint into account when designing the optimal tax policy. It is shown that the social valuation of the externality consists of terms capturing the effects on consumers, producers, the government, and the labour markets, respectively. It turns out that on the consumer side the direct effect and the self-selection effect have opposite signs, indicating that the environmental and redistributive objectives are inconsistent. However, on the producer side the direct effect and the indirect effect from wage adjustment in the labour markets both have positive signs, indicating consistency between these two goals of the government. Another notable result concerns commodity taxation: it is shown that Dixit's principle of targeting continues to hold unless the endogeneity assumption is extended to apply to all factor prices.
Rejoinder: Struggles with survey weighting and regression modeling
I was motivated to write this paper, with its controversial opening line, "Survey weighting is a mess," by various experiences as an applied statistician.
Bayes, Jeffreys, Prior Distributions and the Philosophy of Statistics
I actually own a copy of Harold Jeffreys's Theory of Probability but have only read small bits of it, most recently over a decade ago to confirm that, indeed, Jeffreys was not too proud to use a classical chi-squared p-value when he wanted to check the misfit of a model to data (Gelman, Meng and Stern, 2006). I do, however, feel that it is important to understand where our probability models come from, and I welcome the opportunity to use the present article by Robert, Chopin and Rousseau as a platform for further discussion of foundational issues. In this brief discussion I will argue the following: (1) in thinking about prior distributions, we should go beyond Jeffreys's principles and move toward weakly informative priors; (2) it is natural for those of us who work in social and computational sciences to favor complex models, contra Jeffreys's preference for simplicity; and (3) a key generalization of Jeffreys's ideas is to explicitly include model checking in the process of data analysis.
Comment: Bayesian Checking of the Second Levels of Hierarchical Models
Bayarri and Castellanos (BC) have written an interesting paper discussing two forms of posterior model check, one based on cross-validation and one based on replication of new groups in a hierarchical model. We think both these checks are good ideas and can become even more effective when understood in the context of posterior predictive checking; a sketch of the group-replication check appears below. For the purpose of discussion, however, it is most interesting to focus on the areas where we disagree with BC.
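As a hedged illustration of the second of these checks, the R sketch below replicates whole new groups from a simple normal-normal hierarchical model and compares a group-level statistic between observed and replicated data. The model, data, and posterior draws are stand-in assumptions for illustration, not BC's setup.

```r
## Sketch: checking the second level of a hierarchical model by
## replicating new groups. Assumed model: theta_j ~ N(mu, tau^2),
## y_j ~ N(theta_j, se_j^2). All data and draws below are hypothetical.
set.seed(3)
J  <- 8
se <- runif(J, 8, 15)             # hypothetical group standard errors
y  <- rnorm(J, mean = 5, sd = 10) # hypothetical observed group estimates

## Stand-in posterior draws of the hyperparameters; in practice these
## would come from fitting the hierarchical model.
n_sims    <- 1000
mu_draws  <- rnorm(n_sims, mean(y), 4)
tau_draws <- abs(rnorm(n_sims, 5, 3))

## Replicate a whole new set of groups per draw and compare a
## second-level test statistic (the spread of the group estimates).
T_rep <- sapply(seq_len(n_sims), function(s) {
  theta_rep <- rnorm(J, mu_draws[s], tau_draws[s])  # new group effects
  y_rep     <- rnorm(J, theta_rep, se)              # their estimates
  sd(y_rep)
})
mean(T_rep >= sd(y))  # posterior predictive p-value for the spread
```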
Discussion of the Article "Website Morphing"
The article under discussion illustrates the trade-off between optimization and exploration that is fundamental to statistical experimental design. In this discussion, I suggest that the research could be made even more effective by checking the fit of the model, comparing observed data to replicated data sets simulated from the fitted model.
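A minimal R sketch of the kind of check being suggested, assuming a toy normal model with stand-in posterior draws; all names and data here are hypothetical, not from the website-morphing application.

```r
## Posterior predictive check: compare an observed test statistic to
## the same statistic computed on replicated data sets.
set.seed(1)
y      <- rnorm(100, mean = 1, sd = 2)  # observed data (simulated here)
n_sims <- 1000

## Stand-in posterior draws for (mu, sigma) under a noninformative
## prior; in practice these would come from the fitted model.
n           <- length(y)
mu_draws    <- rnorm(n_sims, mean(y), sd(y) / sqrt(n))
sigma_draws <- sd(y) * sqrt((n - 1) / rchisq(n_sims, n - 1))

## Replicated data sets and a test statistic (here, the maximum).
T_rep <- sapply(seq_len(n_sims), function(s)
  max(rnorm(n, mu_draws[s], sigma_draws[s])))
T_obs <- max(y)

## Posterior predictive p-value: extreme values signal misfit.
mean(T_rep >= T_obs)
```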
Fully Bayesian computing
A fully Bayesian computing environment calls for the possibility of defining vector and array objects that may contain both random and deterministic quantities, and syntax rules that allow treating these objects much like any variables or numeric arrays. Working within the statistical package R, we introduce an object-oriented framework based on a new random variable data type that is implicitly represented by simulations. We seek to be able to manipulate random variables and posterior simulation objects conveniently and transparently, and to provide a basis for further development of methods and functions that can access these objects directly. We illustrate the use of this new programming environment with several examples of Bayesian computing, including posterior predictive checking and the manipulation of posterior simulations. This new environment is fully Bayesian in that the posterior simulations can be handled directly as random variables.
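The abstract does not give the interface itself, but the core idea can be sketched in a few lines of R: represent a random variable by its vector of simulation draws and define arithmetic draw by draw. The class and function names below are illustrative, not the paper's API.

```r
## A random variable represented implicitly by simulation draws,
## with arithmetic applied draw-by-draw via the Ops group generic.
rv <- function(draws) structure(list(sims = draws), class = "rv")

Ops.rv <- function(e1, e2) {
  s1 <- if (inherits(e1, "rv")) e1$sims else e1
  if (missing(e2)) return(rv(get(.Generic)(s1)))   # unary ops, e.g. -x
  s2 <- if (inherits(e2, "rv")) e2$sims else e2
  rv(get(.Generic)(s1, s2))
}

print.rv <- function(x, ...)
  cat(sprintf("rv: mean %.3f, sd %.3f (%d sims)\n",
              mean(x$sims), sd(x$sims), length(x$sims)))

## Usage: posterior simulations behave like ordinary numbers.
theta <- rv(rnorm(1000, 2, 0.5))  # e.g. posterior draws of a parameter
delta <- theta^2 + 1              # transformed quantity, still an rv
print(delta)
```

The design choice is that every derived quantity carries its full posterior uncertainty automatically, so posterior predictive checks and transformations need no special bookkeeping.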
One vote, many Mexicos: Income and vote choice in the 1994, 2000, and 2006 presidential elections
Using multilevel modeling of state-level economic data and individual-level exit poll data from the 1994, 2000, and 2006 Mexican presidential elections, we find that income has a stronger effect in predicting the vote for the conservative party in poorer states than in richer states -- a pattern that has also been found in recent U.S. elections. In addition (and unlike in the U.S.), richer states on average tend to support the conservative party at higher rates than poorer states. Our findings raise questions regarding the role that income polarization and region play in vote choice. The electoral results since 1994 reveal that collapsing multiple states into large regions entails a significant loss of information that would otherwise uncover sharper and quite revealing differences in voting patterns between rich and poor states, as well as between rich and poor individuals within states.
Sampling for Bayesian computation with large datasets
Multilevel models are extremely useful in handling large hierarchical datasets. However, computation can be a challenge, both in storage and in CPU time per iteration of the Gibbs sampler or other Markov chain Monte Carlo algorithms. We propose a computational strategy based on sampling the data, computing separate posterior distributions based on each sample, and then combining these to get a consensus posterior inference. With hierarchical data structures, we perform cluster sampling into subsets with the same structures as the original data. This reduces the number of parameters as well as the sample size for each separate model fit. We illustrate with examples from climate modeling and newspaper marketing.
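As a hedged R sketch of the sample-then-combine idea: split a large data set into subsets, form each subset's posterior for a normal mean with known variance, and combine by precision weighting. This combination rule is one standard choice for illustration, not necessarily the paper's exact algorithm.

```r
## Consensus posterior from subset posteriors, normal-mean model with
## known sigma and a flat prior (in which case the combination is exact).
set.seed(2)
sigma <- 10
y <- rnorm(1e5, mean = 3, sd = sigma)   # a "large" data set
K <- 10
idx <- split(seq_along(y), rep(seq_len(K), length.out = length(y)))

## Each subset yields a normal posterior for mu:
## mean = subset mean, variance = sigma^2 / n_k.
sub_mean <- sapply(idx, function(i) mean(y[i]))
sub_var  <- sapply(idx, function(i) sigma^2 / length(i))

## Combine the K subset posteriors by precision weighting.
w <- 1 / sub_var
post_mean <- sum(w * sub_mean) / sum(w)
post_var  <- 1 / sum(w)
c(post_mean, sqrt(post_var))  # matches the full-data posterior here
```

With conjugate subset posteriors the precision-weighted combination recovers the full-data posterior exactly; for general models the same idea is applied approximately to simulation draws.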