
    Happiness as a Driver of Risk-Avoiding Behavior

    Most governments try to discourage their citizens from taking extreme risks with their health and lives. Yet, for reasons not understood, many people continue to do so. We suggest a new approach to this longstanding question. First, we show that expected-utility theory predicts that 'happier' people will be less attracted to risky behaviors. Second, using BRFSS data on seatbelt use in a sample of 300,000 Americans, we document evidence strongly consistent with that prediction. Our result is demonstrated with various methodological approaches, including Bayesian model-selection and instrumental-variable estimation (based on unhappiness caused by widowhood). Third, using data on road accidents from the Add Health data set, we find strongly corroborative longitudinal evidence. These results suggest that government policy may need to address the underlying happiness of individuals rather than focus on behavioural symptoms.
    Keywords: subjective well-being, risky behaviors, effects of well-being, rational carelessness
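
    A minimal sketch of the expected-utility argument, in notation that is ours rather than the paper's: suppose an individual with flow utility u (higher for happier people) can take a risky act that yields an immediate benefit b but raises the probability of death, normalised to utility zero, by \pi. The act is chosen only if

        b + (1 - \pi)\, u > u \quad\Longleftrightarrow\quad b > \pi u .

    The threshold benefit \pi u increases with u, so, other things equal, happier individuals demand more compensation before accepting the same risk; this is the qualitative prediction examined with the BRFSS seatbelt data.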

    Happiness as a Driver of Risk-Avoiding Behavior

    Understanding the reasons why individuals take risks, particularly unnecessary risks, remains an important question in economics. We provide the first evidence of a powerful connection between happiness and risk-avoidance. Using data on 300,000 Americans, we demonstrate that happier individuals wear seatbelts more frequently. This result is obtained with five different methodological approaches, including Bayesian model-selection and an instrumented analysis based on unhappiness through widowhood. Independent longitudinal data corroborate the finding, showing that happiness is predictive of future motor vehicle accidents. Our results are consistent with a rational-choice explanation: happy people value life and thus act to preserve it.
    Keywords: risk preferences, seatbelt usage, vehicle accidents, subjective well-being, happiness
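
    As a rough illustration of the instrumented analysis described above, the sketch below runs a hand-rolled two-stage least squares on simulated data; the variable names (widowed, happiness, seatbelt) and all numbers are assumptions for exposition, not the paper's BRFSS extract, and the identifying assumption is that widowhood affects seatbelt use only through happiness.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5_000

        # Simulated, purely illustrative data: widowhood lowers happiness,
        # and happiness in turn raises seatbelt use.
        widowed   = rng.binomial(1, 0.1, n).astype(float)            # instrument
        happiness = 3.0 - 0.8 * widowed + rng.normal(0.0, 1.0, n)    # endogenous regressor
        seatbelt  = 0.2 + 0.1 * happiness + rng.normal(0.0, 0.5, n)  # outcome

        def ols(y, X):
            """Ordinary least squares coefficient vector."""
            return np.linalg.lstsq(X, y, rcond=None)[0]

        # First stage: project the endogenous regressor onto the instrument.
        Z = np.column_stack([np.ones(n), widowed])
        happiness_hat = Z @ ols(happiness, Z)

        # Second stage: regress the outcome on the fitted first-stage values.
        X = np.column_stack([np.ones(n), happiness_hat])
        beta = ols(seatbelt, X)
        print(f"2SLS estimate of the happiness effect on seatbelt use: {beta[1]:.3f}")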

    A numerically stable algorithm for integrating Bayesian models using Markov melding

    When statistical analyses consider multiple data sources, Markov melding provides a method for combining the source-specific Bayesian models. Markov melding joins together submodels that have a common quantity. One challenge is that the prior for this quantity can be implicit, and its prior density must be estimated. We show that error in this density estimate makes the two-stage Markov chain Monte Carlo sampler employed by Markov melding unstable and unreliable. We propose a robust two-stage algorithm that estimates the required prior marginal self-density ratios using weighted samples, dramatically improving accuracy in the tails of the distribution. The stabilised version of the algorithm is pragmatic and provides reliable inference. We demonstrate our approach using an evidence synthesis for inferring HIV prevalence, and an evidence synthesis of A/H1N1 influenza.
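
    For context, the joint model that Markov melding targets can be sketched, in the notation of the original Markov melding formulation (used here as background, not quoted from this abstract), as follows: M submodels p_m(\phi, \psi_m, Y_m) share a common quantity \phi, and the melded model is

        p_{meld}(\phi, \psi_1, \ldots, \psi_M, Y_1, \ldots, Y_M)
            \propto p_{pool}(\phi) \prod_{m=1}^{M} \frac{p_m(\phi, \psi_m, Y_m)}{p_m(\phi)} ,

    where p_{pool}(\phi) is a pooled prior and each p_m(\phi) = \int p_m(\phi, \psi_m)\, d\psi_m is the submodel's prior marginal for \phi, often available only implicitly. These p_m(\phi) terms are the prior marginal densities whose estimation error destabilises the two-stage sampler and which the weighted-sample self-density-ratio estimator is designed to handle.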

    A Gibbs Sampler for Learning DAGs.

    We propose a Gibbs sampler for structure learning in directed acyclic graph (DAG) models. The standard Markov chain Monte Carlo algorithms used for learning DAGs are random-walk Metropolis-Hastings samplers. These samplers are guaranteed to converge asymptotically but often mix slowly when exploring the large graph spaces that arise in structure learning. In each step, the sampler we propose draws entire sets of parents for multiple nodes from the appropriate conditional distribution. This provides an efficient way to make large moves in graph space, permitting faster mixing whilst retaining asymptotic guarantees of convergence. The conditional distribution is related to variable selection with candidate parents playing the role of covariates or inputs. We empirically examine the performance of the sampler using several simulated and real data examples. The proposed method gives robust results in diverse settings, outperforming several existing Bayesian and frequentist methods. In addition, our empirical results shed some light on the relative merits of Bayesian and constraint-based methods for structure learning.
    Part of this work was ... supported by the Economic and Social Research Council (ESRC) and the Engineering and Physical Sciences Research Council (EPSRC).
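
    The following is an illustrative sketch of the key conditional move, a sampler step that redraws the whole parent set of a single node; the scoring function is a crude placeholder (not the marginal likelihood used in the paper), and the paper's sampler updates parent sets for multiple nodes rather than one at a time.

        import itertools
        import numpy as np

        def log_score(node, parents, data):
            """Placeholder local score: Gaussian fit with a per-parent penalty,
            standing in for a proper marginal likelihood such as BGe or BDeu."""
            n = data.shape[0]
            X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
            resid = data[:, node] - X @ np.linalg.lstsq(X, data[:, node], rcond=None)[0]
            return -0.5 * n * np.log(np.mean(resid ** 2) + 1e-12) - np.log(n) * len(parents)

        def would_cycle(dag, node, parents):
            """True if giving `node` these parents would create a directed cycle."""
            seen, frontier = set(parents), list(parents)
            while frontier:
                v = frontier.pop()
                if v == node:
                    return True
                for a in dag.get(v, ()):          # climb to v's ancestors
                    if a not in seen:
                        seen.add(a)
                        frontier.append(a)
            return False

        def gibbs_update_node(dag, node, data, max_parents=2, rng=None):
            """Draw the entire parent set of one node from its conditional distribution."""
            rng = rng or np.random.default_rng()
            candidates = [v for v in range(data.shape[1]) if v != node]
            parent_sets = [ps for k in range(max_parents + 1)
                           for ps in itertools.combinations(candidates, k)
                           if not would_cycle(dag, node, ps)]
            logw = np.array([log_score(node, ps, data) for ps in parent_sets])
            probs = np.exp(logw - logw.max())
            dag[node] = parent_sets[rng.choice(len(parent_sets), p=probs / probs.sum())]
            return dag

        # Illustrative use: repeated sweeps over the nodes of a small simulated data set.
        data = np.random.default_rng(1).normal(size=(500, 4))
        dag = {v: () for v in range(4)}
        for _ in range(10):
            for v in range(4):
                dag = gibbs_update_node(dag, v, data)
        print(dag)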

    Bayesian structural inference with applications in social science

    Structural inference for Bayesian networks is useful in situations where the underlying relationship between the variables under study is not well understood. This is often the case in social science settings in which, whilst there are numerous theories about interdependence between factors, there is rarely a consensus view that would form a solid base upon which inference could be performed. However, there are now many social science datasets available with sample sizes large enough to allow a more exploratory structural approach, and this is the approach we investigate in this thesis. In the first part of the thesis, we apply Bayesian model selection to address a key question in empirical economics: why do some people take unnecessary risks with their lives? We investigate this question in the setting of road safety, and demonstrate that less satisfied individuals wear seatbelts less frequently. Bayesian model selection over restricted structures is a useful tool for exploratory analysis, but fuller structural inference is more appealing, especially when there is a considerable quantity of data available but scant prior information. However, robust structural inference remains an open problem. Surprisingly, it is especially challenging for large-n problems, which are sometimes encountered in social science. In the second part of this thesis we develop a new approach that addresses this problem: a Gibbs sampler for structural inference, which we show gives robust results in many settings in which existing methods do not. In the final part of the thesis we use the sampler to investigate depression in adolescents in the US, using data from the Add Health survey. The results stress the importance of adolescents failing to seek medical help even when they feel they need it, an aspect that has been discussed previously but not emphasised.

    MultiBUGS: A Parallel Implementation of the BUGS Modeling Framework for Faster Bayesian Inference

    MultiBUGS is a new version of the general-purpose Bayesian modeling software BUGS that implements a generic algorithm for parallelizing Markov chain Monte Carlo (MCMC) algorithms to speed up posterior inference of Bayesian models. The algorithm parallelizes evaluation of the product-form likelihoods formed when a parameter has many children in the directed acyclic graph (DAG) representation, and parallelizes sampling of conditionally-independent sets of parameters. A heuristic algorithm is used to decide which approach to use for each parameter and to apportion computation across computational cores. This enables MultiBUGS to automatically parallelize the broad range of statistical models that can be fitted using BUGS-language software, making the dramatic speed-ups of modern multi-core computing accessible to applied statisticians, without requiring any experience of parallel programming. We demonstrate the use of MultiBUGS on simulated data designed to mimic a hierarchical e-health linked-data study of methadone prescriptions including 425,112 observations and 20,426 random effects. Posterior inference for the e-health model takes several hours in existing software, but MultiBUGS can perform inference in only 28 minutes using 48 computational cores.
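
    To illustrate the core idea of parallelising a product-form likelihood (only an analogy in Python with multiprocessing; MultiBUGS is not written in Python and uses its own heuristic for allocating work), the children of a single parameter are split into blocks whose log-likelihood contributions are computed on separate cores and then summed.

        import numpy as np
        from multiprocessing import Pool

        def partial_loglik(args):
            """Normal(mu, 1) log-likelihood contribution of one block of children."""
            mu, block = args
            return float(np.sum(-0.5 * ((block - mu) ** 2 + np.log(2.0 * np.pi))))

        def loglik_parallel(mu, data, n_cores=4):
            """Split a product-form likelihood across cores and sum the log terms."""
            blocks = np.array_split(data, n_cores)
            with Pool(n_cores) as pool:
                return sum(pool.map(partial_loglik, [(mu, b) for b in blocks]))

        if __name__ == "__main__":
            # Illustrative data: one parameter with many conditionally independent children.
            y = np.random.default_rng(1).normal(0.3, 1.0, size=100_000)
            print(loglik_parallel(0.3, y))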