
    Estimating a preference-based index from the Japanese SF-36

    Objective: The main objective of the study was to estimate a preference-based Short Form (SF)-6D index from the SF-36 for Japan and compare it with the UK results. Study Design and Setting: The SF-6D was translated into Japanese. Two hundred and forty-nine health states defined by this version of the SF-6D were then valued by a representative sample of 600 members of the Japanese general population using standard gamble (SG). These health-state values were modeled using classical parametric random-effects methods with individual-level data and ordinary least squares (OLS) on mean health-state values, together with a new nonparametric approach using Bayesian methods of estimation. Results: All parametric models estimated on Japanese data were found to perform less well than their UK counterparts in terms of poorer goodness of fit, more inconsistencies, larger prediction errors and bias, and evidence of systematic bias in the predictions. Nonparametric models produce a substantial improvement in out-of-sample predictions. The physical, role, and social dimensions have relatively larger decrements than pain and mental health compared with those in the United Kingdom. Conclusion: The differences between Japanese and UK valuations of the SF-6D make it important to use the Japanese valuation data set estimated using the nonparametric Bayesian technique presented in this article. (C) 2009 Elsevier Inc. All rights reserved.
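    The OLS variant described above regresses mean SG health-state values on indicators for the SF-6D dimension levels, so each coefficient is a utility decrement. A minimal sketch with synthetic data (the dimension count, decrement sizes, and noise level are illustrative assumptions, not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 249 valued health states; dummy count and decrements are illustrative only
    n_states, n_dummies = 249, 10
    X = rng.integers(0, 2, size=(n_states, n_dummies)).astype(float)
    true_decrements = -rng.uniform(0.01, 0.1, size=n_dummies)
    y = 1.0 + X @ true_decrements + rng.normal(0, 0.02, size=n_states)

    # OLS: mean health-state value = intercept + sum of level decrements
    X1 = np.column_stack([np.ones(n_states), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

    print(beta[0])  # intercept, near the full-health value of 1
    ```

    In the actual valuation study the design matrix encodes which level of each SF-6D dimension a state occupies, and the modeling question is whether these additive decrements predict held-out states well.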

    Zero-Truncated Poisson Tensor Factorization for Massive Binary Tensors

    We present a scalable Bayesian model for low-rank factorization of massive tensors with binary observations. The proposed model has the following key properties: (1) in contrast to models based on the logistic or probit likelihood, using a zero-truncated Poisson likelihood for binary data allows our model to scale in the number of \emph{ones} in the tensor, which is especially appealing for massive but sparse binary tensors; (2) side-information in the form of binary pairwise relationships (e.g., an adjacency network) between objects in any tensor mode can also be leveraged, which can be especially useful in "cold-start" settings; and (3) the model admits simple Bayesian inference via batch, as well as \emph{online} MCMC; the latter allows scaling up even for \emph{dense} binary data (i.e., when the number of ones in the tensor/network is also massive). In addition, non-negative factor matrices in our model provide easy interpretability, and the tensor rank can be inferred from the data. We evaluate our model on several large-scale real-world binary tensors, achieving excellent computational scalability, and also demonstrate its usefulness in leveraging side-information provided in the form of mode-network(s). Comment: UAI (Uncertainty in Artificial Intelligence) 201
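    The zero-truncated Poisson link underlying property (1) can be sketched as follows (our notation, not the paper's code): a latent count m ~ Poisson(lam) is thresholded, with y = 1 iff m >= 1, so P(y = 1) = 1 - exp(-lam) and every zero entry contributes a closed-form exp(-lam) term — the data-dependent work concentrates on the observed ones.

    ```python
    import numpy as np

    def bernoulli_prob_from_rate(lam):
        """P(y = 1) under the Poisson-threshold (zero-truncated) link:
        y = 1 iff a latent Poisson(lam) count is at least 1."""
        return 1.0 - np.exp(-lam)

    lam = np.array([0.1, 1.0, 5.0])
    print(bernoulli_prob_from_rate(lam))  # ≈ [0.095, 0.632, 0.993]
    ```

    In the factorization, lam would be built from the non-negative factor matrices (e.g., an inner product of latent factors along each mode), so small rates map to near-certain zeros without evaluating a per-entry sigmoid.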

    Semiparametric Bayesian inference in multiple equation models

    This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it can be written as a seemingly unrelated regressions model with an independent normal-Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction, and posterior computation are immediately available. We use this model in an application involving a two-equation structural model drawn from the labour and returns-to-schooling literatures.
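    The single-equation core of this idea — treating the fitted values f(x_1), ..., f(x_n) as n unknown parameters with a smoothness prior — can be sketched in a few lines (illustrative notation and hyperparameter values, not the paper's): a Gaussian prior on second differences of f turns the posterior mean into a ridge-type smoother.

    ```python
    import numpy as np

    n = 50
    x = np.linspace(0, 1, n)
    rng = np.random.default_rng(1)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, n)  # toy data

    # D: second-difference operator, shape (n-2, n); the prior says
    # D @ f ~ N(0, eta2 * I), i.e., the curve should be locally smooth.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]

    sigma2, eta2 = 0.1 ** 2, 1e-4  # noise and smoothness variances (assumed known here)

    # Posterior mean of f given y: (I/sigma2 + D'D/eta2)^{-1} (y/sigma2)
    A = np.eye(n) / sigma2 + D.T @ D / eta2
    f_hat = np.linalg.solve(A, y / sigma2)
    print(np.max(np.abs(f_hat - np.sin(2 * np.pi * x))))
    ```

    The paper's contributions sit on top of this: the empirical Bayes step estimates eta2 from the data rather than fixing it, and stacking several such equations with correlated errors yields the seemingly unrelated regressions form with a normal-Wishart prior.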

    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families, and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression, and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
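    For orientation, here is the standard EM baseline the paper extends — a two-component 1-D Gaussian mixture fit to exact observations. The paper's contribution modifies the E-step to accept uncertain (indirect) observations; that extension is not reproduced in this sketch, and all values below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

    mu = np.array([-1.0, 1.0])   # initial means
    var = np.array([1.0, 1.0])   # initial variances
    w = np.array([0.5, 0.5])     # initial mixing weights

    for _ in range(50):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of means, variances, mixing weights
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)

    print(np.sort(mu))  # estimated means, close to the true -2 and 2
    ```

    In the paper's extended EM, each training example would instead contribute a likelihood function (itself a mixture), and the E-step responsibilities would integrate against it in closed form thanks to conjugacy.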