
    Connected Hopf algebras and iterated Ore extensions

    We investigate when a skew polynomial extension T = R[x; σ, δ] of a Hopf algebra R admits a Hopf algebra structure, substantially generalising a theorem of Panov. When this construction is applied iteratively in characteristic 0, one obtains a large family of connected noetherian Hopf algebras of finite Gelfand-Kirillov dimension, including, for example, all enveloping algebras of finite-dimensional solvable Lie algebras and all coordinate rings of unipotent groups. The properties of these Hopf algebras are investigated.

    Estimating population cardinal health state valuation models from individual ordinal (rank) health state preference data

    Ranking exercises have routinely been used as warm-up exercises within health state valuation surveys, yet very little use has been made of the information obtained in this process. Instead, research has focussed upon the analysis of health state valuation data obtained using the visual analogue scale, standard gamble and time trade-off methods. Thurstone’s law of comparative judgement postulates a stable relationship between ordinal and cardinal preferences, based upon the information provided by pairwise choices. McFadden proposed that this relationship could be modelled by estimating conditional logistic regression models where alternatives had been ranked. In this paper we report the estimation of such models for the Health Utilities Index Mark 2 and the SF-6D. The results are compared to the conventional regression models estimated from standard gamble data, and to the observed mean standard gamble health state valuations. For both the HUI2 and the SF-6D, the models estimated using rank data are broadly comparable to the models estimated on standard gamble data, and the predictive performance of these models is close to that of the standard gamble models. Our research indicates that rank data has the potential to provide useful insights into community health state preferences. However, important questions remain.
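As a concrete sketch of the Thurstone/McFadden idea described above, the rank-ordered ("exploded") logit treats an observed ranking as a sequence of best-choices from a shrinking choice set. The function below is an illustrative assumption of how the log-likelihood of one ranking could be computed from latent utilities; it is not the authors' estimation code.

```python
import numpy as np

def exploded_logit_loglik(utilities, ranking):
    """Log-likelihood of one observed ranking under the rank-ordered
    (exploded) logit model: the ranking is decomposed into a sequence of
    independent best-choices from the set of remaining alternatives."""
    ll = 0.0
    remaining = list(ranking)
    for _ in range(len(ranking) - 1):  # the final choice is deterministic
        chosen = remaining[0]
        u = np.array([utilities[j] for j in remaining])
        # conditional-logit (softmax) probability of the chosen alternative
        ll += utilities[chosen] - np.log(np.sum(np.exp(u)))
        remaining.pop(0)
    return ll
```

In practice the utilities would be linear indices in health state attributes, with coefficients estimated by maximizing the summed log-likelihood over all respondents' rankings.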

    Modelling the cost effectiveness of interferon beta and glatiramer acetate in the management of multiple sclerosis

    OBJECTIVE: To evaluate the cost effectiveness of four disease modifying treatments (interferon betas and glatiramer acetate) for relapsing remitting and secondary progressive multiple sclerosis in the United Kingdom. DESIGN: Modelling cost effectiveness. SETTING: UK NHS. PARTICIPANTS: Patients with relapsing remitting multiple sclerosis and secondary progressive multiple sclerosis. MAIN OUTCOME MEASURES: Cost per quality adjusted life year gained. RESULTS: The base case cost per quality adjusted life year gained by using any of the four treatments ranged from £42 000 ($66 469; €61 630) to £98 000, based on efficacy information in the public domain. Uncertainty analysis suggests that the probability of any of these treatments having a cost effectiveness better than £20 000 at 20 years is below 20%. The key determinants of cost effectiveness were the time horizon, the progression of patients after stopping treatment, differential discount rates, and the price of the treatments. CONCLUSIONS: Cost effectiveness varied markedly between the interventions. Uncertainty around point estimates was substantial. This uncertainty could be reduced by conducting research on the true magnitude of the effect of these drugs, the progression of patients after stopping treatment, the costs of care, and the quality of life of the patients. Price was the key modifiable determinant of the cost effectiveness of these treatments.
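The headline "cost per QALY" and "probability of cost effectiveness" figures above come from incremental cost-effectiveness analysis. A minimal sketch, with hypothetical helper names, of the incremental cost-effectiveness ratio (ICER) and the probability of cost effectiveness at a willingness-to-pay threshold, computed from simulated incremental costs and QALYs via net monetary benefit:

```python
import numpy as np

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def prob_cost_effective(delta_cost, delta_qaly, threshold):
    """Fraction of simulated (incremental cost, incremental QALY) pairs
    whose net monetary benefit, threshold*dQALY - dCost, is positive."""
    nmb = threshold * np.asarray(delta_qaly) - np.asarray(delta_cost)
    return float(np.mean(nmb > 0))
```

Running `prob_cost_effective` over draws from a probabilistic sensitivity analysis gives exactly the kind of statement made in the abstract, e.g. the probability of being cost effective at a £20 000 threshold.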

    Evaluation of elicitation methods to quantify Bayes linear models

    The Bayes linear methodology allows decision makers to express their subjective beliefs and adjust these beliefs as observations are made. It is similar in spirit to probabilistic Bayesian approaches, but differs in that it uses expectation as its primitive. While substantial work has been carried out in Bayes linear analysis, both in terms of theory development and application, there is little published material on the elicitation of structured expert judgement to quantify models. This paper investigates different methods that could be used by analysts when creating an elicitation process. The theoretical underpinnings of the elicitation methods developed are explored and an evaluation of their use is presented. This work was motivated by, and is a precursor to, an industrial application of Bayes linear modelling of the reliability of defence systems. An illustrative example demonstrates how the methods can be used in practice.
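The Bayes linear adjustment underlying this methodology has a standard closed form: the adjusted expectation is E(X) + Cov(X,D) Var(D)⁻¹ (D − E(D)), with a matching adjusted variance. A minimal NumPy sketch (an illustration of the adjustment itself, not the paper's elicitation tooling):

```python
import numpy as np

def bayes_linear_adjust(ex, ed, var_x, var_d, cov_xd, d_obs):
    """Bayes linear adjusted expectation and variance of X given data D.

    ex, ed: prior expectations of X and D
    var_x, var_d: prior variance matrices of X and D
    cov_xd: prior covariance Cov(X, D)
    d_obs: observed value of D
    """
    gain = cov_xd @ np.linalg.inv(var_d)
    adj_ex = ex + gain @ (d_obs - ed)          # E_D(X)
    adj_var = var_x - gain @ cov_xd.T          # Var_D(X)
    return adj_ex, adj_var
```

The elicitation problem the paper addresses is how to obtain the prior expectations, variances, and covariances that this adjustment consumes.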

    Supporting User-Defined Functions on Uncertain Data

    Uncertain data management has become crucial in many sensing and scientific applications. As user-defined functions (UDFs) become widely used in these applications, an important task is to capture result uncertainty for queries that evaluate UDFs on uncertain data. In this work, we provide a general framework for supporting UDFs on uncertain data. Specifically, we propose a learning approach based on Gaussian processes (GPs) to compute approximate output distributions of a UDF when evaluated on uncertain input, with guaranteed error bounds. We also devise an online algorithm to compute such output distributions, which employs a suite of optimizations to improve accuracy and performance. Our evaluation using both real-world and synthetic functions shows that our proposed GP approach can outperform the state-of-the-art sampling approach with up to two orders of magnitude improvement for a variety of UDFs.
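A minimal sketch of the general idea, assuming a one-dimensional UDF and an RBF kernel: fit a GP surrogate to input/output pairs of the UDF, then push Monte Carlo samples of the uncertain input through the surrogate to approximate the output distribution. This is an illustration only; it is not the paper's optimized online algorithm and omits the guaranteed error bounds.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean at x_test, given UDF samples."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(np.asarray(x_test), x_train) @ np.linalg.solve(K, y_train)

def udf_output_samples(x_train, y_train, input_samples):
    """Approximate the UDF's output distribution on an uncertain input by
    pushing Monte Carlo samples of the input through the GP surrogate."""
    return gp_posterior_mean(x_train, y_train, np.asarray(input_samples))
```

The surrogate is cheap to evaluate, which is what makes it attractive when the UDF itself is expensive and must be evaluated over many sampled inputs.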

    Consistent Application of Maximum Entropy to Quantum-Monte-Carlo Data

    Bayesian statistics within the framework of the maximum entropy concept has been widely used for inferential problems, particularly to infer dynamic properties of strongly correlated fermion systems from Quantum-Monte-Carlo (QMC) imaginary-time data. In current applications, however, a consistent treatment of the error covariance of the QMC data is missing. Here we present a closed Bayesian approach that accounts consistently for the QMC data.
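A consistent treatment of the error covariance amounts to replacing an independent-errors misfit with one that uses the full covariance matrix of the QMC data. A minimal sketch of that ingredient (illustrative only, not the paper's closed Bayesian approach):

```python
import numpy as np

def chi_squared(data, model, cov):
    """Data misfit using the full error covariance matrix, rather than
    assuming independent errors (a diagonal covariance)."""
    r = np.asarray(data) - np.asarray(model)
    return float(r @ np.linalg.solve(np.asarray(cov), r))
```

With a diagonal covariance this reduces to the familiar sum of squared, variance-weighted residuals; correlated QMC bins make the off-diagonal terms matter.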

    Calculating partial expected value of perfect information via Monte Carlo sampling algorithms

    Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition contains 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation, with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate 1) the bias due to maximization, and also the inaccuracy of shortcut algorithms 2) when correlated variables are present and 3) when there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient, and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended, both for greater understanding of decision uncertainty and for analyzing research priorities.
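The nested (two-level) Monte Carlo algorithm described above can be sketched as follows, assuming a binary decision and user-supplied samplers for the parameters of interest (theta) and the remaining parameters (phi). This is an illustration of the generalized algorithm, not the authors' implementation, and it inherits the upward maximization bias discussed in the abstract when sample sizes are small.

```python
import numpy as np

def partial_evpi(net_benefit, sample_theta, sample_phi,
                 n_outer=200, n_inner=200):
    """Two-level Monte Carlo estimate of partial EVPI for parameters theta.

    net_benefit(d, theta, phi) -> scalar; decisions d in {0, 1}.
    Outer loop: sample theta. Inner loop: sample phi given theta.
    """
    inner_means = []
    for _ in range(n_outer):
        th = sample_theta()
        # expected net benefit of each decision, conditional on theta
        nb = np.array([[net_benefit(d, th, sample_phi())
                        for _ in range(n_inner)] for d in (0, 1)])
        inner_means.append(nb.mean(axis=1))
    inner_means = np.array(inner_means)            # shape (n_outer, 2)
    # learn theta first, then decide, vs. decide now under full uncertainty
    value_with_info = inner_means.max(axis=1).mean()
    value_now = inner_means.mean(axis=0).max()
    return value_with_info - value_now
```

Because the outer maximum is taken after averaging over phi, small inner samples inflate `value_with_info`, which is exactly the bias the article analyzes; the abstract's suggestion is that relatively few outer samples with more inner samples can be an efficient allocation.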