Comparing parametric and semi-parametric approaches for Bayesian cost-effectiveness analyses in health economics
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution, and in particular to model accurately the tail of the distribution, which is highly influential in estimating the population mean. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging: instead of choosing a single parametric model, we specify a set of plausible models for costs and estimate the mean cost with its posterior expectation, which can be obtained as a weighted mean of the posterior expectations under each model, with weights given by the posterior model probabilities. The results are compared with those obtained with a semi-parametric approach that does not require any assumption about the distribution of costs.
Keywords: healthcare cost data, cost-effectiveness analysis, mixture models, Bayesian model averaging
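A minimal sketch of the averaging step described in the abstract, written in generic notation (the symbols y, mu, and M_k are not taken from the paper): the posterior mean cost is a weighted average of the model-specific posterior means, with weights given by the posterior model probabilities.

```latex
% Bayesian model averaging of the mean cost \mu over candidate cost models M_1,...,M_K
% (generic notation, not the paper's own symbols).
E[\mu \mid y] \;=\; \sum_{k=1}^{K} E[\mu \mid M_k, y]\; p(M_k \mid y),
\qquad
p(M_k \mid y) \;=\; \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)} .
```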
Artificial Neural Network Model for a Low Cost Failure Sensor: Performance Assessment in Pipeline Distribution
This paper describes an automated event detection and location system for water distribution pipelines which is based upon low-cost sensor technology and signature analysis by an Artificial Neural Network (ANN). A low-cost failure sensor which measures the opacity or cloudiness of the local water flow has been designed, developed and validated, and an ANN-based system is then described which uses time series data produced by the sensors to construct an empirical model for time series prediction and classification of events. These two components have been installed, tested and verified at an experimental site in a UK water distribution system. Verification of the system has been achieved through a series of simulated burst trials which have provided real data sets. It is concluded that the system has potential in water distribution network management.
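A rough sketch of the kind of ANN classification step the abstract describes, assuming sliding windows of opacity readings are labelled as normal or burst events; the window length, network size, labels, and file names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a small feed-forward ANN that classifies sliding
# windows of opacity readings as "normal" vs "burst event". Window length,
# network size, and data files are assumptions, not the paper's configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier

WINDOW = 32  # number of consecutive opacity samples per input vector

def make_windows(series: np.ndarray, labels: np.ndarray):
    """Slice a 1-D opacity time series into overlapping windows."""
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = labels[WINDOW:]          # label of the sample following each window
    return X, y

# opacity: raw sensor time series; events: 0/1 flags from simulated burst trials
opacity = np.loadtxt("opacity.csv")   # hypothetical data files
events = np.loadtxt("events.csv")
X, y = make_windows(opacity, events)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
model.fit(X, y)                  # train on data from the burst trials
print(model.predict(X[-5:]))     # classify the most recent windows
```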
Benchmarking and incentive regulation of quality of service: an application to the UK electricity distribution utilities
Quality of service has emerged as an important issue in post-reform regulation of electricity distribution networks. Regulators have employed partial incentive schemes to promote cost saving, investment efficiency, and service quality. This paper presents a quality-incorporated benchmarking study of the electricity distribution utilities in the UK between 1991/92 and 1998/99. We calculate technical efficiency of the utilities using the Data Envelopment Analysis technique and productivity change over time using quality-incorporated Malmquist indices. We find that cost-efficient firms do not necessarily exhibit high service quality and that efficiency scores of cost-only models do not show high correlation with those of quality-based models. The results also show that improvements in service quality have made a significant contribution to the sector's total productivity change. In addition, we show that integrating quality of service in regulatory benchmarking is preferable to cost-only approaches.
Keywords: quality of service, benchmarking, incentive regulation, data envelopment analysis, electricity
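For readers unfamiliar with Data Envelopment Analysis, a minimal sketch of the standard input-oriented, constant-returns-to-scale efficiency model follows; the inputs, outputs, and data are hypothetical and are not the variable set used in the paper.

```python
# Illustrative sketch of an input-oriented, constant-returns-to-scale DEA model
# (CCR envelopment form), solved as a linear programme. Example data are made up.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Technical efficiency of decision-making unit `o`.
    X: (n_inputs, n_dmus) input matrix, Y: (n_outputs, n_dmus) output matrix."""
    n_dmus = X.shape[1]
    # decision vector z = [theta, lambda_1, ..., lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n_dmus)]
    # inputs:  X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(X.shape[0])
    # outputs: -Y @ lam <= -y_o   (i.e. Y @ lam >= y_o)
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n_dmus)
    return res.x[0]  # theta in (0, 1]; 1 means the unit is on the efficient frontier

# Hypothetical example: 2 inputs (opex, network length), 1 output (units delivered)
X = np.array([[100.0, 80.0, 120.0],
              [ 50.0, 60.0,  40.0]])
Y = np.array([[900.0, 700.0, 850.0]])
print([round(dea_efficiency(X, Y, j), 3) for j in range(3)])
```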
Incorporating Equity in Regulatory and Benefit-Cost Analysis Using Risk Based Preferences
Governmental guidance for regulatory and benefit-cost analysis is targeted at applied analysts. Existing Federal guidance recommends sensitivity analysis in general without being specific regarding the implicit distributional assumptions of standard benefit-cost analysis. Recommendations for Federal guidance are developed to: 1) better communicate expectations for distributional analysis, 2) develop guidance for descriptive statistics related to distributional issues, and 3) integrate Government-published measures of inequality aversion and evaluate compensation for identified sensitive populations. While such actions have a data collection and analysis cost, they may make the results of regulatory analysis more relevant by investigating both efficiency and equity measures.
Keywords: benefit, risk, equity, distribution, income
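As one concrete example of a measure of inequality aversion of the kind the abstract refers to, the Atkinson index is shown below; this is a representative formulation only, not necessarily the Government-published measure the paper has in mind.

```latex
% Atkinson inequality index over incomes y_1,...,y_n with aversion parameter
% \varepsilon (\varepsilon \neq 1); shown only as a representative measure.
A_{\varepsilon} \;=\; 1 - \frac{1}{\bar{y}}
\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{\,1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}},
\qquad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i .
```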
A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses in health economics
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution, and in particular to model accurately the tail of the distribution, which is highly influential in estimating the population mean. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by Bayesian model averaging are undetermined. However, in accordance with the theory of partial Bayes factors, and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via Path Sampling. The results in terms of cost-effectiveness are compared with those obtained with a semi-parametric approach that does not require any assumption about the distribution of costs.
Keywords: Bayesian model averaging, cost data, health economics, MCMC, non-informative priors
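The "ratio of integrals" mentioned in the abstract is, in the standard fractional Bayes factor formulation from the literature, the fractional marginal likelihood shown below; the notation is generic and not taken from the paper, and 0 < b < 1 denotes the training fraction.

```latex
% Fractional marginal likelihood and fractional Bayes factor (standard form;
% generic notation). Each marginal density is replaced by the ratio q_k(b, y).
q_k(b, y) \;=\; \frac{\displaystyle\int \pi(\theta_k)\, p(y \mid \theta_k, M_k)\, d\theta_k}
                     {\displaystyle\int \pi(\theta_k)\, p(y \mid \theta_k, M_k)^{\,b}\, d\theta_k},
\qquad
B^{F}_{k\ell}(b) \;=\; \frac{q_k(b, y)}{q_\ell(b, y)} .
```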
Reliability analysis of distribution network
Knowledge of the reliability of distribution networks and systems is an important consideration in system planning and operations for the development and improvement of power distribution systems. To achieve the target of as few interruptions to customers as possible, utilities must strive to improve reliability while at the same time reducing cost. It is well known that most customer interruptions are caused by failures in the distribution system. However, valid data are not easy to collect and reliability performance statistics are not easy to obtain, so there is always uncertainty associated with distribution network reliability. For the evaluation and analysis of reliability, it is necessary to have data on the number and range of the examined pieces of equipment, and it is important to have a database of failure rates, repair times and unavailability for each component in the distribution network. This study presents an analysis of SESB's distribution substations and network systems using analytical methods to determine the reliability indices and the effect of distribution substation configuration and network layout on reliability performance. The results obtained are then compared with actual data from SESB to determine the areas of improvement required for mutual benefit and to inform future studies.
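A brief sketch of the analytical computation of two commonly used distribution reliability indices (SAIFI and SAIDI) from per-load-point failure data; the choice of indices and the example numbers are assumptions, since the abstract does not list which indices are evaluated.

```python
# Illustrative sketch: analytical SAIFI and SAIDI from per-load-point failure
# rates, outage durations, and customer counts. Indices and data are assumed
# for demonstration, not taken from the study.
def saifi(failure_rates, customers):
    """System Average Interruption Frequency Index (interruptions per customer per year)."""
    total = sum(customers)
    return sum(l * n for l, n in zip(failure_rates, customers)) / total

def saidi(unavailabilities, customers):
    """System Average Interruption Duration Index (hours per customer per year)."""
    total = sum(customers)
    return sum(u * n for u, n in zip(unavailabilities, customers)) / total

# Hypothetical load points: failure rate (faults/yr), outage time (hrs/yr), customers served
rates = [0.20, 0.10, 0.35]
hours = [0.60, 0.25, 1.40]
custs = [1200, 800, 400]
print(f"SAIFI = {saifi(rates, custs):.3f}, SAIDI = {saidi(hours, custs):.3f}")
```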
On the Evaluation of RDF Distribution Algorithms Implemented over Apache Spark
Querying very large RDF data sets in an efficient manner requires a sophisticated distribution strategy. Several innovative solutions have recently been proposed for optimizing data distribution with predefined query workloads. This paper presents an in-depth analysis and experimental comparison of five representative and complementary distribution approaches. To achieve fair experimental results, we use Apache Spark as a common parallel computing framework by rewriting the concerned algorithms using the Spark API. Spark provides guarantees in terms of fault tolerance, high availability and scalability which are essential in such systems. Our different implementations aim to highlight the fundamental implementation-independent characteristics of each approach in terms of data preparation, load balancing, data replication and, to some extent, query answering cost and performance. The presented measures are obtained by testing each system on one synthetic and one real-world data set over query workloads with differing characteristics and different partitioning constraints.
Comment: 16 pages, 3 figures
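As a flavour of what an RDF distribution strategy looks like when expressed with the Spark API, the sketch below hash-partitions triples by subject using the PySpark RDD interface; subject-hash partitioning is only a representative strategy and is not claimed to be one of the five approaches compared in the paper, and the file path and partition count are assumptions.

```python
# Illustrative sketch: hash-partitioning RDF triples by subject with the
# PySpark RDD API. Strategy, file path, and partition count are assumptions.
from pyspark import SparkContext

sc = SparkContext(appName="rdf-subject-partitioning")

def parse_triple(line):
    """Very rough N-Triples parsing into (subject, (predicate, object))."""
    s, p, o = line.rstrip(" .\n").split(" ", 2)
    return s, (p, o)

triples = sc.textFile("hdfs:///data/triples.nt").map(parse_triple)

# Co-locate all triples sharing a subject on the same partition, so that
# subject-centric (star-shaped) query patterns avoid shuffles.
partitioned = triples.partitionBy(64)   # default portable hash of the key
partitioned.persist()

print(partitioned.getNumPartitions())
```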