
    Eliciting Preferences for Collectively Financed Health Programmes: the Willingness to Assign Approach

    Improving public involvement in health system decision making is a primary goal of health system reform. However, evidence on how best to elicit preferences for health care programmes remains limited. This paper examines a contingent choice technique for eliciting preferences among health programmes, the so-called willingness to assign (WTAS) approach. For comparative purposes, we also elicited contingent rankings and the willingness to pay extra taxes. We argue that WTAS reveals the relative (monetary-based) values of a set of competing public programmes under a hypothetical healthcare budget assessment. Experimental evidence is reported from a deliberative empirical study valuing ten health programmes in the context of the Catalan Health Service. Evidence from our experimental study reveals that preferences are internally more consistent and slightly less affected by preference reversals than values revealed by the willingness to pay (WTP) extra taxes approach. Consistent with prior studies, we find that the deliberative approach helped to avoid possible misunderstandings. Interestingly, although programmes promoting health received the highest relative valuations, those promoting other health benefits also ranked highly.
    Keywords: willingness to assign, willingness to pay, health system benefits

    Incremental willingness to pay: a theoretical and empirical exposition

    Applications of willingness to pay (WTP) have shown the difficulty of discriminating between various options. This reflects the problem of embedding in both its specific sense, of options being nested within one another, and its more general sense, whereby respondents cannot discriminate between close substitutes or between more disparate rivals for the same budget. Furthermore, high proportions of reversals between WTP-based and simple preference-based rankings of options are often reported. Although an incremental WTP approach was devised to encourage more differentiated answers and a higher degree of consistency among respondents, a theoretical basis for this approach has not been elucidated, and there is little evidence that it achieves greater consistency between explicit rankings and the implicit rankings inferred from WTP values. We address both these issues. Following our theoretical exposition, the standard and incremental approaches were compared with explicit ranking in a study assessing preferences for different French emergency care services. 280 persons, representative of the French adult population, were interviewed; half received the incremental version, the other half the standard version. Results suggest that the incremental approach provides a ranking of options fully in line with explicit ranking. The standard approach was reasonably consistent with explicit ranking but proved unable to differentiate between the five most preferred providers, as predicted by theory. Our findings suggest that the incremental approach provides results which can be used in priority-setting contexts.
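    The mechanism behind the consistency result can be sketched as follows. In an incremental design, respondents rank the options first, state a WTP for the lowest-ranked one, and then state only the *extra* amount for each step up the ranking, so implied values are cumulative sums and are monotone in the explicit ranking by construction. All names and numbers below are illustrative assumptions, not data from the paper.

```python
from itertools import accumulate

# Hypothetical explicit ranking, least-preferred first (labels assumed).
options = ["provider E", "provider D", "provider C", "provider B", "provider A"]

base_wtp = 5.0                     # stated WTP for the lowest-ranked option
increments = [2.0, 1.5, 3.0, 0.5]  # stated extra WTP for each step up

# Implied value of each option = base WTP plus all increments below it.
values = list(accumulate([base_wtp] + increments))

for opt, v in zip(options, values):
    print(f"{opt}: {v:.1f}")
# → 5.0, 7.0, 8.5, 11.5, 12.0: strictly increasing as long as increments are
# positive, so reversals against the explicit ranking cannot occur.
```

A standard (non-incremental) design elicits each option's WTP independently, which is where reversals against the explicit ranking can creep in.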

    Decision by sampling

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute’s subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision’s context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities
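    The core DbS computation described above is small enough to state directly: a target attribute value is compared, pairwise and ordinally, against a memory sample, and its subjective value is its relative rank in that sample. The function name and sample values below are illustrative assumptions, not from the paper.

```python
def dbs_subjective_value(target, sample):
    """Relative rank of `target` within `sample`, in [0, 1]."""
    wins = sum(1 for s in sample if target > s)  # binary ordinal comparisons
    return wins / len(sample)

# A right-skewed memory sample of monetary gains (small amounts are far more
# common than large ones) makes the rank-based value function concave in money.
sample = [1, 2, 3, 5, 8, 13, 21, 50, 100, 500]

print(dbs_subjective_value(10, sample))   # → 0.5
print(dbs_subjective_value(200, sample))  # → 0.9 (20x the money, not 20x the value)
```

The same rank logic, applied to samples of probabilities, losses, or delays, yields the other regularities the abstract lists (probability over/underweighting, loss aversion, hyperbolic discounting).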

    A framework for the selection of the right nuclear power plant

    Civil nuclear reactors are used for the production of electrical energy. Vendors in the nuclear industry propose several reactor designs, ranging in size from 35–45 MWe up to 1600–1700 MWe. Choosing the right design is a multidimensional problem, since a utility has to consider not only financial factors such as the levelised cost of electricity (LCOE) and the internal rate of return (IRR), but also so-called "external factors" such as the required spinning reserve, the impact on local industry, and social acceptability. It is therefore necessary to balance the advantages and disadvantages of each design over the entire life cycle of the plant, usually 40–60 years. The scientific literature offers several techniques for solving this multidimensional problem. Unfortunately, it does not seem possible to apply these methodologies as they are, since the problem is too complex and it is difficult to provide consistent and trustworthy expert judgements. This paper fills the gap by proposing a two-step framework for choosing the best nuclear reactor at the pre-feasibility study phase. The paper shows in detail how to use the methodology, comparing the choice of a small-medium reactor (SMR) with that of a large reactor (LR), characterised, according to the International Atomic Energy Agency (2006), by an electrical output respectively lower and higher than 700 MWe.
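    Of the financial factors the abstract cites, LCOE has a compact standard definition: discounted lifetime costs divided by discounted lifetime energy output. Below is a minimal sketch of that textbook formula; the function name and the example figures are illustrative assumptions, not taken from the paper.

```python
def lcoe(capex, annual_opex, annual_mwh, years, rate):
    """Levelised cost of electricity: discounted costs / discounted MWh."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + annual_opex * sum(disc)   # up-front capital + discounted O&M
    energy = annual_mwh * sum(disc)           # discounted generation
    return costs / energy

# Illustrative large-reactor figures: $4bn capex, $80m/yr O&M,
# 8 TWh/yr output, 60-year life, 7% discount rate.
print(f"LCOE: {lcoe(4e9, 80e6, 8_000_000, 60, 0.07):.1f} $/MWh")
```

Because capex is paid up front while energy is discounted over 40–60 years, LCOE is highly sensitive to the discount rate, which is one reason capital-intensive designs are hard to compare on financial factors alone.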

    An extension of the Becker proposition to non-expected utility theory

    In a seminal paper, Becker (1968) showed that the most efficient way to deter crime is to impose the severest possible penalty (to maintain adequate deterrence) with the lowest possible probability (to economize on enforcement costs). We shall call this the Becker proposition (BP). The BP is derived under the assumptions of expected utility theory (EU). However, EU is heavily rejected by the evidence, and a range of non-expected utility theories have been proposed to explain it. The two leading alternatives to EU are rank dependent utility (RDU) and cumulative prospect theory (CP). The main contributions of this paper are: (1) we formalize the BP in a more satisfactory manner; (2) we show that the BP holds under RDU and CP; (3) we give a formal behavioral approach to crime and punishment that could have applicability to a wide range of problems in the economics of crime.
    Keywords: crime and punishment; non-linear weighting of probabilities; cumulative prospect theory; rank dependent utility; probability weighting functions; punishment functions
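    The BP logic can be made concrete with a small numerical sketch: hold the expected fine p·F fixed while lowering the detection probability p and raising the fine F, and compare the penalty as perceived under linear probabilities (EU) with the same penalty under an inverse-S probability weighting function, as in RDU/CP. The Tversky–Kahneman (1992) weighting form and gamma = 0.61 used here are standard defaults from that literature, not parameters taken from this paper.

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

EXPECTED_FINE = 10.0  # keep p * F constant across enforcement regimes

for p in (0.5, 0.1, 0.01):
    F = EXPECTED_FINE / p  # Becker: lower p, raise F in proportion
    print(f"p={p:<5} F={F:>7.1f}  EU penalty={p * F:.2f}  "
          f"weighted penalty={weight(p) * F:.2f}")
```

Under EU the perceived penalty is constant at 10 by construction; under inverse-S weighting it grows as p shrinks, because small probabilities are overweighted, which is consistent with the paper's claim that the BP survives under RDU and CP.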

    An axiomatization of cumulative prospect theory

    This paper presents a method for axiomatizing a variety of models of decision making under uncertainty, including Expected Utility and Cumulative Prospect Theory. This method identifies, for each model, the situations that permit consistent inferences about the ordering of value differences. Examples of rank-dependent and sign-dependent preference patterns are used to motivate the models and the tradeoff consistency axioms that characterize them. The major properties of the value function in Cumulative Prospect Theory, diminishing sensitivity and loss aversion, are contrasted with the principle of diminishing marginal utility that is commonly assumed in Expected Utility.
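    The value-function properties the abstract contrasts (diminishing sensitivity, loss aversion) and the sign- and rank-dependent weighting have standard parametric forms in the CPT literature. The sketch below uses the Tversky–Kahneman (1992) functional forms with their median parameter estimates (alpha = 0.88, lambda = 2.25, gamma = 0.61); these specific forms are the conventional illustration, not material from this paper.

```python
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61  # Tversky-Kahneman (1992) medians

def value(x):
    """CPT value function: diminishing sensitivity plus loss aversion."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a loss of 100 is felt more than twice as strongly as a gain.
print(value(100), value(-100))
# Inverse-S weighting: small p overweighted, large p underweighted.
print(weight(0.01), weight(0.99))
```

Diminishing sensitivity shows up as concavity of `value` over gains and convexity over losses, both measured from the reference point 0 rather than from total wealth as in Expected Utility.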

    Measuring covariation between preference parameters: A simulation study

    Much of the empirical success of Rank-Dependent Expected Utility Theory and Cumulative Prospect Theory is due to the fact that they allow for nonlinearity in both outcomes (through the utility function) and probabilities (through the probability weighting function). Since risk attitude is jointly determined by the shapes of the two functions, it would be instructive to measure how the degree of risk aversion incorporated in the utility function empirically covaries with its counterpart from the probability weighting function. We conduct a large-scale simulation to assess whether an elicitation procedure based on the trade-off method, which essentially equals that used in recent empirical studies, allows one to reliably measure the quantity of interest. We find a strong systematic distortion of measurement, which points to the limitations of presently available elicitation techniques.