8,409 research outputs found

    Design efficiency for non-market valuation with choice modelling: how to measure it, what to report and why

    We review the basic principles for evaluating design efficiency in discrete choice modelling, with a focus on the efficiency of WTP estimates from the multinomial logit model. The discussion is developed under the realistic assumption that researchers can plausibly define a prior on the utility coefficients. Some new measures of design performance in applied studies are proposed and their rationale discussed. An empirical example based on the generation and comparison of fifteen separate designs from a common set of assumptions illustrates the considerations relevant to non-market valuation, with particular emphasis placed on C-efficiency. Conclusions are drawn for the practice of reporting in non-market valuation and for future work on design research.
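    The D-error at the heart of such efficiency measures can be sketched for the multinomial logit case. The following is a minimal illustration, not the paper's code; the toy design, the priors and the function name are invented for the example. It evaluates the Fisher information of an MNL design under prior coefficients and reports the determinant-based D-error that efficient design algorithms seek to minimise.

    ```python
    # Illustrative sketch: D-error of a stated-choice design for an MNL model
    # under prior coefficients (design, priors and function name are assumptions).
    import numpy as np

    def d_error(design, beta):
        """design: (S, J, K) array -- S choice sets, J alternatives, K attributes.
        beta: (K,) prior utility coefficients. Returns the D-error."""
        S, J, K = design.shape
        info = np.zeros((K, K))
        for s in range(S):
            X = design[s]                 # (J, K) attribute levels in this choice set
            p = np.exp(X @ beta)
            p /= p.sum()                  # MNL choice probabilities under the prior
            # Fisher information contribution of this choice set
            info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
        # D-error: determinant of the AVC matrix, normalised per parameter
        return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

    # Toy design: 4 choice sets, 2 alternatives, 2 attributes, priors (-0.8, -0.4)
    design = np.array([
        [[0., 0.], [1., 1.]],
        [[0., 2.], [2., 0.]],
        [[1., 0.], [0., 1.]],
        [[2., 2.], [0., 0.]],
    ])
    print(d_error(design, np.array([-0.8, -0.4])))
    ```

    Replicating every choice set doubles the information matrix, so the D-error of the duplicated design is exactly half that of the original, which is a quick sanity check on an implementation.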

    The role of the reference alternative in the specification of asymmetric discrete choice models

    Within the discrete choice modelling literature, there has been growing interest in including reference alternatives within stated choice survey tasks. Recent studies have investigated asymmetric utility specifications by estimating discrete choice models with different parameters for gains and losses relative to the values of the reference attributes. This paper analyses asymmetric discrete choice models by comparing specifications expressed as deviations from the reference point with specifications expressed in absolute values. The results suggest that the selection of the appropriate asymmetric model specification should be based on the type of stated choice experiment. Keywords: stated choice experiments, reference alternative, preference asymmetry, willingness to pay.

    A comparison of prospect theory in WTP and preference space

    The importance of willingness to pay (WTP) and willingness to accept (WTA) measures in the evaluation of policy measures has led to a constant stream of research examining survey methods and model specifications seeking to capture and explain the concept of marginal rates of substitution as fully as possible. Stated choice experiments pivoted around a reference alternative allow the specification of discrete choice models to accommodate the prospect theory reference dependence assumption. This permits an investigation of theories related to loss aversion and diminishing sensitivity, and a test of the discrepancy between WTP and WTA widely documented within the literature. With more advanced classes of discrete choice models at our disposal, it is now possible to test different preference specifications that are better able to measure WTP and WTA values. One such model, allowing utility to be specified directly in WTP space, has recently shown interesting qualities. This paper compares and contrasts models estimated in preference space with those estimated in WTP space, allowing for asymmetry in the marginal utilities by estimating different parameters for reference, gain and loss values. The results suggest a better model fit for the data estimated in WTP space, contradicting the findings of previous research. The parameter estimates provide significant evidence of loss aversion and diminishing sensitivity, even though the symmetric specification outperforms the asymmetric ones. Finally, the analysis of the WTP and WTA measures confirms that WTA exceeds WTP, and highlights the appeal of the WTP space specification in terms of the plausibility of the estimated measures. Keywords: choice experiments, willingness to pay space, preference asymmetry.
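    The preference-space and WTP-space forms discussed here are algebraic rearrangements of the same utility function. A minimal sketch (the attribute values and coefficients are invented for illustration) shows that, for fixed parameters, the two forms yield identical MNL choice probabilities; the practical differences between them arise once parameters are treated as random in estimation.

    ```python
    # Sketch of the preference-space vs WTP-space reparameterisation for a
    # two-attribute MNL (time t, cost c). All values are illustrative assumptions.
    import numpy as np

    def probs_preference_space(t, c, beta_t, beta_c):
        v = beta_t * t + beta_c * c            # utility in preference space
        e = np.exp(v - v.max())
        return e / e.sum()

    def probs_wtp_space(t, c, beta_c, wtp_t):
        # utility rewritten as beta_c * (c + wtp_t * t), with wtp_t = beta_t / beta_c
        v = beta_c * (c + wtp_t * t)
        e = np.exp(v - v.max())
        return e / e.sum()

    t = np.array([10., 20., 15.])              # travel times of three alternatives
    c = np.array([5., 2., 3.5])                # costs
    beta_t, beta_c = -0.12, -0.4
    p1 = probs_preference_space(t, c, beta_t, beta_c)
    p2 = probs_wtp_space(t, c, beta_c, beta_t / beta_c)
    print(np.allclose(p1, p2))                 # prints True: same likelihood surface
    ```

    Because the two parameterisations are observationally equivalent at fixed parameters, any model-fit differences reported in the paper come from how the distributional assumptions interact with the reparameterised coefficients.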

    Interpreting discrete choice models based on best-worst data: A matter of framing

    Best-worst choice response tasks have become increasingly popular as a means of increasing the amount of information captured from respondents undertaking stated preference experiments. In analysis, best-worst data are often exploded to provide additional pseudo-observations which may aid in model estimation. Recent studies, however, have questioned many of the underlying assumptions that typically accompany best-worst studies, such as the symmetry of preferences across the best and worst responses and the assumption of equal error variances across the two response types. This paper first provides a detailed description of the various best-worst tasks that have appeared within the literature before arguing that violations of preference symmetry and homogeneity of error variance should be the norm. This is because asking respondents to choose their most and their least preferred option out of a set of alternatives reflects two different response frames, one positive and one negative, and behaviourally there is no reason to assume that the preferences (and error variances) obtained from one type of question should precisely mirror those of the other. Using an empirical case study, the impact of the framing of these questions is examined. Finally, it is argued that best-worst data should be treated in a manner similar to data fusion, where one combines two different sources of discrete choice data.
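    The "exploding" of a best-worst response into pseudo-observations can be sketched as follows; the function and frame tags are hypothetical, but tagging each pseudo-observation with its response frame is what allows preferences and error variances to differ by frame, as the paper argues they should.

    ```python
    # Hypothetical sketch of exploding one best-worst response into
    # frame-tagged pseudo-observations (names and tags are assumptions).
    def explode_best_worst(alternatives, best, worst):
        """Return (choice_set, chosen, frame) pseudo-observations.
        Frame 'best': the best alternative chosen from the full set.
        Frame 'worst': the worst alternative chosen from the remaining set
        (typically modelled with sign-reversed utility in estimation)."""
        obs = [(list(alternatives), best, "best")]
        remaining = [a for a in alternatives if a != best]
        obs.append((remaining, worst, "worst"))
        return obs

    pseudo = explode_best_worst(["bus", "car", "train", "bike"],
                                best="car", worst="bike")
    for choice_set, chosen, frame in pseudo:
        print(frame, chosen, choice_set)
    ```

    In a data-fusion treatment, the `frame` tag would then drive a frame-specific scale (error variance) parameter rather than pooling both pseudo-observations as if they came from one data source.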

    Evaluating the effects of organizational culture on post-merger integration

    This doctoral research project examines the impact of organizational culture on post-merger integration in the travel and travel services industry for acquisitions valued under $5B. The study uses a mixed-method research approach to determine whether culture plays a critical role in the success or failure of M&A deals. The research focuses on 50 M&A transactions that occurred between three and five years ago at the time of the study. Participants from both sides of the transactions completed an integration outcomes survey, reporting on financial, cultural, and overall success. In addition, each side had five participants who completed the Organizational Cultural Assessment Instrument (OCAI), and six individuals were interviewed (three from each side) regarding the three most successful and three least successful transactions. The study's findings shed light on key factors impacting the integration process. The OCAI results revealed that, on average, acquired companies exhibited a more clan-like culture, while acquiring companies tended to be hierarchies. In addition, cultural similarities between merging companies did not significantly influence their success. The interviews emphasized the importance of addressing cultural differences between merging institutions, involving founders in the integration process, engaging employees, and understanding the acquired company's business. These findings have practical implications for executives involved in M&A activities, guiding how to facilitate successful integration. Organizations can increase the likelihood of a successful merger or acquisition by identifying potential cultural conflicts early on and taking appropriate steps to mitigate their impact.

    Experiences with the Greenstone digital library software for international development

    Greenstone is a versatile open source multilingual digital library environment, emerging from research on text compression within the New Zealand Digital Library Research Project in the Department of Computer Science at the University of Waikato. In 1997 we began to work with the Human Info NGO to help them produce fully-searchable CD-ROM collections of humanitarian information. The software has since evolved to support a variety of application contexts. Rather than treating it simply as a delivery mechanism, we have emphasised empowering users to create and distribute their own digital collections.

    Observed efficiency of a D-optimal design in an interactive agency choice experiment

    There have been a number of recent calls within the choice literature to examine the role of social interactions in preference formation. McFadden (2001a,b) stated that this area should be a high-priority research agenda for choice modellers. Manski (2000) came to a similar conclusion and offered a plea for better data to assist in understanding the role of interactions between social agents. The interactive agency choice experiment (IACE) methodology represents a recent development in discrete choice directed towards these pleas (see e.g., Brewer and Hensher 2000). The study of the influences that group interactions have upon choice brings with it issues that need to be overcome not only in modelling, but also in setting up the stated choice experiment itself. Currently, the state of practice in experimental design centres on orthogonal designs (Alpizar et al., 2003), which are suitable when applied to surveys with a large sample size. In a stated choice experiment involving interdependent freight stakeholders in Sydney (see Hensher and Puckett 2007, Puckett et al. 2007, Puckett and Hensher 2008), one significant empirical constraint was the difficulty of recruiting unique decision-making groups to participate. The expected relatively small sample size led us to seek an alternative experimental design: we constructed an optimal design that utilised extant information regarding the preferences and experiences of respondents, to achieve statistically significant parameter estimates under a relatively low sample size (see Rose and Bliemer, 2006). The D-efficient experimental design developed for the study is unique in that it centred on the choices of interdependent respondents. Hence, the generation of the design had to account for the preferences of two distinct classes of decision makers: buyers and sellers of road freight transport.
This paper discusses the process by which these (non-coincident) preferences were used to seed the generation of the experimental design, and then examines the relative power of the design through an extensive bootstrap analysis of increasingly restricted sample sizes for both decision-making classes in the sample. We demonstrate the strong potential for efficient designs to achieve empirical goals under sampling constraints, whilst identifying limitations to their power as sample size decreases.
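The kind of bootstrap used to probe design power under shrinking samples can be sketched in miniature; the data and statistic below are stand-ins invented for illustration, not the study's freight data. The idea is simply to resample at decreasing sizes and watch the spread of the estimate grow.

```python
# Illustrative bootstrap over increasingly restricted sample sizes
# (synthetic data; the real study resampled parameter estimates).
import numpy as np

def bootstrap_spread(data, n_sub, draws=500, seed=0):
    """Std. deviation of a statistic (here the mean) over bootstrap
    resamples of size n_sub drawn with replacement from data."""
    rng = np.random.default_rng(seed)
    stats = [rng.choice(data, size=n_sub, replace=True).mean()
             for _ in range(draws)]
    return np.std(stats)

rng = np.random.default_rng(1)
responses = rng.normal(loc=-0.5, scale=1.0, size=200)  # stand-in respondent data
for n in (200, 100, 50, 25):
    print(n, round(bootstrap_spread(responses, n), 3))
```

As the resample size falls, the spread of the bootstrap statistic widens, which is the pattern the paper's power analysis traces for its parameter estimates.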

    A Comparison of the Impacts of Aspects of Prospect Theory on WTP/WTA Estimated in Preference and WTP/WTA Space

    The importance of willingness to pay (WTP) and its counterpart, willingness to accept (WTA), in the evaluation of policy measures has led to a constant stream of research examining survey methods and model specifications seeking to capture and explain the concept of marginal rates of substitution as fully as possible. Stated choice experiments pivoted around a reference alternative allow the specification of discrete choice models to accommodate aspects of Prospect Theory, in particular reference dependence. This permits an investigation of theories related to loss aversion and diminishing sensitivity, widely documented within the literature. This paper empirically examines a number of aspects of decision-making processes posited by Prospect Theory, namely reference dependence, loss aversion and diminishing sensitivities. Unlike previous research, which has examined these assumptions on the marginal utilities of decision makers, we examine them on WTP/WTA. In doing so, the paper simultaneously compares and contrasts different econometric forms, estimating models in preference space with WTP/WTA calculated post-estimation versus models estimated directly in WTP/WTA space, where WTP/WTA values are obtained directly during estimation. We find evidence for reference dependence and loss aversion in WTP/WTA for different time attributes; however, we find less compelling evidence for the existence of diminishing WTP/WTA.

    Should reference alternatives in pivot design SC surveys be treated differently?

    Analysts are increasingly making use of pivot-style Stated Choice (SC) data in the estimation of choice models. These datasets often contain a reference alternative whose attributes remain invariant across replications for the same respondent. This paper presents evidence to suggest that the standard specification used for such data may not be appropriate: our analysis shows differences not only in the specification of the observed part of utility between the reference alternative and the hypothetical SC alternatives, but also in the error terms.

    Efficiency and Sample Size Requirements for Stated Choice Studies

    Stated choice (SC) experiments represent the dominant data paradigm in the study of behavioral responses of individuals, households, and other organizations, yet little is known about the sample size requirements for models estimated from such data. Current sampling theory does not adequately address the issue, and hence researchers have had to resort to simple rules of thumb, or to ignore the issue and collect samples of arbitrary size, hoping that the sample is sufficiently large to produce reliable parameter estimates. In this paper, we demonstrate how to generate efficient designs (based on D-efficiency and a newly proposed sample-size S-efficiency measure) using prior parameter values to estimate multinomial logit models containing both generic and alternative-specific parameters. Sample size requirements for such designs in SC studies are investigated. A numerical case study shows that a D-efficient design, and even more so an S-efficient design, needs a (much) smaller sample size than a random orthogonal design. Furthermore, it is shown that a wide level range has a significant positive influence on the efficiency of the design and therefore on the reliability of the parameter estimates.
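    An S-efficiency-style sample-size calculation can be sketched as follows. This is an assumed form, not necessarily the authors' exact measure: from the information matrix of an MNL design for a single respondent, it finds the smallest N at which each prior parameter would reach conventional significance, using the fact that standard errors scale with 1/sqrt(N).

    ```python
    # Illustrative sample-size bound per parameter from a design's AVC matrix
    # (design, priors and the exact criterion are assumptions for the sketch).
    import numpy as np

    def sample_sizes(design, beta, t_crit=1.96):
        """design: (S, J, K) choice sets; beta: (K,) priors.
        Returns, per parameter, the N needed so that |t| >= t_crit."""
        S, J, K = design.shape
        info = np.zeros((K, K))
        for s in range(S):
            X = design[s]
            p = np.exp(X @ beta)
            p /= p.sum()
            info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
        se1 = np.sqrt(np.diag(np.linalg.inv(info)))  # std. errors at N = 1
        # se scales as 1/sqrt(N), so require |beta| / (se1 / sqrt(N)) >= t_crit
        return (t_crit * se1 / np.abs(beta)) ** 2

    design = np.array([
        [[0., 0.], [1., 1.]],
        [[0., 2.], [2., 0.]],
        [[1., 0.], [0., 1.]],
        [[2., 2.], [0., 0.]],
    ])
    print(np.ceil(sample_sizes(design, np.array([-0.8, -0.4]))))
    ```

    The binding (largest) entry of this vector is the S-style sample size for the whole design; comparing it across candidate designs reproduces the paper's point that more efficient designs need fewer respondents.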
