    Halton Sequences for Mixed Logit

    The simulation variance in the estimation of mixed logit parameters is found, in our application, to be lower with 100 Halton draws than with 1000 random draws. This finding confirms Bhat's (1999a) results and implies a significant reduction in run times for mixed logit estimation. Further investigation is needed to assure that the result is not anomalous or masking other issues.
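
    As a concrete illustration of the comparison, the sketch below simulates a single mixed logit choice probability with 100 Halton draws versus 1000 pseudo-random draws. The one-coefficient binary model, the parameter values, and the helper names are assumptions made for illustration, not the paper's actual specification.

        import numpy as np
        from scipy.stats import norm

        def halton(n_draws, base):
            """First n_draws points of the Halton sequence for a prime base."""
            seq = np.zeros(n_draws)
            for i in range(n_draws):
                f, k = 1.0, i + 1              # start at 1 to skip the leading zero
                while k > 0:
                    f /= base
                    seq[i] += f * (k % base)
                    k //= base
            return seq

        # 100 Halton draws (base 2) mapped to standard normals, versus
        # 1000 plain pseudo-random normal draws.
        halton_normals = norm.ppf(halton(100, base=2))
        random_normals = np.random.default_rng(0).standard_normal(1000)

        def simulated_prob(draws, x_diff=1.0, mean=0.5, sd=1.0):
            """Average binary-logit probability over draws of a normal coefficient."""
            beta = mean + sd * draws
            return np.mean(1.0 / (1.0 + np.exp(-beta * x_diff)))

        # The Halton estimate with 100 draws is typically at least as close to
        # the true integral as the pseudo-random estimate with 1000 draws.
        print(simulated_prob(halton_normals))
        print(simulated_prob(random_normals))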

    Customer-Specific Taste Parameters and Mixed Logit: Households' Choice of Electricity Supplier

    In a discrete choice situation, information about the tastes of each sampled customer is inferred from estimates of the distribution of tastes in the population. First, maximum likelihood procedures are used to estimate the distribution of tastes in the population using the pooled data for all sampled customers. Then, the distribution of tastes of each sampled customer is derived conditional on the observed data for that customer and the estimated population distribution of tastes (accounting for uncertainty in the population estimates). We apply the method to data on residential customers' choice among energy suppliers in conjoint-type experiments. The estimated distribution of tastes provides practical information that is useful for suppliers in designing their offers. The conditioning for individual customers is found to differentiate customers effectively for marketing purposes and to improve considerably the predictions in new situations.
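
    In notation adopted here for illustration (the paper's own symbols may differ), the conditioning step is an application of Bayes' rule: with f(β | θ) the estimated population distribution of tastes and P(y_n | β) the probability of customer n's observed choices, the customer-level density and conditional mean are

        h(\beta \mid y_n, \theta)
            = \frac{P(y_n \mid \beta)\, f(\beta \mid \theta)}{P(y_n \mid \theta)},
        \qquad
        \hat{\beta}_n
            = \frac{\int \beta\, P(y_n \mid \beta)\, f(\beta \mid \theta)\, d\beta}
                   {\int P(y_n \mid \beta)\, f(\beta \mid \theta)\, d\beta}.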

    On the Similarity of Classical and Bayesian Estimates of Individual Mean Partworths

    An exciting development in modeling has been the ability to estimate reliable individual-level parameters for choice models. Individual partworths derived from these parameters have been very useful in segmentation, in identifying extreme individuals, and in creating appropriate choice simulators. In marketing, hierarchical Bayes models have taken the lead in combining information about the aggregate distribution of tastes with the individual's choices to arrive at a conditional estimate of the individual's parameters. In economics, the same behavioral model has been derived from a classical rather than a Bayesian perspective. That is, instead of Gibbs sampling, the method of maximum simulated likelihood provides estimates of both the aggregate and the individual parameters. This paper explores the similarities and differences between classical and Bayesian methods and shows that they result in virtually equivalent conditional estimates of partworths for customers. Thus, the choice between Bayesian and classical estimation becomes one of implementation convenience and philosophical orientation, rather than pragmatic usefulness.
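
    The conditional estimate at the heart of both approaches can be approximated by simulation: draw tastes from the estimated population distribution and weight each draw by the likelihood of the individual's observed choices. The sketch below is a minimal illustration of that computation; the population estimates and the single customer's data are fabricated assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        pop_mean = np.array([0.8, -0.3])       # assumed population-level estimates
        pop_cov = np.diag([0.5, 0.2])

        def choice_prob(beta, X, chosen):
            """Multinomial logit probability of the chosen alternative."""
            v = X @ beta
            p = np.exp(v - v.max())
            return p[chosen] / p.sum()

        def conditional_partworths(X_obs, chosen_obs, n_draws=2000):
            """Likelihood-weighted mean of population draws for one individual."""
            draws = rng.multivariate_normal(pop_mean, pop_cov, size=n_draws)
            w = np.array([np.prod([choice_prob(b, X, c)
                                   for X, c in zip(X_obs, chosen_obs)])
                          for b in draws])
            return (w[:, None] * draws).sum(axis=0) / w.sum()

        # One person observed in two three-alternative choice tasks.
        X_obs = [rng.normal(size=(3, 2)), rng.normal(size=(3, 2))]
        print(conditional_partworths(X_obs, chosen_obs=[0, 2]))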

    Omitted Product Attributes in Discrete Choice Models

    We describe two methods for correcting an omitted variables problem in discrete choice models: a fixed effects approach and a control function approach. The control function approach is easier to implement and applicable in situations for which the fixed effects approach is not. We apply both methods to a cross-section of disaggregate data on customers' choice among television options including cable, satellite, and antenna. As theory predicts, the estimated price response rises substantially when either correction is applied. All of the estimated parameters and the implied price elasticities are very similar for both methods.
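
    A minimal sketch of the control function approach, under assumed data and a binary-choice simplification (the paper's application is multinomial): a first-stage regression of price on an instrument yields a residual that is entered into utility to absorb the omitted attribute.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n = 5000
        z = rng.normal(size=n)                 # instrument (e.g., a cost shifter)
        quality = rng.normal(size=n)           # omitted attribute
        price = 1.0 + 0.8 * z + 0.5 * quality + rng.normal(size=n)
        utility = -1.0 * price + quality + rng.logistic(size=n)
        y = (utility > 0).astype(float)        # 1 = subscribe, 0 = do not

        # Stage 1: regress price on the instrument; keep the residual.
        Z = np.column_stack([np.ones(n), z])
        resid = price - Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

        # Stage 2: binary logit, with and without the control function.
        def negloglik(theta, use_cf):
            X = np.column_stack([np.ones(n), price] + ([resid] if use_cf else []))
            v = X @ theta
            return -np.sum(y * v - np.logaddexp(0.0, v))

        naive = minimize(negloglik, np.zeros(2), args=(False,)).x
        corrected = minimize(negloglik, np.zeros(3), args=(True,)).x
        # As in the paper's finding, the estimated price response is larger
        # in magnitude once the correction is applied.
        print("price coefficient, naive:    ", naive[1])
        print("price coefficient, corrected:", corrected[1])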

    Utility in WTP space: a tool to address confounding random scale effects in destination choice to the Alps

    Destination choice models with individual-specific taste variation have become the presumptive analytical approach in applied nonmarket valuation. Under the usual specification, tastes are represented by coefficients of site attributes that enter utility, and the distribution of these coefficients is estimated. The distribution of willingness to pay (WTP) for site attributes is then derived from the estimated distribution of coefficients. Though conceptually appealing, this procedure often results in untenable distributions of WTP. An alternative procedure is to estimate the distribution of WTP directly, through a re-parameterization of the model. We compare hierarchical Bayes and maximum simulated likelihood estimates under both approaches, using data on site choice in the Alps. We find that models parameterized in terms of WTP provide more reasonable estimates for the distribution of WTP, and also fit the data better than models parameterized in terms of attribute coefficients. This approach to parameterizing utility is hence deemed promising for applied nonmarket valuation.
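
    In generic notation assumed here for illustration, with p the trip cost, x the other site attributes, and λ_n a person-specific price/scale coefficient, the re-parameterization is

        \text{Preference space:}\quad
        U_{nj} = -\lambda_n\, p_{nj} + \beta_n' x_{nj} + \varepsilon_{nj},
        \qquad w_n = \beta_n / \lambda_n \ \text{(WTP derived)};
        \\[4pt]
        \text{WTP space:}\quad
        U_{nj} = \lambda_n \bigl( -p_{nj} + w_n' x_{nj} \bigr) + \varepsilon_{nj},
        \qquad w_n \ \text{(WTP estimated directly)}.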

    Contingent Valuation of Environmental Goods: A Comprehensive Critique

    Contingent valuation is a survey-based procedure that attempts to estimate how much households are willing to pay for specific programs that improve the environment or prevent environmental degradation. For decades, the method has been the center of debate regarding its reliability: does it really measure the value that people place on environmental changes? Bringing together leading voices in the field, this timely book tells a unified story about the interrelated features of contingent valuation and how those features affect its reliability. Through empirical analysis and review of past studies, the authors identify important deficiencies in the procedure, raising questions about the technique’s continued use.

    Hybrid Choice Models: Progress and Challenges

    We discuss the development of predictive choice models that go beyond the random utility model in its narrowest formulation. Such approaches incorporate several elements of cognitive process that have been identified as important to the choice process, including strong dependence on history and context, perception formation, and latent constraints. A flexible and practical hybrid choice model is presented that integrates many types of discrete choice modeling methods, draws on different types of data, and allows for flexible disturbances and explicit modeling of latent psychological explanatory variables, heterogeneity, and latent segmentation. Both progress and challenges related to the development of the hybrid choice model are presented.
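
    Schematically, and in notation assumed here for illustration, the hybrid structure combines a structural equation forming latent constructs x* from covariates z, a measurement equation linking x* to psychometric indicators I, and a choice model whose utility depends on both observed attributes and the latent constructs:

        x^{*}_{n} = \Gamma z_{n} + \omega_{n}
        \qquad \text{(structural equation)}
        \\
        I_{n} = \Lambda x^{*}_{n} + \nu_{n}
        \qquad \text{(measurement equation)}
        \\
        U_{nj} = V\!\left(x_{nj}, x^{*}_{n}; \beta\right) + \varepsilon_{nj},
        \qquad y_{n} = \arg\max_{j} U_{nj}
        \qquad \text{(choice model)}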

    Monte Carlo analysis of SP-off-RP data

    SP-off-RP questions are a recent innovation in choice modelling that solicits information from respondents in a different way than standard stated-preference (SP) experiments. In particular, the alternatives and choice of a respondent in a real-world setting are observed, and the respondent is asked whether he/she would choose the same alternative or switch to another alternative if the attributes of the chosen alternative were less desirable in ways specified by the researcher and/or the attributes of non-chosen alternatives were more desirable in specified ways. This construction, called stated-preference off revealed-preference (SP-off-RP), is intended to increase the realism of the stated-preference task, relative to standard SP exercises, but creates endogeneity. In this paper, we present a series of Monte Carlo exercises that explore estimation on this type of data, using an estimator that accounts for the endogeneity. The results indicate that, when the variance in the processing error by respondents is the same for SP-off-RP data as for standard SP data, the two solicitation methods provide about the same level of efficiency in estimation, even though the SP-off-RP data contain endogeneity that the estimator must handle while the SP data do not involve endogeneity. For both solicitation methods, efficiency rises, as expected, as the variance of the processing error decreases. These results imply that, if respondents are able to answer SP-off-RP questions more accurately than standard SP questions (and hence have lower variance of processing error), then SP-off-RP data are more efficient than standard SP data. This implication needs to be viewed cautiously, since (i) the actual processing error for each solicitation method is not measured in the current study, and (ii) the results are for the specific data generation processes that are used in the Monte Carlo exercises.
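
    The data-generating process described above is easy to mimic in a small simulation. The sketch below is only a schematic of the SP-off-RP construction, with a single attribute and assumed parameter values; it is not the paper's Monte Carlo design or its endogeneity-correcting estimator.

        import numpy as np

        rng = np.random.default_rng(3)
        n, beta_true = 1000, 1.0

        # RP stage: two alternatives, one attribute, extreme-value errors.
        x = rng.normal(size=(n, 2))
        eps = rng.gumbel(size=(n, 2))
        rp_choice = np.argmax(beta_true * x + eps, axis=1)

        # SP-off-RP stage: the researcher degrades the chosen alternative's
        # attribute, then asks whether the respondent would switch.
        degrade = 0.5
        x_sp = x.copy()
        x_sp[np.arange(n), rp_choice] -= degrade
        proc_err = rng.gumbel(size=(n, 2))     # fresh "processing" error
        sp_choice = np.argmax(beta_true * x_sp + eps + proc_err, axis=1)

        # Endogeneity arises because eps enters both stages: the SP-off-RP
        # attributes depend on which alternative eps helped make the RP choice.
        print("share switching after degradation:", (sp_choice != rp_choice).mean())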

    On the use of a Modified Latin Hypercube Sampling (MLHS) approach in the estimation of a Mixed Logit model for vehicle choice

    Quasi-random number sequences have been used extensively for many years in the simulation of integrals that do not have a closed-form expression, such as Mixed Logit and Multinomial Probit choice probabilities. Halton sequences are one example of such quasi-random number sequences, and various types of Halton sequences, including standard, scrambled, and shuffled versions, have been proposed and tested in the context of travel demand modeling. In this paper, we propose an alternative to Halton sequences, based on an adapted version of Latin Hypercube Sampling. These alternative sequences, like scrambled and shuffled Halton sequences, avoid the undesirable correlation patterns that arise in standard Halton sequences. However, they are easier to create than scrambled or shuffled Halton sequences. They also provide more uniform coverage in each dimension than any of the Halton sequences. A detailed analysis, using a 16-dimensional Mixed Logit model for choice between alternative-fuelled vehicles in California, was conducted to compare the performance of the different types of draws. The analysis shows that, in this application, the Modified Latin Hypercube Sampling (MLHS) outperforms each type of Halton sequence. This greater accuracy, combined with the greater simplicity, makes the MLHS method an appealing approach for simulation of travel demand models and simulation-based models in general.
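
    As the description above suggests, MLHS draws are simple to construct: in each dimension, N evenly spaced points are shifted by a single uniform draw on (0, 1/N) and then randomly shuffled, which preserves uniform one-dimensional coverage while breaking correlation across dimensions. The sketch below follows that general description; the exact construction in the paper may differ in detail.

        import numpy as np

        def mlhs(n_draws, n_dims, rng=None):
            """Modified Latin Hypercube draws on (0, 1), one column per dimension."""
            rng = rng or np.random.default_rng()
            base = np.arange(n_draws) / n_draws          # 0, 1/N, ..., (N-1)/N
            draws = np.empty((n_draws, n_dims))
            for d in range(n_dims):
                shifted = base + rng.uniform(0.0, 1.0 / n_draws)  # one shift per dim
                draws[:, d] = rng.permutation(shifted)   # break cross-dimension order
            return draws

        # e.g., 100 draws for the paper's 16 random coefficients.
        u = mlhs(100, 16)
        print(u.shape, float(u.min()), float(u.max()))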