12 research outputs found

    Modeling parametric evolution in a random utility framework

    Get PDF
    Abstract: Random utility models have become standard econometric tools, allowing parameter inference for individual-level categorical choice data. Such models typically presume that changes in observed choices over time can be attributed to changes in either covariates or unobservables. We study how choice dynamics can be captured more faithfully by additionally modeling temporal changes in parameters directly, using a vector autoregressive process and Bayesian estimation. This approach offers a number of advantages for theorists and practitioners, including improved forecasts, prediction of long-run parameter levels, and correction for potential aggregation biases. We illustrate the method using choices for a common supermarket good, where we find strong support for parameter dynamics.
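The core idea in the abstract above can be sketched in a few lines: choice-model parameters follow a VAR(1) process, and each period's choice is a multinomial logit draw given the current parameters. This is an illustrative simulation only, not the paper's estimator; the dimensions, autoregressive matrix `A`, intercept `c`, and innovation covariance `Sigma` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: periods, alternatives, covariates
T, J, K = 52, 3, 2
A = np.array([[0.9, 0.0],   # autoregressive matrix; eigenvalues inside the
              [0.1, 0.8]])  # unit circle imply a stationary long-run level
c = np.array([0.2, -0.1])   # VAR intercept
Sigma = 0.05 * np.eye(K)    # innovation covariance

# Parameters beta_t evolve as a VAR(1): beta_t = c + A beta_{t-1} + eps_t
beta = np.zeros((T, K))
for t in range(1, T):
    beta[t] = c + A @ beta[t - 1] + rng.multivariate_normal(np.zeros(K), Sigma)

# Long-run parameter level implied by the VAR: (I - A)^{-1} c
beta_bar = np.linalg.solve(np.eye(K) - A, c)

# Each period: multinomial logit choice given current beta_t
X = rng.normal(size=(T, J, K))               # alternative-specific covariates
utils = np.einsum('tjk,tk->tj', X, beta)     # latent utilities
probs = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(J, p=p) for p in probs])
```

The stationarity check on `A` is what makes "prediction of long-run parameter levels" meaningful: if all eigenvalues of `A` lie inside the unit circle, the process mean-reverts to `beta_bar`.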

    Posterior distributions for functions of variance components

    No full text
    ANOVA, Bayesian analysis, Monte Carlo simulation, repeatability, reproducibility, 62F15, 62J10

    Predictive Distributions in the Presence of Measurement Errors

    No full text

    Assessing Heterogeneity in Discrete Choice Models Using a Dirichlet Process Prior

    Get PDF
    The finite normal mixture model has emerged as a dominant methodology for assessing heterogeneity in choice models. Although it extends the classic mixture models by allowing within-component variability, it requires that a relatively large number of models be estimated separately, and that fairly difficult test procedures be applied, to determine the correct number of mixing components. We present a very general formulation, based on a Dirichlet process prior, which yields the number and composition of mixing components a posteriori, obviating the need for post hoc test procedures, and is capable of approximating any target heterogeneity distribution. Adapting Stephens' (2000) algorithm allows the determination of substantively different clusters, as well as a way to sidestep problems arising from label-switching and overlapping mixtures. These methods are illustrated both on simulated data and on A.C. Nielsen scanner panel data for liquid detergents. We find that the large number of mixing components required to adequately represent the heterogeneity distribution can be reduced in practice to a far smaller number of segments of managerial relevance.
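The key property claimed in the abstract above — that the number of occupied components emerges a posteriori rather than being fixed in advance — can be illustrated with a truncated stick-breaking construction of the Dirichlet process. This is a hedged sketch of the prior only, not the paper's full sampler; the concentration `alpha`, truncation level `trunc`, and base measure are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, trunc = 1.0, 50                  # concentration and truncation (assumed)

# Stick-breaking: v_k ~ Beta(1, alpha); w_k = v_k * prod_{j<k}(1 - v_j)
v = rng.beta(1.0, alpha, size=trunc)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
atoms = rng.normal(0.0, 1.0, size=trunc)  # component locations from base measure

# Assign n 'households' to components; count distinct occupied clusters
n = 500
labels = rng.choice(trunc, size=n, p=w / w.sum())
n_clusters = len(np.unique(labels))       # typically far fewer than trunc
```

Because the weights `w` decay geometrically in expectation, most draws land in a handful of components, which mirrors the paper's finding that a large nominal number of mixing components reduces in practice to a small number of managerially relevant segments.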

    Flexible Heterogeneous Utility Curves: A Bayesian Spline Approach

    Get PDF
    Empirical evidence suggests that decision makers often weight successive additional units of a valued attribute or monetary endowment unequally, so that their utility functions are intrinsically nonlinear or irregularly shaped. Although the analyst may impose various functional specifications exogenously, this approach is ad hoc, tedious, and reliant on various metrics to decide which specification is best. In this paper, we develop a method that yields individual-level, flexibly shaped utility functions for use in choice models. This flexibility at the individual level is accomplished through splines of the truncated power basis type in a general additive regression framework for latent utility. Because the number and location of spline knots are unknown, we use the birth-death process of Denison et al. (1998) and Green's (1995) reversible jump method. We further show how exogenous constraints suggested by theory, such as monotonicity of price response, can be accommodated. Our formulation is particularly suited to estimating reaction to pricing, where individual-level monotonicity is justified theoretically and empirically, but linearity is typically not. The method is illustrated in a conjoint application in which all covariates are splined simultaneously and in three panel data sets, each of which has a single price spline. Empirical results indicate that piecewise linear splines with a modest number of knots fit these data well, substantially better than heterogeneous linear and log-linear a priori specifications. In terms of price response specifically, we find that although aggregate market-level curves can be nearly linear or log-linear, individuals often deviate widely from either. Using splines, hold-out prediction improvement over the standard heterogeneous probit model ranges from 6% to 14% in the scanner applications and exceeds 20% in the conjoint study.
    Moreover, optimal profiles in conjoint and aggregate price response curves in the scanner applications can differ markedly under the standard and the spline-based models.
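The truncated power basis mentioned in the abstract above is easy to construct directly. The sketch below fixes the knots to show the design matrix and a monotone-decreasing price response; in the paper the number and location of knots are unknown and sampled via birth-death / reversible jump moves, which this illustration does not attempt. Knot locations and coefficients here are hypothetical.

```python
import numpy as np

def tpb_design(x, knots):
    """Piecewise-linear truncated power basis: [1, x, (x-k1)_+, ..., (x-kM)_+]."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

prices = np.linspace(1.0, 5.0, 100)
knots = [2.0, 3.5]                        # hypothetical knot locations
B = tpb_design(prices, knots)

# Coefficients: intercept, base slope, slope changes at each knot.
# Monotonicity of price response holds when the cumulative slope
# (base slope plus active knot terms) is non-positive on every segment.
coef = np.array([4.0, -0.5, -0.8, 0.3])
segment_slopes = np.cumsum(coef[1:])      # slope on each of the 3 segments
utility = B @ coef                        # latent utility along the price grid
```

Checking `segment_slopes` rather than the raw coefficients is the natural way to encode the monotone-price constraint: individual knot coefficients may be positive so long as the running sum stays non-positive.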

    Modeling Parametric Evolution in a Random Utility Framework

    No full text
    Random utility models have become standard econometric tools, allowing parameter inference for individual-level categorical choice data. Such models typically presume that changes in observed choices over time can be attributed to changes in either covariates or unobservables. We study how choice dynamics can be captured more faithfully by also directly modeling temporal changes in parameters, using a vector autoregressive process and Bayesian estimation. This approach offers a number of advantages for theorists and practitioners, including improved forecasts, prediction of long-run parameter levels, and correction for potential aggregation biases. We illustrate the method using choices for a common supermarket good, where we find strong support for parameter dynamics.

    Assessing Heterogeneity in Discrete Choice Models Using a Dirichlet Process Prior

    No full text
    The finite normal mixture model has emerged as a dominant methodology for assessing heterogeneity in choice models. Although it extends the classic mixture models by allowing within-component variability, it requires that a relatively large number of models be estimated separately, and that fairly difficult test procedures be applied, to determine the "correct" number of mixing components. We present a very general formulation, based on a Dirichlet process prior, which yields the number and composition of mixing components a posteriori, obviating the need for post hoc test procedures, and is capable of approximating any target heterogeneity distribution. Adapting Stephens' (2000) algorithm allows the determination of 'substantively' different clusters, as well as a way to sidestep problems arising from label-switching and overlapping mixtures. These methods are illustrated both on simulated data and on A.C. Nielsen scanner panel data for liquid detergents. We find that the large number of mixing components required to adequately represent the heterogeneity distribution can be reduced in practice to a far smaller number of segments of managerial relevance.
    Choice models, heterogeneity, Dirichlet process, Bayesian methods, Markov chain Monte Carlo

    Prediction based on response surface data obtained with random blocking

    No full text
    Fixed and random effects, Gibbs sampler, predictive distribution, response surface design