
    Efficient conjoint choice designs in the presence of respondent heterogeneity.

    The authors propose a fast and efficient algorithm for constructing D-optimal conjoint choice designs for mixed logit models in the presence of respondent heterogeneity. With this new algorithm, the construction of semi-Bayesian D-optimal mixed logit designs with large numbers of attributes and attribute levels becomes practically feasible. A comparison of eight designs (ranging from the simple locally D-optimal design for the multinomial logit model and the nearly orthogonal design generated by Sawtooth (CBC) to the complex semi-Bayesian mixed logit design) across wide ranges of parameter values shows that the semi-Bayesian mixed logit approach outperforms the competing designs in terms of both estimation efficiency and prediction accuracy. In particular, semi-Bayesian mixed logit designs constructed with large heterogeneity parameters proved most robust against misspecification of the mean of the individual-level coefficients, yielding precise estimates and predictions.
    Keywords: semi-Bayesian mixed logit design; heterogeneity; prediction accuracy; multinomial logit design; model-robust design; D-optimality; algorithm
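The D-error criterion at the core of such designs can be sketched in a few lines. The attribute matrices and parameter values below are hypothetical, and the snippet shows only the locally D-optimal (fixed-beta) variant, not the authors' semi-Bayesian algorithm, which averages this criterion over draws of beta:

```python
import numpy as np

def mnl_probs(X, beta):
    """Choice probabilities for one choice set (rows = alternatives)."""
    u = X @ beta
    e = np.exp(u - u.max())  # numerically stabilised softmax
    return e / e.sum()

def d_error(design, beta):
    """D-error of a design: det(information matrix)^(-1/k).

    `design` is a list of choice sets, each a (J x k) attribute matrix.
    A lower D-error means a more efficient design; a locally D-optimal
    design minimises this value at one fixed beta.
    """
    k = len(beta)
    info = np.zeros((k, k))
    for X in design:
        p = mnl_probs(X, beta)
        # MNL information contribution of one choice set: X'(diag(p) - pp')X
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return np.linalg.det(info) ** (-1.0 / k)
```

A design-construction algorithm would then search over candidate attribute matrices for the set minimising this quantity.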

    Optimal designs for rating-based conjoint experiments.

    We focus on conjoint experiments in which each respondent receives a different set of profiles to rate. Carefully designing such experiments involves determining how many and which profiles each respondent has to rate, and how many respondents are needed. To that end, the set of profiles offered to a respondent is treated as a separate block in the design, and a respondent effect is incorporated in the model to capture the fact that profile ratings from the same respondent are correlated. Optimal conjoint designs are then obtained with an adapted version of the algorithm of Goos and Vandebroek (2004). For various instances, we compute the optimal conjoint designs and provide practical recommendations.
    Keywords: conjoint analysis; D-optimality; optimal block design; rating-based conjoint experiments
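The information matrix that such a blocked design optimises can be sketched under a compound-symmetric covariance induced by the respondent effect. The block matrices and variance components below are hypothetical, and this is not the Goos and Vandebroek (2004) algorithm itself, only the objective it evaluates:

```python
import numpy as np

def block_info(blocks, sigma_e2=1.0, sigma_g2=0.5):
    """Information matrix for a blocked (per-respondent) rating design.

    Each block X is the (n_profiles x k) attribute matrix rated by one
    respondent. The within-block covariance V = sigma_e2*I + sigma_g2*J
    captures the respondent effect that correlates ratings from the same
    person. A D-optimal design maximises det of this matrix.
    """
    k = blocks[0].shape[1]
    info = np.zeros((k, k))
    for X in blocks:
        n = X.shape[0]
        V = sigma_e2 * np.eye(n) + sigma_g2 * np.ones((n, n))
        info += X.T @ np.linalg.solve(V, X)  # X' V^{-1} X
    return info
```

Deciding "how many and which profiles each respondent has to rate" then amounts to choosing the blocks that maximise the determinant of this matrix.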

    Bridging designs for conjoint analysis: The issue of attribute importance.

    Conjoint analysis studies involving many attributes and attribute levels often occur in practice. Because such studies can cause respondent fatigue and lack of cooperation, it is important to design data collection tasks that reduce those problems. Bridging designs, incorporating two or more task subsets with overlapping attributes, can presumably lower task difficulty in such cases. In this paper, we present results of a study examining how predictive validity is affected by bridging design decisions, namely whether important or unimportant attributes serve as links (bridges) between card-sort tasks, and by the degree of balance and consistency in estimated attribute importance across tasks. We also propose a new symmetric procedure, Symbridge, to scale the bridged conjoint solutions.

    Efficient and robust willingness-to-pay designs for choice experiments: some evidence from simulations.

    We apply a design efficiency criterion to construct conjoint choice experiments specifically focused on the accuracy of marginal willingness-to-pay estimates. In a simulation study and a numerical example, the resulting optimal designs are compared to alternative designs suggested in the literature. It turns out that the optimal designs not only improve the estimation accuracy of the marginal estimates, as expected given the nature of the efficiency criterion, but also considerably reduce the occurrence of extreme estimates, which in addition exhibit smaller deviations from the true values. The proposed criterion is therefore valuable for non-market valuation studies, as it reduces the sample size required for a given degree of accuracy and produces estimates with fewer outliers.
    Keywords: willingness-to-pay; optimal design; choice experiments; conditional logit model; robustness
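The marginal willingness-to-pay estimates in question are coefficient ratios from the conditional logit model: the value of one unit of an attribute is its coefficient divided by the negative of the price coefficient. A minimal sketch with illustrative (hypothetical) coefficient values:

```python
def wtp(beta_attr, beta_price):
    """Marginal willingness to pay implied by a conditional logit fit.

    The monetary value of one unit of the attribute is the ratio of its
    coefficient to the (negative) price coefficient. Because this is a
    ratio of two noisy estimates, extreme values occur when beta_price
    is imprecisely estimated -- the instability the WTP-focused designs
    above are built to reduce.
    """
    if beta_price >= 0:
        raise ValueError("price coefficient should be negative")
    return -beta_attr / beta_price

# Illustrative values: beta_attr = 0.8, beta_price = -0.4  ->  WTP = 2.0
print(wtp(0.8, -0.4))
```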

    Multimodal and nested preference structures in choice-based conjoint analysis: a comparison of bayesian choice models with discrete and continuous representations of heterogeneity

    Choice-Based Conjoint (CBC) is nowadays the most widely used variant of conjoint analysis, a class of methods for measuring consumer preferences. The primary reason for the increasing dominance of the CBC approach over the last 35 years is that it closely mimics the real choice behavior of consumers by repeatedly asking respondents to choose their preferred alternative from a set of several offered alternatives (choice sets). Within the framework of CBC analysis, the multinomial logit (MNL) model is the most frequently used discrete choice model. However, the MNL model suffers from two major limitations: (a) it implies proportional substitution patterns across alternatives, referred to as the Independence of Irrelevant Alternatives (IIA) property, and (b) it does not account for unobserved consumer heterogeneity, as part-worth utilities are assumed to be equal for all respondents by definition. Since the 1990s, hierarchical Bayesian (HB) models have been used for part-worth utility estimation in CBC analysis.
HB models can determine part-worth utilities at the individual respondent level even with little information per respondent and, by modeling consumer heterogeneity, strongly soften the IIA property. The focus of the present thesis is on CBC analysis using HB models with different representations of heterogeneity (discrete vs. continuous), as well as on an HB model which mitigates the IIA property further by allowing for different degrees of similarity between subsets (nests) of alternatives. In particular, we systematically explore the comparative performance of simple MNL, latent class (LC) MNL, HB-MNL, mixture-of-normals (MoN) MNL, Dirichlet Process Mixture (DPM) MNL, and HB nested multinomial logit (NMNL) models (under experimentally varying conditions) using statistical criteria for parameter recovery, goodness-of-fit, and predictive accuracy. We conduct two extensive Monte Carlo studies and apply the different types of models to an empirical CBC data set. In the first Monte Carlo study, the focus lies on the comparative performance of the HB-MNL versus the HB-NMNL for multimodal and nested preference structures. Our results show that there seem to be no major differences between the two types of models with regard to goodness-of-fit measures and, in particular, their ability to predict respondents' choice behavior. Regarding parameter recovery, the HB-MNL model performs increasingly worse as the correlation in at least one nest grows, while the HB-NMNL model adapts to the degree of similarity between alternatives, as expected. The second Monte Carlo study deals with multimodal and segment-specific preference structures. More precisely, to carve out differences between the classes of models with different representations of heterogeneity, we specifically vary the degrees of within-segment and between-segment heterogeneity.
We compare state-of-the-art methods of representing heterogeneity (simple MNL, LC-MNL, HB-MNL) with more advanced choice models that represent both between-segment and within-segment consumer heterogeneity (MoN-MNL and DPM-MNL) under varying experimental conditions. The core finding from our Monte Carlo study is that the HB-MNL model appears to be highly robust against violations of its assumption of a single multivariate normal distribution of consumer preferences. In addition, the LC-MNL segment solution proves to be the best approach for recovering the "true" number of segments. Finally, we apply the previously presented choice models to a real-life CBC data set. The results indicate that models with a continuous representation of heterogeneity (HB-MNL, HB-NMNL, MoN-MNL, and DPM-MNL) perform better than models with a discrete representation of heterogeneity (simple MNL, LC-MNL). Further, it turns out that the HB-MNL model works extremely well for predictive purposes and provides at least as good, if not considerably better, predictions compared to the other (advanced) models, which is an important insight for managers.
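The IIA property that the nested and heterogeneity-aware models soften can be illustrated directly: under plain MNL, adding a near-duplicate alternative leaves the odds between any two existing alternatives unchanged, even when intuition says the two similar alternatives should split a common share. A small sketch with hypothetical deterministic utilities:

```python
import numpy as np

def choice_shares(v):
    """Plain MNL choice probabilities from deterministic utilities v."""
    e = np.exp(v - np.max(v))  # stabilised softmax
    return e / e.sum()

# Two alternatives, utilities 1.0 and 0.5 ...
p2 = choice_shares(np.array([1.0, 0.5]))
# ... then add a third alternative identical in utility to the second.
p3 = choice_shares(np.array([1.0, 0.5, 0.5]))
# IIA: the odds of alternative 0 vs alternative 1 are the same in both
# choice sets -- exactly the proportional-substitution pattern that
# nested logit and heterogeneity modeling relax.
print(p2[0] / p2[1], p3[0] / p3[1])
```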

    Using Choice Experiments for Non-Market Valuation

    This paper surveys the latest research developments in the method of choice experiments applied to the valuation of non-market goods. Choice experiments, along with the by-now well-known contingent valuation method, are important tools for valuing non-market goods, and the results are used both in cost-benefit analyses and in litigation related to damage assessments. The paper should provide the reader with the means both to carry out a choice experiment and to conduct a detailed critical analysis of its performance in order to give informed advice about the results. A discussion of the underlying economic model of choice experiments is incorporated, as well as a presentation of econometric models consistent with economic theory. Furthermore, a detailed discussion of the development of a choice experiment is provided, focusing in particular on the design of the experiment and tests of validity. Finally, different ways to calculate welfare effects are discussed.
    Keywords: choice experiments; non-market goods; stated preference methods; valuation

    The impact of the soccer schedule on TV viewership and stadium attendance : evidence from the Belgian Pro League

    In the past decade, television broadcasters have invested large amounts of money in the Belgian Pro League broadcasting rights. These companies pursue audience-rating maximization, which depends heavily on the schedule of the league matches. At the same time, clubs try to maximize their home attendance and are affected by the schedule as well. Our paper aims to capture Belgian soccer fans' preferences with respect to scheduling options, both for watching matches on TV and for attending them in the stadium. We carried out a discrete choice experiment using an online survey distributed on a national scale. The choice sets are based on three match characteristics: month, kickoff time, and quality of the opponent. The first part of the survey concerns television broadcasting; the second part includes questions about stadium attendance. The choice data are first analyzed with a conditional logit model, which assumes homogeneous preferences; a mixed logit model is then fit to capture the heterogeneity among fans. The estimates are used to calculate the expected utility of watching a Belgian Pro League match for every possible setting, either on TV or in the stadium. These predictions are validated against real audience-rating and home-attendance data. Our results can be used to improve the scheduling process of the Belgian Pro League in order to persuade more fans to watch the matches on TV or in the stadium.

    Implicit prices of indigenous cattle traits in central Ethiopia: Application of revealed and stated preference approaches

    The diversity of animal genetic resources has a quasi-public-good nature that makes market prices an inadequate indicator of its economic worth. Applying the characteristics theory of value, this research estimated the relative economic worth of the attributes of cattle genetic resources in central Ethiopia. Transaction-level data were collected over four seasons in a year, and a choice experiment survey was conducted in five markets to generate data on both the revealed and stated preferences of cattle buyers. Heteroscedasticity-efficient estimation and random parameters logit were employed to analyse the data. The results show that attributes related to the subsistence functions of cattle are valued more highly than attributes that directly influence the marketable products of the animals. The findings imply a strong need to invest in improving the cattle attributes that enhance the subsistence functions of cattle, to which owners accord higher priority for supporting their livelihoods than to tradable products.

    Fast Polyhedral Adaptive Conjoint Estimation

    We propose and test a new adaptive conjoint analysis method that draws on recent polyhedral "interior-point" developments in mathematical programming. The method is designed to offer accurate estimates after relatively few questions in problems involving many parameters. Each respondent's questions are adapted based upon that respondent's prior answers. The method requires computer support but can operate in both Internet and off-line environments with no noticeable delay between questions. We use Monte Carlo simulations to compare the performance of the method against a broad array of relevant benchmarks. While no method dominates in all situations, polyhedral algorithms appear to hold significant potential when (a) metric profile comparisons are more accurate than the self-explicated importance measures used in benchmark methods, (b) respondent wear-out is a concern, and (c) product development and/or marketing teams wish to screen many features quickly. We also test hybrid methods that combine polyhedral algorithms with existing conjoint analysis methods. We close with suggestions on how polyhedral methods can be used to address other marketing problems.
    Sloan School of Management and the Center for Innovation in Product Development at MI

    Asymmetric preference formation in willingness to pay estimates in discrete choice models

    When faced with choices among a number of alternatives, individuals often adopt a variety of processing rules, ranging from simple linear to complex non-linear treatment of each attribute defining the offer of each alternative. In this paper, we investigate the presence of asymmetry in preferences to test for reference effects and for differential willingness to pay depending on whether gains or losses are being valued. The findings offer clear evidence of an asymmetric response to increases and decreases in attributes relative to the corresponding values of a reference alternative, with the degree of asymmetry varying across attributes and population segments.
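The asymmetry tested here can be sketched as a reference-dependent utility component with separate coefficients for gains and losses, so the value of a gain need not mirror the penalty of an equal-sized loss. All values below are illustrative, not the paper's estimates:

```python
def asymmetric_utility(x, ref, beta_gain, beta_loss):
    """Reference-dependent contribution of one attribute.

    Gains (x above the reference level) and losses (x below it) receive
    separate coefficients; with |beta_loss| > |beta_gain| this encodes
    loss aversion, and willingness to pay for a gain differs from
    willingness to accept an equal-sized loss.
    """
    diff = x - ref
    return beta_gain * diff if diff >= 0 else beta_loss * diff

# Hypothetical coefficients with loss aversion (0.7 > 0.3):
u_gain = asymmetric_utility(12.0, 10.0, beta_gain=0.3, beta_loss=0.7)  # = 0.6
u_loss = asymmetric_utility(8.0, 10.0, beta_gain=0.3, beta_loss=0.7)   # = -1.4
```

Estimating beta_gain and beta_loss separately per attribute and per segment is what allows the degree of asymmetry to vary across attributes and population segments.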