
    Managing Product-Harm Crises

    Product-harm crises are among a firm's worst nightmares. Since marketing investments may be instrumental in convincing consumers to purchase the firm's products again, it is important to measure the effectiveness of these investments adequately, especially after the crisis. We provide a methodology through which firms can assess the impact of product crises in a quantitative way. Based on the model estimates, firms can estimate the level of investment required to recover from the crisis. A key finding of this paper is that it is important not only to assess the extent to which business is lost as a result of the crisis, but also to find the new, postcrisis response parameters to marketing activities. The study of an Australian product-harm crisis involving peanut butter reveals that a product crisis may represent a quadruple jeopardy for a firm: (i) loss of baseline sales, (ii) reduced effectiveness of its own marketing instruments, (iii) increased vulnerability, and (iv) decreased clout. We arrive at this conclusion by using a time-varying error-correction model that allows for (i) short- and long-term marketing mix effects, (ii) intercepts and response parameters that change over time as a result of the crisis, and (iii) missing observations, which result from the absence of the impacted brands during the product-recall period. The time-varying error-correction model is applicable to other marketing-research areas in which these three requirements (or any subset thereof) apply.
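    A minimal sketch of the kind of error-correction regression described above, not the paper's exact specification: the lagged sales level supplies the long-run (error-correction) term, differenced price the short-run effect, and interactions with a post-crisis dummy let the baseline and response parameters shift after the crisis. Variable names (sales, adv, price, post), the simulated data, and handling the recall period by simply dropping missing weeks are all illustrative assumptions.

```python
# Illustrative sketch of a crisis-shifted error-correction model (assumed setup,
# not the authors' time-varying specification or missing-data treatment).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
T = 200
post = (np.arange(T) >= 120).astype(float)      # 1 after a hypothetical crisis week
adv = rng.gamma(2.0, 1.0, T)                    # advertising spend
price = 5 + 0.5 * rng.standard_normal(T)
sales = np.empty(T)
sales[0] = 50
for t in range(1, T):
    # Data-generating process with weaker post-crisis advertising effectiveness.
    long_run = 40 + (4 - 2 * post[t]) * adv[t]
    sales[t] = (sales[t - 1] + 0.3 * (long_run - sales[t - 1])
                - 1.5 * (price[t] - price[t - 1]) + rng.standard_normal())

df = pd.DataFrame({"sales": sales, "adv": adv, "price": price, "post": post})
df.loc[120:125, "sales"] = np.nan               # brand absent during the recall weeks
df["d_sales"] = df["sales"].diff()
df["d_price"] = df["price"].diff()
df["lag_sales"] = df["sales"].shift(1)

# Error-correction regression: lagged level plus short-run differences,
# with post-crisis interactions letting baseline and response parameters change.
model = smf.ols(
    "d_sales ~ lag_sales + adv + d_price + post + post:adv + post:d_price",
    data=df, missing="drop",
).fit()
print(model.params)
```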

    Marketing Models and the Lucas Critique

    The Lucas critique has been largely ignored in the marketing literature. We present a number of conditions under which the critique is most likely to (also) apply in marketing settings. Next, we provide some perspectives on how to diagnose and accommodate the Lucas critique, and identify various avenues for future research.

    Consideration sets, intentions and the inclusion of "Don't know" in a two-stage model for voter choice

    We present a statistical model for voter choice that incorporates a consideration set stage and a final vote intention stage. The first stage involves a multivariate probit model for the vector of probabilities that a candidate or a party gets considered. The second stage of the model is a multinomial probit model for the actual choice. In both stages we use data on voter choice at the previous election as explanatory variables, as well as socio-demographic respondent characteristics. Importantly, our model explicitly accounts for the three types of "missing data" encountered in polling. First, we include a no-vote option in the final vote intention stage. Second, the "do not know" response is assumed to arise from too little difference in the utility between the two most preferred options in the consideration set. Third, the "do not want to say" response is modelled as a missing observation on the most preferred alternative in the consideration set. Thus, we consider the missing data generating mechanism to be non-ignorable and build a model based on utility maximization to describe the voting intentions of these respondents. We illustrate the merits of the model using information on a sample of about 5000 individuals from the Netherlands for whom we know how they voted last time (if at all), which parties they would consider for the upcoming election, and what their voting intention is. A unique feature of the data set is that in
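    A minimal simulation sketch of the two-stage structure just described: a consideration stage followed by utility-maximizing choice within the considered set, with "don't know" generated when the top two utilities are close. The party labels, covariates, and the 0.3 closeness threshold are illustrative assumptions; the paper's multivariate and multinomial probit estimation with correlated errors is not reproduced here.

```python
# Illustrative two-stage consideration/choice simulation (assumed setup, not the
# authors' estimation procedure).
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
parties = ["A", "B", "C", "no vote"]
n = 5000
past_vote = rng.integers(0, len(parties), n)        # choice at the previous election
age = rng.normal(45, 15, n)

outcomes = []
for i in range(n):
    # Stage 1: consideration -- probit-style latent index per alternative.
    latent = (0.8 * (past_vote[i] == np.arange(len(parties)))
              + 0.01 * (age[i] - 45) + rng.standard_normal(len(parties)))
    considered = latent > 0
    if not considered.any():
        considered[past_vote[i]] = True             # always consider the past choice

    # Stage 2: utility-maximizing choice within the consideration set.
    utility = (0.5 * (past_vote[i] == np.arange(len(parties)))
               + rng.standard_normal(len(parties)))
    utility[~considered] = -np.inf
    order = np.argsort(utility)[::-1]
    best, second = order[0], order[1]
    if considered.sum() > 1 and utility[best] - utility[second] < 0.3:
        outcomes.append("don't know")               # top two utilities too close
    else:
        outcomes.append(parties[best])

print(Counter(outcomes))
```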

    Optimizing retail assortments

    Retailers face the problem of finding the assortment that maximizes category profit. This is a challenging task because the number of potential assortments is very large when there are many stock-keeping units (SKUs) to choose from. Moreover, SKU sales can be cannibalized by other SKUs in the assortment, and the more similar SKUs are, the more this happens. This paper develops an implementable and scalable assortment optimization method that allows for theory-based substitution patterns and optimizes real-life, large-scale assortments at the store level. We achieve this by adopting an attribute-based approach to capture preferences, substitution patterns, and cross-marketing mix effects. To solve the optimization problem, we propose new very large neighborhood search heuristics. We apply our methodology to store-level scanner data on liquid laundry detergent. The optimal assortments are expected to enhance retailer profit considerably (37.3%), and this profit increases even more (to 43.7%) when SKU prices are optimized simultaneously.
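    A minimal sketch of assortment search in this spirit: a simple swap-based local search over a toy demand model in which an SKU's sales are cannibalized more by similar co-listed SKUs. The similarity matrix, margins, shelf size, and demand function are illustrative assumptions; the paper's attribute-based demand model and very large neighborhood search heuristics are considerably richer than this.

```python
# Illustrative swap-based assortment search over a toy cannibalization model
# (assumed setup, not the paper's very large neighborhood search heuristics).
import numpy as np

rng = np.random.default_rng(2)
n_skus, shelf_size = 30, 10
base_demand = rng.uniform(10, 100, n_skus)
margin = rng.uniform(0.5, 2.0, n_skus)
attributes = rng.integers(0, 3, (n_skus, 4))        # e.g. brand, size, form, scent
similarity = (attributes[:, None, :] == attributes[None, :, :]).mean(axis=2)
np.fill_diagonal(similarity, 0.0)

def profit(assortment):
    """Category profit when each SKU's demand shrinks with the similarity of co-listed SKUs."""
    idx = np.flatnonzero(assortment)
    cannibalization = similarity[np.ix_(idx, idx)].sum(axis=1)
    demand = base_demand[idx] / (1.0 + 0.5 * cannibalization)
    return float((demand * margin[idx]).sum())

# Start from a random assortment and keep making the best single swap
# (drop one listed SKU, add one unlisted SKU) while it improves profit.
assortment = np.zeros(n_skus, dtype=bool)
assortment[rng.choice(n_skus, shelf_size, replace=False)] = True
improved = True
while improved:
    improved = False
    best_gain, best_swap = 0.0, None
    current = profit(assortment)
    for out_sku in np.flatnonzero(assortment):
        for in_sku in np.flatnonzero(~assortment):
            candidate = assortment.copy()
            candidate[out_sku], candidate[in_sku] = False, True
            gain = profit(candidate) - current
            if gain > best_gain:
                best_gain, best_swap = gain, (out_sku, in_sku)
    if best_swap is not None:
        assortment[best_swap[0]], assortment[best_swap[1]] = False, True
        improved = True

print("optimized profit:", round(profit(assortment), 1))
```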