
    Preference aggregation and repeat buying in households

    Marketing researchers have long used brand switching analyses and Markov transition matrices to gain insights into managerial problems. Almost without exception, this work makes (inappropriate) inferences about individual consumers by analyzing household-level data. This paper presents a procedure based on the distribution of run lengths in household level panel data that allows more insights into the choice behavior of the individuals in the household. We test these procedures in a large simulation study by attempting to recover the underlying (known) structure of the process generating a string of panel data. Finally, we use the procedure to classify the purchase behavior, with respect to powdered soft drinks, of a set of households in a panel. Our results show that marketing scientists have the potential to learn and test more hypotheses about the individuals in a household by examining the distribution of run lengths.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/47115/1/11002_2004_Article_BF00994351.pd

    Testing for Trademark Dilution in Court and the Lab

    Federal courts are currently split, even within particular districts, on the basic question of what a plaintiff must show to establish that a defendant’s conduct constitutes trademark dilution by blurring. Federal trademark law defines “dilution by blurring” as “association arising from the similarity between a mark or trade name and a famous mark that impairs the distinctiveness of the famous mark.” In construing this statutory language, a majority of courts have held that to establish blurring, a plaintiff need only show that consumers associate the defendant’s mark with the plaintiff’s famous mark. These courts appear to assume that to the extent that there is consumer association, this association alone will “impair[ ] the distinctiveness” of the famous mark. A minority of courts have held that the plaintiff must show both consumer association and that the consumer association “impairs the distinctiveness” of the famous mark. In this Article, we make three contributions to the current debate over what must be shown to establish dilution by blurring. First, we report the results of a set of experiments that reveal that the majority approach is fundamentally deficient. These experiments demonstrate that even when consumers associate a junior mark with a famous senior mark, this association does not necessarily result in any impairment of the ability of the senior mark to identify its source and associations. Second, we set forth a new method for determining when association is likely to lead to impairment. This method, which we term the “association strength test,” evaluates changes in how strongly survey respondents associate a mark with its source or attributes upon exposure to a diluting stimulus. Third, we evaluate the current state of the art in trademark dilution survey methodology: response time surveys. These surveys purportedly show both consumer association and impairment. 
Through a set of experiments, we demonstrate that these surveys currently use the wrong control and are invalid. In light of our findings, we reflect more generally on the question of whether dilution by blurring ever occurs and on how the blurring cause of action may be reconfigured to better comport with courts’ intuitions about the true nature of the harm that the cause of action seeks to address.

    Consumer Uncertainty in Trademark Law: An Experimental Investigation

    Nearly every important issue in trademark litigation turns on the question of what consumers in the marketplace believe to be true. To address this question, litigants frequently present consumer survey evidence, which can play a decisive role in driving the outcomes of trademark disputes. But trademark survey evidence has often proven to be highly controversial, not least because it has sometimes been perceived as open to expert manipulation. In this Article, we identify and present empirical evidence of a fundamental problem with trademark survey evidence: while the leading survey formats in trademark law test for whether consumers hold a particular belief, they do not examine the strength or the varying degrees of certainty with which consumers hold that belief. Yet as the social science literature has long recognized, the strength with which consumers hold particular beliefs shapes their behavior in the marketplace, and thus it should also shape how trademark disputes play out in the courtroom. Through a series of experiments using the three leading trademark survey formats (the so-called Teflon, Eveready, and Squirt formats), we show the remarkable degree to which these formats as conventionally designed overlook—or suppress—crucial information about consumer uncertainty. We further demonstrate how low-cost, easily administered, and relatively simple modifications to these formats can reveal that information. We explain both the practical and theoretical implications of our findings. As a practical matter, trademark survey evidence that shows only weakly held beliefs (or that does not even test for belief strength) should not, without more, satisfy a litigant’s burden of persuasion on the issue addressed by the survey. 
Furthermore, in line with courts’ growing efforts in intellectual property cases to tailor injunctive relief, survey evidence showing only weakly held mistaken beliefs may provide courts with the opportunity to fashion more limited forms of relief short of an outright injunction. As a theoretical matter, trademark survey formats that reveal the true extent of consumer uncertainty in the marketplace may finally force trademark law and policy to confront normative questions it has long left unanswered going to exactly what kind of harm trademark law is meant to prevent.

    Prospects and problems in modeling group decisions

    This paper summarizes some of the major issues related to group decision modeling. We briefly review the existing work on group choice models in marketing and consumer research. We draw some generalizations about which models work well when and use those generalizations to provide guidelines for future research.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/47074/1/11002_2004_Article_BF00554128.pd

    A Genome-Wide Association Study of Diabetic Kidney Disease in Subjects With Type 2 Diabetes

    Identification of sequence variants robustly associated with predisposition to diabetic kidney disease (DKD) has the potential to provide insights into the pathophysiological mechanisms responsible. We conducted a genome-wide association study (GWAS) of DKD in type 2 diabetes (T2D) using eight complementary dichotomous and quantitative DKD phenotypes: the principal dichotomous analysis involved 5,717 T2D subjects, 3,345 with DKD. Promising association signals were evaluated in up to 26,827 subjects with T2D (12,710 with DKD). A combined T1D+T2D GWAS was performed using complementary data available for subjects with T1D, which, with replication samples, involved up to 40,340 subjects with diabetes (18,582 with DKD). Analysis of specific DKD phenotypes identified a novel signal near GABRR1 (rs9942471, P = 4.5 × 10^-8) associated with microalbuminuria in European T2D case subjects. However, no replication of this signal was observed in Asian subjects with T2D or in the equivalent T1D analysis. There was only limited support, in this substantially enlarged analysis, for association at previously reported DKD signals, except for those at UMOD and PRKAG2, both associated with estimated glomerular filtration rate. We conclude that, despite challenges in addressing phenotypic heterogeneity, access to increased sample sizes will continue to provide more robust inference regarding risk variant discovery for DKD.
    Peer reviewed.

    A stochastic multidimensional scaling procedure for the empirical determination of convex indifference curves for preference/choice analysis

    The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are typically based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory, including convex indifference curves, constrained utility maximization, demand functions, et cetera. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted to those of microeconomic theory. The new maximum likelihood based procedure, MICROSCALE, is presented, as well as the technical details of the estimation procedure. The results of a Monte Carlo analysis investigating the performance of the algorithm as a number of model, data, and error factors are experimentally manipulated are provided. Finally, an illustration in consumer psychology concerning a convenience sample of thirty consumers providing paired comparisons judgments for some fourteen brands of over-the-counter analgesics is discussed.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45748/1/11336_2005_Article_BF02294463.pd

    The seeds of divergence: the economy of French North America, 1688 to 1760

    Generally, Canada has been ignored in the literature on the colonial origins of divergence, with most of the attention going to the United States. Late nineteenth century estimates of income per capita show that Canada was relatively poorer than the United States and that, within Canada, the French and Catholic population of Quebec was considerably poorer. Was this gap long-standing? Some evidence has been advanced for earlier periods, but it is quite limited and not well-suited for comparison with other societies. This thesis aims to contribute both to Canadian economic history and to comparative work on inequality across nations during the early modern period. With the use of novel prices and wages from Quebec—which was then the largest settlement in Canada and under French rule—a price index, a series of real wages and a measurement of Gross Domestic Product (GDP) are constructed. They are used to shed light both on the course of economic development until the French were defeated by the British in 1760 and on standards of living in that colony relative to the mother country, France, as well as the American colonies. The work is divided into three components. The first component relates to the construction of a price index. The absence of such an index has been a thorn in the side of Canadian historians, as it has limited their ability to obtain real values of wages, output and living standards. This index shows that prices did not follow any trend and remained at a stable level. However, there were episodes of wide swings—mostly due to wars and the monetary experiment of playing card money. The creation of this index lays the foundation of the next component. The second component constructs a standardized real wage series in the form of welfare ratios (annual wage income, i.e., the nominal wage rate multiplied by the length of the work year, divided by the cost of a consumption basket) to compare Canada with France, England and Colonial America. Two measures are derived.
The first relies on a “bare bones” definition of consumption with a large share of land-intensive goods. This measure indicates that Canada was poorer than England and Colonial America and not appreciably richer than France. However, this measure overestimates the relative position of Canada to the Old World because of the strong presence of land-intensive goods. A second measure is created using a “respectable” definition of consumption in which the basket includes a larger share of manufactured goods and capital-intensive goods. This second basket better reflects differences in living standards since the abundance of land in Canada (and Colonial America) made it easy to achieve bare subsistence, but the scarcity of capital and skilled labor made the consumption of luxuries and manufactured goods (clothing, lighting, imported goods) highly expensive. With this measure, the advantage of New France over France evaporates and turns slightly negative. In comparison with Britain and Colonial America, the gap widens appreciably. This finding is the most important for future research: by demonstrating a reversal caused by a shift to a different type of basket, it shows that Old World and New World comparisons are very sensitive to how we measure the cost of living. Furthermore, there are no sustained improvements in living standards over the period regardless of the measure used. Gaps in living standards observed later in the nineteenth century existed as far back as the seventeenth century. In a wider American perspective that includes the Spanish colonies, Canada fares better. The third component computes a new series for Gross Domestic Product (GDP). This is to avoid problems associated with using real wages in the form of welfare ratios, which assume a constant labor supply. This assumption is hard to defend in the case of Colonial Canada as there were many signs of increasing industriousness during the eighteenth and nineteenth centuries.
The GDP series suggests no long-run trend in living standards (from 1688 to circa 1765). The long peace era of 1713 to 1740 was marked by modest economic growth which offset a steady decline that had started in 1688, but by 1760 (as a result of constant warfare) living standards had sunk below their 1688 levels. These developments are accompanied by observations that suggest that other indicators of living standards declined. The flat-lining of incomes is accompanied by substantial increases in the amount of time worked, rising mortality and rising infant mortality. In addition, comparisons of incomes with the American colonies confirm the results obtained with wages—Canada was considerably poorer. At the end, a long conclusion provides an exploratory discussion of why Canada would have diverged early on. In structural terms, it is argued that the French colony was plagued by the problem of a small population which prohibited the existence of scale effects. In combination with the fact that it was dispersed throughout the territory, the small population of New France limited the scope for specialization and economies of scale. However, this problem was in part created, and in part aggravated, by institutional factors like seigneurial tenure. The colonial origins of French America’s divergence from the rest of North America are thus partly institutional.
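The welfare ratio described in the second component can be written compactly. The following is a standard Allen-style formulation offered as an interpretation of the parenthetical definition above; the symbols are illustrative and not taken from the thesis itself:

```latex
% Welfare ratio: annual wage income relative to the annual cost of a
% reference consumption basket ("bare bones" or "respectable")
\mathrm{welfare\ ratio} = \frac{w \times L}{\sum_i p_i q_i}
```

where $w$ is the nominal wage rate, $L$ the length of the work year, and $\sum_i p_i q_i$ the cost of the reference basket at prevailing prices. A ratio above one means a year of wage labor buys more than the reference basket, so ratios computed with the same basket can be compared across Canada, France, England and Colonial America, which is what drives the reversal when the basket shifts from “bare bones” to “respectable” goods.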

    The Role of Consumer Surveys in Trademark Infringement: Empirical Evidence from the Federal Courts

    With millions, perhaps billions, of dollars at stake in the value of a brand, brand equity can be one of the most important assets in a firm’s portfolio. Unfortunately, brand equity is an asset that is uniquely vulnerable to harm. Firms can lose the strength, and thus the selling power, of those brands through the ordinary course of business.