    Replications and Extensions in Marketing – Rarely Published But Quite Contrary

    Replication is rare in marketing. Of 1,120 papers sampled from three major marketing journals, none were replications. Only 1.8% of the papers were extensions, and they consumed 1.1% of the journal space. On average, these extensions appeared seven years after the original study. The publication rate for such works has been decreasing since the 1970s. Published extensions typically produced results that conflicted with the original studies; of the 20 extensions published, 12 conflicted with the earlier results, and only 3 provided full confirmation. Published replications do not attract as many citations after publication as do the original studies, even when the results fail to support the original studies.

    Are Null Results Becoming an Endangered Species in Marketing?

    Editorial procedures in the social and biomedical sciences are said to promote studies that falsely reject the null hypothesis. This problem may also exist in major marketing journals. Of 692 papers using statistical significance tests sampled from the Journal of Marketing, Journal of Marketing Research, and Journal of Consumer Research between 1974 and 1989, only 7.8% failed to reject the null hypothesis. The percentage of null results declined by one-half from the 1970s to the 1980s. The Journal of Marketing and the Journal of Marketing Research registered marked decreases. The small percentage of insignificant results could not be attributed to inadequate statistical power. Various scholars have claimed that editorial policies in the social and medical sciences are biased against studies reporting null results, and thus encourage the proliferation of Type I errors (erroneous rejection of the null hypothesis). Greenwald (1975, p. 15) maintains that the extent of such Type I publication errors is “. . . frightening, even calling into question the scientific basis for much published literature.” Our paper examines the publication frequency of null results in marketing. First, we discuss how editorial policies might foster an atmosphere receptive to Type I error proliferation. Second, we review the evidence on the publication of null results in the social and biomedical sciences. Third, we report on an empirical investigation of the publication frequency of null results in the marketing literature. Fourth, we examine power levels for statistically insignificant findings in marketing to see if they are underpowered and thus less deserving of publication. Finally, we provide suggestions to facilitate the publication of null results.
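    To make the power question above concrete, here is a minimal sketch of the kind of calculation involved: the approximate power of a two-sided, two-sample test under the normal approximation. The effect size, group sizes, and alpha are hypothetical illustrations, not figures from the study.

```python
# Approximate power of a two-sided, two-sample test (normal approximation).
# The effect size, group sizes, and alpha below are hypothetical
# illustrations, not values taken from the paper.
from scipy.stats import norm

def approx_power(effect_size: float, n1: int, n2: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided test of a standardized mean difference."""
    se = (1 / n1 + 1 / n2) ** 0.5       # SE of the standardized difference
    z_crit = norm.ppf(1 - alpha / 2)    # two-sided critical value
    shift = effect_size / se            # location of the test statistic under H1
    # Probability of landing beyond either critical boundary under H1.
    return norm.cdf(-z_crit - shift) + (1 - norm.cdf(z_crit - shift))

if __name__ == "__main__":
    # e.g. a "medium" effect (d = 0.5) with 30 subjects per group
    print(f"approximate power: {approx_power(0.5, 30, 30):.2f}")  # about 0.49
```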

    Publication Bias Against Null Results

    Studies suggest a bias against the publication of null (p > .05) results. Instead of significance, we advocate reporting effect sizes and confidence intervals, and using replication studies. If statistical tests are used, power tests should accompany them.
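    As a minimal sketch of the recommended reporting practice, the example below computes Cohen's d with an approximate 95% confidence interval. The data are simulated, and the large-sample variance formula for d is an assumption chosen for illustration, not a method prescribed by the paper.

```python
# Reporting an effect size (Cohen's d) with an approximate 95% confidence
# interval rather than a bare p value. The data are simulated, and the
# large-sample variance formula for d is an illustrative assumption.
import numpy as np
from scipy.stats import norm

def cohens_d_with_ci(x, y, conf: float = 0.95):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)
    # Pooled standard deviation
    pooled_sd = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / pooled_sd
    # Large-sample standard error of d
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = norm.ppf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(seed=0)
treatment = rng.normal(loc=0.4, scale=1.0, size=40)  # hypothetical study data
control = rng.normal(loc=0.0, scale=1.0, size=40)
d, (lo, hi) = cohens_d_with_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```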

    Does the Need for Agreement Among Reviewers Inhibit the Publication of Controversial Findings?

    As Cicchetti indicates, agreement among reviewers is not high. This conclusion is empirically supported by Fiske and Fogg (1990), who reported that two independent reviews of the same papers typically had no critical point in common. Does this imply that journal editors should strive for a high level of reviewer consensus as a criterion for publication? Prior research suggests that such a requirement would inhibit the publication of papers with controversial findings. We summarize this research and report on a survey of editors.

    Why We Don’t Really Know What Statistical Significance Means: A Major Educational Failure

    The Neyman-Pearson theory of hypothesis testing, with the Type I error rate, α, as the significance level, is widely regarded as statistical testing orthodoxy. Fisher’s model of significance testing, where the evidential p value denotes the level of significance, nevertheless dominates statistical testing practice. This paradox has occurred because these two incompatible theories of classical statistical testing have been anonymously mixed together, creating the false impression of a single, coherent model of statistical inference. We show that this hybrid approach to testing, with its misleading p < α statistical significance criterion, is common in marketing research textbooks, as well as in a large random sample of papers from twelve marketing journals. That is, researchers attempt the impossible by simultaneously interpreting the p value as a Type I error rate and as a measure of evidence against the null hypothesis. The upshot is that many investigators do not know what our most cherished, and ubiquitous, research desideratum - statistical significance - really means. This, in turn, signals an educational failure of the first order. We suggest that tests of statistical significance, whether p’s or α’s, be downplayed in statistics and marketing research courses. Classroom instruction should focus instead on teaching students to emphasize the use of confidence intervals around point estimates in individual studies, and the criterion of overlapping confidence intervals when one has estimates from similar studies.
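    As an illustrative companion to the recommendation above, the sketch below reports 95% confidence intervals around two point estimates and applies the overlapping-intervals check to a pair of hypothetical similar studies. All means, standard deviations, and sample sizes are invented for illustration and do not come from the paper.

```python
# Confidence intervals around point estimates, plus the overlapping-intervals
# check suggested for comparing similar studies. All means, standard
# deviations, and sample sizes are invented for illustration.
from scipy.stats import norm

def mean_ci(mean: float, sd: float, n: int, conf: float = 0.95):
    """Normal-approximation confidence interval for a sample mean."""
    half_width = norm.ppf(0.5 + conf / 2) * sd / n ** 0.5
    return mean - half_width, mean + half_width

study_a = mean_ci(mean=2.1, sd=1.5, n=50)
study_b = mean_ci(mean=2.6, sd=1.4, n=60)
overlap = study_a[0] <= study_b[1] and study_b[0] <= study_a[1]
print(f"Study A 95% CI: ({study_a[0]:.2f}, {study_a[1]:.2f})")
print(f"Study B 95% CI: ({study_b[0]:.2f}, {study_b[1]:.2f})")
print("intervals overlap" if overlap else "intervals do not overlap")
```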