
    Replications and Extensions in Marketing – Rarely Published But Quite Contrary

    Replication is rare in marketing. Of 1,120 papers sampled from three major marketing journals, none were replications. Only 1.8% of the papers were extensions, and they consumed just 1.1% of the journal space. On average, these extensions appeared seven years after the original study. The publication rate for such works has been decreasing since the 1970s. Published extensions typically produced results that conflicted with the original studies; of the 20 extensions published, 12 conflicted with the earlier results and only 3 provided full confirmation. Published replications do not attract as many citations after publication as the original studies do, even when their results fail to support those originals.

    Are Null Results Becoming an Endangered Species in Marketing?

    Editorial procedures in the social and biomedical sciences are said to promote studies that falsely reject the null hypothesis. This problem may also exist in major marketing journals. Of 692 papers using statistical significance tests sampled from the Journal of Marketing (JM), Journal of Marketing Research (JMR), and Journal of Consumer Research between 1974 and 1989, only 7.8% failed to reject the null hypothesis. The percentage of null results declined by one-half from the 1970s to the 1980s, with the JM and the JMR registering marked decreases. The small percentage of insignificant results could not be explained as being due to inadequate statistical power. Various scholars have claimed that editorial policies in the social and medical sciences are biased against studies reporting null results and thus encourage the proliferation of Type I errors (erroneous rejections of the null hypothesis). Greenwald (1975, p. 15) maintains that Type I publication errors are underestimated to the extent that they are: “. . . frightening, even calling into question the scientific basis for much published literature.” Our paper examines the publication frequency of null results in marketing. First, we discuss how editorial policies might foster an atmosphere receptive to Type I error proliferation. Second, we review the evidence on the publication of null results in the social and biomedical sciences. Third, we report an empirical investigation of the publication frequency of null results in the marketing literature. Fourth, we examine power levels for statistically insignificant findings in marketing to see whether they are underpowered and thus less deserving of publication. Finally, we provide suggestions to facilitate the publication of null results.
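
    The power argument invites a small illustration. The sketch below is ours, not the paper's: it approximates the power of a two-sided one-sample z-test in Python, with a hypothetical effect size, sample size, and alpha chosen purely for illustration.

        # A minimal power calculation: the probability of rejecting a false
        # null hypothesis, given an assumed standardized effect size.
        from scipy.stats import norm

        def power_two_sided_z(effect_size, n, alpha=0.05):
            """Approximate power of a two-sided one-sample z-test.
            effect_size: assumed true standardized mean difference (Cohen's d).
            n: sample size."""
            z_crit = norm.ppf(1 - alpha / 2)      # e.g. 1.96 for alpha = 0.05
            shift = effect_size * n ** 0.5        # noncentrality of the statistic
            # Probability of landing in either rejection region:
            return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

        print(round(power_two_sided_z(0.5, 30), 2))   # ~0.78 for d = 0.5, n = 30

    A study with power of 0.78 still misses a genuine medium-sized effect roughly one time in five, which is why the authors check whether marketing's insignificant findings were merely underpowered.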

    Why We Don’t Really Know What Statistical Significance Means: A Major Educational Failure

    The Neyman-Pearson theory of hypothesis testing, with the Type I error rate, α, as the significance level, is widely regarded as statistical testing orthodoxy. Fisher’s model of significance testing, where the evidential p value denotes the level of significance, nevertheless dominates statistical testing practice. This paradox has occurred because these two incompatible theories of classical statistical testing have been anonymously mixed together, creating the false impression of a single, coherent model of statistical inference. We show that this hybrid approach to testing, with its misleading p &lt; α statistical significance criterion, is common in marketing research textbooks, as well as in a large random sample of papers from twelve marketing journals. That is, researchers attempt the impossible by simultaneously interpreting the p value as a Type I error rate and as a measure of evidence against the null hypothesis. The upshot is that many investigators do not know what our most cherished, and ubiquitous, research desideratum, statistical significance, really means. This, in turn, signals an educational failure of the first order. We suggest that tests of statistical significance, whether p’s or α’s, be downplayed in statistics and marketing research courses. Classroom instruction should focus instead on teaching students to emphasize confidence intervals around point estimates in individual studies, and the criterion of overlapping confidence intervals when one has estimates from similar studies.
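
    As a concrete version of the classroom recommendation above, here is a minimal Python sketch, ours and with simulated data, that reports a t-based confidence interval around a point estimate and applies the rough overlapping-intervals criterion to two similar studies.

        # Report interval estimates instead of a bare p-vs-alpha verdict.
        import numpy as np
        from scipy.stats import t

        def mean_ci(sample, confidence=0.95):
            """t-based confidence interval for a sample mean."""
            sample = np.asarray(sample, dtype=float)
            n = sample.size
            m = sample.mean()
            se = sample.std(ddof=1) / np.sqrt(n)
            half = t.ppf(1 - (1 - confidence) / 2, df=n - 1) * se
            return m - half, m + half

        def intervals_overlap(ci_a, ci_b):
            """Rough overlapping-CI criterion for estimates from similar studies."""
            return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

        study_1 = np.random.default_rng(1).normal(10.0, 2.0, 40)   # simulated data
        study_2 = np.random.default_rng(2).normal(10.8, 2.0, 40)
        ci_1, ci_2 = mean_ci(study_1), mean_ci(study_2)
        print(ci_1, ci_2, intervals_overlap(ci_1, ci_2))

    Unlike a significance verdict alone, the two intervals convey both the size of each estimate and its precision, which is what the authors argue students should learn to read.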

    Does the Need for Agreement Among Reviewers Inhibit the Publication of Controversial Findings?

    As Cicchetti indicates, agreement among reviewers is not high. This conclusion is empirically supported by Fiske and Fogg (1990), who reported that two independent reviews of the same papers typically had no critical point in common. Does this imply that journal editors should strive for a high level of reviewer consensus as a criterion for publication? Prior research suggests that such a requirement would inhibit the publication of papers with controversial findings. We summarize this research and report on a survey of editors.

    Publication Bias Against Null Results

    Studies suggest a bias against the publication of null (p > .05) results. Instead of significance, we advocate reporting effect sizes and confidence intervals, and using replication studies. If statistical tests are used, power tests should accompany them.
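
    A minimal sketch of the reporting style this abstract advocates, using hypothetical two-group data of our own: compute a standardized effect size (Cohen's d) rather than stopping at a p > .05 or p < .05 verdict.

        import numpy as np

        def cohens_d(a, b):
            """Standardized mean difference with a pooled standard deviation."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            pooled_var = ((a.size - 1) * a.var(ddof=1)
                          + (b.size - 1) * b.var(ddof=1)) / (a.size + b.size - 2)
            return (a.mean() - b.mean()) / np.sqrt(pooled_var)

        treatment = [5.1, 4.8, 5.6, 5.0, 5.3]   # illustrative measurements
        control = [4.6, 4.9, 4.4, 4.7, 4.5]
        print(round(cohens_d(treatment, control), 2))   # ~2.12, a large effect

    The effect size stays informative whether or not a test clears an arbitrary significance threshold, which is why the abstract recommends reporting it alongside confidence intervals.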

    Investigations of meltwater refreezing and density variations in the snowpack and firn within the percolation zone of the Greenland Ice Sheet

    The mass balance of polythermal ice masses is critically dependent on the proportion of surface-generated meltwater that subsequently refreezes in the snowpack and firn. In order to quantify this effect and to characterize its spatial variability, we measured near-surface snow and firn density, which increased by 26% over the melt season, resulting in a 32% increase in net accumulation. This 'seasonal densification' increased at lower elevations, rising to 47% at a site 10 km closer to the ice-sheet margin at 1860 m a.s.l. Density/depth profiles from nine sites within 1 km² at ∼1945 m a.s.l. reveal complex stratigraphies that change over short spatial scales and seasonally. We conclude that estimates of mass-balance change cannot be calculated solely from observed changes in surface elevation; near-surface densification must also be considered. However, predicting spatial and temporal variations in densification may not be straightforward. Further, the development of complex firn-density profiles both masks discernible annual layers in the near-surface firn and ice stratigraphy and is likely to introduce error into radar-derived estimates of surface elevation.
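
    The closing claim, that elevation change alone cannot give the mass balance, reduces to simple arithmetic: mass per unit area is thickness times density. The toy numbers below are ours, not the paper's measurements.

        def column_mass(layers):
            """Mass per unit area (kg m^-2) of a layered snow/firn column.
            layers: list of (thickness_m, density_kg_m3) pairs."""
            return sum(thickness * density for thickness, density in layers)

        spring = [(2.0, 400.0)]   # 2.0 m of snow at 400 kg m^-3 (illustrative)
        autumn = [(1.6, 500.0)]   # the same mass, refrozen into a denser column

        print(column_mass(spring), column_mass(autumn))   # 800.0 vs 800.0 kg m^-2
        # The surface dropped 0.4 m with no mass loss at all: reading the
        # elevation change as ablation would overstate mass loss, exactly the
        # error the authors warn against.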

    A Gate-To-Gate Life-Cycle Inventory of Solid Hardwood Flooring in the Eastern US

    Environmental impacts associated with building materials are under increasing scrutiny in the US. A gate-to-gate life-cycle inventory (LCI) of solid strip and solid plank hardwood flooring production was conducted in the eastern US for the reporting year 2006. Survey responses from hardwood flooring manufacturing facilities in this region accounted for nearly 28% of total US solid hardwood flooring production for that year. This study examined the materials, fuels, and energy required to produce solid hardwood flooring and its coproducts, and the emissions to air, land, and water. SimaPro software was used to quantify the environmental impacts associated with the reported materials use and emissions. Impact data were allocated by mass contribution across all products and coproducts, per 1.0 m³ (oven-dry mass basis) of solid hardwood flooring. Carbon flow and transportation data are provided in addition to the LCI data. Results of this study are useful for creating a cradle-to-gate inventory when linked to LCIs for the hardwood forest resource and the production of solid hardwood lumber in the same region.
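
    The mass-based allocation the study applies can be illustrated in a few lines. The output names and numbers below are hypothetical; the principle is simply that each product or coproduct carries a share of the gate-to-gate burden equal to its mass fraction.

        def allocate_by_mass(total_burden, masses):
            """Split a total burden (e.g. kg CO2-eq) across outputs by mass share.
            masses: dict mapping output name -> oven-dry mass (kg)."""
            total_mass = sum(masses.values())
            return {name: total_burden * m / total_mass for name, m in masses.items()}

        outputs = {"flooring": 700.0, "chips": 200.0, "sawdust": 100.0}   # kg, illustrative
        print(allocate_by_mass(50.0, outputs))
        # -> {'flooring': 35.0, 'chips': 10.0, 'sawdust': 5.0}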