    Pseudoreplication invalidates the results of many neuroscientific studies

    Background: Pseudoreplication occurs when observations are not statistically independent but are treated as if they were. This can happen when there are multiple observations on the same subjects, when samples are nested or hierarchically organised, or when measurements are correlated in time or space. Analysing such data without taking these dependencies into account can lead to meaningless results, and examples are easy to find in the neuroscience literature.

    Results: A single issue of Nature Neuroscience provided a number of examples and is used as a case study to highlight how pseudoreplication arises in neuroscientific studies and why the analyses in these papers are incorrect; appropriate analytical methods are also provided. 12% of papers contained pseudoreplication, and a further 36% were suspected of it, but this could not be determined with certainty because insufficient information about the analysis was provided.

    Conclusions: Pseudoreplication undermines the conclusions drawn from statistical analysis of data, and it would be easier to detect if the sample size, degrees of freedom, test statistic, and precise p-values were reported. This information should be a requirement for all publications.
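
    As a small, hypothetical illustration of the dependency this abstract describes (not code from the paper), the sketch below simulates repeated measurements on a handful of subjects in two groups and contrasts a naive test over all observations with a test on per-subject means; the sample sizes, noise levels, and variable names are invented.

```python
# Hypothetical sketch: repeated observations per subject analysed naively
# (all observations pooled) versus aggregated to one value per subject.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_obs = 6, 20            # 6 subjects per group, 20 observations each

def simulate_group(group_shift=0.0):
    # Each subject has its own baseline (between-subject variation)
    # plus within-subject measurement noise.
    subject_means = rng.normal(group_shift, 1.0, n_subjects)
    return np.array([rng.normal(m, 0.5, n_obs) for m in subject_means])

control = simulate_group()
treated = simulate_group()            # no true treatment effect

# Naive analysis: treats all 120 observations per group as independent.
_, p_pooled = stats.ttest_ind(control.ravel(), treated.ravel())

# Analysis at the level of the experimental unit: one mean per subject.
_, p_subject = stats.ttest_ind(control.mean(axis=1), treated.mean(axis=1))

print(f"pooled observations: p = {p_pooled:.3f}")
print(f"per-subject means:   p = {p_subject:.3f}")
```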

    Don't let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring)

    Pseudoreplication is defined as the use of inferential statistics to test for treatment effects where treatments are not replicated and/or replicates are not statistically independent. It is a genuine but controversial issue in ecology, particularly in the case of costly landscape-scale manipulations, behavioral studies where ethics or other concerns may limit sample sizes, ad hoc monitoring data, and the analysis of natural experiments where chance events occur at a single site. Here key publications on the topic are reviewed to illustrate the debate that exists about the conceptual validity of pseudoreplication. A survey of ecologists and case studies of experimental design and publication issues are used to explore the extent of the problem, ecologists’ solutions, reviewers’ attitudes, and the fate of submitted manuscripts. Scientists working across a range of ecological disciplines regularly come across the problem of pseudoreplication and build solutions into their designs and analyses. These include carefully defining hypotheses and the population of interest, acknowledging the limits of statistical inference, and using statistical approaches such as nesting and random effects. Many ecologists face considerable challenges getting their work published if accusations of pseudoreplication are made, even when the problem has been dealt with. Many reviewers reject papers for pseudoreplication, and this occurs more often if they have not experienced the issue themselves. The concept of pseudoreplication is being applied too dogmatically and often leads to rejection during review, with insufficient consideration of the associated philosophical issues and potential statistical solutions. By stopping the publication of ecological studies, reviewers are slowing the pace of ecological research and limiting the scope of management case studies, studies of natural events, and valuable data available to form evidence-based solutions. Recommendations for fair and consistent treatment of pseudoreplication during writing and review are given for authors, reviewers, and editors.
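
    One of the statistical solutions mentioned above, nesting observations within larger units via random effects, can be sketched with a mixed-effects model; the sites, plots, and effect sizes below are entirely hypothetical and only illustrate the general approach.

```python
# Hypothetical sketch: plots nested within sites, modelled with a random
# intercept per site rather than treating every plot as independent.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_sites, plots_per_site = 8, 10

rows = []
for site in range(n_sites):
    treated = site % 2                    # treatment applied at the site level
    site_effect = rng.normal(0, 1.0)      # shared deviation for all plots in a site
    for plot in range(plots_per_site):
        response = 5.0 + 0.5 * treated + site_effect + rng.normal(0, 0.5)
        rows.append({"site": site, "treatment": treated, "response": response})

df = pd.DataFrame(rows)

# The random intercept for site accounts for the non-independence of plots
# sampled from the same site.
model = smf.mixedlm("response ~ treatment", data=df, groups=df["site"])
result = model.fit()
print(result.summary())
```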

    Pseudoreplication Revisited


    Spatial Autocorrelation and Pseudoreplication in Fire Ecology

    Fire ecologists face many challenges regarding the statistical analyses of their studies. Hurlbert (1984) brought the problem of pseudoreplication to the scientific community’s attention in the mid-1980s. Now there is a new issue in the form of spatial autocorrelation. Spatial autocorrelation, if present, violates the traditional statistical assumption of observational independence. What, if anything, can the fire ecology community do about this new problem? An understanding of spatial autocorrelation, and knowledge of the methods available to reduce its effects and those of pseudoreplication, will greatly assist fire ecology researchers.
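
    A standard diagnostic for the spatial autocorrelation described above is Moran's I; the numpy sketch below computes it from scratch for hypothetical plot coordinates and inverse-distance weights (values well above the null expectation suggest that nearby plots carry similar values and are not independent). The coordinates and measured variable are invented for illustration.

```python
# Hypothetical sketch: Moran's I for a set of sample plots, using
# inverse-distance spatial weights.
import numpy as np

rng = np.random.default_rng(2)
n = 30
coords = rng.uniform(0, 100, size=(n, 2))   # plot locations (e.g. metres)
values = rng.normal(0, 1, size=n)           # measured variable (e.g. fuel load)

# Inverse-distance weights, zero on the diagonal.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = np.zeros_like(dist)
off_diag = ~np.eye(n, dtype=bool)
w[off_diag] = 1.0 / dist[off_diag]

# Moran's I = (n / sum of weights) * (weighted cross-products / total variance)
z = values - values.mean()
moran_i = (n / w.sum()) * (w * np.outer(z, z)).sum() / (z ** 2).sum()
print(f"Moran's I = {moran_i:.3f} (null expectation = {-1 / (n - 1):.3f})")
```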

    Improving basic and translational science by accounting for litter-to-litter variation in animal models

    Background: Animals from the same litter are often more alike than animals from different litters. This litter-to-litter variation, or "litter effects", can influence the results in addition to the experimental factors of interest. Furthermore, an experimental treatment can be applied to whole litters rather than to individual offspring. For example, in the valproic acid (VPA) model of autism, VPA is administered to pregnant females, thereby inducing the disease phenotype in the offspring. With this type of experiment, the sample size is the number of litters and not the total number of offspring. If such experiments are not appropriately designed and analysed, the results can be severely biased as well as extremely underpowered.

    Results: A review of the VPA literature showed that only 9% (3/34) of studies correctly determined that the experimental unit (n) was the litter and therefore made valid statistical inferences. Litter effects accounted for up to 61% (p < 0.001) of the variation in behavioural outcomes, which was larger than the treatment effects. In addition, few studies reported using randomisation (12%) or blinding (18%), and none indicated that a sample size calculation or power analysis had been conducted.

    Conclusions: Litter effects are common and large; ignoring them can make replication of findings difficult and can contribute to the low rate of translating preclinical in vivo studies into successful therapies. Only a minority of studies reported using rigorous experimental methods, which is consistent with much of the preclinical in vivo literature.

    Comment: http://www.biomedcentral.com/1471-2202/14/37/abstrac
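
    To make the litter-effect idea concrete, the hedged sketch below fits a random intercept for litter to hypothetical offspring data and reports the share of variance attributable to litter (an intraclass correlation); the dosing scheme, column names, and effect sizes are invented and are not taken from the studies reviewed.

```python
# Hypothetical sketch: treatment applied to whole litters, litter fitted
# as a random effect, and the litter share of variance computed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_litters, pups_per_litter = 10, 8

rows = []
for litter in range(n_litters):
    treated = litter % 2                 # the dam, and so the whole litter, is dosed or not
    litter_effect = rng.normal(0, 1.0)   # shared deviation of all pups in the litter
    for pup in range(pups_per_litter):
        score = 10.0 + 1.0 * treated + litter_effect + rng.normal(0, 1.0)
        rows.append({"litter": litter, "treatment": treated, "score": score})

df = pd.DataFrame(rows)
result = smf.mixedlm("score ~ treatment", data=df, groups=df["litter"]).fit()

litter_var = float(result.cov_re.iloc[0, 0])   # between-litter variance
residual_var = result.scale                    # within-litter (pup-level) variance
icc = litter_var / (litter_var + residual_var)
print(f"litter share of variance (ICC) = {icc:.2f}")
```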

    Pseudoreplication in Primate Communication Research: 10 Years On

    Pseudoreplication is the statistical error of collecting numerous datapoints from a single unit (such as an individual), which are not independent, and applying statistical methods that assume independence of data. Importantly, pseudoreplication increases the chance of Type I errors (i.e., false positives), bringing findings and conclusions based on pseudoreplicated analyses into question. Ten years ago, Waller et al. (2013) published a paper highlighting the prevalence of statistical pseudoreplication throughout the nonhuman primate communication literature. In the current study, we examined the literature published since the original publication (between 2009 and 2020; 348 papers) to assess whether pseudoreplication is still as widespread as it was, whether it has become more problematic, or whether the field is beginning to overcome this issue. We find that there has been a significant decrease in pseudoreplication over the past ten years (38.6% then, compared with 23.0% now). This reduction appears to be associated with an increase in the use of multilevel models throughout primatology, which allow nonindependent data to be nested appropriately. Pseudoreplication was historically more prevalent in research using observational (vs. experimental) methods and in work with wild (vs. captive) primates. However, these biases do not seem to exist in the more recent literature, with a comparable likelihood of pseudoreplication seen across the field regardless of methods. Although these findings relate specifically to primate communication research, we think they will translate broadly across nonhuman communication research and throughout biology. We continue to emphasise the need to monitor these issues: although it is now seen at much lower rates, pseudoreplication is still present and therefore potentially affects the accuracy of findings.

    Pesticide effects on body temperature of torpid/hibernating rodents (Peromyscus leucopus and Spermophilus tridecemlineatus)

    Environmental contaminants have been shown in the laboratory to alter thyroid hormone concentrations. Despite the role these hormones play in the physiological ecology of small mammals, no one has investigated the possible effects of thyroid-disrupting chemicals on mammalian thermal ecology and thermoregulatory ability. Because the energetic impact of such a disruption is likely to be most dramatic during periods that are already energetically stressful, we investigated the effects of two common pesticides (atrazine and lindane) on the use of daily torpor in white-footed mice and the use of hibernation in thirteen-lined ground squirrels. Fortunately, we found that these strategies for over-wintering success were not impaired.

    Pseudoreplication in physiology: More means less

    This article reviews how to analyze data from experiments designed to compare the cellular physiology of two or more groups of animals or people. This is commonly done by measuring several cells from each animal and using simple t-tests or ANOVA to compare between groups. I use simulations to illustrate that this method can give erroneously positive results because it assumes that the cells from each animal are independent of one another. This problem, which may be responsible for much of the lack of reproducibility in the literature, can easily be avoided by using a hierarchical, nested statistical approach.
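
    The simulation argument in this abstract can be reproduced in outline with the hedged sketch below (not the author's code): with no true group difference, a t-test over pooled cells rejects far more often than the nominal 5%, whereas a t-test on per-animal means does not. The numbers of animals, cells, and simulations are arbitrary choices for illustration.

```python
# Hypothetical sketch: false-positive rates when cells from the same animal
# are treated as independent, versus analysing per-animal means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_animals, cells_per_animal, n_sim = 5, 30, 2000

false_pos_pooled = false_pos_nested = 0
for _ in range(n_sim):
    # Two groups of animals with no true difference; each animal has its own
    # baseline, and its cells scatter around that baseline.
    a = rng.normal(rng.normal(0, 1, n_animals)[:, None], 0.5,
                   (n_animals, cells_per_animal))
    b = rng.normal(rng.normal(0, 1, n_animals)[:, None], 0.5,
                   (n_animals, cells_per_animal))

    if stats.ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05:
        false_pos_pooled += 1                      # cells treated as independent
    if stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05:
        false_pos_nested += 1                      # one value per animal

print(f"false-positive rate, pooled cells:     {false_pos_pooled / n_sim:.2f}")
print(f"false-positive rate, per-animal means: {false_pos_nested / n_sim:.2f}")
```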