
    Survey of the quality of experimental design, statistical analysis and reporting of research using animals

    For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications about the objective or hypothesis of the study; the number, sex, age and/or weight of animals used; and the experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%) to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and journal editors share the responsibility to ensure that published studies fulfil these criteria.
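    The two design safeguards most often missing from the surveyed papers, randomisation and blinding, can be built into the allocation step itself. Below is a minimal Python sketch of that idea; it is illustrative only (the survey contains no code), and the animal IDs, group names and coding scheme are hypothetical.

    ```python
    # Hypothetical sketch: seeded randomised allocation plus blinded (coded)
    # animal labels, the two bias-reduction steps the survey found lacking.
    import random

    def randomise_and_blind(animal_ids, groups, seed=42):
        """Randomly allocate animals to groups and return opaque codes."""
        rng = random.Random(seed)  # fixed seed keeps the allocation auditable
        shuffled = list(animal_ids)
        rng.shuffle(shuffled)
        # Round-robin over the shuffled list gives balanced group sizes.
        allocation = {aid: groups[i % len(groups)] for i, aid in enumerate(shuffled)}
        # Blinding: outcome assessors see only an opaque code, never the group.
        code_numbers = rng.sample(range(10_000), len(shuffled))
        codes = {aid: f"subject-{n:04d}" for aid, n in zip(shuffled, code_numbers)}
        return allocation, codes

    allocation, codes = randomise_and_blind(
        [f"mouse-{i}" for i in range(12)], ["control", "treatment"]
    )
    ```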

    Optimising experimental design for high-throughput phenotyping in mice: a case study

    To further the functional annotation of the mammalian genome, the Sanger Mouse Genetics Programme aims to generate and characterise knockout mice in a high-throughput manner. Annually, approximately 200 lines of knockout mice will be characterised using a standardised battery of phenotyping tests covering key disease indications ranging from obesity to sensory acuity. From these findings, secondary centres will select putative mutants of interest for more in-depth, confirmatory experiments. Optimising experimental design and data analysis is essential to maximise output while using resources with the greatest efficiency, thereby attaining our biological objective of understanding the role of genes in normal development and disease. This study uses the example of the noninvasive blood pressure test to demonstrate how statistical investigation is important for generating meaningful, reliable results and for assessing the design against the defined research objectives. The analysis adjusts for the multiple-testing problem by applying the false discovery rate, which controls the expected proportion of false calls among those highlighted as significant. A variance analysis finds that the variation between mice dominates this assay. These variance measures were used to examine the interplay between days, readings and number of mice on power, the ability to detect change. If an experiment is underpowered, we cannot conclude whether failure to detect a biological difference arises from low power or from lack of a distinct phenotype; hence the mice are subjected to testing without gain. Consequently, in confirmatory studies, a power analysis along with the 3Rs can provide justification to increase the number of mice used.
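    To make the two named analyses concrete, here is a hedged Python sketch of Benjamini-Hochberg false discovery rate adjustment and a t-test power calculation; the p-values, effect size and use of statsmodels are assumptions for illustration, not details of the Sanger pipeline.

    ```python
    # Hypothetical sketch of the two statistical steps described above.
    from statsmodels.stats.multitest import multipletests
    from statsmodels.stats.power import TTestIndPower

    # FDR control: adjust invented per-knockout-line p-values so that the
    # expected proportion of false calls among "significant" results is <= 5%.
    p_values = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
    print(list(zip(p_adjusted.round(3), reject)))

    # Power analysis: mice per group needed to detect a standardised effect
    # (Cohen's d = 0.8) with 80% power at alpha = 0.05 in a two-sample t-test.
    n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
    print(f"~{n_per_group:.0f} mice per group")
    ```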

    Obesity: A Biobehavioral Point of View

    Excerpt: If you ask an overweight person, “Why are you fat?”, you will, almost invariably, get the answer, “Because I eat too much.” You will get this answer in spite of the fact that of thirteen studies, six find no significant differences in the caloric intake of obese versus nonobese subjects, five report that the obese eat significantly less than the nonobese, and only two report that they eat significantly more.

    Systematic Reviews of Animal Experiments Demonstrate Poor Human Clinical and Toxicological Utility

    The assumption that animal models are reasonably predictive of human outcomes provides the basis for their widespread use in toxicity testing and in biomedical research aimed at developing cures for human diseases. To investigate the validity of this assumption, the comprehensive Scopus biomedical bibliographic database was searched for published systematic reviews of the human clinical or toxicological utility of animal experiments. Of 20 reviews examining clinical utility, only two concluded that animal models were either significantly useful in contributing to the development of clinical interventions or substantially consistent with clinical outcomes, and one of those two was contentious. The 20 reviews covered experiments expected by ethics committees to lead to medical advances, highly cited experiments published in major journals, and chimpanzee experiments, involving the species considered most likely to be predictive of human outcomes. Seven additional reviews failed to clearly demonstrate utility in predicting human toxicological outcomes, such as carcinogenicity and teratogenicity. Consequently, animal data may not generally be assumed to be substantially useful for these purposes. Possible causes include interspecies differences, the distortion of outcomes arising from experimental environments and protocols, and the poor methodological quality of many animal experiments, which was evident in at least 11 reviews; no reviews existed in which the majority of animal experiments were of good methodological quality. Whilst the effects of some of these problems might be minimised with concerted effort, given their widespread prevalence, the limitations resulting from interspecies differences are likely to be technically and theoretically impossible to overcome. Non-animal models are generally required to pass formal scientific validation prior to their regulatory acceptance; in contrast, animal models are simply assumed to be predictive of human outcomes. These results demonstrate the invalidity of such assumptions. The consistent application of formal validation studies to all test models is clearly warranted, regardless of their animal, non-animal, historical, contemporary or possible future status. Likely benefits would include the selection of more models truly predictive of human outcomes, increased safety of people exposed to chemicals that have passed toxicity tests, increased efficiency during the development of human pharmaceuticals and other therapeutic interventions, and decreased wastage of animal, personnel and financial resources. The poor human clinical and toxicological utility of most animal models for which data exist, in conjunction with their generally substantial animal welfare and economic costs, justifies a ban on animal models lacking scientific data clearly establishing their human predictivity or utility.

    Caging and uncaging genetics

    It is important for biology to understand whether observations made in highly reductionist laboratory settings generalise to the harsh and noisy natural environments in which genetic variation is sorted to produce adaptation. But what do we learn by studying, in the laboratory, a genetically diverse population that mirrors the wild? What is the best design for studying genetic variation? When should we consider it at all? The right experimental approach depends on what you want to know.

    Meta-analysis of variation suggests that embracing variability improves both replicability and generalizability in preclinical research

    The replicability of research results has been a cause of increasing concern to the scientific community. The long-held belief that experimental standardization begets replicability has also been recently challenged, with the observation that the reduction of variability within studies can lead to idiosyncratic, lab-specific results that cannot be replicated. An alternative approach is instead to deliberately introduce heterogeneity, known as "heterogenization" of experimental design. Here, we explore a novel perspective in the heterogenization program in a meta-analysis of variability in observed phenotypic outcomes in both control and experimental animal models of ischemic stroke. First, by quantifying interindividual variability across control groups, we illustrate that the amount of heterogeneity in disease state (infarct volume) differs according to methodological approach, for example, in disease induction methods and disease models. We argue that such methods may improve replicability by creating a diverse and representative distribution of baseline disease state in the reference group, against which treatment efficacy is assessed. Second, we illustrate how meta-analysis can be used to simultaneously assess efficacy and stability (i.e., mean effect and among-individual variability). We identify treatments that have efficacy and are generalizable to the population level (i.e., low interindividual variability), as well as those where there is high interindividual variability in response; for these latter treatments, translation to a clinical setting may require nuance. We argue that by embracing rather than seeking to minimize variability in phenotypic outcomes, we can motivate the shift toward heterogenization and improve both the replicability and generalizability of preclinical research.
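    One common estimator behind this kind of meta-analysis of variability is the log coefficient-of-variation ratio (lnCVR) of Nakagawa et al. (2015), which compares relative variability between groups. The sketch below assumes that general approach rather than reproducing the study's code, and the infarct-volume summary statistics are invented.

    ```python
    # Hypothetical sketch: lnCVR compares relative variability (SD/mean)
    # in a treatment group against a control group.
    import math

    def ln_cvr(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """lnCVR with the usual small-sample bias correction.
        Values > 0 mean the treatment group is relatively more variable."""
        point = math.log(sd_t / mean_t) - math.log(sd_c / mean_c)
        correction = 1 / (2 * (n_t - 1)) - 1 / (2 * (n_c - 1))
        return point + correction

    # Invented infarct-volume summaries for one stroke-model comparison.
    print(ln_cvr(mean_t=38.0, sd_t=15.0, n_t=10, mean_c=52.0, sd_c=12.0, n_c=10))
    ```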

    Survey of Canadian Animal-Based Researchers' Views on the Three Rs: Replacement, Reduction and Refinement

    The ‘Three Rs’ tenet (replacement, reduction, refinement) is a widely accepted cornerstone of Canadian and international policies on animal-based science. The Canadian Council on Animal Care (CCAC) initiated this web-based survey to obtain a greater understanding of the views of ‘principal investigators’ and ‘other researchers’ (i.e. graduate students, post-doctoral researchers, etc.) on the Three Rs, and to identify obstacles and opportunities for continued implementation of the Three Rs in Canada. Responses from 414 participants indicate that researchers currently do not view the goal of replacement as achievable. Researchers prefer to use enough animals to ensure quality data are obtained rather than using the minimum number and potentially wasting those animals if a problem occurs during the study. Many feel that they already reduce animal numbers as much as possible and have concerns that further reduction may compromise research. Most participants were ambivalent about re-use, but expressed concern that the practice could compromise experimental outcomes. In considering refinement, many researchers feel there are situations where animals should not receive pain-relieving drugs because these may compromise scientific outcomes, although there was strong support for the Three Rs strategy of conducting animal welfare-related pilot studies, which were viewed as useful for both animal welfare and experimental design. Participants were not opposed to being offered “assistance” to implement the Three Rs, so long as the input is provided in a collegial manner and from individuals who are perceived as experts. It may be useful for animal use policymakers to consider what steps are needed to make replacement a more feasible goal. In addition, initiatives that offer researchers greater practical and logistical support with Three Rs implementation may be useful. Encouragement and financial support for Three Rs initiatives may result in valuable contributions to Three Rs knowledge and improve welfare for animals used in science.

    Applying refinement to the use of mice and rats in rheumatoid arthritis research

    Rheumatoid arthritis (RA) is a painful, chronic disorder, and there is currently an unmet need for effective therapies that will benefit a wide range of patients. The research and development process for therapies and treatments currently involves in vivo studies, which have the potential to cause discomfort, pain or distress. This Working Group report focuses on identifying causes of suffering within commonly used mouse and rat ‘models’ of RA, describing practical refinements to help reduce suffering and improve welfare without compromising the scientific objectives. The report also discusses other relevant topics, including identifying and minimising sources of variation within in vivo RA studies, the potential to provide pain relief including analgesia, welfare assessment, humane endpoints, reporting standards and the potential to replace animals in RA research.