
    Understanding Null Hypothesis Tests, and Their Wise Use

    Few students sitting in their introductory statistics class learn that they are being taught the product of a misguided effort to combine two methods of inference into one. Few learn that many think the method they are being taught should be banned. In Understanding Null Hypothesis Tests, and Their Wise Use, I emphasize Ronald Fisher’s approach to null hypothesis testing. Fisher’s method is simple, intuitive, thoughtful, and pure. If we follow Fisher’s example (Fisher on a good day), all the criticisms of null hypothesis tests melt away. Do you collect data and then ask a friend how to analyze them? Once you have read this monograph, you will not have to do that anymore (at least not often). You will understand the concepts behind the mathematics. You will see why different types of data require different types of tests. Do you think that P is the probability of a type I error? It is not. If you fail to reject, do you accept the alternate or alternative hypothesis? You accomplish nothing by doing so. Do you equate statistical “significance” with importance? You should not. These and other misconceptions are explained and dispensed with in Understanding Null Hypothesis Tests, and Their Wise Use. After reading this monograph, you will understand why it is utter foolishness to say, “We use confidence intervals instead.” (Confidence intervals are wonderful, but they show the results of null hypothesis tests performed backwards.) More importantly, you will understand wise use. You will use P-values thoughtfully, not to make mindless, binary decisions. Most importantly, you will know the Big Secret that should not be a secret: a null hypothesis is infinitely precise, so many and maybe most null hypotheses cannot be correct, a fact we should know from the start. It was the legendary statistician John Tukey who explained why it is important to test such nulls anyway: to determine whether we can trust our data to tell us the direction of a difference. Getting the direction wrong is referred to as a type III or type S error. Once you have read about type S errors in Understanding Null Hypothesis Tests, and Their Wise Use, you will have a better understanding of null hypothesis tests than anyone on your block.
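Tukey's point about type S errors is easy to demonstrate with a short simulation. The sketch below is illustrative only: the effect size, sample size, and number of trials are assumptions, and a plain two-sample z-test with known variance stands in for whatever test a real analysis would use. It shows that when a true effect is tiny, a result can be "significant" yet point in the wrong direction.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.05            # tiny, assumed true difference
n, trials = 20, 20_000        # assumed sample size per group, repetitions

a = rng.normal(0.0, 1.0, (trials, n))
b = rng.normal(true_effect, 1.0, (trials, n))

# Two-sample z-test with known sigma = 1 (keeps the sketch numpy-only)
z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
sig = np.abs(z) > 1.96        # "significant" at the nominal 5% level
type_s = sig & (z < 0)        # significant, but in the wrong direction

print("significant results:", int(sig.sum()))
print("of those, wrong direction (type S):", int(type_s.sum()))
```

With so small an effect, a substantial fraction of the significant results land on the wrong side of zero, which is exactly why Tukey framed the test as asking whether the data can be trusted to reveal the direction of a difference.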

    Beware of the mine! The political economy of mines in Northern Mozambique

    We examine the effect of natural resources on the social and political fabric of low-income communities. We combine geospatial data on mining activity with household surveys we conducted in Northern Mozambique. We find that mines decrease the level of trust, especially trust in neighbors and in local and national leaders. In the same direction, households living in mining areas contribute less to public goods. A significant negative effect on participation in local community groups emerges only when using matching methods. On the political side, mineral endowments lead to institutional degradation in the form of a lower level of democratic decision-making in the community, a lower preference for democratic decisions by households, and increased corruption in the allocation of public funds, which suggests rent-seeking behavior by both the political elite and the population. We also document weak evidence of violence within and around mining areas. These results unveil the presence of both social and political mechanisms behind the natural resource curse and call for careful monitoring of the ongoing expansion of the extractive industries in Africa.

    Perils and pitfalls of mixed-effects regression models in biology

    This is the final version, available on open access from PeerJ via the DOI in this record. Data availability: the R code used to conduct all simulations in the paper is available in the Supplemental Files. Biological systems, at all scales of organisation from nucleic acids to ecosystems, are inherently complex and variable. Biologists therefore use statistical analyses to detect signal among this systemic noise. Statistical models infer trends, find functional relationships and detect differences that exist among groups or are caused by experimental manipulations. They also use statistical relationships to help predict uncertain futures. All branches of the biological sciences now embrace the possibilities of mixed-effects modelling and its flexible toolkit for partitioning noise and signal. The mixed-effects model is not, however, a panacea for poor experimental design, and should be used with caution when inferring or deducing the importance of both fixed and random effects. Here we describe a selection of the perils and pitfalls that are widespread in the biological literature, but can be avoided by careful reflection, modelling and model-checking. We focus on situations where incautious modelling risks exposure to these pitfalls and the drawing of incorrect conclusions. Our stance is that statements of significance, information content or credibility all have their place in biological research, as long as these statements are cautious and well-informed by checks on the validity of assumptions. Our intention is to reveal potential perils and pitfalls in mixed model estimation so that researchers can use these powerful approaches with greater awareness and confidence. Our examples are ecological, but translate easily to all branches of biology. University of Exeter.
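One classic peril in this territory, pseudoreplication, can be made concrete with a small simulation. This is a hypothetical sketch, not the paper's own R code: the group counts, variance components, and naive z-test are all assumptions. It shows that ignoring a group-level random effect and treating every observation as independent inflates the false-positive rate far beyond its nominal level, even when no treatment effect exists.

```python
import numpy as np

rng = np.random.default_rng(1)
groups, per_group, trials = 5, 20, 4_000
tau, sigma = 1.0, 1.0         # group-level and residual SDs (assumed)

def arm():
    """One treatment arm: observations share random group intercepts."""
    u = rng.normal(0.0, tau, groups)
    return (u[:, None] + rng.normal(0.0, sigma, (groups, per_group))).ravel()

false_pos = 0
for _ in range(trials):
    a, b = arm(), arm()       # true null: no treatment effect at all
    # Naive z-test pretending all observations are independent
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    false_pos += abs((b.mean() - a.mean()) / se) > 1.96

rate = false_pos / trials
print(f"nominal 5% test, actual false-positive rate: {rate:.2f}")
```

A mixed-effects model with a random intercept per group (or simply testing on group means) is the standard remedy; the point of the sketch is how badly things go when the grouping is ignored.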

    The illusion of data validity: Why numbers about people are likely wrong

    This reflection article addresses a difficulty faced by scholars and practitioners working with numbers about people, which is that those who study people want numerical data about these people. Unfortunately, time and time again, this numerical data about people is wrong. Addressing the potential causes of this wrongness, we present examples of analyzing people numbers, i.e., numbers derived from digital data by or about people, and discuss the comforting illusion of data validity. We first lay a foundation by highlighting potential inaccuracies in collecting people data, such as selection bias. Then, we discuss inaccuracies in analyzing people data, such as the flaw of averages, followed by a discussion of errors that are made when trying to make sense of people data through techniques such as posterior labeling. Finally, we discuss a root cause of people data often being wrong: the conceptual conundrum of thinking the numbers are counts when they are actually measures. Practical solutions to address this illusion of data validity are proposed. The implications for theories derived from people data are also highlighted, namely that these people theories are generally wrong as they are often derived from people numbers that are wrong. © 2022 Wuhan University. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
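The flaw of averages mentioned above is easy to see with simulated people numbers. This is a hypothetical sketch: the lognormal session-length distribution and its parameters are assumed purely for illustration. When data about people are right-skewed, as they often are, the mean describes almost nobody, and most individuals sit below the "average."

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical "people numbers": right-skewed session lengths (minutes)
sessions = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mean_s = sessions.mean()
median_s = np.median(sessions)
below_mean = (sessions < mean_s).mean()   # share of people below the mean

print(f"mean: {mean_s:.2f}  median: {median_s:.2f}")
print(f"share of users below the 'average': {below_mean:.0%}")
```

In skewed distributions the mean exceeds the median, so well over half of the simulated users fall below the "average" user, one concrete way that a single summary number about people misleads.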

    Towards codes of practice for navigating the academic peer review process

    Peer review is the bedrock of modern academic research and its lasting contributions to science and society. And yet, reviewers can submit “poor” peer review reports, authors can blatantly ignore referee advice, and editors can contravene and undermine the peer review process itself. In this paper, we, the Editors of Energy Research & Social Science (ER&SS), seek to establish peer review codes of practice for the general energy and social science research community. We include suggestions for three of the most important roles: peer reviewers or referees, editors, and authors. We base our 33 recommendations on a collective 60 years of editorial experience at ER&SS. Our hope is that such codes of practice can enable the academic community to navigate the peer review process more effectively, more meaningfully, and more efficiently.

    Statistical strategies for avoiding false discoveries in metabolomics and related experiments
