
    Why we habitually engage in null-hypothesis significance testing: A qualitative study

    BACKGROUND: Null Hypothesis Significance Testing (NHST) is the most familiar statistical procedure for making inferences about population effects. Important problems associated with this method have been addressed and various alternatives that overcome these problems have been developed. Despite its many well-documented drawbacks, NHST remains the prevailing method for drawing conclusions from data. Reasons for this have been insufficiently investigated. Therefore, the aim of our study was to explore the perceived barriers and facilitators related to the use of NHST and alternative statistical procedures among relevant stakeholders in the scientific system. METHODS: Individual semi-structured interviews and focus groups were conducted with junior and senior researchers, lecturers in statistics, editors of scientific journals and program leaders of funding agencies. During the focus groups, important themes that emerged from the interviews were discussed. Data analysis was performed using the constant comparison method, allowing emerging (sub)themes to be fully explored. A theory substantiating the prevailing use of NHST was developed based on the main themes and subthemes we identified. RESULTS: Twenty-nine interviews and six focus groups were conducted. Several interrelated facilitators and barriers associated with the use of NHST and alternative statistical procedures were identified. These factors were subsumed under three main themes: the scientific climate, scientific duty, and reactivity. As a result of these factors, most participants feel dependent in their actions upon others, have become reactive, and await action and initiatives from others. This may explain why NHST is still the standard and is used ubiquitously by almost everyone involved. CONCLUSION: Our findings demonstrate how perceived barriers to shifting away from NHST set a high threshold for actual behavioral change and create a circle of interdependency between stakeholders. By taking small steps it should be possible to decrease the scientific community's strong dependence on NHST and p-values.

    Using systems perspectives in evidence synthesis: A methodological mapping review

    BACKGROUND: Reviewing complex interventions is challenging because they include many elements that can interact dynamically in a non-linear manner. A systems perspective offers a way of thinking to help understand complex issues, but its application in evidence synthesis is not established. The aim of this project was to understand how and why systems perspectives have been applied in evidence synthesis. METHODS: A methodological mapping review was conducted to identify papers using a systems perspective in evidence synthesis. A search was conducted in seven bibliographic databases and three search engines. RESULTS: A total of 101 papers (representing 98 reviews) met the eligibility criteria. Two categories of reviews were identified: 1) reviews using a 'systems lens' to frame the topic, generate hypotheses, select studies, and guide the analysis and interpretation of findings (n=76) and 2) reviews using systems methods to develop a systems model (n=22). Several methods (e.g., system dynamics modeling, the soft systems approach) were identified; they were used to identify, rank, and select elements, analyze interactions, develop models, and forecast needs. The main reasons for using a systems perspective were to address complexity, view the problem as a whole, and understand the interrelationships between the elements. Several challenges in capturing the true nature and complexity of a problem were raised when performing these methods. CONCLUSION: This review is a useful starting point when designing an evidence synthesis of complex interventions. It identifies different opportunities for applying a systems perspective in evidence synthesis, and highlights both commonplace and less familiar methods.

    Preregistering Qualitative Research: A Delphi Study

    Preregistrations—records made a priori about study designs and analysis plans and placed in open repositories—are thought to strengthen the credibility and transparency of research. Different authors have put forth arguments in favor of introducing this practice in qualitative research and made suggestions for what to include in a qualitative preregistration form. The goal of this study was to gauge and understand which parts of preregistration templates qualitative researchers would find helpful and informative. We used an online Delphi study design consisting of two rounds with feedback reports in between. In total, 48 researchers participated (response rate: 16%). In round 1, panelists considered 14 proposed items relevant to include in the preregistration form, but two items had relevance scores just below our predefined criterion (68%) with mixed arguments and were put forth again. We combined items where possible, leading to 11 revised items. In round 2, panelists agreed on including the two remaining items. Panelists also converged on suggested terminology and elaborations, except for two terms for which they provided clear arguments. The result is an agreement-based form for the preregistration of qualitative studies that consists of 13 items. The form will be made available as a registration option on the Open Science Framework (osf.io). We believe it is important to ensure that the strength of qualitative research, which is its flexibility to adapt, adjust and respond, is not lost in preregistration. The preregistration should provide a systematic starting point.

    Overview on the Null Hypothesis Significance Test

    For decades, waxing and waning, there has been an ongoing debate on the merits and problems of the ubiquitously used null hypothesis significance test (NHST). With the start of the replication crisis, this debate has flared up once again, especially in the psychology and psychological methods literature. Arguments for or against the NHST method usually appear in essays and opinion pieces that cover some, but not all, of the qualities and problems of the method. The NHST literature landscape is vast, a clear overview is lacking, and participants in the debate seem to be talking past one another. To contribute to a resolution, we conducted a systematic review of essay literature concerning NHST published in psychology and psychological methods journals between 2011 and 2018. We extracted all arguments in defense of (20) and against (70) NHST, and we extracted the solutions (33) that were proposed to remedy (some of) the perceived problems of NHST. Unfiltered, these 123 items form a landscape that is prohibitively difficult to keep in one's sights. Our contribution to the resolution of the NHST debate is twofold. 1) We performed a thematic synthesis of the arguments and solutions that carves the landscape into a framework of three zones: mild, moderate, and critical. This reduction summarizes groups of arguments and solutions, thus offering a manageable overview of NHST's qualities, problems, and solutions. 2) We provide the data on the arguments and solutions as a resource for those who will carry on the debate and/or study the use of NHST.

    Preregistering qualitative research

    The threat to reproducibility and awareness of current rates of research misbehavior sparked initiatives to improve academic science. One such initiative is the preregistration of quantitative research. We investigate whether the preregistration format could also be used to boost the credibility of qualitative research. A crucial distinction underlying preregistration is that between prediction and postdiction. In qualitative research, data are used to decide which way interpretation should move forward, using data to generate hypotheses and new research questions. Qualitative research is thus a real-life example of postdiction research. Some may object to the idea of preregistering qualitative studies because qualitative research generally does not test hypotheses, and because qualitative research design is typically flexible and subjective. We rebut these objections, arguing that making hypotheses explicit is just one feature of preregistration, that flexibility can be tracked using preregistration, and that preregistration would provide a check on subjectivity. We then contextualize preregistration alongside another initiative to enhance credibility in qualitative research: the confirmability audit. In addition, preregistering qualitative studies is practically useful for combating dissemination bias and could incentivize qualitative researchers to report continually on their study's development. We conclude with suggested modifications to the Open Science Framework preregistration form to tailor it for qualitative studies.

    Stop met het onkritische gebruik van nulhypothesen (Stop the uncritical use of null hypotheses)

    Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so, and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.
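
    The selective-reporting mechanism the abstract describes (choosing which analysis to present based on the P values produced) can be illustrated with a small simulation. This is a minimal sketch, not an analysis from the paper: the z-test, sample size, and "five outcomes per study" setup below are illustrative assumptions.

    ```python
    import math
    import random

    random.seed(1)

    def p_value(sample, mu0=0.0, sigma=1.0):
        """Two-sided z-test p-value for the mean of a sample with known sigma."""
        n = len(sample)
        z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
        # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    def best_of_k(k=5, n=30):
        """One study measuring k outcomes, all with a true effect of zero,
        where only the smallest p-value is reported (selective reporting)."""
        return min(p_value([random.gauss(0, 1) for _ in range(n)])
                   for _ in range(k))

    trials = 10_000
    # Honest protocol: one pre-specified outcome per study.
    fp_honest = sum(p_value([random.gauss(0, 1) for _ in range(30)]) < 0.05
                    for _ in range(trials)) / trials
    # Selective protocol: report the best of five null outcomes.
    fp_selected = sum(best_of_k() < 0.05 for _ in range(trials)) / trials

    print(f"honest single test: {fp_honest:.3f}")    # close to the nominal 0.05
    print(f"best of 5 analyses: {fp_selected:.3f}")  # close to 1 - 0.95**5 = 0.226
    ```

    Even though every tested hypothesis is true (no effect exists), picking the smallest of five p-values yields "significance" roughly a quarter of the time, which is the kind of protocol violation the abstract warns produces small P values under a correct null.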
