8 research outputs found

    Intercept Slope Correlations & Heterogeneity

    No full text

    DRIPHT Repository

    No full text
    Repository of standardized data sets from large-scale replication projects (multi-labs).

    Conjunction Fallacy: Contrasting Determinants from QPT and Inductive Confirmation

    No full text

    Quantifying Empirical Support for Quantum Cognition and Inductive Confirmation

    No full text

    Supplemental materials for: Erroneous Generalization - Exploring Random Error Variance in Reliability Generalizations of Psychological Measurements

    No full text
    This project contains R code to reproduce the analyses and figures from the preprint at osf.io/ud9rb.

    Reduce, Reuse, Recycle: Introducing MetaPipeX, a Framework for Analyses of Multi-Lab Data

    No full text
    Multi-lab projects are large-scale collaborations between participating data collection sites that gather empirical evidence and (usually) analyze that evidence using meta-analyses. They are a valuable form of scientific collaboration, produce outstanding data sets and are a great resource for third-party researchers: their data may be reanalyzed and used in research synthesis, and their repositories and code can provide guidance to future projects of this kind. But while multi-labs are similar in structure and aggregate their data using meta-analyses, they deploy a variety of solutions for how their repositories are organized, how the (analysis) code is structured and which file formats they provide. Continuing this trend means that anyone who wants to work with data from several of these projects, or to combine their data sets, faces ever-increasing complexity. Some of that complexity can be avoided. Here, we introduce MetaPipeX, a standardized framework to harmonize, document and analyze multi-lab data. It features a pipeline conceptualization of the analysis and documentation process, an R package that implements both, and a Shiny app (https://www.apps.meta-rep.lmu.de/metapipex/) that allows users to explore and visualize these data sets. We introduce the framework by describing its components and applying it to a practical example. Engaging with this form of collaboration and integrating it further into research practice will benefit the quantitative sciences, and we hope the framework provides structure and tools that reduce effort for anyone who creates, reuses, harmonizes or learns about multi-lab replication projects.
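
    The aggregation step this abstract describes (site-level effect sizes combined by meta-analysis) can be sketched in a few lines of R. The sketch below is illustrative only: the data frame and its values are hypothetical placeholders, and it uses the general-purpose metafor package rather than MetaPipeX's own interface.

        # Minimal sketch of the meta-analytic aggregation step, assuming a
        # hypothetical harmonized data set with one row per data collection
        # site: an effect size (yi) and its sampling variance (vi).
        library(metafor)

        dat <- data.frame(
          site = paste0("lab_", 1:5),                  # hypothetical site labels
          yi   = c(0.42, 0.31, 0.55, 0.18, 0.47),      # hypothetical effect sizes
          vi   = c(0.020, 0.015, 0.030, 0.025, 0.018)  # hypothetical sampling variances
        )

        # Random-effects meta-analysis across sites (REML estimate of tau^2).
        res <- rma(yi = yi, vi = vi, data = dat, slab = site, method = "REML")
        summary(res)  # pooled estimate plus heterogeneity statistics (tau^2, I^2, Q)
        forest(res)   # per-site estimates and the pooled effect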

    Whoever has will be given more? How to use the intercept-slope correlation in improving our understanding of replicability, heterogeneity, and theory development.

    No full text
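    The title's central quantity can be illustrated with a short, purely hypothetical R sketch using the real lme4 package: in a mixed-effects model with by-lab random intercepts and slopes, the intercept-slope correlation indexes whether labs with higher baseline levels also show larger effects. All data below are simulated placeholders.

        # Minimal sketch: estimating an intercept-slope correlation across labs.
        library(lme4)

        # Simulate hypothetical trial-level data from 10 labs in which
        # lab-specific slopes are positively related to lab-specific intercepts.
        set.seed(1)
        labs <- paste0("lab_", 1:10)
        b0 <- rnorm(10, mean = 0, sd = 0.3)        # lab intercepts
        b1 <- 0.4 + 0.5 * b0 + rnorm(10, 0, 0.1)   # lab slopes, tied to intercepts
        dat <- data.frame(
          lab  = rep(labs, each = 40),
          cond = rep(c(-0.5, 0.5), 200)            # centered condition code
        )
        i <- as.integer(factor(dat$lab, levels = labs))
        dat$y <- b0[i] + b1[i] * dat$cond + rnorm(nrow(dat))

        # Random intercepts and slopes per lab; VarCorr() reports their SDs
        # and the estimated correlation between them.
        fit <- lmer(y ~ cond + (1 + cond | lab), data = dat)
        VarCorr(fit)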

    Practicing theory building in a many modelers hackathon: A proof of concept

    No full text
    Scientific theories reflect some of humanity's greatest epistemic achievements. The best theories motivate us to search for discoveries, guide us towards successful interventions, and help us to explain and organize knowledge. Such theories require a high degree of specificity, and specifying them requires modeling skills. Unfortunately, theories in psychological science are often imprecise, and psychological scientists often lack the technical skills to formally specify existing theories. This raises the question: How can we promote formal theory development in psychology, where there are many content experts but few modelers? In this paper, we discuss one strategy for addressing this issue: a Many Modelers approach, in which mixed teams of modelers and non-modelers collaborate to create a formal theory of a phenomenon. We report a proof of concept of this approach, which we piloted as a three-hour hackathon at the SIPS 2021 conference. We find that (a) psychologists who have never developed a formal model can become excited about formal modeling and theorizing; (b) a division of labor in formal theorizing could be possible in which only one or a few team members possess the prerequisite modeling expertise; and (c) first working prototypes of a theoretical model can be created in a short period of time.