Structural validity evidence for the Oxford Utilitarianism Scale across 15 languages
Background: The Psychological Science Accelerator (PSA) recently completed a large-scale moral psychology study using translated versions of the Oxford Utilitarianism Scale (OUS). However, the translated versions have no validity evidence. Objective: The study investigated the structural validity evidence of the OUS across 15 translated versions and produced version-specific validity reports. Methods: We analyzed OUS data from the PSA, which was collected internationally through a centralized online questionnaire. We also collected qualitative feedback from experts for each translated version. Results: For each version, we produced a psychometric report that includes (1) descriptive item and demographic analyses, (2) factor structure evidence from confirmatory factor analyses, (3) measurement invariance testing across languages using multiple-group confirmatory factor analyses and alignment optimization, and (4) reliability analyses using coefficients α and ω.
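Of the analyses listed above, coefficient α is the easiest to make concrete. Below is a minimal sketch in Python with NumPy of how α is computed from an item-response matrix; the function and variable names are illustrative, and the random responses are placeholders rather than the study's data (the published reports would have been produced with dedicated psychometric software).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder data: 200 hypothetical respondents on the 9 OUS items (7-point scale).
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(200, 9))
print(round(cronbach_alpha(responses), 3))
```

Coefficient ω additionally requires estimated factor loadings from a fitted factor model, which is why it is typically reported alongside the confirmatory factor analyses rather than computed directly from raw item scores.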
An Official American Thoracic Society/European Respiratory Society Statement: Update of the International Multidisciplinary Classification of the Idiopathic Interstitial Pneumonias
Background: In 2002 the American Thoracic Society/European Respiratory Society (ATS/ERS) classification of idiopathic interstitial pneumonias (IIPs) defined seven specific entities, and provided standardized terminology and diagnostic criteria. In addition, the historical "gold standard" of histologic diagnosis was replaced by a multidisciplinary approach. Since 2002 many publications have provided new information about IIPs. Purpose: The objective of this statement is to update the 2002 ATS/ERS classification of IIPs. Methods: An international multidisciplinary panel was formed and developed key questions that were addressed through a review of the literature published between 2000 and 2011. Results: Substantial progress has been made in IIPs since the previous classification. Nonspecific interstitial pneumonia is now better defined. Respiratory bronchiolitis-interstitial lung disease is now commonly diagnosed without surgical biopsy. The clinical course of idiopathic pulmonary fibrosis and nonspecific interstitial pneumonia is recognized to be heterogeneous. Acute exacerbation of IIPs is now well defined. A substantial percentage of patients with IIP are difficult to classify, often due to mixed patterns of lung injury. A classification based on observed disease behavior is proposed for patients who are difficult to classify or for entities with heterogeneity in clinical course. A group of rare entities, including pleuroparenchymal fibroelastosis and rare histologic patterns, is introduced. The rapidly evolving field of molecular markers is reviewed with the intent of promoting additional investigations that may help in determining diagnosis, and potentially prognosis and treatment. Conclusions: This update is a supplement to the previous 2002 IIP classification document. It outlines advances in the past decade and potential areas for future investigation.
Examining the generalizability of research findings from archival data
This initiative examined systematically the extent to which a large set of archival research findings generalizes across contexts. We repeated the key analyses for 29 original strategic management effects in the same context (direct reproduction) as well as in 52 novel time periods and geographies; 45% of the direct reproductions returned results matching the original reports, as did 55% of tests in different spans of years and 40% of tests in novel geographies. Some original findings were associated with multiple new tests. Reproducibility was the best predictor of generalizability: for the findings that proved directly reproducible, 84% emerged in other available time periods and 57% emerged in other geographies. Overall, only limited empirical evidence emerged for context sensitivity. In a forecasting survey, independent scientists were able to anticipate which effects would find support in tests in new samples.
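The conditional rates reported above (e.g., 84% of directly reproducible findings also emerged in other time periods) amount to grouping findings by whether they reproduced and averaging generalization indicators within each group. A hypothetical sketch in Python with pandas, with invented column names and placeholder rows rather than the study's actual data:

```python
import pandas as pd

# Hypothetical findings table: one row per original effect, with boolean
# outcomes for the direct reproduction and the two generalization tests.
results = pd.DataFrame({
    "effect_id":  [1, 2, 3, 4],                # placeholder rows only
    "reproduced": [True, True, False, False],  # matched the original report?
    "new_period": [True, True, False, True],   # emerged in a new time span?
    "new_geo":    [True, False, False, False], # emerged in a new geography?
})

# Generalization rates conditional on direct reproducibility:
rates = results.groupby("reproduced")[["new_period", "new_geo"]].mean()
print(rates)
```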
The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network
Source at https://doi.org/10.1177/2515245918797607. Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA's mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
Creative destruction in science
Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions.
Significance statement
It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
Scientific transparency statement
The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.
Raising the Bar: Improving Methodological Rigour in Cognitive Alcohol Research
Background and Aims: A range of experimental paradigms claim to measure the cognitive processes underpinning alcohol use, suggesting that heightened attentional bias, greater approach tendencies and reduced cue-specific inhibitory control are important drivers of consumption. This paper identifies methodological shortcomings within this broad domain of research and exemplifies them in studies focused specifically on alcohol-related attentional bias. Argument and analysis: We highlight five main methodological issues: (i) the use of inappropriately matched control stimuli; (ii) opacity of stimulus selection and validation procedures; (iii) a credence in noisy measures; (iv) a reliance on unreliable tasks; and (v) variability in design and analysis. This is evidenced through a review of alcohol-related attentional bias (64 empirical articles, 68 tasks), which reveals the following: only 53% of tasks use appropriately matched control stimuli; as few as 38% report their stimulus selection procedures and 19% their validation procedures; fewer than 28% use indices capable of disambiguating attentional processes; 22% assess reliability; and under 2% of studies were pre-registered. Conclusions: Well-matched and validated experimental stimuli, the development of reliable cognitive tasks and explicit assessment of their psychometric properties, and careful consideration of behavioural indices and their analysis will improve the methodological rigour of cognitive alcohol research. Open science principles can facilitate replication and reproducibility in alcohol research.
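To make the reliability point concrete: for a reaction-time paradigm such as the visual dot-probe, a basic check is a split-half estimate of the bias index. The sketch below, in Python with pandas, assumes a hypothetical trial-level data layout (the column names are invented) and applies the standard odd/even split with the Spearman-Brown correction.

```python
import pandas as pd

def split_half_reliability(trials: pd.DataFrame) -> float:
    # trials: one row per trial, with columns 'subject', 'trial' (trial index),
    # 'cue' ('alcohol' or 'control'), and 'rt' in milliseconds (all hypothetical).
    halves = {}
    for half, sub in trials.groupby(trials["trial"] % 2):   # even vs odd trials
        means = sub.pivot_table(index="subject", columns="cue", values="rt")
        halves[half] = means["control"] - means["alcohol"]  # bias score per subject
    r = halves[0].corr(halves[1])   # correlation of the two half-scores
    return 2 * r / (1 + r)          # Spearman-Brown correction to full length
```

Resampling many random splits rather than a single odd/even split gives a more stable estimate, but the version above shows the logic of the check that most of the reviewed studies omit.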
Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis
In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists' gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
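The multiverse logic referenced above can be sketched compactly: cross the alternative operationalizations with other analytic choices, fit one model per specification, and inspect the distribution of effect estimates. A hypothetical sketch in Python with statsmodels (the formulas, column names, and input DataFrame are invented for illustration; this is not the study's code):

```python
import itertools
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical alternative operationalizations of professional status,
# crossed with hypothetical covariate sets.
STATUS_MEASURES = ["job_title_rank", "inst_ranking", "citation_count"]
COVARIATE_SETS = ["", " + meeting_size", " + meeting_size + seniority"]

def run_multiverse(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per speaker-meeting, with 'verbosity', 'gender' (coded 0/1),
    # and the candidate status/covariate columns named above.
    rows = []
    for status, covs in itertools.product(STATUS_MEASURES, COVARIATE_SETS):
        fit = smf.ols(f"verbosity ~ gender + {status}{covs}", data=df).fit()
        rows.append({
            "status_measure": status,
            "covariates": covs.strip(" +") or "none",
            "gender_coef": fit.params["gender"],
            "p_value": fit.pvalues["gender"],
        })
    return pd.DataFrame(rows)   # one row per specification
```

Plotting the gender coefficient across the rows of the returned frame yields a specification curve, making it visible when different defensible choices flip the sign of the effect, as the 29 analyst teams did here.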