Depressive Symptoms and Category Learning: A Preregistered Conceptual Replication Study
We present a fully preregistered, high-powered conceptual replication of Experiment 1 by Smith, Tracy, and Murray (1993). They observed a cognitive deficit in people with elevated depressive symptoms in a task requiring flexible analytic processing and deliberate hypothesis testing, but no deficit in a task assumed to require more automatic, holistic processing. Specifically, they found that individuals with depressive symptoms showed impaired performance on a criterial-attribute classification task, which requires flexible analysis of attributes and deliberate hypothesis testing, but not on a family-resemblance classification task, which is assumed to rely on holistic processing. While deficits in tasks requiring flexible hypothesis testing are commonly observed in people diagnosed with major depressive disorder, they are much less commonly observed in people with merely elevated depressive symptoms, so Smith et al.'s (1993) finding deserves further scrutiny. We observed no deficit in criterial-attribute task performance in people with above-average depressive symptoms. Rather, people with high and low depressive symptoms showed a similar difference in performance between the criterial-attribute and family-resemblance tasks. The absence of a deficit in people with elevated depressive symptoms is consistent with previous findings based on different tasks.
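For readers unfamiliar with the two task structures, the following minimal sketch illustrates the distinction. The stimuli and attribute coding are invented for illustration; they are not the materials used by Smith et al. (1993) or by this replication. The key contrast: a criterial-attribute category hinges on one necessary attribute, while a family-resemblance category is defined by overall similarity to a prototype.

```python
# Hypothetical illustration of the two classification structures.
# Stimuli are coded as binary attribute vectors; all values are invented.

CRITERIAL_INDEX = 0  # in a criterial-attribute task, one attribute is decisive

def criterial_attribute_member(stimulus: list[int]) -> bool:
    """Membership hinges on a single, necessary attribute."""
    return stimulus[CRITERIAL_INDEX] == 1

PROTOTYPE = [1, 1, 1, 1, 1]  # idealized category member

def family_resemblance_member(stimulus: list[int]) -> bool:
    """Membership depends on overall similarity: sharing a majority of
    attributes with the prototype, with no single attribute required."""
    shared = sum(s == p for s, p in zip(stimulus, PROTOTYPE))
    return shared > len(PROTOTYPE) / 2

# A stimulus lacking the criterial attribute can still belong to the
# family-resemblance category by resembling the prototype overall.
stimulus = [0, 1, 1, 1, 1]
print(criterial_attribute_member(stimulus))   # False
print(family_resemblance_member(stimulus))    # True
```

This contrast is what motivates the prediction tested above: the criterial-attribute rule must be discovered through deliberate hypothesis testing, whereas family-resemblance membership can be judged holistically.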
[37th] ANNUAL REPORT OF THE FACULTY OF THE COLLEGE OF THE CITY OF NEW YORK TO THE BOARD OF TRUSTEES, FOR THE YEAR ENDING JUNE 21, 1888.
Report fourteen from the sixth of ten bound volumes documenting in part the first nineteen years of The Free Academy, the predecessor of the City College of New York. COLLEGE OF THE CITY OF NEW YORK, 1856-96, REPORTS OF THE FACULTY II includes 21 individual reports. At a time when municipal education consisted of primary schooling, citizens united in response to arguments presented by merchant and Board of Education President Townsend Harris for an institution that would provide advanced training, enabling future generations of citizens to fully engage in the professions advantageous to an expanding urban center. The volume includes preliminary reports commenting on the application of resources for the creation of the institution, together with the annual reports of the faculty, demonstrating accountability to the Board of Education with regard to the operation of the facility. [6 pages ([325]-330), 1888], RG
The peer reviewers' openness initiative: Incentivising open research practices through peer review
Openness is one of the central values of science. Open scientific practices such as sharing data, materials, and analysis scripts alongside published articles have many benefits, including easier replication and extension studies, increased availability of data for theory building and meta-analysis, and increased possibility of review and collaboration even after a paper has been published. Although modern information technology makes sharing easier than ever before, uptake of open practices has been slow. We suggest this might be due in part to a social dilemma arising from misaligned incentives, and we propose a specific, concrete mechanism (reviewers withholding comprehensive review) to achieve the goal of creating the expectation of open practices as a matter of scientific principle.
The Replication Database: Documenting the Replicability of Psychological Science
In psychological science, replicability (repeating a study with a new sample and achieving consistent results; Parsons et al., 2022) is critical for affirming the validity of scientific findings. Despite its importance, replication efforts are few and far between in psychological science, and many attempts fail to corroborate past findings. This scarcity, compounded by the difficulty of accessing replication data, jeopardizes the efficient allocation of research resources and impedes scientific advancement. Addressing this crucial gap, we present the Replication Database (https://forrt-replications.shinyapps.io/fred_explorer), a novel platform hosting 1,239 original findings paired with replication findings. The infrastructure of this database allows researchers to submit, access, and engage with replication findings. The database makes replications visible and easily findable via a graphical user interface, and it tracks replication rates across various factors, such as publication year or journal. This will facilitate future efforts to evaluate the robustness of psychological research.
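As a rough illustration of the kind of aggregation such a database supports, here is a minimal sketch. The column names and records are invented for illustration and are not the actual schema of the Replication Database:

```python
import pandas as pd

# Invented example records; the columns are hypothetical, chosen only to
# mirror the factors the abstract mentions (publication year, journal).
findings = pd.DataFrame({
    "original_year": [1998, 2005, 2005, 2012, 2012],
    "journal": ["JPSP", "Psych Sci", "Psych Sci", "JPSP", "Cognition"],
    "replication_success": [False, True, False, True, True],
})

# Replication rate overall and broken down by publication year,
# the sort of summary a graphical interface over such data can expose.
overall_rate = findings["replication_success"].mean()
by_year = findings.groupby("original_year")["replication_success"].mean()

print(f"overall replication rate: {overall_rate:.0%}")
print(by_year)
```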
The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network
Source at https://doi.org/10.1177/2515245918797607. Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA's mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
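As a rough sanity check on the reported shrinkage, the medians quoted above imply a comparable (though not identical) reduction. The abstract's 78% figure averages shrinkage across the 10 effects, whereas the ratio of the two medians gives:

```latex
\[
\frac{r_{\text{orig}} - r_{\text{rep}}}{r_{\text{orig}}}
  = \frac{.37 - .07}{.37} \approx .81
\]
```

The two numbers need not agree exactly, since averaging per-study shrinkage and taking the ratio of medians are different aggregations of the same 10 effects.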
- …