15 research outputs found

    Depressive Symptoms and Category Learning: A Preregistered Conceptual Replication Study

    We present a fully preregistered, high-powered conceptual replication of Experiment 1 of Smith, Tracy, and Murray (1993). They observed a cognitive deficit in people with elevated depressive symptoms on a task requiring flexible analytic processing and deliberate hypothesis testing, but no deficit on a task assumed to rely on more automatic, holistic processing. Specifically, individuals with depressive symptoms showed impaired performance on a criterial-attribute classification task, which requires flexible analysis of the attributes and deliberate hypothesis testing, but not on a family-resemblance classification task, assumed to rely on holistic processing. While deficits in tasks requiring flexible hypothesis testing are commonly observed in people diagnosed with major depressive disorder, they are much less commonly observed in people with merely elevated depressive symptoms, so Smith et al.'s (1993) finding deserves further scrutiny. We observed no deficit on the criterial-attribute task in people with above-average depressive symptoms. Rather, people with high and low depressive symptoms showed a similar difference in performance between the criterial-attribute and family-resemblance tasks. The absence of a deficit in people with elevated depressive symptoms is consistent with previous findings focusing on different tasks.
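    To make the two task structures concrete, the sketch below shows abstract category structures of the kind typically used in such studies; the binary-attribute coding and the specific exemplars are illustrative assumptions, not the stimuli of Smith, Tracy, and Murray (1993).

        # Hypothetical binary-attribute stimuli (illustrative, not the original materials).
        # Each exemplar is a tuple of four binary attributes.

        # Criterial-attribute structure: one attribute (here the first) perfectly
        # predicts category membership, so learners must isolate it through
        # deliberate hypothesis testing.
        criterial_A = [(1, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1)]
        criterial_B = [(0, 1, 1, 0), (0, 0, 1, 1), (0, 1, 0, 0)]

        # Family-resemblance structure: no single attribute is perfectly
        # predictive; members merely share most attributes with their
        # prototype, (1, 1, 1, 1) or (0, 0, 0, 0), so overall (holistic)
        # similarity suffices.
        family_A = [(1, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1)]
        family_B = [(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)]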

    [37th] ANNUAL REPORT OF THE FACULTY OF THE COLLEGE OF THE CITY OF NEW YORK TO THE BOARD OF TRUSTEES, FOR THE YEAR ENDING JUNE 21, 1888.

    Report fourteen from the sixth bound volume of ten, which documents in part the first nineteen years of The Free Academy, the predecessor of the City College of New York. COLLEGE OF THE CITY OF NEW YORK, 1856-96, REPORTS OF THE FACULTY II includes 21 individual reports. At a time when municipal education consisted of primary schooling, citizens united in response to arguments presented by merchant and Board of Education President Townsend Harris for an institution that would provide advanced training, enabling future generations of citizens to engage fully in the professions advantageous to an expanding urban center. Includes preliminary reports commenting on the application of resources for the creation of the institution, and the annual reports of the faculty, demonstrating accountability to the Board of Education with regard to the operation of the facility. [6 pages ([325]-330), 1888]

    A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE)

    The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., "Art gave you the pen" describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.

    Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).

    Towards better research practices in psychology

    Psychology faces a deep crisis of confidence and is at risk of losing its credibility. Researchers are being criticized for the way they conduct studies, analyze data, and report results. In response to these concerns about research quality, several recommendations have been made for overcoming the problem. The goal of this dissertation is to contribute to this enterprise of improving research quality in the field of psychology. In Chapter 1, we carry out a replication study, implementing the most commonly made recommendation for good research practices. In particular, we aim to replicate the crowd within effect, according to which the average of two guesses from one person provides a better estimate than either guess on its own. We also tried to make this an exemplary study, in the sense that we attempted to follow most recommended good research practices when carrying out the study. In Chapter 2, we extend the class of recommendations that focus on transparency by highlighting the importance of increased transparency about arbitrary choices in data processing. We start from the observation that processing raw data into a data file ready for analysis often involves arbitrary choices among several reasonable options for excluding, transforming, and coding data. Using a worked example focusing on the effect of fertility on religiosity and political attitudes, we show that these arbitrary choices can lead to widely fluctuating results. We suggest that instead of performing only one analysis, researchers perform a multiverse analysis: the same analysis is run across the whole set of alternatively processed data sets corresponding to a large set of reasonable scenarios. A multiverse analysis offers an idea of how much the conclusions change because of arbitrary choices in data processing and gives pointers as to which choices are most consequential for the fragility of the result. Chapters 3 and 4 cover topics concerning Bayes factors, which are being advocated as a Bayesian alternative to null hypothesis significance testing. In Chapter 3, we compare the Bayes factor with an alternative Bayesian model selection method: the Prior Information Criterion (PIC), a recently developed method that closely resembles the Bayes factor. We compare the two methods' behavior in the context of the binomial model and derive formal relations between them. We show that the PIC can lead to conclusions that not only differ widely from conclusions based on the Bayes factor, but are also highly undesirable. Finally, in Chapter 4, we extend the core idea of Bayes factors - considering average fit rather than best fit - to qualitative data. Whereas Bayes factors focus on fit with respect to the quantitative aspects of the data, psychologists are often interested in qualitative aspects of the data, such as ordinal patterns. We explore the potential of Parameter Space Partitioning - a model evaluation tool that focuses on qualitative data patterns - as a model selection method, focusing on average model fit with respect to the qualitative aspects of the data.
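    To make the multiverse idea concrete, here is a minimal sketch of such an analysis in Python; the data, the processing options, and all variable names are hypothetical assumptions for illustration, not the dissertation's own materials. The point is simply that every defensible combination of processing choices is analyzed, not just one.

        # Minimal multiverse-analysis sketch (illustrative; all options hypothetical).
        import itertools
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 200
        raw = {"x": rng.normal(size=n),
               "y": rng.normal(size=n),
               "rt": rng.lognormal(mean=6.0, sigma=0.5, size=n)}

        # Each processing step offers several reasonable options.
        exclusion_rules = {
            "none": lambda d: np.ones(n, dtype=bool),
            "rt < 2000 ms": lambda d: d["rt"] < 2000,
            "rt within 3 SD": lambda d: np.abs(stats.zscore(d["rt"])) < 3,
        }
        transforms = {
            "raw": lambda v: v,
            "shifted log": lambda v: np.log(v - v.min() + 1),
        }

        # Run the same analysis on every alternatively processed data set.
        for (ex_name, ex), (tr_name, tr) in itertools.product(
                exclusion_rules.items(), transforms.items()):
            keep = ex(raw)
            r, p = stats.pearsonr(tr(raw["x"][keep]), raw["y"][keep])
            print(f"exclusion={ex_name!r}, transform={tr_name!r}: "
                  f"r = {r:+.3f}, p = {p:.3f}")

    Tabulating or plotting the resulting grid of estimates shows at a glance how fragile the conclusion is and which processing choices drive the fluctuations.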

    Using parameter space partitioning to evaluate a model's qualitative fit

    Parameter space partitioning (PSP) is a versatile tool for model analysis that detects the qualitatively distinct data patterns a model can generate and partitions the model's parameter space into regions corresponding to these patterns. In this paper, we propose a PSP fit measure that summarizes the outcome of a PSP analysis in a single number, which can be used for model selection. In contrast to traditional model selection methods, PSP-based model selection focuses on qualitative data. We demonstrate PSP-based model selection with application examples from the area of category learning. A large-scale model recovery study reveals excellent recovery properties, suggesting that PSP fit is useful for model selection.
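    The core idea can be sketched as follows; the toy model, the choice of ordinal patterns, and the uniform sampling are simplifying assumptions for illustration (PSP proper explores the space with an adaptive search), not the paper's implementation.

        # Sketch of the parameter-space-partitioning idea (illustrative).
        import numpy as np

        def predict_means(a, b):
            """Toy two-parameter model predicting means in three conditions."""
            return np.array([a, a + b, a * b])

        def ordinal_pattern(means):
            """Qualitative data pattern: the rank order of predicted means."""
            return tuple(np.argsort(means))

        # Crude uniform sampling of the parameter space.
        rng = np.random.default_rng(1)
        samples = rng.uniform(0.0, 2.0, size=(20_000, 2))

        counts = {}
        for a, b in samples:
            pattern = ordinal_pattern(predict_means(a, b))
            counts[pattern] = counts.get(pattern, 0) + 1

        # Each pattern's share of the sampled parameter volume; a single
        # summary of how much volume sits on the empirically observed
        # pattern is the kind of fit measure proposed here.
        for pattern, c in sorted(counts.items(), key=lambda kv: -kv[1]):
            print(pattern, c / len(samples))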

    Measuring the crowd within again: a pre-registered replication study

    According to the crowd within effect, the average of two estimates from one person tends to be more accurate than a single estimate from that person. The effect implies that the well-documented wisdom of the crowd effect - the crowd's average estimate tends to be more accurate than the individual estimates - can be obtained within a single individual. In this paper, we report a high-powered, pre-registered replication of the original experiment. Our replication results are evaluated with the traditional null hypothesis significance testing approach, as well as with effect sizes and their confidence intervals. We adopted a co-pilot approach, in the sense that all analyses were performed independently by two researchers using different analysis software. Moreover, we report Bayes factors for all tests. We successfully replicated the crowd within effect, both when the second guess was made immediately after the first and when it was made three weeks later. The experimental protocol, the raw data, the post-processed data, and the analysis code are available online.
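    The statistical logic behind the effect is easy to simulate; the numbers below are hypothetical illustrations, not the study's data. Two guesses from the same person share a stable bias but carry partly independent noise, so averaging them cancels some of the noise.

        # Minimal simulation of the crowd-within logic (hypothetical numbers).
        import numpy as np

        rng = np.random.default_rng(2)
        truth = 100.0
        n_people = 100_000

        bias = rng.normal(0, 10, n_people)      # stable per-person bias
        noise1 = rng.normal(0, 10, n_people)    # guess-specific noise
        noise2 = rng.normal(0, 10, n_people)
        guess1 = truth + bias + noise1
        guess2 = truth + bias + noise2
        average = (guess1 + guess2) / 2

        def mse(g):
            return np.mean((g - truth) ** 2)

        print(mse(guess1))   # ~200: bias variance + full noise variance
        print(mse(average))  # ~150: bias variance + half the noise variance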

    Outcome probability modulates anticipatory behavior to signals that are equally reliable

    A stimulus is a reliable signal of an outcome when the probability that the outcome occurs in its presence differs from the probability that it occurs in its absence. Reliable signals of important outcomes trigger critical anticipatory or preparatory behavior, that is, any form of behavior that prepares the organism to receive a biologically significant event. Previous research has shown that humans and other animals prepare more for outcomes that occur in the presence of highly reliable (i.e., highly contingent) signals, that is, those for which that difference in probabilities is larger. However, it seems reasonable to expect that, all other things being equal, the probability with which the outcome follows the signal should also affect preparatory behavior. In the present experiment with humans, we used two signals that were differentially followed by the outcome but were equally (and relatively weakly) reliable. The dependent variable was preparatory behavior in a Martians video game. Participants prepared more for the outcome (a Martian invasion) when the outcome was more probable. These results indicate that the probability of the outcome can bias preparatory behavior to occur with different intensities despite identical outcome signaling.
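    The reliability measure implied here is the contingency Δp = P(outcome | signal) - P(outcome | no signal). A worked example with hypothetical probabilities, chosen to mirror the design described rather than taken from the paper, shows how two signals can be equally reliable yet differ in outcome probability.

        # Two signals with equal (and relatively weak) contingency but
        # different outcome probabilities (hypothetical numbers).

        def delta_p(p_outcome_given_signal, p_outcome_given_no_signal):
            """Contingency: how much the signal raises the outcome probability."""
            return p_outcome_given_signal - p_outcome_given_no_signal

        dp_a = delta_p(0.7, 0.5)   # signal A: outcome usually follows it
        dp_b = delta_p(0.3, 0.1)   # signal B: outcome rarely follows it

        print(round(dp_a, 9), round(dp_b, 9))  # 0.2 0.2: equally reliable
        # Yet P(outcome | A) = .7 while P(outcome | B) = .3, so preparing
        # more after A reflects outcome probability, not signal reliability.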