20 research outputs found

    Measuring cost: The forgotten component of expectancy value theory

    Expectancy-Value Theory (EVT; Eccles et al., 1983) offers one of the most influential models for understanding motivation. One component of this theory, cost, can be defined as how much a student has to sacrifice to engage in a task. However, EVT researchers appear to have forgotten this component: though cost has been theorized as an important part of EVT, empirical work has neglected to measure and study it (Wigfield & Cambria, 2010). As a result, cost and its relationship with student outcomes are largely unknown (Wigfield & Eccles, 2000). The current study addresses this shortcoming in the literature by reviewing what is currently known about cost and proposing a new scale to measure it. Scale development for cost was an iterative process, guided by Benson’s framework for construct validation (Benson, 1998). The first iteration adopted a top-down approach, conducting an in-depth analysis of the history of EVT and its measurement in educational psychology, as well as cost-related constructs in other psychological literatures. I used theory and past literature to determine the initial theoretical structure of cost. In the second iteration of scale development, I adopted a bottom-up approach by evaluating data from an exploratory, qualitative study. In the final iteration, the content validity of the proposed scale was investigated using input from a panel of experts. The project concludes by offering 36 items to measure numerous components of cost, along with suggestions for future research to determine the structural and external validity of the scale.

    To which world regions does the valence–dominance model of social perception apply?

    Over the past 10 years, Oosterhof and Todorov’s valence–dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov’s methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov’s original analysis strategy, the valence–dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence–dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution.

    The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network

    Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.

    A multi-country test of brief reappraisal interventions on emotions during the COVID-19 pandemic.

    The COVID-19 pandemic has increased negative emotions and decreased positive emotions globally. Left unchecked, these emotional changes might have a wide array of adverse impacts. To reduce negative emotions and increase positive emotions, we tested the effectiveness of reappraisal, an emotion-regulation strategy that modifies how one thinks about a situation. Participants from 87 countries and regions (n = 21,644) were randomly assigned to one of two brief reappraisal interventions (reconstrual or repurposing) or one of two control conditions (active or passive). Results revealed that both reappraisal interventions (versus both control conditions) consistently reduced negative emotions and increased positive emotions across different measures. Reconstrual and repurposing interventions had similar effects. Importantly, planned exploratory analyses indicated that reappraisal interventions did not reduce intentions to practice preventive health behaviours. The findings demonstrate the viability of creating scalable, low-cost interventions for use around the world.

    Strengthening the foundation of educational psychology by integrating construct validation into open science reform

    An increased focus on transparency and replication in science has stimulated reform in research practices and dissemination. As a result, the research culture is changing: the use of preregistration is on the rise, access to data and materials is increasing, and large-scale replication studies are more common. In this paper, I discuss two problems the methodological reform movement is now ready to tackle, given the progress made thus far, and explain how educational psychology is particularly well suited to contribute. The first problem is a lack of transparency and rigor in measurement development and use. The second problem is caused by the first: replication research is difficult, and potentially futile, as long as the first problem persists. I describe how to expand transparent practices into measure use and how construct validation can be implemented to bolster the validity of replication studies.

    Measurement Invariance Testing Using Confirmatory Factor Analysis and Alignment Optimization: A Tutorial for Transparent Analysis Planning and Reporting

    Measurement invariance—the notion that the measurement properties of a scale are equal across groups, contexts, or time—is an important assumption underlying much of psychology research. The traditional approach for evaluating measurement invariance is to fit a series of nested measurement models using multiple-group confirmatory factor analyses. However, traditional approaches are strict, vary across the field in implementation, and present multiplicity challenges, even in the simplest case of two groups under study. The alignment method was recently proposed as an alternative approach. This method is more automated, requires fewer decisions from researchers, and accommodates two or more groups. However, its assumptions, estimation techniques, and limitations differ from those of traditional approaches. To address the lack of accessible resources explaining the methodological differences and complexities between the two approaches, we introduce and illustrate both, comparing them side by side. First, we overview the concepts, assumptions, advantages, and limitations of each approach. Based on this overview, we propose a list of four key considerations to help researchers decide which approach to choose and how to document their analytical decisions in a preregistration or analysis plan. We then demonstrate our key considerations on an illustrative research question using an open dataset and provide an example of a completed preregistration. Our illustrative example is accompanied by an annotated analysis report that shows readers, step by step, how to conduct measurement invariance tests using R and Mplus. Finally, we provide recommendations for how to decide between and use each approach, along with next steps for methodological research.

    Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them

    In this paper, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of their measures and, ultimately, the validity of their study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, yet are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.