A randomized trial of a lab-embedded discourse intervention to improve research ethics.
We report a randomized trial of a research ethics training intervention designed to enhance ethics communication in university science and engineering laboratories, focusing specifically on authorship and data management. The intervention is a project-based research ethics curriculum designed to enhance the ability of science and engineering research laboratory members to engage in the reason giving and interpersonal communication necessary for ethical practice. The randomized trial was fielded in active faculty-led laboratories at two US research-intensive institutions. Here, we show that laboratory members perceived improvements in the quality of discourse on research ethics within their laboratories and enhanced awareness of the relevance of, and reasons for, that discourse for their work, as measured by a survey administered more than 4 months after the intervention. This training represents a paradigm shift compared with more typical module-based or classroom ethics instruction, which is divorced from the everyday workflow and practices within laboratories; it is designed to cultivate a campus culture of ethical science and engineering research in the very work settings where laboratory members interact.
The economic well-being of nations is associated with positive daily situational experiences
People in economically advantaged nations tend to evaluate their life as more positive overall and report greater well-being than people in less advantaged nations. But how does positivity manifest in the daily life experiences of individuals around the world? The present study asked 15,244 college students from 62 nations, in 42 languages, to describe a situation they experienced the previous day using the Riverside Situational Q-sort (RSQ). Using expert ratings, the overall positivity of each situation was calculated for both nations and individuals. The positivity of the average situation in each nation was strongly related to the economic development of the nation as measured by the Human Development Index (HDI). For individuals' daily experiences, the economic status of their nation also predicted the positivity of their experience, even more than their family socioeconomic status. Further analyses revealed the specific characteristics of the average situations for higher HDI nations that make their experiences more positive. Higher HDI was associated with situational experiences involving humor, socializing with others, and the potential to express emotions and fantasies. Lower HDI was associated with an increase in the presence of threats, blame, and hostility, as well as situational experiences consisting of family, religion, and money. Despite the increase in a few negative situational characteristics in lower HDI countries, the overall average experience still ranged from neutral to slightly positive, rather than negative, suggesting that greater HDI may not necessarily increase positive experiences but rather decrease negative experiences. The results illustrate how national economic status influences the lives of individuals even within a single instance of daily life, with large and powerful consequences when accumulated across individuals within each nation.
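The aggregation described above, from expert-rated situations up to nation-level averages, can be sketched in a few lines. This is a minimal illustration with made-up nation names and scores, not the study's data or the actual RSQ scoring procedure (which derives positivity from expert ratings of the full Q-sort):

```python
from statistics import mean

# Hypothetical expert-rated positivity scores, one per described
# situation, grouped by the respondent's nation (illustrative values).
situation_positivity = {
    "Nation A": [0.8, 0.6, 0.7, 0.9],
    "Nation B": [0.2, 0.4, 0.3, 0.5],
}

# Nation-level positivity: the mean positivity of the situations
# reported by that nation's respondents.
nation_positivity = {
    nation: mean(scores)
    for nation, scores in situation_positivity.items()
}

for nation, score in sorted(nation_positivity.items()):
    print(f"{nation}: {score:.2f}")
```

Nation-level scores like these could then be correlated with an external index such as the HDI.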
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37).
Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
Additional co-authors: Ivan Ropovik, Balazs Aczel, Lena F. Aeschbach, Luca Andrighetto, Jack D. Arnal, Holly Arrow, Peter Babincak, Bence E. Bakos, Gabriel Baník, Ernest Baskin, Radomir Belopavlovic, Michael H. Bernstein, Michał Białek, Nicholas G. Bloxsom, Bojana Bodroža, Diane B. V. Bonfiglio, Leanne Boucher, Florian Brühlmann, Claudia C. Brumbaugh, Erica Casini, Yiling Chen, Carlo Chiorri, William J. Chopik, Oliver Christ, Antonia M. Ciunci, Heather M. Claypool, Sean Coary, Marija V. Čolić, W. Matthew Collins, Paul G. Curran, Chris R. Day, Anna Dreber, John E. Edlund, Filipe Falcão, Anna Fedor, Lily Feinberg, Ian R. Ferguson, Máire Ford, Michael C. Frank, Emily Fryberger, Alexander Garinther, Katarzyna Gawryluk, Kayla Ashbaugh, Mauro Giacomantonio, Steffen R. Giessner, Jon E. Grahe, Rosanna E. Guadagno, Ewa Hałasa, Rias A. Hilliard, Joachim Hüffmeier, Sean Hughes, Katarzyna Idzikowska, Michael Inzlicht, Alan Jern, William Jiménez-Leal, Magnus Johannesson, Jennifer A. Joy-Gaba, Mathias Kauff, Danielle J. Kellier, Grecia Kessinger, Mallory C. Kidwell, Amanda M. Kimbrough, Josiah P. J. King, Vanessa S. Kolb, Sabina Kołodziej, Marton Kovacs, Karolina Krasuska, Sue Kraus, Lacy E. Krueger, Katarzyna Kuchno, Caio Ambrosio Lage, Eleanor V. Langford, Carmel A. Levitan, Tiago Jessé Souza de Lima, Hause Lin, Samuel Lins, Jia E. Loy, Dylan Manfredi, Łukasz Markiewicz, Madhavi Menon, Brett Mercier, Mitchell Metzger, Venus Meyet, Jeremy K. Miller, Andres Montealegre, Don A. Moore, Rafał Muda, Gideon Nave, Austin Lee Nichols, Sarah A. Novak, Christian Nunnally, Ana Orlic, Anna Palinkas, Angelo Panno, Kimberly P. Parks, Ivana Pedovic, Emilian Pekala, Matthew R. Penner, Sebastiaan Pessers, Boban Petrovic, Thomas Pfeiffer, Damian Pienkosz, Emanuele Preti, Danka Puric, Tiago Ramos, Jonathan Ravid, Timothy S. Razza, Katrin Rentzsch, Juliette Richetin, Sean C. Rife, Anna Dalla Rosa, Kaylis Hase Rudy, Janos Salamon, Blair Saunders, Przemysław Sawicki, Kathleen Schmidt, Kurt Schuepfer, Thomas Schultze, Stefan Schulz-Hardt, Astrid Schütz, Ani N. Shabazian, Rachel L. Shubella, Adam Siegel, Rúben Silva, Barbara Sioma, Lauren Skorb, Luana Elayne Cunha de Souza, Sara Steegen, L. A. R. Stein, R. Weylin Sternglanz, Darko Stojilovic, Daniel Storage, Gavin Brent Sullivan, Barnabas Szaszi, Peter Szecsi, Orsolya Szöke, Attila Szuts, Manuela Thomae, Natasha D. Tidwell, Carly Tocco, Ann-Kathrin Torka, Francis Tuerlinckx, Wolf Vanpaemel, Leigh Ann Vaughn, Michelangelo Vianello, Domenico Viganola, Maria Vlachou, Ryan J. Walker, Sophia C. Weissgerber, Aaron L. Wichman, Bradford J. Wiggins, Daniel Wolf, Michael J. Wood, David Zealley, Iris Žeželj, Mark Zrubka, and Brian A. Nosek
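The headline comparison in this abstract reduces to descriptive statistics over per-effect correlations. A minimal sketch follows; the per-study r values below are illustrative (chosen only to match the reported medians), not the study's data, and the shrinkage computed here compares medians, whereas the abstract's 78% figure averages per-study shrinkage:

```python
from statistics import median

# Illustrative per-effect correlations (r); made up to match the
# reported medians, not taken from the study.
original_rs    = [0.19, 0.25, 0.33, 0.37, 0.41, 0.44, 0.50]
replication_rs = [0.00, 0.03, 0.05, 0.07, 0.09, 0.12, 0.15]

med_orig = median(original_rs)     # middle value: 0.37
med_rep = median(replication_rs)   # middle value: 0.07

# Relative shrinkage of the median effect size.
shrinkage = 1 - med_rep / med_orig
print(f"median original r = {med_orig:.2f}, "
      f"median replication r = {med_rep:.2f}, "
      f"shrinkage = {shrinkage:.0%}")
```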
Happiness around the world: A combined etic-emic approach across 63 countries.
What does it mean to be happy? The vast majority of cross-cultural studies on happiness have employed a Western-origin, or "WEIRD," measure of happiness that conceptualizes it as a self-centered (or "independent"), high-arousal emotion. However, research from Eastern cultures, particularly Japan, conceptualizes happiness as including an interpersonal aspect emphasizing harmony and connectedness to others. Following a combined emic-etic approach (Cheung, van de Vijver & Leong, 2011), we assessed the cross-cultural applicability of a measure of independent happiness developed in the US (Subjective Happiness Scale; Lyubomirsky & Lepper, 1999) and a measure of interdependent happiness developed in Japan (Interdependent Happiness Scale; Hitokoto & Uchida, 2015), with data from 63 countries representing 7 sociocultural regions. Results indicate that the schema of independent happiness was more coherent in more WEIRD countries. In contrast, the coherence of interdependent happiness was unrelated to a country's "WEIRD-ness." Reliabilities of both happiness measures were lowest in African and Middle Eastern countries, suggesting these two conceptualizations of happiness may not be globally comprehensive. Overall, while the two measures had many similar correlates and properties, the self-focused concept of independent happiness is "WEIRD-er" than interdependent happiness, suggesting cross-cultural researchers should attend to both conceptualizations.
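The scale reliabilities mentioned above are conventionally internal-consistency estimates such as Cronbach's alpha. A minimal sketch of that computation on hypothetical Likert responses (the data below are invented, not from the study):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists of equal length.

    items: list of k lists, one per scale item, each holding that
    item's scores across all respondents.
    """
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    # Total scale score per respondent (transpose, then sum).
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Four respondents answering a three-item scale (hypothetical).
responses = [
    [4, 5, 6, 3],  # item 1 across respondents
    [4, 6, 6, 2],  # item 2
    [5, 5, 7, 3],  # item 3
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Low alpha in a region, as reported for the African and Middle Eastern samples, would indicate that the scale's items do not hang together as a single construct there.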
Volitional Personality Change Across 58 Countries
Recent research suggests that the majority of individuals residing in the US are currently trying to change an aspect of their personalities, and these attempts are related to current personality trait levels. Yet to be understood is how these trends vary within the US and across countries. The current dissertation investigated volitional personality change (VPC) in terms of who is trying to change and what exactly they are trying to change. Using a custom-made website, 14,227 participants from six US states and 58 countries reported whether they were currently trying to change their personality and provided open-ended descriptions of what they were trying to change. Results indicated that, on average, 63.54% of individuals around the world report VPC. Furthermore, individuals who have high levels of negative emotionality and low levels of happiness report VPC. Countries with high employment rates and low self-reported health tend to have high proportions of VPC. Finally, there was a near-uniform tendency across states and countries for individuals to report trying to change undesirable aspects of their personalities (e.g., those with low levels of extraversion reported trying to increase their extraversion). These findings suggest that the majority of individuals across the United States and around the world report VPC attempts and that these attempts may be motivated by current low levels of socially desired traits and a subsequent desire for self-improvement.