15 research outputs found
Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence ("professor") subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence ("soccer hooligans"). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%–3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and −0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the "professor" category and those primed with the "hooligan" category (0.14%) and no moderation by gender.
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37).
Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
Development and Validation of Consumers' Need for Ingredient Authenticity (CNIA Scale)
© 2018 Taylor & Francis. Concepts from country of origin, authenticity, and ingredient branding make up the essential literature for this scale development. This study develops a scale specifically to measure consumers' motivation to seek ingredient authenticity. While studies of authenticity have focused heavily on brands, this study aims to uncover consumers' motivations regarding the authenticity of a product's raw materials and artisan skills. Four studies were undertaken to develop and validate the scale, following Churchill's (1979) method of scale development. The scale-development methods and their implications are also highlighted.
Registered Replication Report on Mazar, Amir, and Ariely (2008)
The self-concept maintenance theory holds that many people will cheat in order to maximize self-profit, but only to the extent that they can do so while maintaining a positive self-concept. Mazar, Amir, and Ariely (2008, Experiment 1) gave participants an opportunity and incentive to cheat on a problem-solving task. Prior to that task, participants either recalled the Ten Commandments (a moral reminder) or recalled 10 books they had read in high school (a neutral task). Results were consistent with the self-concept maintenance theory. When given the opportunity to cheat, participants given the moral-reminder priming task reported solving 1.45 fewer matrices than did those given a neutral prime (Cohen's d = 0.48); moral reminders reduced cheating. Mazar et al.'s article is among the most cited in deception research, but their Experiment 1 has not been replicated directly. This Registered Replication Report describes the aggregated result of 25 direct replications (total N = 5,786), all of which followed the same preregistered protocol. In the primary meta-analysis (19 replications, total n = 4,674), participants who were given an opportunity to cheat reported solving 0.11 more matrices if they were given a moral reminder than if they were given a neutral reminder (95% confidence interval = [−0.09, 0.31]). This small effect was numerically in the opposite direction of the effect observed in the original study (Cohen's d = −0.04).
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
Ebersole, C. R.; Mathur, M. B.; Baranski, E.; Bart-Plange, D.-J.; Buttrick, N. R.; Chartier, C. R.; Corker, K. S.; Corley, M.; Hartshorne, J. K.; IJzerman, H.; Lazarevic, L. B.; Rabagliati, H.; Ropovik, I.; Aczel, B.; Aeschbach, L. F.; Andrighetto, L.; Arnal, J. D.; Arrow, H.; Babincak, P.; Bakos, B. E.; Banik, G.; Baskin, E.; Belopavlovic, R.; Bernstein, M. H.; Bialek, M.; Bloxsom, N. G.; Bodroza, B.; Bonfiglio, D. B. V.; Boucher, L.; Bruhlmann, F.; Brumbaugh, C. C.; Casini, E.; Chen, Y.; Chiorri, C.; Chopik, W. J.; Christ, O.; Ciunci, A. M.; Claypool, H. M.; Coary, S.; Colic, M. V.; Collins, W. M.; Curran, P. G.; Day, C. R.; Dering, B.; Dreber, A.; Edlund, J. E.; Falcao, F.; Fedor, A.; Feinberg, L.; Ferguson, I. R.; Ford, M.; Frank, M. C.; Fryberger, E.; Garinther, A.; Gawryluk, K.; Ashbaugh, K.; Giacomantonio, M.; Giessner, S. R.; Grahe, J. E.; Guadagno, R. E.; Halasa, E.; Hancock, P. J. B.; Hilliard, R. A.; Huffmeier, J.; Hughes, S.; Idzikowska, K.; Inzlicht, M.; Jern, A.; Jimenez-Leal, W.; Johannesson, M.; Joy-Gaba, J. A.; Kauff, M.; Kellier, D. J.; Kessinger, G.; Kidwell, M. C.; Kimbrough, A. M.; King, J. P. J.; Kolb, V. S.; Kolodziej, S.; Kovacs, M.; Krasuska, K.; Kraus, S.; Krueger, L. E.; Kuchno, K.; Lage, C. A.; Langford, E. V.; Levitan, C. A.; de Lima, T. J. S.; Lin, H.; Lins, S.; Loy, J. E.; Manfredi, D.; Markiewicz, L.; Menon, M.; Mercier, B.; Metzger, M.; Meyet, V.; Millen, A. E.; Miller, J. K.; Montealegre, A.; Moore, D. A.; Muda, R.; Nave, G.; Nichols, A. L.; Novak, S. A.; Nunnally, C.; Orlic, A.; Palinkas, A.; Panno, A.; Parks, K. P.; Pedovic, I.; Pekala, E.; Penner, M. R.; Pessers, S.; Petrovic, B.; Pfeiffer, T.; Pienkosz, D.; Preti, E.; Puric, D.; Ramos, T.; Ravid, J.; Razza, T. S.; Rentzsch, K.; Richetin, J.; Rife, S. C.; Rosa, A. D.; Rudy, K. H.; Salamon, J.; Saunders, B.; Sawicki, P.; Schmidt, K.; Schuepfer, K.; Schultze, T.; Schulz-Hardt, S.; Schutz, A.; Shabazian, A. N.; Shubella, R. L.; Siegel, A.; Silva, R.; Sioma, B.; Skorb, L.; de Souza, L. E. C.; Steegen, S.; Stein, L. A. R.; Sternglanz, R. W.; Stojilovic, D.; Storage, D.; Sullivan, G. B.; Szaszi, B.; Szecsi, P.; Szoke, O.; Szuts, A.; Thomae, M.; Tidwell, N. D.; Tocco, C.; Torka, A.-K.; Tuerlinckx, F.; Vanpaemel, W.; Vaughn, L. A.; Vianello, M.; Viganola, D.; Vlachou, M.; Walker, R. J.; Weissgerber, S. C.; Wichman, A. L.; Wiggins, B. J.; Wolf, D.; Wood, M. J.; Zealley, D.; Zezelj, I.; Zrubka, M.; Nosek, B. A.