Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
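The "cumulative evidence" estimates summarized above come from meta-analytically pooling each original study with its replications. As a rough illustration only, and not the authors' actual analysis code, the sketch below shows a standard fixed-effect pooling of correlations via the Fisher z transformation; the (r, n) values are invented placeholders, not the Many Labs 5 data.

    import math

    def fisher_z(r):
        # Fisher z-transform of a correlation coefficient
        return 0.5 * math.log((1 + r) / (1 - r))

    def pooled_correlation(studies):
        # Fixed-effect pooling of (r, n) pairs: each Fisher-z value is
        # weighted by n - 3, the inverse of its sampling variance.
        weights = [n - 3 for _, n in studies]
        zs = [fisher_z(r) for r, _ in studies]
        z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
        se = 1.0 / math.sqrt(sum(weights))
        ci = (math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se))
        return math.tanh(z_bar), ci

    # Invented illustrative values: one "original" study plus three
    # replication attempts (NOT the actual Many Labs 5 data).
    studies = [(0.37, 90), (0.11, 120), (0.04, 1200), (0.05, 1300)]
    r_pooled, ci = pooled_correlation(studies)
    print(f"pooled r = {r_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")

Because the large replication samples dominate the weights, the pooled estimate in such a sketch lands close to the small replication effects, which mirrors the qualitative pattern the abstract describes.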
Solitary fibrous tumours of the infratemporal fossa. Two case reports
Introduction: The solitary fibrous tumour is a rare neoplasm originally described as a pleural tumour. An increasing number of other locations have been described in the literature. Among the extrapulmonary sites, the head and neck can be involved, particularly the nose, the paranasal sinuses, the submandibular region, the parapharyngeal space and the infratemporal fossa. Material: Two cases, one in a young woman and another in an elderly gentleman, are reported, each presenting with a solitary fibrous tumour of the infratemporal fossa. In one case an antero-lateral transcranio-facial approach was used, and in the other a transmandibular approach (without labiotomy). In both cases complete excision of the lesion and good cosmetic results were achieved. Results: Both patients were free from the disease for 5 postoperatively. Conclusions: To date, too few cases of solitary fibrous tumour of the craniofacial complex have been observed to enable an accurate prognosis. Treatment and follow-up should therefore be the same as for fibrous tumours located in other areas. © 2006 European Association for Cranio-Maxillofacial Surgery
Building an Artificial Plant Cell Wall on a Lipid Bilayer by Assembling Polysaccharides and Engineered Proteins
Health surveillance of the population of the Municipality of Parona (PV) (Phase II)
As part of an epidemiological surveillance programme for a population living in an area that has shifted from predominantly agricultural to industrial, our working group carried out the second phase of the survey in 2007, within the framework of a prospective study begun in 2000, to monitor the health status of the resident population. The aim of this epidemiological survey was to detect any changes in health status with respect to respiratory diseases associated with air pollution. Data were collected through questionnaires and a spirometric examination. The study involved 399 subjects aged 15 to 79 years, drawn from a population of 1,484 people invited by individual letter. Overall, respondents made up 27% of those invited. The mean age of the responding population was 48.43 (±15.75) years. Only 149 subjects (37.2% of the current respondents) had taken part in the previous survey. History data on symptoms associated with respiratory diseases showed no significant changes over the period considered. Spirometry was performed on 386 subjects: 36 pathological findings were recorded, corresponding to 9.3% of those monitored. Compared with 2000, there was a modest increase in the prevalence of pathological obstructive spirometric findings among women (5.9% vs 2.6%), while the prevalence of pathological restrictive spirometry remained similar among both men (1.7% vs 2.4%) and women (2% vs 1.5%). Among the subjects who took part in both surveys (149 subjects, 65 men and 84 women), differences likely related to ageing were observed over time, as pathological spirometry increased moderately for both "obstructive" findings (9% vs 6.8%) and "restrictive" findings (2.2% vs 1.4%). In both the first and the second survey, the same operational methodology was applied to a control population with characteristics comparable to the study population, except for residence near the incinerator and potentially polluting industries. The comparison of respiratory function results from the surveys conducted in Parona and in the control location (Rovescala) showed no significant differences in the prevalence of pathological obstructive findings, although among men the prevalence was higher in Parona (9.2% vs 6.9%). Population participation in the survey in the two municipalities was proportionally lower than in 2000, despite repeated awareness-raising efforts, which did not allow a satisfactory assessment of the health conditions of the residents of Parona.
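The Parona/Rovescala comparison above rests on testing whether two prevalences differ. As a purely illustrative sketch (not the authors' analysis, and with invented counts, since the abstract reports only percentages such as 9.2% vs 6.9%), a simple two-proportion z-test could be written as follows.

    import math

    def two_proportion_ztest(x1, n1, x2, n2):
        # Two-sided z-test for a difference between two independent proportions.
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Invented illustrative counts: the abstract gives prevalences of
    # 9.2% vs 6.9% among men but not the underlying denominators.
    z, p = two_proportion_ztest(x1=12, n1=130, x2=9, n2=130)
    print(f"z = {z:.2f}, two-sided p = {p:.3f}")

With samples of this size, a difference of a few percentage points yields a small z and a large p-value, consistent with the abstract's report of no significant difference despite the higher point prevalence in Parona.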
Many Labs 5: Testing pre-data collection peer review as an intervention to increase replicability
Replications in psychological science sometimes fail to reproduce prior findings. If replications use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replications from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) in which the original authors had expressed concerns about the replication designs before data collection and only one of which was “statistically significant” (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate (Gilbert et al., 2016). We revised the replication protocols and received formal peer review prior to conducting new replications. We administered the RP:P and Revised protocols in multiple laboratories (median number of laboratories per original study = 6.5; range 3 to 9; median total sample = 1,279.5; range 276 to 3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, the Revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the Revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). The cumulative evidence from the original studies and the three replication attempts suggests that the effect sizes for all 10 effects (median r = .07; range .00 to .15) are 78% smaller on average than the original findings (median r = .37; range .19 to .50), with very precisely estimated effects.