    Mucosal Targeting of a BoNT/A Subunit Vaccine Adjuvanted with a Mast Cell Activator Enhances Induction of BoNT/A Neutralizing Antibodies in Rabbits

    We previously reported that the immunogenicity of Hcβtre, a botulinum neurotoxin A (BoNT/A) immunogen, was enhanced by fusion to an epithelial cell binding domain, Ad2F, when nasally delivered to mice with cholera toxin (CT). This study was performed to determine if Ad2F would enhance the nasal immunogenicity of Hcβtre in rabbits, an animal model with a nasal cavity anatomy similar to humans. Since CT is not safe for human use, we also tested the adjuvant activity of compound 48/80 (C48/80), a mast cell activating compound previously determined to safely exhibit nasal adjuvant activity in mice. New Zealand White or Dutch Belted rabbits were nasally immunized with Hcβtre or Hcβtre-Ad2F alone or combined with CT or C48/80, and serum samples were tested for the presence of Hcβtre-specific binding (ELISA) or BoNT/A neutralizing antibodies. Hcβtre-Ad2F nasally administered with CT induced serum anti-Hcβtre IgG ELISA and BoNT/A neutralizing antibody titers greater than those induced by Hcβtre + CT. C48/80 provided significant nasal adjuvant activity and induced BoNT/A-neutralizing antibodies similar to those induced by CT. Ad2F enhanced the nasal immunogenicity of Hcβtre, and the mast cell activator C48/80 was an effective adjuvant for nasal immunization in rabbits, an animal model with a nasal cavity anatomy similar to that in humans.

    Many Labs 5: Testing pre-data collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).