5 research outputs found

    Serological screening of influenza A virus antibodies in cats and dogs indicates frequent infection with different subtypes

    Influenza A viruses (IAVs) infect humans and a variety of other animal species. Infections with some subtypes of IAV have also been reported in domestic cats and dogs. Besides animal health implications, close contact between companion animals and humans also poses a potential risk of zoonotic IAV infections. In this study, serum samples from different cat and dog cohorts were analyzed for antibodies against seven IAV subtypes, using three distinct IAV-specific assays that differ in subtype-specific discriminatory power and sensitivity. Enzyme-linked immunosorbent assays against the complete hemagglutinin (HA) ectodomain or the HA1 domain were used, as well as a novel nanoparticle-based, virus-free hemagglutination inhibition (HI) assay. Using these three assays, we found cat and dog sera from different cohorts to be positive for antibodies against one or more IAV subtypes/strains. Cat and dog serum samples collected after the 2009 pandemic H1N1 outbreak exhibited much higher seropositivity against H1 than samples from before 2009. Cat sera furthermore displayed higher reactivity to avian IAVs than dog sera. Our findings show the added value of using complementary serological assays, which are based on reactivity with different numbers of HA epitopes, to study IAV antibody responses and to improve serosurveillance of IAV infections. We conclude that infection of cats and dogs with both human and avian IAVs of different subtypes is prevalent. These observations highlight the role of cats and dogs in IAV ecology and indicate the potential of these companion animals to give rise to novel (reassorted) viruses with increased zoonotic potential.

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
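    For readers comparing the two notions of uncertainty, a minimal numerical sketch in Python (with made-up numbers, not the study's data) contrasts a standard error, computed from one hypothetical team's sample, with a non-standard error, taken here as the dispersion of point estimates across teams; the paper's exact dispersion measure may differ.

    import math
    import statistics

    # Hypothetical point estimates of the same quantity reported by
    # several teams analysing the same data (illustrative numbers only).
    team_estimates = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58]

    # Hypothetical raw observations from a single team: the standard error
    # reflects sampling uncertainty from the data-generating process (DGP).
    one_team_sample = [0.30, 0.50, 0.60, 0.40, 0.45, 0.55, 0.35, 0.50]
    standard_error = statistics.stdev(one_team_sample) / math.sqrt(len(one_team_sample))

    # The non-standard error reflects uncertainty added by the
    # evidence-generating process (EGP): the spread of estimates across teams.
    non_standard_error = statistics.stdev(team_estimates)

    print(f"standard error (within one team):  {standard_error:.3f}")
    print(f"non-standard error (across teams): {non_standard_error:.3f}")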

    Non-standard errors

    No full text