70 research outputs found
Atlas of Oregon
2 p. Review produced for HC 441: Science Colloquium: Willamette River Environmental Health, Robert D. Clark Honors College, University of Oregon, Spring term, 2004. Print copies of the reviewed book are available in a number of locations within the UO Libraries, under the call number: G1490 .L63 200
Detection of Mannheimia haemolytica-Specific IgG, IgM and IgA in Sera and Their Relationship to Respiratory Disease in Cattle
Mannheimia haemolytica is one of the major causes of bovine respiratory disease and is the primary bacterium isolated from calves and young cattle affected with enzootic pneumonia. Novel indirect ELISAs were developed and evaluated to enable quantification of antibody responses to whole-cell antigens using M. haemolytica A1 strain P1148. In this study, the ELISAs were initially developed using sera from both M. haemolytica-culture-free and clinically infected cattle; the final prototypes were then tested in a validation phase using a larger set of known-status M. haemolytica sera (n = 145) collected from feedlot cattle. The test showed good inter-assay and intra-assay repeatability. Diagnostic sensitivity and specificity were estimated at 91% and 87% for IgG at a sample-to-positive (S/P) ratio cutoff of ≥ 0.8. IgM diagnostic sensitivity and specificity were 91% and 81% at a cutoff of S/P ≥ 0.8. IgA diagnostic sensitivity was 89%, whereas specificity was 78% at a cutoff of S/P ≥ 0.2. ELISA results for all isotypes were related to the diagnosis of respiratory disease and isolation of M. haemolytica (p-value < 0.05). These data suggest that M. haemolytica ELISAs can be adapted to the detection and quantification of antibody in serum specimens and support the use of these tests for disease surveillance and disease-prevention research in feedlot cattle.
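As an illustration of how the diagnostic figures above are obtained, the sketch below computes sensitivity and specificity from ELISA S/P ratios at a chosen cutoff. It is a minimal Python example with made-up serum data and a placeholder cutoff; it is not the assay software or data from the study.

```python
# Minimal sketch (not the authors' code): deriving diagnostic sensitivity and
# specificity from ELISA S/P ratios at a chosen cutoff. All values below are
# illustrative placeholders, not data from the study.

def sensitivity_specificity(sp_ratios, true_status, cutoff):
    """Call a sample positive if its S/P ratio meets the cutoff, then compare
    against the known infection status of each animal."""
    tp = sum(1 for sp, pos in zip(sp_ratios, true_status) if sp >= cutoff and pos)
    fn = sum(1 for sp, pos in zip(sp_ratios, true_status) if sp < cutoff and pos)
    tn = sum(1 for sp, pos in zip(sp_ratios, true_status) if sp < cutoff and not pos)
    fp = sum(1 for sp, pos in zip(sp_ratios, true_status) if sp >= cutoff and not pos)
    sensitivity = tp / (tp + fn)   # proportion of infected animals detected
    specificity = tn / (tn + fp)   # proportion of uninfected animals correctly negative
    return sensitivity, specificity

# Hypothetical serum panel: S/P ratios paired with known M. haemolytica status.
sp = [1.2, 0.9, 0.3, 0.7, 1.5, 0.2, 0.85, 0.1]
status = [True, True, False, False, True, False, True, False]
print(sensitivity_specificity(sp, status, cutoff=0.8))
```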
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
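For readers who want to see the arithmetic behind the effect-size comparison, the sketch below computes median effect sizes and the average proportional reduction from paired original and replication estimates. The values are placeholders chosen only to mimic the reported ranges; they are not the per-study estimates from the paper.

```python
# Minimal sketch (not the authors' analysis code): how a "smaller by X%"
# comparison can be computed from paired original vs. replication effect sizes.
from statistics import median

original_r    = [0.37, 0.19, 0.50, 0.30, 0.42]   # hypothetical original effect sizes
replication_r = [0.07, 0.00, 0.15, 0.05, 0.10]   # hypothetical cumulative replication estimates

# Per-study proportional reduction in effect size, then averaged across studies.
reductions = [(o - r) / o for o, r in zip(original_r, replication_r)]

print("median original r:   ", median(original_r))
print("median replication r:", median(replication_r))
print("mean reduction:      ", sum(reductions) / len(reductions))
```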
AI is a viable alternative to high throughput screening: a 318-target study
High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, high-quality X-ray crystal structures, or manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.
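The sketch below illustrates the generic virtual-screening step the abstract refers to: score an on-demand compound library with a predictive model and carry forward only the top-ranked molecules to physical assays. The scorer and library here are hypothetical placeholders, not the AtomNet® model or its data.

```python
# Minimal sketch (not AtomNet): scoring a virtual compound library with some
# predictive model and selecting the top-ranked candidates for wet-lab testing.
# `predict_activity` is a hypothetical stand-in for any trained scorer.

from typing import Callable, List, Tuple

def virtual_screen(library: List[str],
                   predict_activity: Callable[[str], float],
                   n_hits: int) -> List[Tuple[str, float]]:
    """Score every compound (identified here by a SMILES string) and return
    the n_hits highest-scoring candidates for experimental follow-up."""
    scored = [(smiles, predict_activity(smiles)) for smiles in library]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:n_hits]

# Illustrative use with a toy scorer; a real campaign would use a trained model.
toy_library = ["CCO", "c1ccccc1", "CC(=O)Nc1ccc(O)cc1"]
toy_scorer = lambda smiles: len(smiles) / 10.0  # placeholder score, not a real model
print(virtual_screen(toy_library, toy_scorer, n_hits=2))
```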