
    The Spouse's Perspective of Agricultural Education as a Career

    The national shortage of agricultural education teachers is an urgent concern because it results in fewer students prepared to seek careers in agriculture and other STEM disciplines. Factors including the excessive demands placed on agriculture teachers have contributed to teacher turnover. These demands often spill over into other life domains, such as the family. Since individuals in the family domain can influence the career decisions of their loved ones, it is important to understand the influence of the agricultural education profession on the perceptions and work-family conflict of the agriculture teacher's spouse or partner (henceforth spouse). Additionally, job satisfaction has been found to be a strong indicator of a teacher's intent to remain in the profession; however, little research has examined the influence of the spouse's attitudes or personal factors related to job satisfaction. This study sought to describe the attitudes of agriculture teachers' spouses regarding agricultural education as a career, specifically to examine factors associated with the spouse's satisfaction with agricultural education. An online survey consisting of two sections, 1) spouses' demographic information and 2) spouses' attitudes towards agricultural education (e.g., agriculture teachers' work-family conflict (WFC), satisfaction with career, family-supportive work culture), was distributed to a national sample of 699 agriculture teachers' spouses. Spouses indicated relatively high satisfaction with agricultural education and moderate levels of WFC and family-supportive work culture. Significant predictors of spouses' satisfaction with agricultural education included total family household work hours, WFC, and family-supportive work culture. Gender and whether the spouse had participated in school-based agricultural education (SBAE) were not significant. Implications exist to reduce WFC and to continue to promote a positive family-supportive work culture within the agricultural education profession.
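
    The predictor analysis summarised above is, in essence, a multiple regression of spouse satisfaction on work and family factors. A minimal sketch of that kind of model follows; the column names (satisfaction, work_hours, wfc, fswc, gender, sbae) and the input file are hypothetical placeholders, not taken from the study's instrument.

        # Minimal sketch of the kind of multiple regression described above.
        # All column names and the input file are hypothetical placeholders.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("spouse_survey.csv")  # hypothetical survey export

        # Regress spouses' satisfaction on candidate predictors; categorical
        # terms (gender, SBAE participation) enter as C() factors.
        model = smf.ols(
            "satisfaction ~ work_hours + wfc + fswc + C(gender) + C(sbae)",
            data=df,
        ).fit()
        print(model.summary())  # coefficient p-values flag significant predictors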

    A workflow for the detection of antibiotic residues, measurement of water chemistry and preservation of hospital sink drain samples for metagenomic sequencing

    Background: Hospital sinks are environmental reservoirs that harbour healthcare-associated (HCA) pathogens. Selective pressures in sink environments, such as antibiotic residues, nutrient waste and hardness ions, may promote antibiotic resistance gene (ARG) exchange between bacteria. However, cheap and accurate sampling methods to characterise these factors are lacking. Aim: To validate a workflow to detect antibiotic residues and evaluate water chemistry using dipsticks. Secondarily, to validate boric acid to preserve the taxonomic and ARG (“resistome”) composition of sink trap samples for metagenomic sequencing. Methods: Antibiotic residue dipsticks were validated against serial dilutions of ampicillin, doxycycline, sulfamethoxazole and ciprofloxacin, and water chemistry dipsticks against serial dilutions of chemical calibration standards. Sink trap aspirates were used for a “real-world” pilot evaluation of dipsticks. To assess boric acid as a preservative of microbial diversity, the impact of incubation with and without boric acid at ~22°C on metagenomic sequencing outputs was evaluated at Day 2 and Day 5 compared with baseline (Day 0). Findings: The limits of detection for each antibiotic were: 3 µg/L (ampicillin), 10 µg/L (doxycycline), 20 µg/L (sulfamethoxazole) and 8 µg/L (ciprofloxacin). The best performing water chemistry dipstick correctly characterised 34/40 (85%) standards in a concentration-dependent manner. One trap sample tested positive for the presence of tetracyclines and sulfonamides. Taxonomic and resistome composition were largely maintained after storage with boric acid at ~22°C for up to five days. Conclusions: Dipsticks can be used to detect antibiotic residues and characterise water chemistry in sink trap samples. Boric acid was an effective preservative of trap sample composition, representing a low-cost alternative to cold-chain transport.
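
    The reported limits of detection can be read directly off a serial-dilution series as the lowest concentration at which a dipstick still reads positive. A minimal sketch of that logic; the readings below are purely illustrative, not the study's data.

        # Minimal sketch: deriving a limit of detection (LOD) from serial-dilution
        # dipstick readings, as in the validation described above. The readings
        # are illustrative placeholders, not the study's data.
        dilution_results = {  # concentration in ug/L -> dipstick positive?
            100: True, 30: True, 10: True, 3: True, 1: False, 0.3: False,
        }

        def limit_of_detection(results):
            """Lowest tested concentration that still gives a positive reading."""
            return min(conc for conc, detected in results.items() if detected)

        print(f"LOD ~ {limit_of_detection(dilution_results)} ug/L")  # -> 3 ug/L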

    Evaluation of the accuracy of bacterial genome reconstruction with Oxford Nanopore R10.4.1 long-read-only sequencing

    Whole genome reconstruction of bacterial pathogens has become an important tool for tracking transmission and antimicrobial resistance gene spread, but highly accurate and complete assemblies have historically only been achievable using hybrid long- and short-read sequencing. We previously found that the Oxford Nanopore Technologies (ONT) R10.4/kit12 flowcell/chemistry produced improved assemblies over the R9.4.1/kit10 combination; however, long-read-only assemblies contained more errors compared to Illumina-ONT hybrid assemblies. ONT have since released an R10.4.1/kit14 flowcell/chemistry upgrade and recommended the use of Bovine Serum Albumin (BSA) during library preparation, both of which reportedly increase accuracy and yield. They have also released updated basecallers trained using native bacterial DNA containing methylation sites, intended to fix systematic basecalling errors, including common adenosine (A) to guanine (G) and cytosine (C) to thymine (T) substitutions. To evaluate these improvements, we successfully sequenced four bacterial reference strains, namely Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa and Staphylococcus aureus, and nine genetically diverse E. coli bloodstream infection-associated isolates from different phylogroups and sequence types, both with and without BSA. These sequences were de novo assembled and compared against Illumina-corrected reference genomes. In this small evaluation of 13 isolates, we found that nanopore long-read-only R10.4.1/kit14 assemblies with updated basecallers trained using bacterial methylated DNA produce accurate assemblies at ≄40x depth, sufficient to be cost-effective compared with hybrid ONT/Illumina sequencing in our setting.
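
    The ≄40x depth threshold above implies simple yield arithmetic: mean depth is total sequenced bases divided by genome size. A hedged back-of-envelope sketch, using an illustrative genome size rather than figures from the paper.

        # Back-of-envelope sketch of the ~40x depth requirement noted above:
        # mean depth = total bases sequenced / genome size. The genome size is
        # an illustrative assumption, not a figure from the paper.
        genome_size_bp = 5_000_000   # e.g. a typical E. coli genome, ~5 Mb
        target_depth = 40            # minimum depth found sufficient above

        required_yield_bp = genome_size_bp * target_depth
        print(f"Need >= {required_yield_bp / 1e9:.1f} Gb of reads for {target_depth}x")
        # -> 0.2 Gb per isolate, which is why many barcoded isolates can share
        #    one flowcell, keeping long-read-only sequencing cost-effective.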

    Applications of environmental DNA (eDNA) in agricultural systems: Current uses, limitations and future prospects

    Global food production, food supply chains and food security are increasingly stressed by human population growth and loss of arable land, becoming more vulnerable to anthropogenic and environmental perturbations. Numerous mutualistic and antagonistic species are interconnected with the cultivation of crops and livestock, and these can be challenging to identify on the large scales of food production systems. Accurate identifications to capture this diversity and rapid, scalable monitoring are necessary to identify emerging threats (i.e. pests and pathogens), inform on ecosystem health (i.e. soil and pollinator diversity), and provide evidence for new management practices (i.e. fertiliser and pesticide applications). Increasingly, environmental DNA (eDNA) is providing rapid and accurate classifications for specific organisms and entire species assemblages in substrates ranging from soil to air. Here, we aim to discuss how eDNA is being used for monitoring of agricultural ecosystems, what current limitations exist, and how these could be managed to expand applications into the future. In a systematic review we identify that eDNA-based monitoring in food production systems accounts for only 4% of all eDNA studies. We found that the majority of these eDNA studies target soil and plant substrates (60%), predominantly to identify microbes and insects (60%), and are biased towards Europe (42%). While eDNA-based monitoring studies are uncommon in many of the world's food production systems, this scarcity is most pronounced in emerging economies, where food security is often most at risk. We suggest that the biggest limitations to eDNA for agriculture are false negatives resulting from DNA degradation and assay biases, as well as incomplete databases and the interpretation of abundance data. These require in silico, in vitro, and in vivo approaches to carefully design, test and apply eDNA monitoring for reliable and accurate taxonomic identifications. We explore future opportunities for eDNA research which could further develop this useful tool for food production system monitoring in both emerging and developed economies, hopefully improving monitoring, and ultimately food security.

    Discordance between different bioinformatic methods for identifying resistance genes from short-read genomic data, with a focus on Escherichia coli

    Several bioinformatics genotyping algorithms are now commonly used to characterize antimicrobial resistance (AMR) gene profiles in whole-genome sequencing (WGS) data, with a view to understanding AMR epidemiology and developing resistance prediction workflows using WGS in clinical settings. Accurately evaluating AMR in Enterobacterales, particularly Escherichia coli, is of major importance, because this is a common pathogen. However, robust comparisons of different genotyping approaches on relevant simulated and large real-life WGS datasets are lacking. Here, we used both simulated datasets and a large set of real E. coli WGS data (n=1818 isolates) to systematically investigate genotyping methods in greater detail. Simulated constructs and real sequences were processed using four different bioinformatic programs (ABRicate, ARIBA, KmerResistance and SRST2, run with the ResFinder database) and their outputs compared. For simulation tests where 3079 AMR gene variants were inserted into random sequence constructs, KmerResistance was correct for 3076 (99.9%) simulations, ABRicate for 3054 (99.2%), ARIBA for 2783 (90.4%) and SRST2 for 2108 (68.5%). For simulation tests where two closely related gene variants were inserted into random sequence constructs, KmerResistance identified the correct alleles in 35,338/46,318 (76.3%) simulations, ABRicate in 11,842/46,318 (25.6%), ARIBA in 1679/46,318 (3.6%) and SRST2 in 2000/46,318 (4.3%). In real data, across all methods, 1392/1818 (76%) isolates had discrepant allele calls for at least one gene. In addition to highlighting areas for improvement in challenging scenarios (e.g. identification of AMR genes at <10× coverage, or of multiple closely related AMR genes present in the same sample), our evaluations identified some more systematic errors that should be readily resolvable, such as repeated misclassification (i.e. naming) of genes as shorter variants of the same gene present within the reference resistance gene database. Such naming errors accounted for at least 2530/4321 (59%) of the discrepancies seen in real data. Moreover, many of the remaining discrepancies were likely ‘artefactual’, with differences in reporting cut-offs accounting for at least 1430/4321 (33%) of them. Whilst we found that comparing outputs generated by running multiple algorithms on the same dataset could identify and resolve these algorithmic artefacts, the results of our evaluations emphasize the need for new and more robust genotyping algorithms to further improve accuracy and performance.
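
    The cross-method comparison described above (flagging isolates where tools disagree on allele calls) reduces to set arithmetic over each tool's reported genes. A minimal sketch with made-up calls; the gene names are illustrative placeholders, not results from the paper.

        # Minimal sketch of flagging discrepant AMR allele calls across tools,
        # as in the comparison above. The calls are illustrative placeholders.
        calls = {
            "ABRicate":       {"blaTEM-1", "blaCTX-M-15", "dfrA17"},
            "ARIBA":          {"blaTEM-1", "blaCTX-M-15", "dfrA17"},
            "KmerResistance": {"blaTEM-1", "blaCTX-M-15", "dfrA17"},
            "SRST2":          {"blaTEM-1", "blaCTX-M-14", "dfrA17"},  # variant clash
        }

        consensus = set.intersection(*calls.values())
        discrepant = set.union(*calls.values()) - consensus
        print("discrepant calls:", sorted(discrepant))
        # -> ['blaCTX-M-14', 'blaCTX-M-15']: the closely-related-variant
        #    misnaming problem the study quantifies.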

    Scratching Beneath the Surface: Intentionality in Great Ape Signal Production

    Despite important similarities having been found between human and animal communication systems, surprisingly little research effort has focussed on whether the cognitive mechanisms underpinning these behaviours are also similar. In particular, it is highly debated whether signal production is the result of reflexive processes, or can be characterised as intentional. Here, we critically evaluate the criteria that are used to identify signals produced with different degrees of intentionality, and discuss recent attempts to apply these criteria to the vocal, gestural, and multimodal communicative signals of great apes and more distantly related species. Finally, we outline the necessary research tools, such as physiologically validated measures of arousal, and empirical evidence that we believe would propel this debate forward and help unravel the evolutionary origins of human intentional communication.

    Retrospective analysis of hospital electronic health records reveals unseen cases of acute hepatitis with unknown aetiology in adults in Oxfordshire

    Background: An outbreak of acute severe hepatitis of unknown aetiology (AS-Hep-UA) in children during 2022 was subsequently linked to infections with adeno-associated virus 2 and other ‘helper viruses’, including human adenovirus. It is possible that evidence of such an outbreak could be identified at a population level based on routine data captured by electronic health records (EHR). Methods: We used anonymised EHR to collate retrospective data for all emergency presentations to Oxford University Hospitals NHS Foundation Trust in the UK, between 2016 and 2022, for all ages from 18 months upwards. We investigated the clinical characteristics and temporal distribution of presentations of acute hepatitis and of adenovirus infections based on laboratory data and clinical coding. We relaxed the stringent case definition adopted during the AS-Hep-UA outbreak to identify all cases of acute hepatitis with unknown aetiology (termed AHUA). We compared events within the outbreak period (defined as 1 Oct 2021 to 31 Aug 2022) to the rest of our study period. Results: Over the study period, there were 903,433 acute presentations overall, of which 391 (0.04%) were classified as AHUA. AHUA episodes had significantly higher critical care admission rates (p < 0.0001, OR = 41.7, 95% CI: 26.3–65.0) and longer inpatient admissions (p < 0.0001) compared with the rest of the patient population. During the outbreak period, significantly more adults (≄16 years) were diagnosed with AHUA (p < 0.0001, OR = 3.01, 95% CI: 2.20–4.12), and there were significantly more human adenovirus (HAdV) infections in children (p < 0.001, OR = 1.78, 95% CI: 1.27–2.47). There were also more HAdV tests performed during the outbreak (p < 0.0001, OR = 1.27, 95% CI: 1.17–1.37). Among 3,707 individuals who were tested for HAdV, 179 (4.8%) were positive. However, there was no evidence of more acute hepatitis or increased severity of illness in HAdV-positive compared with HAdV-negative cases. Conclusions: Our results highlight an increase in AHUA in adults coinciding with the period of the outbreak in children, but not linked to documented HAdV infection. Tracking changes in routinely collected clinical data through EHR could be used to support outbreak surveillance.
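
    The associations reported above (e.g. OR = 41.7, 95% CI 26.3–65.0 for critical care admission) are standard 2x2-table odds ratios. A minimal sketch of the calculation; the cell counts below are invented for illustration, not the study's data.

        # Minimal sketch of an odds-ratio-with-CI calculation like those above,
        # using scipy's 2x2 Fisher exact test. The counts are placeholders.
        import numpy as np
        from scipy.stats import fisher_exact

        #                  critical care   no critical care
        table = np.array([[40,             351],       # AHUA episodes (hypothetical)
                          [2200,           900842]])   # other presentations (hypothetical)

        odds_ratio, p_value = fisher_exact(table)
        print(f"OR = {odds_ratio:.1f}, p = {p_value:.2g}")

        # Wald 95% CI on the log odds ratio:
        log_or = np.log(odds_ratio)
        se = np.sqrt((1.0 / table).sum())
        lo, hi = np.exp([log_or - 1.96 * se, log_or + 1.96 * se])
        print(f"95% CI: {lo:.1f}-{hi:.1f}")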

    Detecting changes in population trends in infection surveillance using community SARS-CoV-2 prevalence as an exemplar

    Detecting and quantifying changes in the growth rates of infectious diseases is vital to informing public health strategy and can inform policymakers’ rationale for implementing or continuing interventions aimed at reducing impact. Substantial changes in SARS-CoV-2 prevalence with the emergence of variants provide an opportunity to investigate different methods to do this. We included PCR results from all participants in the UK’s COVID-19 Infection Survey between August 2020 and June 2022. Change-points for growth rates were identified using iterative sequential regression (ISR) and second derivatives of generalised additive models (GAMs). Consistency between methods and timeliness of detection were compared. Of 8,799,079 visits, 147,278 (1.7%) were PCR-positive. Change-points associated with the emergence of major variants were estimated to occur a median 4 days earlier (IQR 0–8) in GAMs versus ISR. When estimating recent change-points using successive data periods, four change-points (4/96) identified by GAMs were not found when adding later data, or by ISR. Change-points were detected 3–5 weeks after they occurred by both methods, but could be detected earlier within specific subgroups. Change-points in the growth rate of SARS-CoV-2 can be detected in near real-time using ISR and second derivatives of GAMs. To increase certainty about changes in epidemic trajectories, both methods could be run in parallel.
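
    One of the two approaches above, second derivatives of GAMs, can be approximated numerically: fit a smooth trend to prevalence over time, then look for sign changes in the fitted curve's second derivative. A minimal sketch on synthetic data, assuming the pygam library; the survey's actual models (binomial GAMs on PCR positivity) are more elaborate.

        # Minimal sketch of change-point detection via second derivatives of a
        # GAM, as described above. Data are synthetic; the actual analysis used
        # more elaborate models of PCR positivity.
        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(0)
        days = np.arange(300)
        # Synthetic prevalence: growth, plateau, then renewed growth (a "variant").
        prev = (0.005 + 0.01 / (1 + np.exp(-(days - 80) / 10))
                      + 0.02 / (1 + np.exp(-(days - 220) / 10)))
        y = prev + rng.normal(0, 0.001, days.size)

        gam = LinearGAM(s(0, n_splines=20)).fit(days.reshape(-1, 1), y)
        fitted = gam.predict(days.reshape(-1, 1))

        # Numerical second derivative of the fitted smooth; growth-rate
        # change-points appear where it crosses zero.
        d2 = np.gradient(np.gradient(fitted, days), days)
        change_points = days[:-1][np.sign(d2[:-1]) != np.sign(d2[1:])]
        print("candidate change-points (day index):", change_points)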