
    Commercialisation and Impacts of Pasture Legumes in Southern Australia–Lessons Learnt

    Forage legumes are a key feature of temperate grasslands in southern Australia, valued for their ability to increase animal production, improve soil fertility and fix atmospheric nitrogen. Of the 36 temperate annual legume and 11 temperate perennial legume species with registered cultivars introduced or domesticated in Australia over the last 100 years, a third have made a major contribution to agriculture, a third have modest use and a third have failed to make any commercial impact. Highly successful species include subterranean clover, barrel medic, white clover, lucerne, French serradella and balansa clover. Species were assessed on the scale of their application, ease of seed production and specific requirements for agronomic management to determine critical factors for maximising commercial success. Of fundamental importance is the need to understand the farming systems context for legume technologies, particularly as it relates to potential scale of application and impact on farm profitability. Other factors included a requirement for parallel investment in rhizobiology, implementing an adequate ‘duty of care’ problem-solving framework for each new plant product and the need to construct a commercialisation model that optimises the trade-off between rapid adoption by farmers and profitability of the seed industry. Our experience to date indicates that seed industry engagement is highest when companies have exclusive rights to a cultivar, can exercise some control over seed production and can market seed for a premium price without having to carry over significant seed quantities from one season to the next. A capability for non-specialist seed production on-farm (with lower associated seed costs) is a disincentive for the seed industry, but may be an appropriate commercialisation model for some public cultivars

    Improving the Phosphorus Efficiency of Temperate Australian Pastures

    Phosphorus (P) is a key input necessary for high production in many temperate, grass-legume pasture systems in Australia because the pastures are situated on P-deficient and moderate to highly P-sorbing soils. A consequence of P-sorption in these soils is that much more P must be applied as fertiliser than will be exported in animal products. The P balance efficiency (PBE=100*Pexport/Pinputs) of grazing enterprises (e.g. wool, meat, milk and live animal export) is about 10-30% and compares poorly with some other agricultural enterprises (e.g. 45-54% for grain production; McLaughlin et al. 1992; Weaver and Wong 2011). P accumulates in these soils when they are fertilised as a result of phosphate reactions with Ca and/or Al and Fe oxides, and P incorporation into resistant organic materials (McLaughlin et al. 2011). Some P in grazed fields is also accumulated in animal camps. The net rate of P accumulation in soil (and in grazed fields as a whole) is related to the concentration of plant-available P in the soil. Operating grazing systems at lower plant-available P levels should help to slow P accumulation and result in more effective use of P fertiliser (Simpson et al. 2010; Simpson et al. 2011). Because the P requirement of grass-legume pastures is usually set by the high P requirements of the legume (Hill et al. 2005), we commenced a study to quantify the P requirements of a range of legumes to determine whether productive, lower P-input grazing systems can be developed. We are also screening subterranean clover, the most widely used pasture legume in temperate Australia, for root traits related to P efficiency. Here we report early findings from the establishment year of a field experiment to determine the P requirement of several alternative temperate legumes
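    The P balance efficiency defined above (PBE = 100 × P exported / P inputs) is simple to compute; a minimal sketch, with illustrative figures that are not taken from the study:

```python
def p_balance_efficiency(p_exported_kg, p_input_kg):
    """P balance efficiency (%): PBE = 100 * P exported / P inputs."""
    if p_input_kg <= 0:
        raise ValueError("P inputs must be positive")
    return 100.0 * p_exported_kg / p_input_kg

# Hypothetical paddock: 10 kg P/ha applied as fertiliser, 2 kg P/ha
# exported in animal products -> PBE of 20%, within the 10-30% range
# cited for grazing enterprises.
grazing_pbe = p_balance_efficiency(2.0, 10.0)

# Hypothetical grain enterprise: 9 kg P/ha exported against 20 kg P/ha
# applied -> 45%, at the low end of the 45-54% range cited for grain.
grain_pbe = p_balance_efficiency(9.0, 20.0)
```

The gap between the two figures is the point the abstract makes: in P-sorbing pasture soils, most applied P is retained in the soil rather than exported, so lowering the target plant-available P level is the main lever for improving PBE.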

    Phenotypic Signatures Arising from Unbalanced Bacterial Growth

    Fluctuations in the growth rate of a bacterial culture during unbalanced growth are generally considered undesirable in quantitative studies of bacterial physiology. Under well-controlled experimental conditions, however, these fluctuations are not random but instead reflect the interplay between intra-cellular networks underlying bacterial growth and the growth environment. Therefore, these fluctuations could be considered quantitative phenotypes of the bacteria under a specific growth condition. Here, we present a method to identify “phenotypic signatures” by time-frequency analysis of unbalanced growth curves measured with high temporal resolution. The signatures are then applied to differentiate amongst different bacterial strains or the same strain under different growth conditions, and to identify the essential architecture of the gene network underlying the observed growth dynamics. Our method has implications for both basic understanding of bacterial physiology and for the classification of bacterial strains
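    The time-frequency approach described above can be sketched on a synthetic growth curve. All numbers and window parameters below are assumptions for illustration; the study's actual measurement pipeline may differ. The sketch derives the instantaneous growth rate from a high-resolution optical-density trace, then takes a sliding-window Fourier transform to obtain a time-frequency map, the "signature":

```python
import numpy as np

# Synthetic unbalanced growth: exponential trend with a superimposed
# 2-per-hour fluctuation in the growth rate (illustrative values).
t = np.linspace(0, 10, 2000)  # hours, high temporal resolution
od = 0.05 * np.exp(0.6 * t + 0.05 * np.sin(2 * np.pi * 2.0 * t))

# Instantaneous specific growth rate: d/dt log(OD)
rate = np.gradient(np.log(od), t)

# Sliding-window (short-time) Fourier transform -> time x frequency map
win, hop = 200, 50  # samples per window, step between windows
segments = []
for i in range(0, len(rate) - win, hop):
    seg = rate[i:i + win]
    segments.append((seg - seg.mean()) * np.hanning(win))  # detrend + taper
signature = np.abs(np.fft.rfft(np.array(segments), axis=1))

# Dominant fluctuation frequency across the run
freqs = np.fft.rfftfreq(win, d=t[1] - t[0])
dominant = freqs[signature.mean(axis=0).argmax()]
```

Here `dominant` recovers the injected 2-per-hour fluctuation; comparing such maps between strains or conditions is the kind of discrimination the abstract describes.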

    A Glycemia Risk Index (GRI) of Hypoglycemia and Hyperglycemia for Continuous Glucose Monitoring Validated by Clinician Ratings

    Background: A composite metric for the quality of glycemia from continuous glucose monitor (CGM) tracings could be useful for assisting with basic clinical interpretation of CGM data. Methods: We assembled a data set of 14-day CGM tracings from 225 insulin-treated adults with diabetes. Using a balanced incomplete block design, 330 clinicians who were highly experienced with CGM analysis and interpretation ranked the CGM tracings from best to worst quality of glycemia. We used principal component analysis and multiple regressions to develop a model to predict the clinician ranking based on seven standard metrics in an Ambulatory Glucose Profile: very low-glucose and low-glucose hypoglycemia; very high-glucose and high-glucose hyperglycemia; time in range; mean glucose; and coefficient of variation. Results: The analysis showed that clinician rankings depend on two components, one related to hypoglycemia that gives more weight to very low-glucose than to low-glucose and the other related to hyperglycemia that likewise gives greater weight to very high-glucose than to high-glucose. These two components should be calculated and displayed separately, but they can also be combined into a single Glycemia Risk Index (GRI) that corresponds closely to the clinician rankings of the overall quality of glycemia (r = 0.95). The GRI can be displayed graphically on a GRI Grid with the hypoglycemia component on the horizontal axis and the hyperglycemia component on the vertical axis. Diagonal lines divide the graph into five zones (quintiles) corresponding to the best (0th to 20th percentile) to worst (81st to 100th percentile) overall quality of glycemia. The GRI Grid enables users to track sequential changes within an individual over time and compare groups of individuals. Conclusion: The GRI is a single-number summary of the quality of glycemia. Its hypoglycemia and hyperglycemia components provide actionable scores and a graphical display (the GRI Grid) that can be used by clinicians and researchers to determine the glycemic effects of prescribed and investigational treatments
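    The two-component structure described above can be sketched in code. The abstract confirms only that each component weights its "very" band more heavily than the milder band, that the two components combine into a single score, and that the grid divides into five zones; the numeric coefficients and zone boundaries below are assumptions for illustration, not necessarily the published values:

```python
def gri(very_low, low, very_high, high):
    """Sketch of a Glycemia Risk Index from percent-time-in-band CGM metrics.

    Inputs are percentages of time in the very low, low, very high and
    high glucose bands. All coefficients are ASSUMED for illustration;
    only the weighting structure (very-bands count more) is from the
    abstract.
    """
    hypo = very_low + 0.8 * low        # hypoglycemia component
    hyper = very_high + 0.5 * high     # hyperglycemia component
    score = min(3.0 * hypo + 1.6 * hyper, 100.0)  # single-number GRI, capped
    return hypo, hyper, score

def gri_zone(score):
    """Map a 0-100 GRI score to five zones, A (best) through E (worst)."""
    for upper, zone in ((20, "A"), (40, "B"), (60, "C"), (80, "D")):
        if score <= upper:
            return zone
    return "E"
```

In use, the `hypo` and `hyper` components would be plotted as the x and y coordinates on the GRI Grid, with the zone giving the at-a-glance interpretation the clinician rankings were designed to capture.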

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock: 2008


    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained entirely from meta-analyses in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority of reports found limited evidence on test accuracy in primary care settings. 
Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests

    Multiorgan MRI findings after hospitalisation with COVID-19 in the UK (C-MORE): a prospective, multicentre, observational cohort study

    Introduction: The multiorgan impact of moderate to severe coronavirus infections in the post-acute phase is still poorly understood. We aimed to evaluate the excess burden of multiorgan abnormalities after hospitalisation with COVID-19, evaluate their determinants, and explore associations with patient-related outcome measures. Methods: In a prospective, UK-wide, multicentre MRI follow-up study (C-MORE), adults (aged ≥18 years) discharged from hospital following COVID-19 who were included in Tier 2 of the Post-hospitalisation COVID-19 study (PHOSP-COVID) and contemporary controls with no evidence of previous COVID-19 (SARS-CoV-2 nucleocapsid antibody negative) underwent multiorgan MRI (lungs, heart, brain, liver, and kidneys) with quantitative and qualitative assessment of images and clinical adjudication when relevant. Individuals with end-stage renal failure or contraindications to MRI were excluded. Participants also underwent detailed recording of symptoms, and physiological and biochemical tests. The primary outcome was the excess burden of multiorgan abnormalities (two or more organs) relative to controls, with further adjustments for potential confounders. The C-MORE study is ongoing and is registered with ClinicalTrials.gov, NCT04510025. Findings: Of 2710 participants in Tier 2 of PHOSP-COVID, 531 were recruited across 13 UK-wide C-MORE sites. After exclusions, 259 C-MORE patients (mean age 57 years [SD 12]; 158 [61%] male and 101 [39%] female) who were discharged from hospital with PCR-confirmed or clinically diagnosed COVID-19 between March 1, 2020, and Nov 1, 2021, and 52 non-COVID-19 controls from the community (mean age 49 years [SD 14]; 30 [58%] male and 22 [42%] female) were included in the analysis. Patients were assessed at a median of 5·0 months (IQR 4·2–6·3) after hospital discharge. Compared with non-COVID-19 controls, patients were older, more likely to be living with obesity, and had more comorbidities. Multiorgan abnormalities on MRI were more frequent in patients than in controls (157 [61%] of 259 vs 14 [27%] of 52; p<0·0001) and independently associated with COVID-19 status (odds ratio [OR] 2·9 [95% CI 1·5–5·8]; adjusted p=0·0023) after adjusting for relevant confounders. Compared with controls, patients were more likely to have MRI evidence of lung abnormalities (p=0·0001; parenchymal abnormalities), brain abnormalities (p<0·0001; more white matter hyperintensities and regional brain volume reduction), and kidney abnormalities (p=0·014; lower medullary T1 and loss of corticomedullary differentiation), whereas cardiac and liver MRI abnormalities were similar between patients and controls. Patients with multiorgan abnormalities were older (difference in mean age 7 years [95% CI 4–10]; mean age of 59·8 years [SD 11·7] with multiorgan abnormalities vs mean age of 52·8 years [11·9] without multiorgan abnormalities; p<0·0001), more likely to have three or more comorbidities (OR 2·47 [1·32–4·82]; adjusted p=0·0059), and more likely to have had a more severe acute infection (acute CRP >5 mg/L, OR 3·55 [1·23–11·88]; adjusted p=0·025) than those without multiorgan abnormalities. Presence of lung MRI abnormalities was associated with a two-fold higher risk of chest tightness, and multiorgan MRI abnormalities were associated with severe and very severe persistent physical and mental health impairment (PHOSP-COVID symptom clusters) after hospitalisation. Interpretation: After hospitalisation for COVID-19, people are at risk of multiorgan abnormalities in the medium term. Our findings emphasise the need for proactive multidisciplinary care pathways, with the potential for imaging to guide surveillance frequency and therapeutic stratification

    SARS-CoV-2 Omicron is an immune escape variant with an altered cell entry pathway

    Vaccines based on the spike protein of SARS-CoV-2 are a cornerstone of the public health response to COVID-19. The emergence of hypermutated, increasingly transmissible variants of concern (VOCs) threatens this strategy. Omicron (B.1.1.529), the fifth VOC to be described, harbours multiple amino acid mutations in spike, half of which lie within the receptor-binding domain. Here we demonstrate substantial evasion of neutralization by Omicron BA.1 and BA.2 variants in vitro using sera from individuals vaccinated with ChAdOx1, BNT162b2 and mRNA-1273. These data were mirrored by a substantial reduction in real-world vaccine effectiveness that was partially restored by booster vaccination. The Omicron variants BA.1 and BA.2 did not induce cell syncytia in vitro and favoured a TMPRSS2-independent endosomal entry pathway; these phenotypes mapped to distinct regions of the spike protein. Impaired cell fusion was determined by the receptor-binding domain, while endosomal entry mapped to the S2 domain. Such marked changes in antigenicity and replicative biology may underlie the rapid global spread and altered pathogenicity of the Omicron variant