
    Benchmarking of Flexibility and Needs - 2004

    ITRC interviewed irrigation district personnel from 25 agricultural districts in eastern Washington, northern Idaho, and western Montana. Data were analyzed to determine the degree of water delivery flexibility provided to farmers and the extent of existing and planned district modernization. This is the fourth such report the Irrigation Training and Research Center (ITRC) has published for irrigation districts in the western US. The first two evaluations, conducted in 1996 and 2000 on behalf of the Mid-Pacific Region of the US Bureau of Reclamation (USBR), included California irrigation districts that had long-term federal contracts; the third, prepared in 2002 on behalf of the California Department of Water Resources (DWR), did not include irrigation districts with long-term federal contracts. All three reports can be downloaded from the ITRC’s Reports web page (http://www.itrc.org/reports/reportsindex.html). This report was prepared on behalf of the USBR Yakima Office of Water Conservation, Upper Columbia Area of the Pacific Northwest Region, and includes districts that receive at least some water from federal facilities.
    The interview process identified a strong perceived need by the districts for more direct technical assistance and training, greater than what ITRC has seen in California irrigation districts; these needs varied by district and region. In addition to general support, some districts expressed interest in small, specialized training efforts customized for single districts or small groups of districts at local facilities. Interest is especially high for information about automation and Supervisory Control and Data Acquisition (SCADA) systems. The data also indicated that more Rapid Appraisal Process (RAP) visits are needed to determine possible physical and operational improvements (modernization and efficiency) that would help districts accommodate the ever-changing needs of their customers and the environment. Direct technical assistance to individual districts has been, and will continue to be, a key element of continuing success in modernization. Other key findings include:
    - Many of the districts, and their farmers, are heavily dependent upon electric power to convey and distribute irrigation water. At present, power rates are lower than in other areas of the West.
    - Irrigation district personnel, on average, consider on-farm water usage/conservation to be beyond their scope of responsibility. This indicates that the “Bridging the Headgate” initiative by USBR and others may need more effort.
    - Although 24 of the 25 districts provide water on at least an “arranged” basis, there is still room for improvement in the water delivery flexibility provided to farmers. The overall Flexibility Index was 11.5 (max. possible = 15; min. possible = 3; see the illustrative sketch below). This compares with an overall Flexibility Index of 10.9 for sixteen non-Federal irrigation districts ranked by ITRC in 2002, and an Index of 12.9 for 58 Federal irrigation districts ranked by ITRC in 2000.
    - Since 1995 the irrigation districts have made numerous improvements, including both software and hardware.
    This report summarizes the results and provides brief comments on various aspects of those results.
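
    The abstract does not define how the Flexibility Index is scored. As a hedged illustration only, the Python sketch below assumes the index sums three delivery-service factors (frequency, flow rate, and duration of deliveries), each scored 1 to 5, which is consistent with the stated range of 3 to 15; the factor names and scoring rules are assumptions for illustration, not ITRC's published worksheet.

        # Hypothetical Flexibility Index sketch: three delivery-service factors
        # (frequency, rate, duration), each scored 1-5 and summed, giving the
        # stated range of 3 (rigid rotation) to 15 (fully on-demand service).
        # Factor names and scoring are illustrative assumptions.
        def flexibility_index(frequency: int, rate: int, duration: int) -> int:
            """Sum three 1-5 service scores into a 3-15 district index."""
            for score in (frequency, rate, duration):
                if not 1 <= score <= 5:
                    raise ValueError("each factor must be scored 1-5")
            return frequency + rate + duration

        # A district offering arranged deliveries with some rate flexibility:
        print(flexibility_index(frequency=4, rate=4, duration=3))  # -> 11

    Averaging such per-district indices would yield an overall figure like the 11.5 reported for the 25 districts surveyed here.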

    Agricultural Water Energy Efficiency: Final Report

    Beginning in 2007, the Irrigation Training and Research Center (ITRC) at California Polytechnic State University, San Luis Obispo, contracted with the California Energy Commission’s (CEC) Public Interest Energy Research (PIER) Program to undertake a large, multi-tiered study on agricultural water energy efficiency in California. The study was broken into the following research tasks: Task 1: Administrative; Task 2.1: Irrigation district energy survey; Task 2.2: Conversion to groundwater pumping with drip/micro irrigation systems; Task 2.3: GIS-based water scheduling and software system; Task 3: Irrigation component energy analysis; Task 4: RD&D competitive solicitation; Task 5: Technology transfer. The resulting survey, research, and testing data from these tasks have led to a better understanding of current agricultural operations in California and have illuminated new avenues for energy conservation that could have a widespread impact on energy efficiency in the state’s agricultural industry.

    Using Net Groundwater Extractions for Farm Level Groundwater Sustainability Monitoring

    The Cal Poly Irrigation Training and Research Center (ITRC) has developed a method for computing net groundwater extraction and recharge at the farm level for district management and regulation of sustainable/safe yields. This method is called Net To/From Groundwater (NTFGW). For assessing sustainable yield in unconfined aquifer systems, net groundwater extraction is preferred over direct metering of gross groundwater pumping. A recent pilot project with the Lower Tule River and Pixley Irrigation Districts’ Groundwater Sustainability Agencies (GSAs) compared actual metered groundwater pumping, surface deliveries, and evapotranspiration to the NTFGW outputs on 19 farms within the GSAs over a 3-year period (2014-2016). In nearly all cases, gross metered pumping was greater than net groundwater use, as it should be. In the few instances where this was not the case, intensive investigations identified the issues, which will be presented. The average difference between gross and net groundwater extractions was approximately 14 inches. The variation of this difference between farms was substantial, indicating the difficulty of using flow/volume metering of gross pumping for sustainability accounting. The NTFGW method can incorporate seepage and recharge basin operation at the GSA level. It is also capable of tracking banked groundwater supplies at the farm level.
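
    The abstract does not state the NTFGW computation. A minimal sketch of the farm-level water balance it implies, treating net groundwater extraction as crop evapotranspiration not met by precipitation or surface deliveries, might look like the following; the variable names and the simplified balance are assumptions for illustration, not ITRC's published method.

        # Hypothetical sketch of a farm-level Net To/From Groundwater (NTFGW)
        # balance. Assumption: net extraction (+) or recharge (-) is the
        # evapotranspiration of applied water not met by surface deliveries.
        # Units here are inches of water over the farm area.
        def net_to_from_groundwater(et_total, effective_precip,
                                    surface_delivery, loss_fraction=0.0):
            """Return net groundwater extraction (+) or recharge (-)."""
            et_applied_water = et_total - effective_precip
            net_surface = surface_delivery * (1.0 - loss_fraction)
            return et_applied_water - net_surface

        # Example farm-year: 36" of ET, 6" effective precipitation,
        # 14" of surface deliveries reaching the field:
        print(net_to_from_groundwater(36.0, 6.0, 14.0))  # -> 16.0" net extraction

    Gross metered pumping would normally exceed such a net figure, consistent with the roughly 14-inch average difference the pilot observed.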

    Protocol for developing quality assurance measures to use in surgical trials: an example from the ROMIO study

    INTRODUCTION: Randomised controlled trials (RCTs) in surgery are frequently criticised because surgeon expertise and standards of surgery are not considered or accounted for during study design. This is particularly true in pragmatic trials (which typically involve multiple centres and surgeons and are based in 'real world' settings), compared with explanatory trials (which are smaller and more tightly controlled).
    OBJECTIVE: This protocol describes a process to develop and test quality assurance (QA) measures for use within a predominantly pragmatic surgical RCT comparing minimally invasive and open techniques for oesophageal cancer (the NIHR ROMIO study). It builds on methods initiated in the ROMIO pilot RCT.
    METHODS AND ANALYSIS: We have identified three distinct types of QA measure: (i) entry criteria for surgeons, through assessment of operative videos; (ii) standardisation of operative techniques (by establishing minimum key procedural phases); and (iii) monitoring of surgeons during the trial, using intraoperative photography to document key procedural phases and standardising the pathological assessment of specimens. The QA measures will be adapted from the pilot study and tested iteratively, and the video and photo assessment tools will be tested for reliability and validity.
    ETHICS AND DISSEMINATION: Ethics approval was obtained (NRES Committee South West-Frenchay, 25 April 2016, ref: 16/SW/0098). Results of the QA development study will be submitted for publication in a peer-reviewed journal.
    TRIAL REGISTRATION NUMBERS: ISRCTN59036820, ISRCTN10386621.

    Blood transcriptional biomarkers of acute viral infection for detection of pre-symptomatic SARS-CoV-2 infection: a nested, case-control diagnostic accuracy study

    Background: We hypothesised that host-response biomarkers of viral infections might contribute to early identification of individuals infected with SARS-CoV-2, which is critical to breaking the chains of transmission. We aimed to evaluate the diagnostic accuracy of existing candidate whole-blood transcriptomic signatures for viral infection in predicting positivity of nasopharyngeal SARS-CoV-2 PCR testing.
    Methods: We did a nested case-control diagnostic accuracy study among a prospective cohort of health-care workers (aged ≥18 years) at St Bartholomew’s Hospital (London, UK) undergoing weekly blood and nasopharyngeal swab sampling for whole-blood RNA sequencing and SARS-CoV-2 PCR testing, when fit to attend work. We identified candidate blood transcriptomic signatures for viral infection through a systematic literature search. We searched MEDLINE for articles published between database inception and Oct 12, 2020, using comprehensive MeSH and keyword terms for “viral infection”, “transcriptome”, “biomarker”, and “blood”. We reconstructed signature scores in blood RNA sequencing data and evaluated their diagnostic accuracy for contemporaneous SARS-CoV-2 infection, compared with the gold standard of SARS-CoV-2 PCR testing, by quantifying the area under the receiver operating characteristic curve (AUROC), sensitivities, and specificities at a standardised Z score of at least 2, based on the distribution of signature scores in test-negative controls. We used pairwise DeLong tests against the most discriminating signature to identify the subset of best-performing biomarkers. We evaluated associations between signature expression, viral load (using PCR cycle thresholds), and symptom status visually and using Spearman rank correlation. The primary outcome was the AUROC for discriminating between samples from participants who tested negative throughout the study (test-negative controls) and samples from participants with PCR-confirmed SARS-CoV-2 infection (test-positive participants) during their first week of PCR positivity.
    Findings: We identified 20 candidate blood transcriptomic signatures of viral infection from 18 studies and evaluated their accuracy among 169 blood RNA samples from 96 participants over 24 weeks. Participants were recruited between March 23 and March 31, 2020. 114 samples were from 41 participants with SARS-CoV-2 infection, and 55 samples were from 55 test-negative controls. The median age of participants was 36 years (IQR 27–47) and 69 (72%) of 96 were women. Signatures had little overlap of component genes, but most were correlated as components of type I interferon responses. A single blood transcript, IFI27, provided the highest accuracy for discriminating between test-negative controls and test-positive individuals at the time of their first positive SARS-CoV-2 PCR result, with an AUROC of 0·95 (95% CI 0·91–0·99), sensitivity of 0·84 (0·70–0·93), and specificity of 0·95 (0·85–0·98) at a predefined threshold (Z score >2). The transcript performed equally well in individuals with and without symptoms. Three other candidate signatures (comprising two to 48 transcripts) had statistically equivalent discrimination to IFI27 (AUROCs 0·91–0·95).
    Interpretation: Our findings support further urgent evaluation and development of blood IFI27 transcripts as a biomarker for early-phase SARS-CoV-2 infection, for screening individuals at high risk of infection, such as contacts of index cases, to facilitate early case isolation and early use of antiviral treatments as they emerge.
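
    As a hedged sketch of the evaluation described above — standardising each signature score against the distribution in test-negative controls, calling samples positive at a Z score of at least 2, and quantifying discrimination by AUROC — the following Python uses scikit-learn with purely simulated data; the score distributions are illustrative assumptions, not the study's data.

        # Sketch of the standardisation and AUROC evaluation described above.
        # Scores are simulated; the study used signature scores reconstructed
        # from blood RNA sequencing.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        controls = rng.normal(0.0, 1.0, 55)    # test-negative signature scores
        cases = rng.normal(3.0, 1.2, 114)      # test-positive signature scores

        # Z score referenced to the test-negative control distribution
        mu, sigma = controls.mean(), controls.std(ddof=1)
        z = (np.concatenate([controls, cases]) - mu) / sigma
        labels = np.r_[np.zeros(55), np.ones(114)]  # 0 = control, 1 = PCR-positive

        auroc = roc_auc_score(labels, z)
        positive = z >= 2                      # predefined threshold
        sensitivity = positive[labels == 1].mean()
        specificity = (~positive)[labels == 0].mean()
        print(f"AUROC {auroc:.2f}, sens {sensitivity:.2f}, spec {specificity:.2f}")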

    Immune boosting by B.1.1.529 (Omicron) depends on previous SARS-CoV-2 exposure

    The Omicron, or Pango lineage B.1.1.529, variant of SARS-CoV-2 carries multiple spike mutations and shows high transmissibility and partial neutralizing antibody (nAb) escape. Vaccinated individuals show protection from severe disease, often attributed to primed cellular immunity. We investigated T and B cell immunity against B.1.1.529 in triple mRNA-vaccinated healthcare workers (HCW) with different SARS-CoV-2 infection histories. B and T cell immunity against previous variants of concern was enhanced in triple-vaccinated individuals, but the magnitude of T and B cell responses against B.1.1.529 spike protein was reduced. Immune imprinting by infection with the earlier B.1.1.7 (Alpha) variant resulted in less durable binding antibody against B.1.1.529. Previously infection-naïve HCW who became infected during the B.1.1.529 wave showed enhanced immunity against earlier variants but reduced nAb potency and T cell responses against B.1.1.529 itself. Previous Wuhan Hu-1 infection abrogated T cell recognition and any enhanced cross-reactive neutralizing immunity on infection with B.1.1.529.

    Quantitative, multiplexed, targeted proteomics for ascertaining variant specific SARS-CoV-2 antibody response

    Determining the protection an individual has against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants of concern (VoCs) is crucial for future immune surveillance, vaccine development, and understanding of the changing immune response. We devised an assay, complementary to current ELISA-based serology, that uses multiplexed, baited, targeted proteomics for direct detection of multiple proteins in the SARS-CoV-2 anti-spike antibody immunocomplex. Serum from individuals collected after infection or after first- and second-dose vaccination demonstrates this approach and shows concordance with existing serology and neutralization. Our assays show altered responses of both immunoglobulins and complement to the Alpha (B.1.1.7), Beta (B.1.351), and Delta (B.1.617.1) VoCs and a reduced response to Omicron (B.1.1.529). We were able to identify individuals with prior infection, and observed that C1q is closely associated with IgG1 (r > 0.82) and may better reflect neutralization of VoCs. Analyzing additional immunoproteins beyond immunoglobulin G (IgG) provides important information and improves our understanding of the response to infection and vaccination.
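
    The reported C1q-IgG1 association (r > 0.82) could be computed as a rank correlation; as a minimal sketch only, assuming Spearman's rho and wholly hypothetical measurements, the step might look like this.

        # Sketch of a rank-correlation step: Spearman's rho between C1q and
        # IgG1 signals in the anti-spike immunocomplex. The correlation type
        # and the values below are assumptions for illustration.
        from scipy.stats import spearmanr

        c1q  = [0.8, 1.4, 2.1, 3.0, 4.2, 5.1, 6.3]   # arbitrary units
        igg1 = [0.5, 1.1, 2.4, 2.9, 4.0, 5.5, 6.1]   # arbitrary units

        rho, p = spearmanr(c1q, igg1)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")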

    Effectiveness of a national quality improvement programme to improve survival after emergency abdominal surgery (EPOCH): a stepped-wedge cluster-randomised trial

    Background: Emergency abdominal surgery is associated with poor patient outcomes. We studied the effectiveness of a national quality improvement (QI) programme to implement a care pathway to improve survival for these patients.
    Methods: We did a stepped-wedge cluster-randomised trial of patients aged 40 years or older undergoing emergency open major abdominal surgery. Eligible UK National Health Service (NHS) hospitals (those that had an emergency general surgical service, a substantial volume of emergency abdominal surgery cases, and contributed data to the National Emergency Laparotomy Audit) were organised into 15 geographical clusters and commenced the QI programme in a random order, based on a computer-generated random sequence, over an 85-week period, with one geographical cluster commencing the intervention every 5 weeks from the second to the 16th time period. Patients were masked to the study group, but it was not possible to mask hospital staff or investigators. The primary outcome measure was mortality within 90 days of surgery. Analyses were done on an intention-to-treat basis. This study is registered with the ISRCTN registry, number ISRCTN80682973.
    Findings: Treatment took place between March 3, 2014, and Oct 19, 2015. 22 754 patients were assessed for eligibility. Of 15 873 eligible patients from 93 NHS hospitals, primary outcome data were analysed for 8482 patients in the usual care group and 7374 in the QI group. Eight patients in the usual care group and nine patients in the QI group were not included in the analysis because of missing primary outcome data. The primary outcome of 90-day mortality occurred in 1210 (16%) patients in the QI group compared with 1393 (16%) patients in the usual care group (HR 1·11, 0·96–1·28).
    Interpretation: No survival benefit was observed from this QI programme to implement a care pathway for patients undergoing emergency abdominal surgery. Future QI programmes should ensure that teams have both the time and resources needed to improve patient care.
    Funding: National Institute for Health Research Health Services and Delivery Research Programme.
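
    The stepped-wedge design described above (15 clusters crossing from usual care to the intervention one at a time, every 5 weeks, from the second to the 16th 5-week period of an 85-week trial) can be sketched as a simple randomised schedule; the ordering below is illustrative, not the trial's actual sequence.

        # Sketch of the stepped-wedge schedule: 15 geographical clusters cross
        # from usual care to the QI intervention in random order, one every
        # 5 weeks, from the second to the 16th 5-week period (85 weeks total).
        import random

        random.seed(1)                    # illustrative; not the trial's sequence
        clusters = list(range(1, 16))
        random.shuffle(clusters)          # computer-generated random order

        # The cluster in shuffled position i (0-based) crosses over at week
        # 5*(i+1): the start of periods 2 through 16.
        schedule = {c: 5 * (i + 1) for i, c in enumerate(clusters)}
        for c, week in sorted(schedule.items()):
            print(f"cluster {c:2d} starts the QI programme at week {week}")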
