
    Water Availability and Subsidence in California's Central Valley

    doi: http://dx.doi.org/10.15447/sfews.v13iss3art4
    Abstracts are not associated with Essays. -The Editors of SFEWS.

    Groundwater depletion and sustainability of irrigation in the US High Plains and Central Valley

    Aquifer overexploitation could significantly impact crop production in the United States because 60% of irrigation relies on groundwater. Groundwater depletion in the irrigated High Plains and California Central Valley accounts for ∼50% of groundwater depletion in the United States since 1900. A newly developed High Plains recharge map shows that high recharge in the northern High Plains results in sustainable pumpage, whereas lower recharge in the central and southern High Plains has resulted in focused depletion of 330 km3 of fossil groundwater, mostly recharged during the past 13,000 y. Depletion is highly localized, with about a third of depletion occurring in 4% of the High Plains land area. Extrapolation of the current depletion rate suggests that 35% of the southern High Plains will be unable to support irrigation within the next 30 y. Reducing irrigation withdrawals could extend the lifespan of the aquifer but would not result in sustainable management of this fossil groundwater. The Central Valley is a more dynamic, engineered system, with north/south diversions of surface water since the 1950s contributing to ∼7× higher recharge. However, these diversions are regulated because of impacts on endangered species. A newly developed Central Valley Hydrologic Model shows that groundwater depletion since the 1960s, totaling 80 km3, occurs mostly in the south (Tulare Basin) and primarily during droughts. Increasing water storage through artificial recharge of excess surface water in aquifers by up to 3 km3 shows promise for coping with droughts and improving sustainability of groundwater resources in the Central Valley.
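
    The 30-year figure above is a straight-line extrapolation of the current depletion rate against the groundwater that remains economically recoverable. A minimal sketch of that arithmetic, in Python with purely illustrative placeholder numbers (not values from the study), might look like:

        # Straight-line extrapolation of aquifer lifespan under a constant depletion rate.
        # Both figures below are hypothetical placeholders, not study data.
        recoverable_storage_km3 = 100.0    # groundwater still economically accessible
        depletion_rate_km3_per_yr = 3.3    # current net depletion

        years_remaining = recoverable_storage_km3 / depletion_rate_km3_per_yr
        print(f"Irrigation supportable for roughly {years_remaining:.0f} more years")

        # Cutting withdrawals stretches the timeline but, as the abstract notes, does not
        # make the system sustainable, because recharge stays far below even the reduced rate.
        reduced_rate = 0.7 * depletion_rate_km3_per_yr
        print(f"With 30% less pumping: roughly {recoverable_storage_km3 / reduced_rate:.0f} years")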

    Managed Aquifer Recharge as a Tool to Enhance Sustainable Groundwater Management in California

    A growing population and an increased demand for water resources have resulted in a global trend of groundwater depletion. Arid and semi-arid climates are particularly susceptible, often relying on groundwater to support large population centers or irrigated agriculture in the absence of sufficient surface water resources. In an effort to increase the security of groundwater resources, managed aquifer recharge (MAR) programs have been developed and implemented globally. MAR is the approach of intentionally harvesting and infiltrating water to recharge depleted aquifer storage. California is a prime example of this growing problem, with three cities that have over a million residents and an agricultural industry that was valued at 47 billion dollars in 2015. The present-day groundwater overdraft of over 100 km3 (since 1962) indicates a clear disparity between surface water supply and water demand within the state. In the face of groundwater overdraft and the anticipated effects of climate change, many new MAR projects are being constructed or investigated throughout California, adding to those that have existed for decades. Some common MAR types utilized in California include injection wells, infiltration basins (also known as spreading basins, percolation basins, or recharge basins), and low-impact development. An emerging MAR type that is actively being investigated is the winter flooding of agricultural fields using existing irrigation infrastructure and excess surface water resources, known as agricultural MAR. California therefore provides an excellent case study for examining the historical use and performance of MAR, ongoing and emerging challenges, novel MAR applications, and the potential for expansion of MAR. Effective MAR projects are an essential tool for increasing groundwater security, both in California and on a global scale. This chapter aims to provide an overview of the most common MAR types and applications within the State of California and neighboring semi-arid regions.
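
    To put the cumulative overdraft on an annual footing, a back-of-the-envelope annualization follows; the 2015 endpoint is an assumption chosen to match the valuation year quoted above, and in practice overdraft is concentrated in drought years rather than spread evenly.

        # Rough annualization of the cumulative Central Valley overdraft quoted above.
        cumulative_overdraft_km3 = 100.0   # since 1962
        years = 2015 - 1962                # assumed endpoint, not stated in the abstract
        print(f"Average overdraft: {cumulative_overdraft_km3 / years:.1f} km3/yr")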

    Accuracy of the electronic patient record in a first opinion veterinary practice

    The use of electronic patient records (EPRs) in veterinary research is becoming more commonplace. To date, no one has investigated how accurately and completely they represent the clinical interactions that happen between veterinary professionals and their clients and patients. The aim of this study was to compare data extracted from consultations within EPRs with data gathered by direct observation of the same consultations. A secondary aim was to establish the inter-rater reliability of two researchers who examined the data extracted from the EPRs. A convenience sample of 36 small animal consultations undertaken by 2 veterinary surgeons (83% by one veterinary surgeon) at a mixed veterinary practice in the United Kingdom was studied. All 36 consultations were observed by a single researcher using a standardised data collection tool. The information recorded in the EPRs was extracted from the Practice Management Software (PMS) systems using a validated XML schema. The XML-extracted data were then converted into the same format as the observed data by two independent researchers, who examined the extracted information and recorded their findings using the same tool as for the observation. The problems discussed and any actions taken relating to those problems recorded in the observed and extracted datasets were then compared. In addition, the inter-rater reliability of the two researchers who examined the extracted data was assessed. Only 64.4% of the observed problems discussed during the consultations were recorded in the EPR. The type of problem, who raised the problem, and at what point in the consultation the problem was raised significantly affected whether the problem was recorded in the EPR. Only 58.3% of observed actions taken during the consultations were recorded in the EPR, and the type of action significantly affected whether it was recorded. There was moderate agreement between the two researchers who examined the extracted data. This is the first study to examine how much of the activity that occurs in small animal consultations is recorded in the EPR. Understanding the completeness, reliability and validity of EPRs is vital if they are to continue to be used for clinical research and the results used to direct clinical care.
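
    The two headline numbers, the share of observed problems that reach the EPR and the agreement between the two researchers, are simple to reproduce in principle. The Python sketch below shows one way to compute a completeness percentage and a chance-corrected agreement score; the abstract does not state which agreement statistic was used, so Cohen's kappa is an assumption here, and the example records are hypothetical.

        from collections import Counter

        # Hypothetical data: problems seen by the observer vs. those found in the EPR.
        observed_problems = ["vomiting", "lameness", "dental", "weight", "vaccination"]
        recorded_in_epr = {"vomiting", "dental", "vaccination"}

        completeness = sum(p in recorded_in_epr for p in observed_problems) / len(observed_problems)
        print(f"Observed problems recorded in the EPR: {completeness:.1%}")

        def cohens_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters coding the same items."""
            n = len(rater_a)
            p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            p_chance = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
            return (p_observed - p_chance) / (1 - p_chance)

        # Hypothetical codings of the same extracted consultations by the two researchers.
        researcher_1 = ["problem", "problem", "action", "none", "action"]
        researcher_2 = ["problem", "action", "action", "none", "action"]
        print(f"Inter-rater kappa: {cohens_kappa(researcher_1, researcher_2):.2f}")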

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment) who were identified retrospectively (with baseline and partial follow-up data collected from the medical records) and then followed prospectively for 0-18 months (such that the total follow-up time was 24 months; data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and the impact of recall bias and survivorship bias incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF: NCT01090362.
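
    The mortality comparison is reported as events per 100 person-years with 95% confidence intervals. A minimal sketch of that calculation in Python follows; the counts are invented, and the log-scale normal approximation for the interval is an assumption, since the abstract does not say how the intervals were derived.

        import math

        def rate_per_100_person_years(events, person_years):
            """Incidence rate per 100 person-years with an approximate 95% CI
            (normal approximation on the log scale, assuming Poisson-distributed events)."""
            rate = events / person_years * 100
            se_log = 1 / math.sqrt(events)   # standard error of log(rate)
            return rate, rate * math.exp(-1.96 * se_log), rate * math.exp(1.96 * se_log)

        # Hypothetical counts, not GARFIELD-AF data.
        rate, lower, upper = rate_per_100_person_years(events=150, person_years=4900)
        print(f"{rate:.2f} per 100 person-years (95% CI {lower:.2f} to {upper:.2f})")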