
    Improving Representation of Deforestation Effects on Evapotranspiration in the E3SM Land Model

    Evapotranspiration (ET) plays an important role in land-atmosphere coupling of energy, water, and carbon cycles. Following deforestation, ET is typically observed to decrease substantially as a consequence of decreases in leaf area and roots and increases in runoff. Changes in ET (latent heat flux) alter the surface energy and water budgets, which further affects large-scale atmospheric dynamics and feeds back positively or negatively on long-term forest sustainability. In this study, we used observations from a recent synthesis of 29 pairs of adjacent intact and deforested FLUXNET sites to improve model parameterization of stomatal characteristics, photosynthesis, and soil water dynamics in version 1 of the Energy Exascale Earth System Model (E3SM) Land Model (ELMv1). We found that the default ELMv1 predicts an increase in ET after deforestation, likely leading to incorrect estimates of the effects of deforestation on land-atmosphere coupling. The calibrated model accurately represented the FLUXNET-observed deforestation effects on ET. Importantly, the search for globally optimal parameters converged at values consistent with recent observational syntheses, confirming the reliability of the calibrated physical parameters. Applying this improved model parameterization at the global scale reduced the bias of annual ET simulation by up to ~600 mm/year. Analysis of the roles of the parameters suggested that future model development to improve ET simulation should focus on stomatal resistance and soil water-related parameterizations. Finally, our predicted differences in seasonal ET changes from deforestation are large enough to substantially affect land-atmosphere coupling and should be considered in such studies.
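A minimal sketch of the kind of calibration described above: a toy big-leaf ET model whose stomatal and soil-water-stress parameters are optimized so that the simulated deforestation effect (cleared minus intact ET) matches a paired-site observation. The functional form, parameter names (g1, beta_exp), and all numbers are illustrative assumptions, not ELMv1 parameterizations or the study's data.

```python
# Illustrative sketch only: a toy big-leaf ET model calibrated so the
# simulated deforestation effect (cleared minus intact ET) matches a
# hypothetical paired-site observation. The functional form, parameter
# names, and values are assumptions, not ELMv1 parameterizations.
import numpy as np
from scipy.optimize import minimize

def toy_et(lai, soil_wetness, g1, beta_exp):
    """Toy ET (mm/day): canopy conductance scaled by LAI and a
    soil-moisture stress factor with exponent beta_exp."""
    canopy_conductance = g1 * (1.0 - np.exp(-0.5 * lai))
    soil_stress = soil_wetness ** beta_exp
    return 5.0 * canopy_conductance * soil_stress  # 5.0 mm/day: nominal demand

# Hypothetical paired-site observation: the cleared site evaporates less.
obs_delta_et = -1.4  # mm/day, deforested minus intact
intact = dict(lai=5.0, soil_wetness=0.8)
cleared = dict(lai=1.0, soil_wetness=0.6)

def cost(params):
    g1, beta_exp = params
    delta = toy_et(**cleared, g1=g1, beta_exp=beta_exp) - \
            toy_et(**intact, g1=g1, beta_exp=beta_exp)
    return (delta - obs_delta_et) ** 2

result = minimize(cost, x0=[0.5, 1.0], bounds=[(0.05, 2.0), (0.1, 4.0)])
print("calibrated g1, beta_exp:", result.x)
```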

    Antibodies to the Mr 64,000 (64K) protein in islet cell antibody positive non-diabetic individuals indicate high risk for impaired Beta-cell function

    A prospective study of a normal childhood population identified 44 islet cell antibody positive individuals. These subjects were typed for HLA DR and DQ alleles and investigated for the presence of antibodies to the Mr 64,000 (64K) islet cell antigen, complement-fixing islet cell antibodies, and radiobinding insulin autoantibodies, to determine the value of these markers in detecting subjects with impaired Beta-cell function. At initial testing, 64K antibodies were found in six of 44 islet cell antibody positive subjects (13.6%). The same sera were also positive for complement-fixing islet cell antibodies, and five of them had insulin autoantibodies. At the 18-month follow-up, islet cell antibodies remained detectable in 50% of the subjects studied. In all six subjects who were originally positive, 64K antibodies were persistently detectable, whereas complement-fixing islet cell antibodies became negative in two of six and insulin autoantibodies in one of five individuals. HLA DR4 (p < 0.005) and absence of aspartic acid (Asp) at position 57 of the HLA DQ β chain (p < 0.05) were significantly more frequent in subjects with 64K antibodies than in control subjects. Of 40 individuals tested with the intravenous glucose tolerance test, three had a first-phase insulin response below the first percentile of normal control subjects. Two children developed Type 1 (insulin-dependent) diabetes mellitus after 18 and 26 months, respectively. Each of these subjects was non-Asp homozygous and had persistent islet cell and 64K antibodies. We conclude that 64K antibodies, complement-fixing islet cell antibodies, and insulin autoantibodies are sensitive serological markers for identifying a high risk of progression to Type 1 diabetes in islet cell antibody positive non-diabetic individuals.

    Systematic review finds "spin" practices and poor reporting standards in studies on machine learning-based prediction models

    Objectives We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques. Study Design and Setting We systematically searched PubMed from 01/2018 to 12/2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty. Results We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6% [95% CI 63.4–83.3]) and 53/81 main texts (65.4% [95% CI 54.6–74.9]). Of the 21 abstracts that recommended the model be used in daily practice, 20 (95.2% [95% CI 77.3–99.8]) lacked any external validation of the developed models. Likewise, 74/133 (55.6% [95% CI 47.2–63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in 13/152 (8.6% [95% CI 5.1–14.1]) studies. Conclusion Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin will enhance the sound reporting of prediction model studies.
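The precision estimates quoted above (e.g., 53/71 = 74.6% [95% CI 63.4–83.3]) are consistent with Wilson score intervals, although the abstract does not state which interval method the authors used. A minimal sketch, assuming the Wilson method:

```python
# Wilson score 95% CI for a proportion. Reproduces the abstract's
# 53/71 = 74.6% [63.4-83.3]; the study's exact interval method is an
# assumption here, not stated in the abstract.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - margin) / denom, (centre + margin) / denom

low, high = wilson_ci(53, 71)
print(f"{53/71:.1%} [95% CI {low:.1%}-{high:.1%}]")  # ~74.6% [63.4%-83.3%]
```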

    Toward Human-Carnivore Coexistence: Understanding Tolerance for Tigers in Bangladesh

    Fostering local community tolerance for endangered carnivores, such as tigers (Panthera tigris), is a core component of many conservation strategies. Identifying the antecedents of tolerance will facilitate the development of effective tolerance-building conservation action and secure local community support for, and involvement in, conservation initiatives. We use a stated preference approach for measuring tolerance, based on the ‘Wildlife Stakeholder Acceptance Capacity’ concept, to explore villagers’ tolerance of tigers in the Bangladesh Sundarbans, an area where, at the time of the research, human-tiger conflict was severe. We apply structural equation modeling to test an a priori defined theoretical model of tolerance and to identify the experiential and psychological basis of tolerance in this community. Our results indicate that beliefs about tigers and perceptions of the current tiger population trend are predictors of tolerance for tigers. Positive beliefs about tigers and a belief that the tiger population is not currently increasing are both associated with greater stated tolerance for the species. Contrary to commonly held notions, negative experiences with tigers do not directly affect tolerance levels; instead, their effect is mediated by villagers’ beliefs about tigers and by risk perceptions concerning human-tiger conflict incidents. These findings highlight the need to explore and understand the socio-psychological factors that encourage tolerance towards endangered species. Our research also demonstrates that this approach to tolerance research is applicable across a wide range of socio-economic and cultural contexts and can enhance carnivore conservation efforts worldwide.
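A minimal sketch of the structural equation modeling approach described above, assuming the semopy package and lavaan-style model syntax; the variable names, simulated data, and path structure are illustrative assumptions, not the authors' fitted model.

```python
# Illustrative SEM with a mediation structure (negative experience acting on
# tolerance through beliefs and risk perception), fit on simulated data.
# Variable names and paths are assumptions, not the study's specification.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
negative_experience = rng.normal(size=n)
beliefs = -0.4 * negative_experience + rng.normal(size=n)
risk_perception = 0.5 * negative_experience + rng.normal(size=n)
population_trend = rng.normal(size=n)
tolerance = (0.6 * beliefs - 0.3 * risk_perception
             - 0.2 * population_trend + rng.normal(size=n))
df = pd.DataFrame(dict(negative_experience=negative_experience,
                       beliefs=beliefs, risk_perception=risk_perception,
                       population_trend=population_trend, tolerance=tolerance))

desc = """
tolerance ~ beliefs + risk_perception + population_trend
beliefs ~ negative_experience
risk_perception ~ negative_experience
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```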

    No association between islet cell antibodies and coxsackie B, mumps, rubella and cytomegalovirus antibodies in non-diabetic individuals aged 7–19 years

    Viral antibodies were tested in a cohort of 44 islet cell antibody-positive individuals aged 7–19 years and in 44 of their islet cell antibody-negative, age- and sex-matched classmates, selected from a population study of 4208 pupils who had been screened for islet cell antibodies. Anti-coxsackie B1-5 IgM responses were detected in 14 of 44 (32%) islet cell antibody-positive subjects and in 7 of 44 (16%) control subjects. This difference did not reach statistical significance. None of the islet cell antibody-positive subjects had specific IgM antibodies to mumps, rubella, or cytomegalovirus. There was also no increase in the prevalence or mean titres of anti-mumps IgG or IgA and anti-cytomegalovirus IgG in islet cell antibody-positive subjects compared with control subjects. These results do not suggest any association of islet cell antibodies, and possibly insulitis, with recent mumps, rubella, or cytomegalovirus infection. Further studies are required to clarify the relationship between islet cell antibodies and coxsackie B virus infections.
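The abstract reports the 14/44 versus 7/44 anti-coxsackie B IgM comparison as not statistically significant but does not name the test used. A minimal check with one conventional choice, Fisher's exact test (an assumption, not necessarily the authors' method):

```python
# Illustrative 2x2 comparison of anti-coxsackie B IgM positivity:
# 14/44 islet cell antibody-positive subjects vs 7/44 matched controls.
# The test choice is an assumption; consistent with the abstract, the
# two-sided p-value exceeds 0.05.
from scipy.stats import fisher_exact

table = [[14, 44 - 14],   # ICA-positive: IgM-positive, IgM-negative
         [7, 44 - 7]]     # controls:     IgM-positive, IgM-negative
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```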

    Systematic review identifies the design and methodological conduct of studies on machine learning-based prediction models

    Background and Objectives We sought to summarize the study designs, modelling strategies, and performance measures reported in studies on clinical prediction models developed using machine learning techniques. Methods We searched PubMed for articles published between 01/01/2018 and 31/12/2019 describing the development, or the development with external validation, of a multivariable prediction model using any supervised machine learning technique. No restrictions were made based on study design, data source, or predicted patient-related health outcomes. Results We included 152 studies: 58 (38.2% [95% CI 30.8–46.1]) were diagnostic and 94 (61.8% [95% CI 53.9–69.2]) were prognostic studies. Most studies reported only the development of prediction models (n = 133, 87.5% [95% CI 81.3–91.8]), focused on binary outcomes (n = 131, 86.2% [95% CI 79.8–90.8]), and did not report a sample size calculation (n = 125, 82.2% [95% CI 75.4–87.5]). The most commonly used algorithms were support vector machines (n = 86/522, 16.5% [95% CI 13.5–19.9]) and random forests (n = 73/522, 14% [95% CI 11.3–17.2]). Values for the area under the receiver operating characteristic curve ranged from 0.45 to 1.00. Calibration metrics were often missing (n = 494/522, 94.6% [95% CI 92.4–96.3]). Conclusion Our review revealed that greater focus is required on the handling of missing values, methods for internal validation, and reporting of calibration to improve the methodological conduct of studies on machine learning-based prediction models. Systematic review registration: PROSPERO, CRD42019161764.
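To make the discrimination-versus-calibration point concrete, the sketch below computes an AUC together with a simple calibration summary (intercept and slope from a logistic recalibration model) on simulated predictions; the data and the recalibration approach are illustrative choices, not taken from any of the reviewed studies.

```python
# Discrimination (AUC) plus a simple calibration summary (calibration
# intercept and slope from a logistic recalibration model) for simulated
# predicted risks; illustrative only, not tied to any reviewed study.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
true_lp = rng.normal(0, 1.2, size=n)                # true linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))     # observed binary outcome
pred = 1 / (1 + np.exp(-(0.3 + 0.7 * true_lp)))     # miscalibrated predictions

auc = roc_auc_score(y, pred)

# Regress the outcome on the logit of the predictions: ideally the
# intercept is 0 and the slope is 1.
logit_pred = np.log(pred / (1 - pred))
recal = sm.Logit(y, sm.add_constant(logit_pred)).fit(disp=0)
intercept, slope = recal.params

print(f"AUC = {auc:.2f}, calibration intercept = {intercept:.2f}, "
      f"slope = {slope:.2f}")
```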

    Clinical implications of having reduced mid forced expiratory flow rates (FEF25-75), independently of FEV1, in adult patients with asthma

    INTRODUCTION: FEF25-75 is one of the standard results provided in spirometry reports; however, in adult asthmatics there is limited information on how this physiological measure relates to clinical or biological outcomes independently of FEV1 or the FEV1/FVC ratio. PURPOSE: To determine the association of Hankinson's percent-predicted FEF25-75 (FEF25-75%) levels with healthcare utilization, respiratory symptom frequency, and biomarkers of distal airway inflammation. METHODS: In participants enrolled in the Severe Asthma Research Program 1-2, we compared outcomes across FEF25-75% quartiles. Multivariable analyses were performed to control for confounding by demographic characteristics, FEV1, and the FEV1/FVC ratio. In a sensitivity analysis, we also compared outcomes across participants with FEF25-75% below the lower limit of normal (LLN) and FEV1/FVC above the LLN. RESULTS: Subjects in the lowest FEF25-75% quartile had greater rates of healthcare utilization and higher exhaled nitric oxide and sputum eosinophils. In multivariable analysis, being in the lowest FEF25-75% quartile remained significantly associated with nocturnal symptoms (OR 3.0 [95% CI 1.3–6.9]), persistent symptoms (OR 3.3 [95% CI 1–11]), ICU admission for asthma (OR 3.7 [95% CI 1.3–10.8]), and blood eosinophil % (0.18 [0.07, 0.29]). In the sensitivity analysis, those with FEF25-75% below the LLN had significantly more nocturnal and persistent symptoms, more emergency room visits, higher serum eosinophil levels, and increased methacholine responsiveness. CONCLUSIONS: After controlling for demographic variables, FEV1, and FEV1/FVC, a reduced FEF25-75% is independently associated with previous ICU admission, persistent symptoms, nocturnal symptoms, blood eosinophilia, and bronchial hyperreactivity. This suggests that in some asthmatics a reduced FEF25-75% is an independent biomarker of more severe asthma.
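A minimal sketch of the kind of multivariable analysis described: an adjusted odds ratio for the lowest FEF25-75% quartile from a logistic regression that also includes FEV1 and FEV1/FVC. The variable names, simulated data, and model specification are assumptions for illustration, not the SARP analysis.

```python
# Illustrative adjusted odds ratio: nocturnal symptoms regressed on an
# indicator for the lowest FEF25-75% quartile, adjusting for FEV1 %predicted
# and FEV1/FVC. Simulated data and variable names are assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "fef2575_pct": rng.normal(70, 25, n),
    "fev1_pct": rng.normal(80, 18, n),
    "fev1_fvc": rng.normal(0.72, 0.08, n),
})
df["lowest_quartile"] = (df["fef2575_pct"] <
                         df["fef2575_pct"].quantile(0.25)).astype(int)
# Simulate a symptom outcome that depends on quartile membership and FEV1.
logit_p = -1.5 + 1.0 * df["lowest_quartile"] - 0.01 * df["fev1_pct"]
df["nocturnal_symptoms"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("nocturnal_symptoms ~ lowest_quartile + fev1_pct + fev1_fvc",
                data=df).fit(disp=0)
adjusted_or = np.exp(fit.params["lowest_quartile"])
ci_low, ci_high = np.exp(fit.conf_int().loc["lowest_quartile"])
print(f"adjusted OR = {adjusted_or:.2f} [95% CI {ci_low:.2f}-{ci_high:.2f}]")
```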

    Feasibility, acceptability, and cost of tuberculosis testing by whole-blood interferon-gamma assay

    BACKGROUND: The whole-blood interferon-gamma release assay (IGRA) is recommended in some settings as an alternative to the tuberculin skin test (TST). Outcomes from field implementation of the IGRA for routine tuberculosis (TB) testing have not been reported. We evaluated feasibility, acceptability, and costs after 1.5 years of IGRA use in San Francisco under routine program conditions. METHODS: Patients seen at six community clinics serving homeless, immigrant, or injection-drug user (IDU) populations were routinely offered IGRA (QuantiFERON-TB). Per guidelines, we excluded patients who were <17 years old, HIV-infected, immunocompromised, or pregnant. We reviewed medical records for IGRA results and completion of medical evaluation for TB, and at two clinics reviewed TB screening logs for instances of IGRA refusal or phlebotomy failure. RESULTS: Between November 1, 2003 and February 28, 2005, 4143 persons were evaluated by IGRA. 225 (5%) specimens were not tested, and 89 (2%) were IGRA-indeterminate. Positive or negative IGRA results were available for 3829 (92%). Of 819 patients with positive IGRA results, 524 (64%) completed diagnostic evaluation within 30 days of their IGRA test date. Among 503 patients eligible for IGRA testing at two clinics, phlebotomy was refused by 33 (7%) and failed in 40 (8%). Including phlebotomy, laboratory, and personnel costs, IGRA use cost $33.67 per patient tested. CONCLUSION: IGRA implementation in a routine TB control program setting was feasible and acceptable among homeless, IDU, and immigrant patients in San Francisco, with results available more frequently than historically described for the TST. Laboratory-based diagnosis and surveillance for M. tuberculosis infection are now possible.

    Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees

    The spread of infectious diseases crucially depends on the pattern of contacts among individuals. Knowledge of these patterns is thus essential to inform models and computational efforts. However, few empirical studies are available that provide estimates of the number and duration of contacts among social groups. Moreover, their spatial and temporal resolution is limited, so that the data are not explicit at the person-to-person level and the dynamical aspect of the contacts is disregarded. Here, we assess the role of data-driven dynamic contact patterns among individuals, and in particular of their temporal aspects, in shaping the spread of a simulated epidemic in the population. We consider high-resolution data on face-to-face interactions between the attendees of a conference, obtained from the deployment of an infrastructure based on Radio Frequency Identification (RFID) devices that assess mutual face-to-face proximity. The spread of epidemics along these interactions is simulated with an SEIR model, using both the dynamical network of contacts defined by the collected data and two aggregated versions of this network, in order to assess the role of the temporal aspects of the data. We show that, on the timescales considered, an aggregated network that takes into account the daily duration of contacts is a good approximation to the full-resolution network, whereas a homogeneous representation that retains only the topology of the contact network fails to reproduce the size of the epidemic. These results have important implications for understanding the level of detail needed to correctly inform computational models for the study and management of real epidemics.
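A minimal sketch of the simulation approach described: a stochastic SEIR process run on a daily-aggregated weighted contact network, in which each edge's cumulative daily contact time scales the transmission probability. The toy contact list, rates, and stage durations are illustrative assumptions, not the study's RFID data or parameters.

```python
# Minimal stochastic SEIR on a daily-aggregated weighted contact network.
# Each edge weight is the cumulative face-to-face contact time (seconds/day)
# and scales the per-day transmission probability. Contacts, rates, and
# durations below are illustrative assumptions, not the study's data.
import random
from collections import defaultdict

random.seed(42)

# (person_a, person_b, seconds_of_contact_per_day) -- toy data
daily_contacts = [(0, 1, 1200), (1, 2, 300), (2, 3, 900),
                  (3, 4, 60), (0, 4, 600), (1, 4, 150)]

beta_per_second = 3e-4   # transmission probability per second of contact
incubation_days = 2      # fixed E -> I delay (toy assumption)
infectious_days = 4      # fixed I -> R delay (toy assumption)

neighbors = defaultdict(list)
for a, b, seconds in daily_contacts:
    neighbors[a].append((b, seconds))
    neighbors[b].append((a, seconds))

state = {node: "S" for node in neighbors}
days_in_state = {node: 0 for node in neighbors}
state[0] = "I"  # seed one infectious attendee

for day in range(30):
    # Transmission along weighted edges from infectious to susceptible nodes.
    newly_exposed = set()
    for node, st in state.items():
        if st != "I":
            continue
        for other, seconds in neighbors[node]:
            if state[other] == "S":
                p_transmit = 1 - (1 - beta_per_second) ** seconds
                if random.random() < p_transmit:
                    newly_exposed.add(other)
    # Stage progression E -> I -> R after fixed waiting times.
    for node in state:
        days_in_state[node] += 1
        if state[node] == "E" and days_in_state[node] >= incubation_days:
            state[node], days_in_state[node] = "I", 0
        elif state[node] == "I" and days_in_state[node] >= infectious_days:
            state[node], days_in_state[node] = "R", 0
    for node in newly_exposed:
        state[node], days_in_state[node] = "E", 0

print("final size (ever infected):",
      sum(1 for st in state.values() if st != "S"))
```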