    Hospitalizations due to rotavirus gastroenteritis in Catalonia, Spain, 2003-2008

    BACKGROUND: Rotavirus is the most common cause of severe gastroenteritis among young children in Spain and worldwide. We evaluated hospitalizations due to community- and hospital-acquired rotavirus gastroenteritis (RVGE) and estimated related costs in children under 5 years old in Catalonia, Spain. RESULTS: We analyzed hospital discharge data from the Catalan Health Services regarding hospital admissions coded as infectious gastroenteritis in children under 5 for the period 2003-2008. To estimate admission incidence, we used population estimates for each study year published by the Statistical Institute of Catalonia (Idescat). The costs associated with hospital admissions due to rotavirus diarrhea were estimated for the same years. A decision tree model was used to estimate the threshold cost of rotavirus vaccine at which vaccination would achieve cost savings from the healthcare system perspective in Catalonia. From 2003 through 2008, 10,655 children under 5 years old were admitted with infectious gastroenteritis (IGE). Twenty-two percent of these admissions were coded as RVGE, yielding an estimated average annual incidence of 104 RVGE hospitalizations per 100,000 children in Catalonia. Eighty-seven percent of admissions for RVGE occurred from December through March. The mean hospital stay was 3.7 days, 0.6 days longer than for other IGE. An additional 892 cases of presumed nosocomial RVGE were detected, yielding an incidence of 2.5 cases per 1,000 child admissions. Total hospitalization costs due to community-acquired RVGE for the years 2003 and 2008 were 431,593 € and 809,224 €, respectively. According to the estimated incidence and hospitalization costs, immunization would result in health system cost savings if the cost of the vaccine were 1.93 € or less. At a vaccine cost of 187 €, the incremental cost per hospitalization prevented was 195,388 € (95% CI: 159,300 to 238,400). CONCLUSIONS: The burden of hospitalizations attributable to rotavirus appeared to be lower in Catalonia than in other regions of Spain and Europe. The relatively low incidence of hospitalization due to rotavirus makes rotavirus vaccination less cost-effective in Catalonia than in other areas with a higher rotavirus disease burden.
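    The abstract does not specify the inputs of the decision tree model, but the break-even (threshold) calculation it describes can be sketched as follows. This is a minimal illustration only: the per-case hospitalization cost, vaccine efficacy, and dosing schedule below are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of a break-even vaccine price calculation from the healthcare
# payer perspective. Parameter values in the example are hypothetical.

def threshold_vaccine_price(hospitalizations_per_100k, cost_per_hospitalization,
                            vaccine_efficacy, doses_per_child):
    """Price per dose at which vaccination cost equals averted hospital costs
    for one vaccinated child (break-even from the payer perspective)."""
    risk_per_child = hospitalizations_per_100k / 100_000
    averted_cost = risk_per_child * vaccine_efficacy * cost_per_hospitalization
    return averted_cost / doses_per_child


if __name__ == "__main__":
    price = threshold_vaccine_price(
        hospitalizations_per_100k=104,   # incidence reported in the abstract
        cost_per_hospitalization=1500,   # placeholder cost in euros
        vaccine_efficacy=0.9,            # placeholder efficacy vs. hospitalization
        doses_per_child=2,               # placeholder dosing schedule
    )
    print(f"Break-even price per dose: {price:.2f} EUR")
```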

    Novel multiple sclerosis susceptibility loci implicated in epigenetic regulation

    We conducted a genome-wide association study (GWAS) on multiple sclerosis (MS) susceptibility in German cohorts comprising 4,888 cases and 10,395 controls. In addition to associations within the major histocompatibility complex (MHC) region, 15 non-MHC loci reached genome-wide significance. Four of these are novel MS susceptibility loci, mapping to the genes L3MBTL3, MAZ, ERG, and SHMT1. The lead variant at SHMT1 was replicated in an independent Sardinian cohort. Products of the genes L3MBTL3, MAZ, and ERG play important roles in immune cell regulation. SHMT1 encodes a serine hydroxymethyltransferase that catalyzes the transfer of a carbon unit into the folate cycle. This reaction is required for the regulation of methylation homeostasis, which is important for the establishment and maintenance of epigenetic signatures. Our GWAS approach in a defined population with limited genetic substructure detected associations not found in larger, more heterogeneous cohorts, thus providing new clues regarding MS pathogenesis.
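    For readers unfamiliar with the term, "genome-wide significance" refers to the conventional p < 5e-8 threshold applied to GWAS summary statistics. The sketch below shows one plausible way non-MHC hits could be selected from such statistics; the column names and the approximate MHC boundaries (GRCh37 chr6, roughly 25-34 Mb) are assumptions, not details taken from the paper.

```python
# Hypothetical selection of genome-wide significant variants outside the MHC.
import pandas as pd

GWS_THRESHOLD = 5e-8                    # conventional genome-wide significance level
MHC_CHROM, MHC_START, MHC_END = "6", 25_000_000, 34_000_000  # approximate bounds

def non_mhc_hits(summary: pd.DataFrame) -> pd.DataFrame:
    """Return genome-wide significant variants that lie outside the MHC region."""
    significant = summary[summary["p_value"] < GWS_THRESHOLD]
    in_mhc = (
        (significant["chrom"] == MHC_CHROM)
        & significant["pos"].between(MHC_START, MHC_END)
    )
    return significant[~in_mhc]


if __name__ == "__main__":
    # Tiny made-up example, not data from the study
    demo = pd.DataFrame({
        "snp": ["rs1", "rs2", "rs3"],
        "chrom": ["6", "6", "17"],
        "pos": [30_000_000, 50_000_000, 18_000_000],
        "p_value": [1e-20, 3e-9, 4e-8],
    })
    print(non_mhc_hits(demo))
```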

    Clinical implications of serum neurofilament in newly diagnosed MS patients: a longitudinal multicentre cohort study

    BACKGROUND: We aim to evaluate serum neurofilament light chain (sNfL), a marker of neuroaxonal damage, as a biomarker at diagnosis in a large cohort of early multiple sclerosis (MS) patients. METHODS: In a multicentre prospective longitudinal observational cohort, patients with newly diagnosed relapsing-remitting MS (RRMS) or clinically isolated syndrome (CIS) were recruited between August 2010 and November 2015 at 22 centres. Clinical parameters, MRI, and sNfL levels (measured by single molecule array) were assessed at baseline and up to four years of follow-up. FINDINGS: Of 814 patients, 54.7% (445) were diagnosed with RRMS and 45.3% (369) with CIS when applying the 2010 McDonald criteria (RRMS[2010] and CIS[2010]). After reclassification of CIS[2010] patients with existing CSF analysis according to the 2017 criteria, sNfL levels were lower in CIS[2017] than in RRMS[2017] patients (9.1 pg/ml, IQR 6.2-13.7 pg/ml, n = 45 vs. 10.8 pg/ml, IQR 7.4-20.1 pg/ml, n = 213; p = 0.036). sNfL levels correlated with the number of T2 and Gd+ lesions at baseline and with future clinical relapses. Patients receiving disease-modifying therapy (DMT) during the first four years had higher baseline sNfL levels than DMT-naïve patients (11.8 pg/ml, IQR 7.5-20.7 pg/ml, n = 726 vs. 9.7 pg/ml, IQR 6.4-15.3 pg/ml, n = 88). Therapy escalation decisions within this period were reflected by longitudinal changes in sNfL levels. INTERPRETATION: Assessment of sNfL increases diagnostic accuracy, is associated with disease course prognosis, and may, particularly when measured longitudinally, facilitate therapeutic decisions.
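    The abstract reports group medians, IQRs and a p-value, which is consistent with a nonparametric two-group comparison. The sketch below illustrates such a comparison; the choice of a Mann-Whitney U test, the variable names, and the simulated values are assumptions for illustration, not the study's actual analysis or data.

```python
# Illustrative comparison of sNfL levels between two diagnostic groups.
import numpy as np
from scipy.stats import mannwhitneyu

def summarize(levels: np.ndarray) -> str:
    """Median and interquartile range, formatted like the abstract's figures."""
    median = np.median(levels)
    q1, q3 = np.percentile(levels, [25, 75])
    return f"{median:.1f} pg/ml (IQR {q1:.1f}-{q3:.1f})"

# Simulated example data (pg/ml); group sizes mirror the abstract, values do not
rng = np.random.default_rng(0)
cis_2017 = rng.lognormal(mean=2.2, sigma=0.4, size=45)
rrms_2017 = rng.lognormal(mean=2.4, sigma=0.5, size=213)

stat, p = mannwhitneyu(cis_2017, rrms_2017, alternative="two-sided")
print("CIS[2017]: ", summarize(cis_2017))
print("RRMS[2017]:", summarize(rrms_2017))
print(f"Mann-Whitney U p-value: {p:.3f}")
```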

    Accounting for training data error in machine learning applied to earth observations

    Remote sensing, or Earth Observation (EO), is increasingly used to understand Earth system dynamics and create continuous and categorical maps of biophysical properties and land cover, especially based on recent advances in machine learning (ML). ML models typically require large, spatially explicit training datasets to make accurate predictions. Training data (TD) are typically generated by digitizing polygons on high spatial-resolution imagery, by collecting in situ data, or by using pre-existing datasets. TD are often assumed to accurately represent the truth, but in practice almost always contain error, stemming from (1) sample design and (2) sample collection errors. The latter is particularly relevant for image-interpreted TD, an increasingly common method given its practicality and the growing training sample size requirements of modern ML algorithms. TD errors can cause substantial errors in the maps created using ML algorithms, which may impact map use and interpretation. Despite these potential errors and their real-world consequences for map-based decisions, TD error is often not accounted for or reported in EO research. Here we review the current practices for collecting and handling TD. We identify the sources of TD error, illustrate their impacts using several case studies representing different EO applications (infrastructure mapping, global surface flux estimates, and agricultural monitoring), and provide guidelines for minimizing and accounting for TD errors. To harmonize terminology, we distinguish TD from three other classes of data that should be used to create and assess ML models: training reference data, used to assess the quality of TD during data generation; validation data, used to iteratively improve models; and map reference data, used only for final accuracy assessment. We focus primarily on TD, but our advice is generally applicable to all four classes, and we ground our review in the established best-practice literature on map accuracy assessment. EO researchers should start by determining the tolerable levels of map error and appropriate error metrics. Next, TD error should be minimized during sample design by choosing a representative spatio-temporal collection strategy, by using spatially and temporally relevant imagery and ancillary data sources during TD creation, and by selecting a set of legend definitions supported by the data. Furthermore, TD error can be minimized during the collection of individual samples by using consensus-based collection strategies, by directly comparing interpreted training observations against expert-generated training reference data to derive TD error metrics, and by providing image interpreters with thorough application-specific training. We strongly advise that TD error be incorporated in model outputs, either directly in bias and variance estimates or, at a minimum, by documenting the sources and implications of error. TD should be fully documented and made available via an open TD repository, allowing others to replicate and assess its use. To guide researchers in this process, we propose three tiers of TD error accounting standards. Finally, we advise researchers to clearly communicate the magnitude and impacts of TD error on map outputs, with specific consideration given to the likely map audience.
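    One of the recommendations above is to derive TD error metrics by comparing interpreted training observations against expert-generated training reference data. A minimal sketch of such a comparison is shown below; the class labels and the specific metrics (overall agreement and per-class commission error) are illustrative choices, not prescriptions from the paper.

```python
# Sketch: derive training-data (TD) error metrics by comparing image-interpreted
# labels against expert-generated training reference labels.
from collections import Counter

def td_error_metrics(interpreted: list[str], reference: list[str]) -> dict:
    """Overall agreement and per-class commission error of interpreted TD."""
    assert len(interpreted) == len(reference)
    n = len(reference)
    agree = sum(i == r for i, r in zip(interpreted, reference))
    # Commission error per class: fraction of samples the interpreters labelled
    # as that class which the reference data assign to a different class.
    labelled = Counter(interpreted)
    wrong = Counter(i for i, r in zip(interpreted, reference) if i != r)
    commission = {cls: wrong.get(cls, 0) / count for cls, count in labelled.items()}
    return {"overall_agreement": agree / n, "commission_error": commission}


# Hypothetical example: cropland vs non-cropland labels for ten samples
interpreted = ["crop", "crop", "other", "crop", "other", "crop", "crop", "other", "crop", "other"]
reference   = ["crop", "other", "other", "crop", "other", "crop", "other", "other", "crop", "other"]
print(td_error_metrics(interpreted, reference))
```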

    A global reference database of crowdsourced cropland data collected using the Geo-Wiki platform

    A global reference data set on cropland was collected through a crowdsourcing campaign using the Geo-Wiki crowdsourcing tool. The campaign lasted three weeks, with over 80 participants from around the world reviewing almost 36,000 sample units, focussing on cropland identification. For quality assessment purposes, two additional data sets are provided. The first is a control set of 1,793 sample locations validated by students trained in satellite image interpretation; this set was used to assess the quality of the crowd as the campaign progressed. The second data set contains 60 expert validations for additional evaluation of the quality of the contributions. All data sets are split into two parts: the first part shows all areas classified as cropland, and the second part shows the cropland average per location and user. After further processing, the data presented here may be suitable for validating and comparing medium- and high-resolution cropland maps generated using remote sensing, and could also be used to train classification algorithms for developing new maps of land cover and cropland extent.
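    The control set described above lends itself to per-contributor quality scoring. The sketch below shows one plausible way to compute such scores; the field names (location_id, user_id, cropland) are assumptions for illustration and do not reflect the released data set's actual schema.

```python
# Illustrative scoring of crowd contributions against control (reference) labels.
import pandas as pd

def contributor_accuracy(crowd: pd.DataFrame, control: pd.DataFrame) -> pd.Series:
    """Fraction of control locations each contributor labelled in agreement
    with the control set's cropland label (hypothetical column names)."""
    merged = crowd.merge(control, on="location_id", suffixes=("_crowd", "_ref"))
    merged["agree"] = merged["cropland_crowd"] == merged["cropland_ref"]
    return merged.groupby("user_id")["agree"].mean()


# Tiny made-up example: two contributors, three control locations
crowd = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "location_id": [10, 11, 10, 11, 12],
    "cropland": [1, 0, 1, 1, 0],
})
control = pd.DataFrame({"location_id": [10, 11, 12], "cropland": [1, 0, 0]})
print(contributor_accuracy(crowd, control))
```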

    The potential of crowdsourcing and mobile technology to support flood disaster risk reduction

    The last decade has seen a rise in citizen science and crowdsourcing for carrying out a variety of tasks across a number of different fields, most notably the collection of data such as the identification of species (e.g. eBird and iNaturalist) and the classification of images (e.g. Galaxy Zoo and Geo-Wiki). Combining human computation with the proliferation of mobile technology has resulted in vast amounts of geo-located data that have considerable value across multiple domains, including flood disaster risk reduction. Crowdsourcing technologies, in the form of online mapping, are now being utilized to great effect in post-disaster mapping and relief efforts, e.g. the activities of Humanitarian OpenStreetMap, complementing official channels of relief (e.g. in Haiti, Nepal and New York). Disaster event monitoring efforts have been further complemented by the use of social media, e.g. Twitter, for earthquake detection, flood monitoring, and fire detection. Much of the activity in this area has focused on ex-post emergency management, while there is considerable potential for utilizing crowdsourcing and mobile technology for vulnerability assessment, early warning, and bolstering resilience to flood events. This paper examines the use of crowdsourcing and mobile technology for measuring and monitoring flood hazards, exposure to floods, and vulnerability, drawing upon examples from the literature and ongoing projects on flooding and food security at IIASA.

    Technologies to Support Community Flood Disaster Risk Reduction

    Floods affect more people globally than any other type of natural hazard. Great potential exists for new technologies to support flood disaster risk reduction. In addition to existing expert-based data collection and analysis, direct input from communities and citizens across the globe may also be used to monitor, validate, and reduce flood risk. New technologies have already proven effective in aiding humanitarian response and recovery. However, while such technologies are increasingly utilized ex ante to collect information on exposure, efforts directed towards assessing and monitoring hazards and vulnerability remain limited. Hazard model validation and social vulnerability assessment deserve particular attention. New technologies offer great potential for engaging people and facilitating the coproduction of knowledge.