19 research outputs found

    How Will Hydroelectric Power Generation Develop under Climate Change Scenarios?

    Climate change has a large impact on water resources and thus on hydropower. Hydroelectric power generation is closely linked to the regional hydrological situation of a watershed and reacts sensitively to changes in water quantity and seasonality. The development of hydroelectric power generation in the Upper Danube basin was modelled for two future decades, 2021-2030 and 2051-2060, using a dedicated hydropower module coupled with the physically based hydrological model PROMET. To cover a possible range of uncertainties, 16 climate scenarios were used as meteorological drivers; these were defined from different ensemble outputs of a stochastic climate generator, based on the IPCC SRES A1B emission scenario and four regional climate trends. Depending on the trend, the results show a slight to severe decline in hydroelectric power generation. While mean summer values indicate a decrease, mean winter values display an increase. To show past and future regional differences within the Upper Danube basin, three hydropower plants at individual locations were selected. Inter-annual differences originate predominantly from unequal contributions of the runoff compartments rain, snowmelt and ice melt.
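    The ensemble logic described above, where many climate scenarios drive one hydropower model and results are summarized as seasonal means, can be sketched as follows. This is a minimal illustration only: the scenario values and the `seasonal_mean` helper are invented for demonstration and are not outputs of the PROMET-coupled module.

    ```python
    from statistics import mean

    # Illustrative monthly generation (GWh) for two of the climate trends;
    # in the study, 16 scenarios from a stochastic climate generator drive
    # the hydropower module. These numbers are invented for demonstration.
    scenarios = {
        "warm_dry": [55, 52, 60, 70, 80, 75, 70, 65, 60, 58, 56, 54],
        "warm_wet": [60, 58, 65, 75, 85, 82, 78, 72, 66, 63, 61, 60],
    }

    def seasonal_mean(monthly, months):
        """Mean generation over the given 1-based month numbers."""
        return mean(monthly[m - 1] for m in months)

    SUMMER = (6, 7, 8)   # June-August
    WINTER = (12, 1, 2)  # December-February

    for name, monthly in scenarios.items():
        print(name,
              "summer:", round(seasonal_mean(monthly, SUMMER), 1),
              "winter:", round(seasonal_mean(monthly, WINTER), 1))
    ```

    Comparing the seasonal means across all ensemble members is what reveals the summer decrease and winter increase the abstract reports.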

    Multi-year mapping of water demand at crop level: An end-to-end workflow based on high-resolution crop type maps and meteorological data

    This article presents a novel system that produces multiyear high-resolution irrigation water demand maps for agricultural areas, enabling a new level of detail in irrigation support for farmers and agricultural stakeholders. The system is based on a scalable distributed deep learning (DL) model trained on dense time series of Sentinel-2 images and a large training set for the first year of observation, then fine-tuned on new labeled data for the consecutive years. The trained models are used to generate multiyear crop type maps, which are assimilated together with the Sentinel-2 dense time series and the meteorological data into a physically based agrohydrological model to derive the irrigation water demand for different crops. To process the required large volume of multiyear Copernicus Sentinel-2 data, the software architecture of the proposed system has been built on the integration of the Food Security thematic exploitation platform (TEP) and the data-intensive artificial intelligence Hopsworks platform. While the Food Security TEP provides easy access to Sentinel-2 data and the possibility of developing processing algorithms directly in the cloud, the Hopsworks platform has been used to train DL algorithms in a distributed manner. The experimental analysis was carried out in the upper part of the Danube Basin for the years 2018, 2019, and 2020, considering 37 Sentinel-2 tiles acquired in Austria, Moravia, Hungary, Slovakia, and Germany.

    From Copernicus Big Data to Extreme Earth Analytics

    Copernicus is the European programme for monitoring the Earth. It consists of a set of systems that collect data from satellites and in-situ sensors, process this data and provide users with reliable and up-to-date information on a range of environmental and security issues. The data and information processed and disseminated put Copernicus at the forefront of the big data paradigm, giving rise to all the associated challenges, the so-called five Vs: volume, velocity, variety, veracity and value. In this short paper, we discuss the challenges of extracting information and knowledge from huge archives of Copernicus data. We propose to achieve this with scale-out distributed deep learning techniques that run on very large clusters offering virtual machines and GPUs. We also discuss the challenges of achieving scalability in the management of the extreme volumes of information and knowledge extracted from Copernicus data. The envisioned scientific and technical work will be carried out in the context of the H2020 project ExtremeEarth, which starts in January 2019.

    The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics

    Schons M, Pilgram L, Reese J-P, et al. The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics. European Journal of Epidemiology. 2022. The German government initiated the Network University Medicine (NUM) in early 2020 to improve national research activities on the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic. To this end, 36 German academic medical centers started to collaborate on 13 projects, the largest being the National Pandemic Cohort Network (NAPKON). NAPKON's goal is to create the most comprehensive Coronavirus Disease 2019 (COVID-19) cohort in Germany. Within NAPKON, adult and pediatric patients are observed in three complementary cohort platforms (Cross-Sectoral, High-Resolution and Population-Based) from the initial infection until up to three years of follow-up. Study procedures comprise comprehensive clinical and imaging diagnostics, quality-of-life assessment, patient-reported outcomes and biosampling. The three cohort platforms build on four infrastructure core units (Interaction, Biosampling, Epidemiology, and Integration) and on collaborations with other NUM projects. Key components of data capture, regulatory affairs, and data privacy are based on those of the German Centre for Cardiovascular Research. By April 01, 2022, 34 university and 40 non-university hospitals had enrolled 5298 patients, with local data quality reviews performed on 4727 (89%). 47% of patients were female, the median age was 52 (IQR 36-62) and 50 pediatric cases were included. 44% of patients were hospitalized, 15% were admitted to an intensive care unit, and 12% died while enrolled. 8845 visits with biosampling in 4349 patients were conducted by April 03, 2022.
In this overview article, we summarize NAPKON's design and relevant milestones, including first study population characteristics, and outline the potential of NAPKON for German and international research activities. Trial registration: https://clinicaltrials.gov/ct2/show/NCT04768998 . https://clinicaltrials.gov/ct2/show/NCT04747366 . https://clinicaltrials.gov/ct2/show/NCT04679584.

    Differential uptake of three clinically relevant allergens by human plasmacytoid dendritic cells

    Background: Human plasmacytoid dendritic cells (pDC) have a dual role as interferon-producing and antigen-presenting cells. Their relevance for allergic diseases is controversial, and the impact of pDC on allergic immune responses is poorly understood. Methods: This in vitro study on human pDC isolated from peripheral blood was designed to compare side by side the uptake of three clinically relevant representative allergens: fluorochrome-labeled house dust mite Der p 1, bee venom extract from Apis mellifera (Api) and the food allergen OVA, analyzed by flow cytometry and confocal microscopy. Results: We found that internalization and its regulation by TLR9 ligation differed significantly between allergens in terms of time course and strength of uptake. Api and OVA uptake in pDC of healthy subjects was faster and reached higher levels than Der p 1 uptake. CpG ODN 2006 suppressed OVA uptake and, to a lesser extent, Der p 1 uptake, while Api internalization was not affected. All allergens colocalized with LAMP1 and EEA1, with Api being internalized particularly fast and reaching the highest intracellular levels in pDC. Of note, we could not determine any specific differences in antigen uptake in allergic compared with healthy subjects. Conclusions: To our knowledge, this is the first study that directly compares uptake regulation of clinically relevant inhalative, injective and food allergens in pDC. Our findings may help to explain differences in the onset and severity of allergic reactions as well as in the efficiency of allergen immunotherapy (AIT).

    Advances in Snow Hydrology Using a Combined Approach of GNSS In Situ Stations, Hydrological Modelling and Earth Observation—A Case Study in Canada

    The availability of in situ snow water equivalent (SWE), snowmelt and run-off measurements is still very limited, especially in remote areas, as the density of operational stations and field observations is often scarce and usually costly, labour-intensive and/or risky. Remote sensing products potentially provide spatially distributed information on snow, but often fail to meet the spatial or temporal requirements of hydrological applications. To ensure high spatial and temporal resolution, it is therefore often necessary to combine several methods, such as Earth Observation (EO), modelling and in situ approaches. Such a combination was targeted within the business applications demonstration project SnowSense (2015–2018), co-funded by the European Space Agency (ESA), in which we designed, developed and demonstrated an operational snow hydrological service. During the run-time of the project, the entire service was demonstrated for the island of Newfoundland, Canada. The SnowSense service, developed during the demonstration project, rests on three pillars: (i) newly developed in situ snow monitoring stations based on signals of the Global Navigation Satellite System (GNSS); (ii) EO snow cover products on the snow cover extent and on whether the snow is dry or wet; and (iii) an integrated physically based hydrological model. The key element of the service is the novel GNSS-based in situ sensor, which uses two static low-cost antennas, one mounted on the ground and the other above the snow cover. This sensor setup retrieves the snow parameters SWE and liquid water content (LWC) of the snowpack in parallel, using GNSS carrier phase measurements and signal strength information. With the combined approach of the SnowSense service, it is possible to provide spatially distributed SWE to assess run-off and to deliver relevant information for hydropower plant management at high spatial and temporal resolution.
This is particularly needed for catchments in remote areas that are so far unequipped or only sparsely equipped. We present the results and validation of (i) the GNSS in situ sensor setup for SWE and LWC measurements at the well-equipped study site Forêt Montmorency near Quebec, Canada, and (ii) the entire combined in situ, EO and modelling SnowSense service, resulting in assimilated SWE maps and run-off information for two large catchments in Newfoundland, Canada.

    Crop Water Availability Mapping in the Danube Basin Based on Deep Learning, Hydrological and Crop Growth Modelling

    The Danube Basin has been hit by several droughts in recent years. As climate change makes weather extremes and temperature records in late winter and early spring more likely, water availability and irrigation possibilities become more important. In this paper, the crop water demand at field and national scale within the Danube Basin is presented, using dense time series of multispectral Sentinel-2 data, crop type maps derived with deep learning techniques, and physically based models for crop parameter retrieval and crop growth modelling.

    A screening instrument for abuse and neglect in childhood: the Childhood Trauma Screener (CTS)

    Grabe H, Schulz A, Schmidt C, et al. Ein Screeninginstrument für Missbrauch und Vernachlässigung in der Kindheit: der Childhood Trauma Screener (CTS). Psychiatrische Praxis. 2012;39(3):109-115. Objective: The aim was to develop a time-efficient screening instrument (Childhood Trauma Screener, CTS) for assessing traumatic events in childhood and adolescence. Method: Based on a sample from the SHIP-LEGENDE study (n = 1668), the 5 items of the Childhood Trauma Questionnaire (CTQ, 28 items) that best represented the CTQ's five abuse and neglect dimensions were identified. Results: In a validation based on a clinical sample (n = 211), the 5 CTS items correlated with their corresponding CTQ dimensions at r = 0.55–0.87, and the CTS total score correlated with the CTQ total score at r = 0.88. Cronbach's α was 0.757 (n = 499). Conclusions: The CTS is a reliable and highly economical screening instrument for assessing traumatic events in childhood and adolescence.
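    The reliability figure reported above follows the standard formula α = k/(k−1) · (1 − Σs_i²/s_t²), where s_i² are the item variances and s_t² the variance of the total score. A minimal sketch with invented item scores (not SHIP-LEGENDE data):

    ```python
    from statistics import pvariance

    def cronbach_alpha(items):
        """Cronbach's alpha for a list of item-score columns.

        `items` is a list of k lists, one per item, each holding the
        scores of all respondents on that item (same length and order).
        """
        k = len(items)
        total = [sum(scores) for scores in zip(*items)]  # per-respondent sum
        item_var = sum(pvariance(col) for col in items)
        return k / (k - 1) * (1 - item_var / pvariance(total))

    # Invented responses to 5 CTS-like items (rows = items, columns = respondents)
    items = [
        [1, 2, 2, 3, 4, 5],
        [1, 1, 2, 3, 5, 5],
        [2, 2, 3, 3, 4, 4],
        [1, 2, 2, 4, 4, 5],
        [1, 1, 3, 3, 4, 5],
    ]
    print(round(cronbach_alpha(items), 3))
    ```

    With only 5 items, an α of 0.757 as reported for the CTS indicates acceptable internal consistency for a screening instrument.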