
    Towards a Reproducible Pan-European Soil Erosion Risk Assessment - RUSLE

    Soil is a valuable, non-renewable natural resource that offers a multitude of ecosystem goods and services. Given the increasing threat of soil erosion in Europe and its implications for future food security and water quality, it is important that land managers and decision makers are provided with accurate and appropriate information on the areas most prone to erosion. The present study presents an attempt to locate, at regional scale, the most sensitive areas and to highlight changes in soil erosion trends under climate change. The choice of input datasets is crucial: they have to offer the most homogeneous and complete coverage at the pan-European level and allow the produced information to be harmonized and easily validated. The model is based on available datasets (HWSD, SGDBE, SRTM, CLC and E-OBS), and the Revised Universal Soil Loss Equation (RUSLE) is used because of its flexibility and low data requirements. A significant effort has been made to select the best simplified equations to use when a strict application of the RUSLE model was not possible. In particular, for the computation of the rainfall erosivity factor, a validation based on measured precipitation time series (with a temporal resolution of 10-15 minutes) has been implemented so as to be easily reproducible. The validation computational framework is available as free software. Designing the computational modelling architecture so as to ease future reuse of the model for analysing climate change scenarios has also been a challenging goal of the research.
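    As a concrete illustration of the modelling step described above, the minimal Python sketch below applies the standard RUSLE multiplicative formula A = R * K * LS * C * P on a per-pixel basis. The factor values, grid and function name are illustrative placeholders, not the study's actual datasets or code.

        import numpy as np

        def rusle_soil_loss(R, K, LS, C, P):
            """Per-pixel RUSLE estimate: A = R * K * LS * C * P.

            A  - mean annual soil loss (t ha^-1 yr^-1)
            R  - rainfall erosivity, K - soil erodibility, LS - slope
            length/steepness, C - cover management, P - support practice.
            All inputs are assumed to be 2-D arrays on one common grid
            (the study harmonizes heterogeneous datasets beforehand).
            """
            shapes = {f.shape for f in (R, K, LS, C, P)}
            if len(shapes) != 1:  # precondition: factors share one grid
                raise ValueError("factor rasters must share one grid")
            return R * K * LS * C * P

        # toy 2x2 example with made-up factor values
        R  = np.array([[800.0, 950.0], [700.0, 650.0]])
        K  = np.full((2, 2), 0.03)
        LS = np.array([[1.2, 2.5], [0.8, 1.1]])
        C  = np.full((2, 2), 0.2)
        P  = np.ones((2, 2))
        print(rusle_soil_loss(R, K, LS, C, P))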

    Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    [Excerpt] Interfacing science and policy raises challenging issues when large spatial-scale (regional, continental, global) environmental problems need transdisciplinary integration within a context of modelling complexity and multiple sources of uncertainty. This is characteristic of science-based support for environmental policy at the European scale, and key aspects have also long been investigated by European Commission transnational research. Approaches (either of computational science or of policy-making) suitable at a given domain-specific scale may not be appropriate for wide-scale transdisciplinary modelling for environment (WSTMe) and corresponding policy-making. In WSTMe, the characteristic heterogeneity of available spatial information and the complexity of the required data-transformation modelling (D-TM) call for a paradigm shift in how computational science supports such peculiarly extensive integration processes. In particular, emerging wide-scale integration requirements of typical currently available domain-specific modelling strategies may include increased robustness and scalability along with enhanced transparency and reproducibility. This challenging shift toward open data and reproducible research (open science) is also strongly motivated by the potentially huge - and sometimes neglected - impact of cascading errors within the impressively growing interconnection among domain-specific computational models and frameworks. Concise array-based mathematical formulation and implementation (with array programming tools) have proved helpful in mitigating the complexity of WSTMe when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP), where semantic transparency also implies free software use (although black boxes - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools, called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks. GeoSemAP allows intermediate data and information layers to be more easily and formally semantically described so as to increase the fault-tolerance, transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, often difficult to transfer from technical WSTMe to the science-policy interface. [...]
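    The precondition/invariant/postcondition idea can be hinted at with a minimal Python sketch, not the authors' GeoSemAP implementation: a single illustrative D-TM block (a slope unit conversion, chosen only as an example) wrapped with lightweight array-based semantic checks. All names and checks are assumptions of this sketch.

        import numpy as np

        def dtm_block(slope_deg):
            """Illustrative D-TM block: convert slope from degrees to percent,
            wrapped with precondition, invariant and postcondition checks."""
            # precondition: no-data already masked, values physically plausible
            assert np.all(np.isfinite(slope_deg)), "mask no-data first"
            assert np.all((slope_deg >= 0) & (slope_deg < 90)), "slope in [0, 90) deg"

            slope_pct = np.tan(np.radians(slope_deg)) * 100.0

            # invariant: the grid geometry is unchanged by the transformation
            assert slope_pct.shape == slope_deg.shape

            # postcondition: output is a non-negative percent slope
            assert np.all(slope_pct >= 0)
            return slope_pct

        print(dtm_block(np.array([[0.0, 10.0], [30.0, 45.0]])))

    Composing several such checked blocks is the sense in which a complex WSTMe can be described as a chain of simpler, individually verifiable GeoSemAP blocks.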

    Free and Open Source Software underpinning the European Forest Data Centre

    Worldwide, governments are increasingly focusing on free and open source software (FOSS) as a move toward transparency and the freedom to run, copy, study, change and improve the software. The European Commission (EC) is also supporting the development of FOSS [...]. In addition to the financial savings, FOSS contributes to scientific knowledge freedom in computational science (CS) and is increasingly rewarded in the science-policy interface within the emerging paradigm of open science. Since complex computational science applications may be affected by software uncertainty, FOSS may help to mitigate part of the impact of software errors through CS community-driven open review, correction and evolution of scientific code. The continental scale of EC science-based policy support implies wide networks of scientific collaboration. Thematic information systems may also benefit from this approach within reproducible integrated modelling. This is supported by the EC strategy on FOSS: "for the development of new information systems, where deployment is foreseen by parties outside of the EC infrastructure, [F]OSS will be the preferred choice and in any case used whenever possible". The aim of this contribution is to highlight how a continental-scale information system may exploit and integrate FOSS technologies within the transdisciplinary research underpinning such a complex system. A European example is discussed where FOSS innervates both the structure of the information system itself and the inherent transdisciplinary research for modelling the data and information which constitute the system content. [...]

    Assessing the potential distribution of insect pests: case studies on large pine weevil (Hylobius abietis L.) and horse-chestnut leaf miner (Cameraria ohridella) under present and future climate conditions in European forests

    Forest insect pests represent a serious threat to European forests, and their negative effects could be exacerbated by climate change. This paper illustrates how species distribution modelling integrated with host tree species distribution data can be used to assess forest vulnerability to this threat. Two case studies are used, both at the pan-European level: large pine weevil (Hylobius abietis L.) and horse-chestnut leaf miner (Cameraria ohridella Deschka & Dimic). The proposed approach integrates information from different sources. Occurrence data of insect pests were collected from the Global Biodiversity Information Facility (GBIF); climatic variables for present climate and future scenarios were sourced, respectively, from WorldClim and from the Research Program on Climate Change, Agriculture and Food Security (CCAFS); and distributional data of host tree species were obtained from the European Forest Data Centre (EFDAC), within the Forest Information System for Europe (FISE). The potential habitat of the target pests was calculated using the Maxent machine learning model. On the one hand, the results highlight the potential of species distribution modelling as a valuable tool for decision makers. On the other hand, they stress how this approach can be limited by poor pest data availability, emphasizing the need to establish a harmonised, open European database of geo-referenced insect pest distribution data.
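    The presence-background workflow described above can be sketched in Python. Note this is not the Maxent algorithm itself (Maxent is typically run through the MaxEnt software or dedicated R packages); a logistic-regression classifier is used here only as a stand-in, and all occurrence and climate values are synthetic.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # rows = locations, columns = bioclimatic predictors (WorldClim-style)
        presence_env   = rng.normal(1.0, 0.5, size=(200, 4))   # GBIF-style occurrences
        background_env = rng.normal(0.0, 1.0, size=(2000, 4))  # random background sample

        X = np.vstack([presence_env, background_env])
        y = np.concatenate([np.ones(len(presence_env)), np.zeros(len(background_env))])

        # presence vs. background classifier as a Maxent stand-in
        model = LogisticRegression(max_iter=1000).fit(X, y)

        # habitat suitability for new grid cells (hypothetical climate values)
        grid_env = rng.normal(0.5, 1.0, size=(5, 4))
        suitability = model.predict_proba(grid_env)[:, 1]
        print(suitability)

    In the workflow the abstract describes, the predicted suitability would then be intersected with the host tree distribution layers from EFDAC; that step is omitted here.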

    dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter

    Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the resources available within the boundary of the mainboard, leading to spare resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dReDBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes next-generation, low-power, across-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency and high-throughput software-defined optical network. To evaluate its novel approach, dReDBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dReDBox, contract #687632.
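    The fragmentation limitation in point (b) can be illustrated with a minimal Python sketch contrasting mainboard-bound allocation with pooled, disaggregated allocation; the data structures and capacities are invented for illustration and do not describe the dReDBox design itself.

        from dataclasses import dataclass

        @dataclass
        class Request:
            cpus: int
            mem_gb: int

        # Conventional rack: a VM must fit inside one mainboard's resources.
        servers = [{"cpus": 16, "mem_gb": 64}, {"cpus": 16, "mem_gb": 64}]

        def fits_on_one_server(req):
            return any(s["cpus"] >= req.cpus and s["mem_gb"] >= req.mem_gb
                       for s in servers)

        # Disaggregated rack: compute and memory bricks are pooled and
        # wired to a VM on demand.
        pool = {"cpus": sum(s["cpus"] for s in servers),
                "mem_gb": sum(s["mem_gb"] for s in servers)}

        def fits_in_pool(req):
            return pool["cpus"] >= req.cpus and pool["mem_gb"] >= req.mem_gb

        big_vm = Request(cpus=8, mem_gb=96)   # more memory than any single board
        print(fits_on_one_server(big_vm))     # False: spare resources are fragmented
        print(fits_in_pool(big_vm))           # True: pooled resources suffice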

    A specific scoliosis classification correlating with brace treatment: description and reliability

    Background: Spinal classification systems for scoliosis which were developed to correlate with surgical treatment have historically been used in brace treatment as well. Previously, there had not been a scoliosis classification system developed specifically to correlate with brace design and treatment. The purpose of this study is to show the intra- and inter-observer reliability of a new scoliosis classification system correlating with brace treatment. Methods: An original classification system ("Rigo Classification") was developed in order to define specific principles of correction required for efficacious brace design and fabrication. The classification includes radiological as well as clinical criteria. The radiological criteria are utilized to differentiate five basic types of curvatures: (I) imbalanced thoracic (or three-curve pattern), (II) true double (or four-curve pattern), (III) balanced thoracic and false double (non 3 non 4), (IV) single lumbar and (V) single thoracolumbar. In addition to the radiological criteria, the Rigo Classification incorporates the curve pattern according to SRS terminology, the balance/imbalance at the transitional point, and L4-5 counter-tilting. To test the intra- and inter-observer reliability of the Rigo Classification, three observers (1 MD, 1 PT and 1 CPO) measured (and one of them, the MD, re-measured) 51 AP radiographs including all curvature types. Results: The intra-observer Kappa value was 0.87 (acceptance >0.70). The inter-observer Kappa values fluctuated from 0.61 to 0.81 with an average of 0.71 (acceptance >0.70). Conclusions: A specific scoliosis classification which correlates with brace treatment has been proposed with acceptable intra- and inter-observer reliability.
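    The reported agreement statistic is Cohen's kappa, which corrects raw agreement for chance. A short Python sketch shows how such a value can be computed from two raters' classifications; the labels below are synthetic and are not the study's data.

        from sklearn.metrics import cohen_kappa_score

        # two observers assigning curve types I-V to ten radiographs (toy data)
        observer_a = ["I", "II", "III", "III", "IV", "V", "I", "II", "III", "IV"]
        observer_b = ["I", "II", "III", "IV",  "IV", "V", "I", "II", "III", "IV"]

        kappa = cohen_kappa_score(observer_a, observer_b)  # chance-corrected agreement
        print(round(kappa, 2))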

    Entanglement purification of unknown quantum states

    A concern has been expressed that "the Jaynes principle can produce fake entanglement" [R. Horodecki et al., Phys. Rev. A 59, 1799 (1999)]. In this paper we discuss the general problem of distilling maximally entangled states from $N$ copies of a bipartite quantum system about which only partial information is known, for instance in the form of a given expectation value. We point out that there is indeed a problem with applying the Jaynes principle of maximum entropy to more than one copy of a system, but the nature of this problem is classical and was discussed extensively by Jaynes. Under the additional assumption that the state $\rho^{(N)}$ of the $N$ copies of the quantum system is exchangeable, one can write down a simple general expression for $\rho^{(N)}$. We show how to modify two standard entanglement purification protocols, one-way hashing and recurrence, so that they can be applied to exchangeable states. We thus give an explicit algorithm for distilling entanglement from an unknown or partially known quantum state.
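    For reference, (infinitely) exchangeable states admit a de Finetti-style representation of the kind the abstract alludes to; the form below is the standard one and may differ from the paper's exact expression:

        \rho^{(N)} = \int d\mu(\sigma)\, \sigma^{\otimes N},

    where $\mu$ is a probability measure over single-copy density operators $\sigma$.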

    An overview of the Ciao multiparadigm language and program development environment and its design philosophy

    We describe some of the novel aspects and motivations behind the design and implementation of the Ciao multiparadigm programming system. An important aspect of Ciao is that it provides the programmer with a large number of useful features from different programming paradigms and styles, and that the use of each of these features can be turned on and off at will for each program module. Thus, a given module may be using, e.g., higher-order functions and constraints, while another module may be using objects, predicates, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of program optimizations. Such optimizations produce code that is highly competitive with that of other dynamic languages or, when the highest levels of optimization are used, even with that of static languages, all while retaining the interactive development environment of a dynamic language. The environment also includes a powerful auto-documenter. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in the format of a paper, pointing instead to the existing literature on the system.
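    The assertion-language idea can be hinted at with a small Python analogy. Ciao's own assertions are Prolog-syntax declarations checked by the CiaoPP preprocessor; the decorator below is only an illustrative runtime stand-in for specifications that tooling could also try to verify, not Ciao code.

        def pred(pre, post):
            """Attach a precondition and postcondition to a function,
            loosely mimicking an assertion-language annotation."""
            def wrap(fn):
                def checked(*args):
                    assert pre(*args), f"{fn.__name__}: precondition violated"
                    result = fn(*args)
                    assert post(*args, result), f"{fn.__name__}: postcondition violated"
                    return result
                return checked
            return wrap

        @pred(pre=lambda xs: all(isinstance(x, (int, float)) for x in xs),
              post=lambda xs, ys: ys == sorted(xs))
        def qsort(xs):
            if len(xs) <= 1:
                return list(xs)
            pivot, rest = xs[0], xs[1:]
            return qsort([x for x in rest if x < pivot]) + [pivot] + \
                   qsort([x for x in rest if x >= pivot])

        print(qsort([3, 1, 2]))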