
    Two-stage Bayesian model to evaluate the effect of air pollution on chronic respiratory diseases using drug prescriptions

    Exposure to high levels of air pollutant concentration is known to be associated with respiratory problems, which can translate into higher morbidity and mortality rates. The link between air pollution and population health has mainly been assessed using air quality and hospitalisation or mortality data. However, this approach limits the analysis to individuals with severe conditions. In this paper we evaluate the link between air pollution and respiratory diseases using general practice drug prescriptions for chronic respiratory diseases, which allows conclusions to be drawn about the general population. We propose a two-stage statistical approach: in the first stage we specify a space-time model to estimate the monthly NO2 concentration, integrating several data sources characterised by different spatio-temporal resolutions; in the second stage we link the concentration to the β2-agonists prescribed monthly by general practices in England and model the prescription rates through a small-area approach.
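
    A schematic of the two-stage structure described in this abstract can be written as follows; the notation is illustrative and not the paper's exact specification:

    ```latex
    % Stage 1: space-time model for monthly NO2, fusing data sources
    % with different spatio-temporal resolutions
    y_{st} = \beta_0 + x_{st}^\top \beta + \omega_{st} + \varepsilon_{st},
    \qquad \omega_{st}\ \text{a spatio-temporal Gaussian process}

    % Stage 2: small-area model for beta2-agonist prescription counts,
    % with the stage-1 estimated concentration entering as the exposure
    O_{it} \sim \mathrm{Poisson}(E_{it}\,\rho_{it}), \qquad
    \log \rho_{it} = \alpha + \gamma\,\widehat{\mathrm{NO_2}}_{it} + u_i + v_t
    ```

    Here $O_{it}$ and $E_{it}$ would be observed and expected prescriptions in area $i$ and month $t$, with $u_i$ and $v_t$ spatial and temporal random effects; the hypothetical coefficient $\gamma$ captures the pollution–prescription association.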

    Monitoring the impact of land cover change on surface urban heat island through Google Earth Engine. Proposal of a global methodology, first applications and problems

    All over the world, the rapid urbanization process is challenging the sustainable development of our cities. In 2015, the United Nations highlighted in Goal 11 of the SDGs (Sustainable Development Goals) the importance of making cities "inclusive, safe, resilient and sustainable". In order to monitor progress towards SDG 11, proper indicators are needed, representing different aspects of city conditions, including Land Cover (LC) changes and the urban climate with its most distinct feature, the Urban Heat Island (UHI). One aspect of UHI is the Surface Urban Heat Island (SUHI), which has been investigated through airborne and satellite remote sensing for many years. The purpose of this work is to show the present potential of Google Earth Engine (GEE) to process the huge and continuously growing archive of free satellite Earth Observation (EO) Big Data for long-term and wide spatio-temporal monitoring of SUHI and its connection with LC changes. A large-scale spatio-temporal procedure was implemented in GEE, also benefiting from the already established Climate Engine (CE) tool to extract the Land Surface Temperature (LST) from Landsat imagery, and the simple indicator Detrended Rate Matrix was introduced to globally represent the net effect of LC changes on SUHI. The implemented procedure was successfully applied to six metropolitan areas in the U.S., and a general increase in SUHI due to urban growth was clearly highlighted. GEE allowed us to process more than 6000 Landsat images acquired over the period 1992-2011, performing a long-term and wide spatio-temporal study of SUHI vs. LC change. The present feasibility of the proposed procedure and the encouraging results obtained, although preliminary and requiring further investigation (calibration problems related to LST determination from Landsat imagery were evidenced), pave the way for a possible global service on SUHI monitoring, able to supply valuable indications for increasingly sustainable urban planning of our cities.
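
    The core idea of relating LC change to SUHI can be sketched in a few lines. The following is an illustrative simplification, not the paper's actual Detrended Rate Matrix: given per-pixel LST at two epochs and land-cover labels, it detrends by the scene-wide mean (so regional warming drops out) and summarises the mean anomaly change per land-cover transition.

    ```python
    import numpy as np

    def lc_transition_effect(lst_t1, lst_t2, lc_t1, lc_t2):
        """Mean detrended-LST change per (from, to) land-cover transition."""
        # Detrend: subtract the scene-wide mean at each epoch so that
        # regional temperature trends do not masquerade as SUHI change
        a1 = lst_t1 - lst_t1.mean()
        a2 = lst_t2 - lst_t2.mean()
        effects = {}
        for frm in np.unique(lc_t1):
            for to in np.unique(lc_t2):
                mask = (lc_t1 == frm) & (lc_t2 == to)
                if mask.any():
                    effects[(frm, to)] = float((a2[mask] - a1[mask]).mean())
        return effects

    # Toy data: four pixels; class 0 = vegetation, 1 = built-up.
    # Pixel 2 converts from vegetation to built-up between epochs.
    lst_t1 = np.array([300.0, 300.0, 302.0, 302.0])  # LST in kelvin, epoch 1
    lst_t2 = np.array([301.0, 301.0, 305.0, 305.0])  # LST in kelvin, epoch 2
    lc_t1 = np.array([0, 0, 0, 1])
    lc_t2 = np.array([0, 0, 1, 1])
    fx = lc_transition_effect(lst_t1, lst_t2, lc_t1, lc_t2)
    # Pixels converting to built-up warm relative to the scene mean
    ```

    In GEE itself, the per-epoch LST composites would come from reducing Landsat image collections, with the pixel-level comparison done server-side; the dictionary summary above only mimics the final aggregation step.
    
    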

    Historical Arctic Logbooks Provide Insights into Past Diets and Climatic Responses of Cod

    Gadus morhua (Atlantic cod) stocks in the Barents Sea are currently at levels not seen since the 1950s. The causes of the population increase last century, and whether such large numbers will be maintained in the future, remain unclear. To explore this, we digitised and interrogated historical cod catch and diet datasets from the Barents Sea. Seventeen years of catch data and 12 years of prey data spanning 1930–1959 cover unexplored spatial and temporal ranges, and importantly capture the end of a previous warm period, when temperatures were similar to those currently being experienced. This study aimed to evaluate cod catch per unit effort and prey frequency in relation to spatial, temporal and environmental variables. There was substantial spatio-temporal heterogeneity in catches through the time series. The highest catches were generally in the 1930s and 1940s, although at some localities more cod were recorded late in the 1950s. Generalized Additive Models showed that environmental, spatial and temporal variables are all valuable descriptors of cod catches, with the highest occurring from 15–45°E longitude and 73–77°N latitude, at bottom temperatures between 2 and 4°C and at depths between 150 and 250 m. Cod diets were highly variable during the study period, with frequent changes in the relative frequencies of different prey species, particularly Mallotus villosus (capelin). Environmental variables were particularly good at describing the importance of capelin and Clupea harengus (herring) in the diet. These new analyses support existing knowledge about how the ecology of the region is controlled by climatic variability. When viewed in combination with more recent data, these historical relationships will be valuable in forecasting the future of Barents Sea fisheries, and in understanding how environments and ecosystems may respond.
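
    The catch-per-unit-effort (CPUE) index at the heart of this analysis is simply total catch divided by fishing effort, aggregated over some stratum (here, year). A minimal sketch, with field names that are invented for illustration rather than the digitised logbooks' actual schema:

    ```python
    from collections import defaultdict

    def yearly_cpue(hauls):
        """Total catch divided by total effort, per year (a simple CPUE index)."""
        catch = defaultdict(float)
        effort = defaultdict(float)
        for h in hauls:
            catch[h["year"]] += h["catch_kg"]
            effort[h["year"]] += h["effort_h"]
        return {y: catch[y] / effort[y] for y in catch}

    # Hypothetical digitised hauls: catch in kg, effort in hours fished
    hauls = [
        {"year": 1935, "catch_kg": 1200.0, "effort_h": 10.0},
        {"year": 1935, "catch_kg": 800.0,  "effort_h": 10.0},
        {"year": 1955, "catch_kg": 450.0,  "effort_h": 15.0},
    ]
    index = yearly_cpue(hauls)  # CPUE falls from 100 kg/h to 30 kg/h
    ```

    The study's Generalized Additive Models would then relate such an index to smooth functions of longitude, latitude, depth and bottom temperature (e.g. via mgcv in R); the aggregation above is only the pre-processing step.
    
    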

    A Graph-structured Dataset for Wikipedia Research

    Wikipedia is a rich and invaluable source of information. Its central place on the Web makes it a particularly interesting object of study for scientists. Researchers from different domains have used various complex datasets related to Wikipedia to study language, social behavior, knowledge organization, and network theory. Although the dataset is a scientific treasure, its large size hinders pre-processing and may be a challenging obstacle for potential new studies. This issue is particularly acute in scientific domains where researchers may lack technical and data-processing expertise. On one hand, Wikipedia dumps are large, which makes parsing and extracting relevant information cumbersome. On the other hand, the API is straightforward to use but restricted to a relatively small number of requests. The middle ground is the mesoscopic scale, where researchers need a subset of Wikipedia ranging from thousands to hundreds of thousands of pages, but no efficient solution exists at this scale. In this work, we propose an efficient data structure to make requests and access subnetworks of Wikipedia pages and categories. We provide convenient tools for accessing and filtering viewership statistics or "pagecounts" of Wikipedia web pages. The dataset organization leverages principles of graph databases that allow rapid and intuitive access to subgraphs of Wikipedia articles and categories. The dataset and deployment guidelines are available on the LTS2 website \url{https://lts2.epfl.ch/Datasets/Wikipedia/}.
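
    The kind of mesoscopic query the abstract describes, extracting a category-filtered subgraph with a viewership threshold, can be sketched in plain Python. The page names, category labels, and pagecount values below are invented for illustration; the real dataset uses the graph-database layout documented on the LTS2 page.

    ```python
    # Toy node store: per-page categories and aggregated pagecounts
    pages = {
        "Graph_theory":  {"categories": {"Mathematics"}, "pagecounts": 5400},
        "Eulerian_path": {"categories": {"Mathematics"}, "pagecounts": 900},
        "Jazz":          {"categories": {"Music"},       "pagecounts": 3100},
    }
    # Toy edge store: hyperlinks between pages
    links = {
        "Graph_theory": {"Eulerian_path"},
        "Eulerian_path": set(),
        "Jazz": set(),
    }

    def subgraph(category, min_views=0):
        """Pages in a category above a viewership threshold, plus internal links."""
        nodes = {p for p, meta in pages.items()
                 if category in meta["categories"]
                 and meta["pagecounts"] >= min_views}
        # Keep only edges whose endpoints both survive the filter
        edges = {(src, dst) for src, dsts in links.items() for dst in dsts
                 if src in nodes and dst in nodes}
        return nodes, edges

    nodes, edges = subgraph("Mathematics", min_views=1000)
    ```

    A graph database makes exactly this pattern (filter nodes by property, then induce the edge set) cheap at the scale of hundreds of thousands of pages, which is the gap between full-dump parsing and rate-limited API calls that the dataset targets.
    
    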

    GLOBE: Science and Education

    This article provides a brief overview of the GLOBE Program and describes its benefits to scientists, teachers, and students. The program itself is designed to use environmental research as a means to improve student achievement in basic science, mathematics, geography, and use of technology. Linking of students and scientists as collaborators is seen as a fundamental part of the process. GLOBE trains teachers to teach students how to take measurements of environmental parameters at quality levels acceptable for scientific research. Teacher training emphasizes a hands-on, inquiry-based methodology. Student-collected GLOBE data are universally accessible through the Web. An annual review over the past six years indicates that GLOBE has had a positive impact on students' abilities to use scientific data in decision-making and on students' scientifically informed awareness of the environment. Educational levels: Graduate or professional