9 research outputs found

    Greenhouse gas observation network design for Africa

    An optimal network design was carried out to prioritise the installation or refurbishment of greenhouse gas (GHG) monitoring stations around Africa. The network was optimised to reduce the uncertainty in emissions across three of the most important GHGs: CO2, CH4, and N2O. Optimal networks were derived using incremental optimisation of the percentage uncertainty reduction achieved by a Gaussian Bayesian atmospheric inversion. The solution for CO2 was driven by seasonality in net primary productivity. The solution for N2O was driven by activity in a small number of soil flux hotspots. The optimal solution for CH4 was consistent over different seasons. All solutions for CO2 and N2O placed sites in central Africa at places such as Kisangani, Kinshasa and Bunia (Democratic Republic of Congo), Dundo and Lubango (Angola), Zoétélé (Cameroon), Am Timan (Chad), and En Nahud (Sudan). Many of these sites appeared in the CH4 solutions, but with a few sites in southern Africa as well, such as Amersfoort (South Africa). The multi-species optimal network design solutions tended to have sites more evenly spread out, but concentrated the placement of new tall-tower stations in Africa between 10°N and 25°S. The uncertainty reduction achieved by the multi-species network of twelve stations reached 47.8% for CO2, 34.3% for CH4, and 32.5% for N2O. The gains in uncertainty reduction diminished as stations were added to the solution, with an expected maximum of less than 60%. A reduction in the absolute uncertainty in African GHG emissions requires these additional measurement stations, as well as additional constraint from an integrated GHG observatory and a reduction in uncertainty in the prior biogenic fluxes in tropical Africa.
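    The incremental optimisation described above is, in essence, a greedy search: stations are added one at a time, each time keeping the candidate that most improves the network-wide score. A minimal Python sketch, assuming a hypothetical uncertainty_reduction function that stands in for the full Gaussian Bayesian inversion:

        # Greedy incremental network design (sketch). `uncertainty_reduction`
        # is a hypothetical black box scoring a candidate network; in the study
        # it would be the percentage uncertainty reduction from the inversion.
        from typing import Callable, List, Set

        def greedy_network(candidates: List[str],
                           n_stations: int,
                           uncertainty_reduction: Callable[[Set[str]], float]) -> List[str]:
            network: List[str] = []
            for _ in range(n_stations):
                remaining = [c for c in candidates if c not in network]
                if not remaining:
                    break
                # Keep the candidate that maximises the network-wide score.
                best = max(remaining,
                           key=lambda c: uncertainty_reduction(set(network) | {c}))
                network.append(best)
            return network

    Each step scores every remaining candidate with a full inversion, which is also why the reported gains diminish as stations are added: later stations can only claim the uncertainty the earlier ones left unexplained.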

    Issues of geographic context variable calculation methods applied at different geographic levels in spatial historical demographic research : a case study over four parishes in Southern Sweden

    Spatial analysis depends on the geo-referencing quality of the spatial data, as well as on the definition of the geographic context variables used. However, these factors are rarely taken into consideration in historical demographic research in which the geographic context is considered. An important obstacle in this kind of research is the availability of historical data (spatial and non-spatial), sufficient timeframes, and the financial resources to employ qualified scientists to perform the geocoding and the linking of the population to specific geographic levels according to the extant historical sources. Since these are essential issues, it is important to determine whether, and by how much, the choice of different geographic context variable definitions calculated over different geographic levels could affect the research outcome. This thesis project addresses this problem by examining how much the results of geographic context variables differ when different definitions of the variables are used or when the variables are calculated over different geographic levels. For this purpose, geographic and demographic data from four rural parishes in 19th-century southern Sweden are used to define geographic context variables that might affect mortality in historical demographic research (e.g. soil types, proximity to water, proximity to wetlands, proximity to gathering places, and population density). The results show that different definitions of distance might produce contradictory results, depending on the geography of the research location and the shape or size of the geographical units. Similarly, results tend to differ when different geographic levels are used. Though the suitability of the chosen geographic level is highly dependent upon the research hypothesis, additional research is needed to determine when a geographic level can be deemed suitable.
    In recent years, the rapid development of Geographical Information Systems (GIS) has resulted in an ever-growing list of new application areas. Historical demography is one of the more recent scientific fields to benefit from the use of these systems. GIS have provided the tools necessary for historical demographers to explore new and old data sources, including detailed geographical information. As a result, data from national censuses containing geographical identifiers at the scale of the individual or at other levels of aggregation (i.e. municipalities, counties or other administrative areas), as well as detailed historical maps of cities and sites, have been digitized and have therefore become available for further processing. The integration of contextual geo-spatial information with socioeconomic and demographic data enables historical demographers to search for patterns and map the population behaviors of the past. Further studies regarding the interaction between climatic, environmental, socioeconomic and demographic processes, and how they affect different aspects of society such as public health, mortality, fertility and migration, are now feasible. This study includes information regarding the definition of certain geographic context variables (e.g. population density, soil types, proximity to water, proximity to wetlands, and proximity to gathering places) that are suspected to affect mortality.
    The definition of geographic variables and the choice of suitable methods of computation are not trivial and should always be made considering the objective of the demographic research as well as the special characteristics of the research area. This study demonstrates how different results can be produced over the same area of implementation, depending on the definition of a geographic context variable and the choice of computation method. Another problem when integrating historical geographic and demographic information is the choice of a suitable geographic level. Cities, counties and countries are examples of different geographic levels. The higher the resolution of a geographic level (i.e. the smaller the geographic unit of a geographic level), the more representative the results of the geographic context variable computations. Depending on the circumstances, high-resolution historical data may be hard to find, especially when the research objective is to study the population of larger areas over a long period of time. Even if this kind of information were made available, it may come from a variety of sources and therefore require extensive processing before it can be used. This can be a very expensive procedure, as it often demands the employment of skilled professionals to conduct a set of very time-demanding tasks. Limited financial resources and timeframes might force researchers to focus on smaller areas for shorter time periods or on larger areas at a coarser geographic level. When attempting to link demographic data to geographical data, it is important to be aware that, depending on the geographic level at which the linkage is performed, associations between demographic and geographic data may be lost. This study examines whether, and by how much, the computed results of geographic context variables vary when computed over different geographical levels.
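    As a concrete illustration of why the definition and the geographic level both matter, here is a minimal Python sketch (using the shapely library, with made-up coordinates): the same "proximity to wetlands" variable takes very different values when measured from an individual's residence than from the centroid of a coarser unit such as a parish.

        # Distance to a wetland measured at two geographic levels (sketch).
        # All geometries are invented for illustration; units are metres.
        from shapely.geometry import Point, Polygon

        wetland = Polygon([(0, 0), (0, 100), (100, 100), (100, 0)])

        residence = Point(150, 50)                      # fine level: the property
        parish = Polygon([(120, -200), (120, 300), (600, 300), (600, -200)])
        parish_centroid = parish.centroid               # coarse level: the parish

        print(residence.distance(wetland))              # 50.0
        print(parish_centroid.distance(wetland))        # 260.0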

    Importance of the Geocoding Level for Historical Demographic Analyses : A Case Study of Rural Parishes in Sweden, 1850–1914

    Geocoding longitudinal and individual-level historical demographic databases enables novel analyses of how micro-level geographic factors affected demographic outcomes over long periods. However, such detailed geocoding involves high costs. Additionally, the high spatial resolution cannot be properly utilized if inappropriate methods are used to quantify the geographic factors. We assess how different geocoding levels, and the methods used to define geographic variables, affect the outcome of detailed spatial and historical demographic analyses. Using a longitudinal and individual-level demographic database geocoded at the property-unit level, we analyse the effects of population density and proximity to wetlands on all-cause mortality for individuals who lived in five Swedish parishes, 1850–1914. We compare the results from analyses on three detailed geocoding levels, using two common quantification methods for each geographic variable. Together with the method selected for quantifying the geographic factors, even small differences in positional accuracy (20–50 m) between the property units and slightly coarser geographic levels heavily affected the results of the demographic analyses. The results also show the importance of accounting for geographic changes over time. Finally, proximity to wetlands and population density affected the mortality of women and children, respectively. However, not all possible determinants of mortality were evaluated in the analyses. In conclusion, for rural historical areas, geocoding to property units is likely necessary for fine-scale analyses at distances within a few hundred metres. We must also carefully consider the quantification methods that are the most logical for the geographic context and the type of analyses.

    The influence of geocoding level and definition of geographic context variables on historical demographic analyses

    The quality of spatial analysis is highly dependent on the geo-referencing quality and the definition of the geographic context variables used. However, such information is seldom considered in historical demographic and epidemiological research that includes the geographic context. We investigate a suitable geographic level for geocoding of the population and for the definition of geographic context variables in historical demographic studies. Using longitudinal demographic data combined with unique historical geographic microdata on residential histories, we compare two geocoding levels (property unit and address unit) and two definitions of distance to wetlands (an indicator of exposure to malaria in 19th-century Sweden). We first statistically compare the differences in the distance to wetlands between the two geocoding levels. Thereafter, by analysing how distance to wetlands affected the mortality of children aged 1–15 living in four rural parishes in Sweden, 1850–1914, we study the effect of different geocoding levels and definitions of context variables on the quality of historical demographic analysis. We find that both the geocoding level and the definition of the geographic context variables strongly influence the results of the analyses. For the distance-to-wetlands variable, decreasing its average positional accuracy by at least 100 meters affects the results. Consequently, the significant differences between the two geocoding levels indicate the importance of considering the geographic level of detail when geocoding.
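    The reported sensitivity to roughly 100 metres of positional accuracy can be probed with a simple simulation: perturb each geocoded point by a random error of the stated magnitude and observe how much the distance-to-wetlands variable shifts. A Python sketch with invented geometry:

        # Positional-accuracy sensitivity check (sketch); the geometry and the
        # 100 m error radius are illustrative, not the study's actual data.
        import math
        import random
        from shapely.geometry import Point, Polygon

        wetland = Polygon([(0, 0), (0, 100), (100, 100), (100, 0)])
        point = Point(400, 50)          # a geocoded residence, 300 m away

        def jitter(p: Point, radius: float) -> Point:
            angle = random.uniform(0.0, 2.0 * math.pi)
            r = random.uniform(0.0, radius)
            return Point(p.x + r * math.cos(angle), p.y + r * math.sin(angle))

        random.seed(1)
        shifts = [abs(jitter(point, 100.0).distance(wetland) - point.distance(wetland))
                  for _ in range(1000)]
        print(f"mean shift in distance to wetland: {sum(shifts) / len(shifts):.1f} m")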

    Long-term Reproducibility for Jupyter Notebook

    Computational notebooks (e.g. the Jupyter notebook) are a popular choice for interactive scientific computing, conveying descriptive information together with executable source code. The user can annotate the scientific development of the work, the methods applied, the ancillary data, or the analysis of results with text, illustrations, figures, and equations. Such ‘executable’ documents represent a paradigm shift in scientific writing, where not only is the science described, but the actual computation and source code are openly available and can be reproduced and validated. It is therefore of paramount importance to preserve these documents. A unique and persistent identifier (PID) is essential, together with enough information to execute the source code. Generating a PID for a Jupyter notebook is not technically challenging. We can automatically collect system and run-time information and, with a guided workflow for the user, assemble a rich set of metadata. The collected information allows us to recreate the computational environment and run the source code, which in turn should (theoretically) produce the same results as published. The importance of providing a rich set of metadata for all digital objects in a human-readable and machine-actionable form is well understood and widely accepted as a necessity for reproducibility, traceability, and provenance. This is reflected in the FAIR principles (Wilkinson, https://doi.org/10.1038/sdata.2016.18), which are regarded as the gold standard by many scientific communities. Pimentel et al. (https://doi.org/10.1109/MSR.2019.00077) analysed over 800,000 Jupyter notebooks from GitHub: 24% executed without errors and only 4% produced the same results. The likelihood of successfully compiling and running decade-old source code is slim. Long-term support for well-established operating systems varies between 5 and 10 years, user software support is usually shorter, and for free and open-source repositories there is often no support (or only best-effort support). We present an approach to safely reproduce the computational environment in the future, with a focus on long-term availability. Instead of trying to reinstall the computational environment based on the stored metadata, we propose to archive the Docker image, the user space (user-installed packages) and, finally, the source code. Recreating the system in this way is more like restoring a backup, where the backup is the equivalent of an entire computer system. It does not solve all the problems, but it removes a great deal of complexity and uncertainty. Though there are shortcomings in our approach, we believe our solution will lower the threshold for scientists to provide rich metadata, code and results attached to a publication that can be reproduced in the far future.
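    To make the metadata-collection step concrete, here is a stdlib-only Python sketch of the kind of system and run-time information one might attach to a notebook before archiving it; the notebook filename and the field names are illustrative, not the authors' actual schema:

        # Collect run-time metadata for a notebook (sketch). The field names
        # and the notebook path are hypothetical.
        import hashlib
        import json
        import platform
        import sys
        from importlib.metadata import distributions

        def notebook_metadata(notebook_path: str) -> dict:
            with open(notebook_path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            return {
                "notebook_sha256": digest,      # content fingerprint for the PID
                "python": sys.version,
                "platform": platform.platform(),
                # Pinned versions of every installed package in the user space.
                "packages": sorted(f"{d.metadata['Name']}=={d.version}"
                                   for d in distributions()),
            }

        print(json.dumps(notebook_metadata("analysis.ipynb"), indent=2))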

    Input data requirements for daylight simulations in urban densifications

    One of the biggest challenges in urban densification is securing adequate daylight access. This study examines the potential of using semantic 3D city models as input to daylight simulations, focusing on the input data requirements of these simulations from the perspectives of geodata, 3D city model specifications, and measuring guidelines. To achieve this, the geodata input requirements for the most common daylight metrics are documented. Next, 3D city model data from two Swedish municipalities, along with 3D data we constructed in CAD and GIS environments, are used to explore the impact of using 3D city models of different levels of detail (LOD) and positional accuracy in daylight simulations linked to Swedish and European laws and recommendations. Similarly, the measuring guidelines and 3D city model specification requirements related to balconies and other façade accessories are evaluated, along with the use of façade reflectance properties and colour. It is found that LOD1 is sufficient for the obstruction angle metric for most roof types, but for gabled roofs, for example, LOD2 should be used. Positional accuracy at the decimetre level is sufficient for this metric. Daylight factor simulations require that balconies and façade accessories protruding more than a couple of decimetres be represented in the 3D city model, along with information on façade material and colour. The outcome of the study is expressed as a list of recommendations for the creation of national profiles of 3D city models supporting daylight simulations.
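    For readers unfamiliar with the obstruction angle metric mentioned above, it is the elevation angle from a window to the top of the obstructing building across the street, which is why roof shape (and hence LOD) matters. A minimal Python sketch; the numbers are illustrative, not a quoted regulation:

        # Obstruction angle from a window to an obstructing roof line (sketch).
        import math

        def obstruction_angle(window_height: float,
                              obstruction_height: float,
                              distance: float) -> float:
            """Elevation angle in degrees from the window to the roof line."""
            return math.degrees(math.atan2(obstruction_height - window_height,
                                           distance))

        # A 15 m building 12 m across the street, seen from a window at 1.8 m:
        angle = obstruction_angle(1.8, 15.0, 12.0)
        print(f"{angle:.1f} degrees")   # ~47.7; compare against a threshold

    For a gabled roof the relevant obstruction height varies along the façade, which is why flat-roofed LOD1 block models can misestimate the angle and LOD2 is recommended there.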

    Future Swedish 3D City Models—Specifications, Test Data, and Evaluation

    Three-dimensional city models are increasingly being used for analyses and simulations. To enable such applications, it is necessary to standardise semantically richer city models and, in some cases, to connect the models with external data sources. In this study, we describe the development of a new Swedish specification for 3D city models, denoted 3CIM, which is a joint effort between the three largest cities in Sweden: Stockholm, Gothenburg, and Malmö. Technically, 3CIM is an extension of the OGC standard CityGML 2.0, implemented as an application domain extension (ADE). The ADE is semantically thin, mainly extending CityGML 2.0 to harmonise with national standards; in contrast, 3CIM relies mainly on linkages to external databases, registers, and operational systems for the semantic part. The current version, 3CIM 1.0, includes themes such as Bridge, Building, Utility, City Furniture, Transportation, Tunnel, Vegetation, and Water. Three test areas were created with 3CIM data, one in each city. These data were evaluated in several use cases, including visualisation as well as daylight, noise, and flooding simulations. The conclusion from these use cases is that the 3CIM data, together with the linked external data sources, provide the information necessary for the visualisation and simulations, but extract, transform, and load (ETL) processes are required to tailor the input data. The next step is to implement 3CIM within the three cities, which will entail several challenges, as discussed at the end of the paper.
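    The linkage idea (keeping the ADE semantically thin and joining city objects to external registers by identifier) can be sketched in a few lines of Python; the filename and register contents are invented, and 3CIM's actual linkage keys may differ:

        # Join CityGML 2.0 buildings to a hypothetical external register (sketch).
        import xml.etree.ElementTree as ET

        BLDG = "{http://www.opengis.net/citygml/building/2.0}Building"
        GML_ID = "{http://www.opengis.net/gml}id"

        # Invented external register keyed by the building's gml:id.
        register = {"BLD_001": {"year_built": 1968, "use": "residential"}}

        tree = ET.parse("city_model.gml")               # invented filename
        for building in tree.iter(BLDG):
            gml_id = building.get(GML_ID)
            print(gml_id, register.get(gml_id, "no external record"))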

    Patchy field sampling biases understanding of climate change impacts across the Arctic

    Effective societal responses to rapid climate change in the Arctic rely on an accurate representation of region-specific ecosystem properties and processes. However, this is limited by the scarcity and patchy distribution of field measurements. Here, we use a comprehensive, geo-referenced database of primary field measurements in 1,840 published studies across the Arctic to identify statistically significant spatial biases in field sampling and study citation across this globally important region. We find that 31% of all study citations are derived from sites located within 50 km of just two research sites: Toolik Lake in the USA and Abisko in Sweden. Furthermore, relatively colder, more rapidly warming and sparsely vegetated sites are under-sampled and under-recognized in terms of citations, particularly among microbiology-related studies. The poorly sampled and cited areas, mainly in the Canadian high-Arctic archipelago and the Arctic coastline of Russia, constitute a large fraction of the Arctic ice-free land area. Our results suggest that the current pattern of sampling and citation may bias the scientific consensuses that underpin attempts to accurately predict and effectively mitigate climate change in the region. Further work is required to increase both the quality and quantity of sampling, and incorporate existing literature from poorly cited areas to generate a more representative picture of Arctic climate change and its environmental impacts.
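    The headline concentration figure can be reproduced, in principle, by scoring each sampling location's distance to the two reference sites and summing citations. A stdlib-only Python sketch with invented records:

        # Share of citations within 50 km of Toolik Lake or Abisko (sketch).
        # The three records below are invented; the real database has 6,246
        # sampling locations.
        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
            return 6371.0 * 2.0 * math.asin(math.sqrt(a))

        SITES = {"Toolik Lake": (68.63, -149.60), "Abisko": (68.35, 18.82)}

        def near_major_site(lat, lon, radius_km=50.0):
            return any(haversine_km(lat, lon, s_lat, s_lon) <= radius_km
                       for s_lat, s_lon in SITES.values())

        # records: (latitude, longitude, citation_count) per sampling location
        records = [(68.6, -149.5, 120), (68.4, 18.8, 95), (75.0, 100.0, 3)]
        near = sum(c for lat, lon, c in records if near_major_site(lat, lon))
        total = sum(c for _, _, c in records)
        print(f"{100 * near / total:.0f}% of citations near a major site")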

    arctic review database_Metcalfe.xlsx

    Meta-analysis of the spatial distribution and environmental properties of the sampling locations of published papers on environmental science above 66.3°N latitude (the Arctic Circle). The dataset consists of 1,840 cited articles featuring 6,246 sampling locations and 58,215 citations. The articles were selected and each paper has been broadly classified by the habitat sampled and the discipline featured. For each sampling location, public databases have been used to extract the location-specific mean annual temperature, predicted change in mean annual temperature, fAPAR, and recorded change in fAPAR.
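    Extracting such location-specific values typically amounts to a grid lookup. A Python sketch with an invented 1-degree global grid (a real extraction would use the source dataset's actual grid and resolution):

        # Nearest-cell lookup on a regular 1-degree global grid (sketch).
        import numpy as np

        grid = np.random.rand(180, 360)     # placeholder values, not real data

        def value_at(lat: float, lon: float) -> float:
            row = min(179, max(0, int(90 - lat)))     # 90N maps to row 0
            col = min(359, max(0, int(lon + 180)))    # 180W maps to col 0
            return float(grid[row, col])

        print(value_at(68.35, 18.82))       # value at Abisko's grid cell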