
    CartAGen: an Open Source Research Platform for Map Generalization

    Automatic map generalization is a complex task that remains a research problem and requires the development of research prototypes before it can be used in production map processes. In the meantime, reproducible research principles are becoming a standard. Publishing reproducible research means that researchers share their code and their data so that other researchers are able to reproduce the published experiments, in order to check them, extend them, or compare them to their own experiments. Open source software is a key tool for sharing code, and CartAGen is the first open source research platform that tackles the overall map generalization problem: not only the building blocks that are generalization algorithms, but also methods to chain them and the spatial analysis tools necessary for data enrichment. This paper presents the CartAGen platform, its architecture, and its components. The main component of the platform is the implementation of several multi-agent-based models from the literature, such as AGENT, CartACom, GAEL, CollaGen, and DIOGEN. The paper also explains and discusses different ways for a researcher to use or to contribute to CartAGen.

    Understanding urban gentrification through machine learning

    Recent developments in the field of machine learning offer new ways of modelling complex socio-spatial processes, allowing us to make predictions about how and where they might manifest in the future. Drawing on earlier empirical and theoretical attempts to understand gentrification and urban change, this paper shows it is possible to analyse existing patterns and processes of neighbourhood change to identify areas likely to experience change in the future. This is evidenced through an analysis of socio-economic transition in London neighbourhoods (based on 2001 and 2011 Census variables), which is used to predict those areas most likely to demonstrate ‘uplift’ or ‘decline’ by 2021. The paper concludes with a discussion of the implications of such modelling for the understanding of gentrification processes, noting that if qualitative work on gentrification and neighbourhood change is to offer more than a rigorous post-mortem, then intensive, qualitative case studies must be confronted with, and complemented by, predictions stemming from other, more extensive approaches. As a demonstration of the capabilities of machine learning, this paper underlines the continuing value of quantitative approaches in understanding complex urban processes such as gentrification.
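
    To make the general modelling setup more concrete, the sketch below trains a classifier on synthetic stand-ins for 2001 and 2011 Census variables and a hypothetical uplift/decline label. It is an illustration of the broad approach only; the paper's actual features, labels, and model are not reproduced here.

        # Illustrative sketch only: synthetic stand-ins for Census variables and
        # a hypothetical 'uplift'/'decline'/'stable' label, not the paper's data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)
        n_areas, n_vars = 1000, 8                      # hypothetical neighbourhoods and variables

        census_2001 = rng.normal(size=(n_areas, n_vars))
        census_2011 = census_2001 + rng.normal(scale=0.3, size=(n_areas, n_vars))
        change = census_2011 - census_2001             # observed 2001->2011 transition

        # Hypothetical label: areas whose mean change falls in the top/bottom quartile
        score = change.mean(axis=1)
        label = np.where(score > np.quantile(score, 0.75), "uplift",
                 np.where(score < np.quantile(score, 0.25), "decline", "stable"))

        X = np.hstack([census_2001, change])           # level + change features
        X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print(classification_report(y_test, clf.predict(X_test)))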

    To use or not to use proprietary street view images in (health and place) research? That is the question

    Computer vision-based analysis of street view imagery has transformative impacts on environmental assessments. Interactive web services, particularly Google Street View, play an increasingly important role in making imagery data ubiquitous. Despite the technical ease of harnessing millions of Google Street View images, this article questions current practices in using this proprietary data source from a European viewpoint. Our concern lies with Google's terms of service, which restrict bulk image downloads and the generation of street view image-based indices. To reconcile the challenge of advancing society through groundbreaking research while maintaining data license agreements and legal integrity, we believe it is crucial to 1) include an author's statement on using proprietary street view data and the directives it entails, 2) negotiate an academic-specific license to democratize access to Google Street View data, and 3) adhere to open data principles and utilize open image sources for future research.

    A Classification of Multidimensional Open Data for Urban Morphology

    Identifying socio-spatial patterns through geodemographic classification has proven utility across a range of disciplines. While most of these spatial classification systems include a plethora of socioeconomic attributes, there is arguably little to no input regarding attributes of the built environment or physical space, and their relationship to socioeconomic profiles in this context has not been evaluated in any systematic way. This research explores the generation of neighbourhood characteristics and other attributes using a geographic data science approach, taking advantage of the increasing availability of such spatial data from open data sources. We adopt a Self-Organizing Map (SOM) methodology to create a classification of Multidimensional Open Data Urban Morphology (MODUM) and test the extent to which this output systematically follows conventional socioeconomic profiles. Such an analysis can also provide a simplified structure of the physical properties of geographic space that can be further used as input to more complex socioeconomic models.
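
    A minimal sketch of the SOM step is shown below, assuming the third-party minisom package and synthetic stand-ins for open built-environment attributes (e.g. building density, street connectivity). The actual MODUM variables, grid size, and training settings are not reproduced here.

        # SOM clustering sketch under assumed inputs; not the MODUM pipeline itself.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from minisom import MiniSom

        rng = np.random.default_rng(1)
        attrs = rng.normal(size=(500, 6))              # 500 areas x 6 synthetic morphology variables
        X = StandardScaler().fit_transform(attrs)      # SOMs are sensitive to variable scale

        som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
        som.random_weights_init(X)
        som.train_random(X, num_iteration=5000)

        # Each area's cluster is the grid coordinate of its best-matching unit
        clusters = [som.winner(x) for x in X]
        print(clusters[:5])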

    Geographic Data Science

    It is widely acknowledged that the emergence of “Big Data” is having a profound and often controversial impact on the production of knowledge. In this context, Data Science has developed as an interdisciplinary approach that turns such “Big Data” into information. This article argues for the positive role that Geography can play in Data Science when it is applied to spatially explicit problems and, inversely, makes the case that there is much that Geography and Geographical Analysis could learn from Data Science. We propose a deeper integration through an ambitious research agenda, including systems engineering, new methodological development, and work toward addressing some acute challenges around epistemology. We argue that such issues must be resolved in order to realize a Geographic Data Science, and that such a goal would be a desirable one.

    INVESTIGATION OF PFAS EXPOSURE RISKS IN KENTUCKY USING MAPPING TOOLS

    Per- and polyfluoroalkyl substances (PFAS), a group of environmentally persistent compounds, are environmentally ubiquitous and challenging to remediate. Several studies show that PFAS are detected in drinking water systems when sampling is conducted, making human exposure via drinking water an important health consideration. This research: 1) develops a mapping tool for prioritizing sampling locations; 2) establishes a method for making the GIS data and (meta)data in the mapping tool accessible; and 3) fosters decision making by integrating knowledge brokering and the Alignment, Interest, and Influence Matrix (AIIM). The tool developed in this research is a geospatial and statistical PFAS hot-spot screening model that assists decision makers in prioritizing and identifying drinking water systems that may be prone to PFAS contamination. There is a need for this type of model because current PFAS exposures are most often identified only when sampling occurs; however, it is too time-consuming and expensive to sample everywhere immediately, and there is a lack of tools to assist decision-makers with prioritizing where to sample. This research also ensures accessibility of the geographic information system (GIS) data and (meta)data by developing a data deposition method that aligns with the findable, accessible, interoperable, and reusable (FAIR) principles. Lastly, this research emphasizes the importance of stakeholder engagement to foster informed decisions when science is emerging and uncertain.
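
    As an illustration of what a simple prioritization score can look like, the sketch below ranks a few hypothetical water systems by a weighted combination of proximity to a potential PFAS source and population served. The weights, factors, and data are placeholders and do not represent the screening model developed in this research.

        # Hypothetical prioritization sketch; not the thesis's actual hot-spot model.
        import pandas as pd

        systems = pd.DataFrame({
            "system_id": ["WS-1", "WS-2", "WS-3", "WS-4"],
            "km_to_nearest_source": [1.2, 8.5, 0.4, 15.0],   # e.g. airports, fire-training sites
            "population_served": [12000, 800, 45000, 3000],
        })

        # Normalize each factor to 0-1, inverting distance so that closer = higher risk
        prox = 1 - (systems["km_to_nearest_source"] / systems["km_to_nearest_source"].max())
        pop = systems["population_served"] / systems["population_served"].max()

        systems["priority_score"] = 0.6 * prox + 0.4 * pop    # placeholder weights
        print(systems.sort_values("priority_score", ascending=False))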

    Reproducibility and Replicability in Unmanned Aircraft Systems and Geographic Information Science

    Multiple scientific disciplines face a so-called crisis of reproducibility and replicability (R&R) in which the validity of methodologies is questioned due to an inability to confirm experimental results. Trust in information technology (IT)-intensive workflows within geographic information science (GIScience), remote sensing, and photogrammetry depends on solutions to the R&R challenges affecting multiple computationally driven disciplines. To date, there have been only very limited efforts to overcome R&R-related issues in remote sensing workflows in general, let alone those tied to disruptive technologies such as unmanned aircraft systems (UAS) and machine learning (ML). To advance understanding of this crisis, a review was conducted to identify the issues preventing R&R in GIScience. Key barriers included: (1) awareness of time and resource requirements, (2) accessibility of provenance, metadata, and version control, (3) conceptualization of geographic problems, and (4) geographic variability between study areas. As a case study, a replication of a GIScience workflow using the YOLOv3 algorithm to identify objects in UAS imagery was attempted. Despite access to the source data and workflow steps, it was found that the lack of provenance and metadata for each small step of the work made it impossible to replicate the work successfully. Finally, a novel method for provenance generation was proposed to address these issues. It was found that artificial intelligence (AI) could be used to quickly create robust provenance records for workflows without exceeding time and resource constraints, providing the information needed to replicate work. Such information can bolster trust in scientific results and provide access to cutting-edge technology that can improve everyday life.
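
    The sketch below shows one minimal way to capture a provenance record (run parameters, software environment, and a checksum) as JSON using only the Python standard library. It is a hand-rolled illustration with assumed parameter names, not the AI-assisted provenance method proposed in the paper.

        # Minimal provenance-record sketch; parameter names and values are hypothetical.
        import hashlib, json, platform, sys
        from datetime import datetime, timezone
        from importlib import metadata

        def file_sha256(path):
            # Checksum lets a later replication verify it used the same script or input.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def pkg_version(name):
            # Record installed package versions when available.
            try:
                return metadata.version(name)
            except metadata.PackageNotFoundError:
                return None

        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "python": sys.version,
            "platform": platform.platform(),
            "packages": {p: pkg_version(p) for p in ("numpy", "opencv-python")},
            # Hypothetical run parameters for an object-detection step:
            "parameters": {"model": "yolov3", "confidence_threshold": 0.5, "img_size": 608},
            "script_sha256": file_sha256(__file__),
            # In practice, also hash each input image/tile and every intermediate output.
        }

        with open("provenance.json", "w") as f:
            json.dump(record, f, indent=2)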
