
    CartAGen: an Open Source Research Platform for Map Generalization

    Automatic map generalization is a complex task that remains a research problem and requires the development of research prototypes before it can be used in production mapping processes. In the meantime, reproducible research principles are becoming a standard. Publishing reproducible research means that researchers share their code and their data so that other researchers can reproduce the published experiments in order to check them, extend them, or compare them to their own experiments. Open source software is a key tool for sharing code, and CartAGen is the first open source research platform that tackles the overall map generalization problem: not only the building blocks, i.e. the generalization algorithms, but also methods to chain them and the spatial analysis tools necessary for data enrichment. This paper presents the CartAGen platform, its architecture and its components. The main component of the platform is the implementation of several multi-agent-based models from the literature, such as AGENT, CartACom, GAEL, CollaGen, or DIOGEN. The paper also explains and discusses the different ways a researcher can use or contribute to CartAGen.

    Deep Learning for Enrichment of Vector Spatial Databases: Application to Highway Interchange

    Spatial analysis and pattern recognition on vector spatial data are particularly useful for enriching raw data. In road networks, for instance, many patterns and structures are only implicit in the road line features; among them, highway interchanges have proved very hard to recognise with vector-based techniques. The goal is to find the roads that belong to an interchange, i.e. the slip roads and the highway roads connected to the slip roads. In order to go beyond state-of-the-art vector-based techniques, this paper proposes raster-based deep learning techniques to recognise highway interchanges. The contribution of this work is to study how to optimally convert vector data into small images suitable for state-of-the-art deep learning models. Image classification with a convolutional neural network (i.e. is there an interchange in this image or not?) and image segmentation with a U-Net (i.e. find the pixels that cover the interchange) are both evaluated and give markedly better results than existing vector-based techniques in this specific use case.
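
    A rough sketch of the two steps described above: rasterising the road lines around a candidate location into a small image, then classifying that image with a CNN. The window size, pixel resolution, network shape and the use of shapely and PyTorch are illustrative assumptions, not the choices made in the paper.

```python
# Hypothetical sketch: burn road centrelines around a candidate point into a small
# binary image, then ask a tiny CNN whether the image contains an interchange.
import numpy as np
import torch
import torch.nn as nn
from shapely.geometry import LineString

def roads_to_image(roads, center, window=500.0, size=64):
    """Rasterise polylines falling in a window around `center` into a size x size image."""
    img = np.zeros((size, size), dtype=np.float32)
    minx, miny = center[0] - window / 2, center[1] - window / 2
    scale = size / window
    for line in roads:
        # densify the line and mark every pixel it passes through
        for d in np.linspace(0.0, line.length, max(2, int(line.length * scale) * 2)):
            x, y = line.interpolate(d).coords[0]
            col, row = int((x - minx) * scale), int((y - miny) * scale)
            if 0 <= row < size and 0 <= col < size:
                img[size - 1 - row, col] = 1.0   # flip rows so north is up
    return img

class InterchangeClassifier(nn.Module):
    """Minimal CNN answering: does this image contain an interchange?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))       # raw logit; train with BCEWithLogitsLoss

# toy usage: two crossing roads around the origin
roads = [LineString([(-200, 0), (200, 0)]), LineString([(0, -200), (0, 200)])]
image = roads_to_image(roads, (0.0, 0.0))
logit = InterchangeClassifier()(torch.from_numpy(image)[None, None])
```

    The segmentation variant would keep the same rasterisation but replace the classifier with a U-Net predicting a per-pixel interchange mask.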

    Polynomial-Chaos-based Kriging

    Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. To cope with demanding analyses such as optimization and reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly investigated over the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive meta-modelling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables, where the polynomials are chosen consistently with the probability distributions of those input variables. Kriging, on the other hand, assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. So far these two techniques have been developed more or less in parallel, with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modelling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging captures the local variability of the model output. An adaptive algorithm similar to least angle regression determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions that are easy to sample for reference results. The numerical investigations show that PC-Kriging performs at least as well as, and often better than, the two distinct meta-modelling techniques. The gain in accuracy is largest when the experimental design is small, which is an asset when dealing with demanding computational models.
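
    A minimal sketch of the "global PCE trend plus local Kriging correction" idea, under simplifying assumptions: standard normal inputs (hence probabilists' Hermite polynomials), a 1D toy function, scikit-learn's LARS as a stand-in for the adaptive sparse selection, and a two-step trend-then-residual fit rather than the joint estimation used in the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def hermite_basis(x, degree):
    """Probabilists' Hermite polynomials He_1..He_degree evaluated at x (1D input);
    the constant term He_0 is left to the regression intercept."""
    return np.column_stack([hermeval(x, np.eye(degree + 1)[k]) for k in range(1, degree + 1)])

rng = np.random.default_rng(0)
x_train = rng.standard_normal(30)
y_train = np.sin(2 * x_train) + 0.1 * x_train**3            # stand-in for an expensive model

# 1) sparse PCE trend: least-angle regression keeps only a few polynomial terms
Phi = hermite_basis(x_train, degree=8)
pce = Lars(n_nonzero_coefs=4).fit(Phi, y_train)

# 2) Kriging (a Gaussian process) on the residual captures local behaviour the trend misses
residual = y_train - pce.predict(Phi)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(x_train.reshape(-1, 1), residual)

# prediction = global PCE trend + local Kriging correction
x_new = np.linspace(-2.5, 2.5, 200)
y_pred = pce.predict(hermite_basis(x_new, degree=8)) + gp.predict(x_new.reshape(-1, 1))
```

    The actual PC-Kriging model uses the selected PCE terms as the trend of a universal Kriging model and estimates everything jointly; the sequential split above only conveys the division of labour between the global and local parts.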

    A Data-driven, High-performance and Intelligent CyberInfrastructure to Advance Spatial Sciences

    In the field of Geographic Information Science (GIScience), we have witnessed an unprecedented data deluge brought about by the rapid advancement of high-resolution data observing technologies. For example, with the advancement of Earth Observation (EO) technologies, a massive amount of EO data, including remote sensing data and other sensor observations about earthquakes, climate, oceans, hydrology, volcanoes, glaciers, etc., is collected on a daily basis by a wide range of organizations. In addition to observation data, human-generated data including microblogs, photos, consumption records, evaluations, unstructured webpages and other Volunteered Geographic Information (VGI) are incessantly generated and shared on the Internet. Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible. The Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making. This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS. It comprises three parts. The first part develops methodologies to help researchers find appropriate open geospatial datasets, efficiently and effectively, from millions of records provided by thousands of organizations scattered around the world; machine learning and semantic search methods are used in this research. The second part develops an interoperable and replicable geoprocessing service by synthesizing a high-performance computing (HPC) environment, the core spatial statistics/analysis algorithms from the widely adopted open source Python Spatial Analysis Library (PySAL), and the rich datasets acquired in the first part. The third part studies optimization strategies for feature data transmission and visualization, addressing the performance issues of transmitting large feature datasets over the Internet and visualizing them on the client (browser) side. Taken together, the three parts constitute an endeavor towards the methodological improvement and implementation practice of a data-driven, high-performance and intelligent CI to advance the spatial sciences.
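
    As a concrete example of the kind of core PySAL routine the second part wraps into a geoprocessing service, the sketch below computes global spatial autocorrelation (Moran's I) on an example dataset shipped with libpysal. The dataset, variable name and the libpysal/esda split are assumptions drawn from the current PySAL ecosystem, not from the dissertation itself.

```python
import libpysal
import geopandas as gpd
from esda.moran import Moran

shp = libpysal.examples.get_path("sids2.shp")       # North Carolina SIDS example bundled with libpysal
gdf = gpd.read_file(shp)

w = libpysal.weights.Queen.from_dataframe(gdf)      # contiguity-based spatial weights
w.transform = "r"                                   # row-standardise the weights

mi = Moran(gdf["SIDR79"], w)                        # SIDS rate in 1979
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")
```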

    Efficient Localization of Discontinuities in Complex Computational Simulations

    Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
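
    A schematic sketch of the refinement phase only, under simplifying assumptions: the initial labels come directly from the sign of a toy test function (standing in for the polynomial-annihilation seeding), and each refinement step evaluates the model at the candidate point about which the kernel SVM is least certain.

```python
import numpy as np
from sklearn.svm import SVC

def model(x):
    """Toy 2D 'simulation' whose output jumps across a circle of radius 0.5."""
    return np.where(x[:, 0]**2 + x[:, 1]**2 < 0.25, 1.0, -1.0)

rng = np.random.default_rng(1)
# small initial design, with two fixed points guaranteeing both sides are represented
X = np.vstack([rng.uniform(-1, 1, size=(20, 2)), [[0.0, 0.0], [0.9, 0.9]]])
y = model(X)                                    # which side of the discontinuity each run falls on

for _ in range(10):                             # active-learning refinement loop
    svm = SVC(kernel="rbf", C=100.0).fit(X, y)
    candidates = rng.uniform(-1, 1, size=(500, 2))
    scores = np.abs(svm.decision_function(candidates))
    x_new = candidates[np.argmin(scores)]       # candidate closest to the current boundary
    X = np.vstack([X, x_new])
    y = np.append(y, model(x_new[None, :]))     # one new (expensive) model evaluation

print(f"discontinuity localised with {len(X)} model evaluations")
```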

    Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation

    Recently, many researchers have tried to generate (generalised) maps using deep learning, and most of the proposed methods deal with the choice of deep neural network architecture. Deep learning learns to reproduce examples, so we think that improving the training examples, and especially the representation of the initial geographic information, is the key issue for this problem. Our article extracts several representation issues from a literature review and proposes different ways to represent vector geographic information as a tensor. We propose two kinds of contributions: 1) the representation of information by layers; 2) the representation of additional information. We then demonstrate the benefit of some of our proposals with experiments that show a visual improvement in the generation of generalised topographic maps in urban areas.
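
    One possible reading of the "representation by layers" idea, as a sketch: each theme is rasterised into its own channel of a tensor covering the same map extent. The extent, resolution, layer set and the use of shapely and rasterio are illustrative assumptions, not the encoding choices evaluated in the article.

```python
import numpy as np
from shapely.geometry import LineString, Polygon
from rasterio import features, transform

extent = (0.0, 0.0, 256.0, 256.0)                   # (west, south, east, north) in map units
size = 256                                          # output tensor is size x size pixels
affine = transform.from_bounds(*extent, size, size)

layers = {
    "buildings": [Polygon([(40, 40), (90, 40), (90, 90), (40, 90)])],
    "roads": [LineString([(0, 128), (256, 128)])],
}

# burn each thematic layer into its own binary channel: resulting shape is (n_layers, H, W)
tensor = np.stack([
    features.rasterize(((geom, 1) for geom in geoms),
                       out_shape=(size, size), transform=affine, dtype="uint8")
    for geoms in layers.values()
])
print(tensor.shape)     # (2, 256, 256), ready to feed a CNN or to extend with more channels
```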

    GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB

    We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to "real" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.
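
    A loss-combination sketch of the translation-network training described above, written in PyTorch and covering one translation direction only (synthetic to "real"). The generator, discriminator and geometry() modules are placeholders, and the weighting factors are assumed values; only the structure of the combined objective follows the abstract.

```python
import torch
import torch.nn.functional as F

def translation_loss(G_s2r, G_r2s, D_real, geometry, synth,
                     lambda_cyc=10.0, lambda_geo=1.0):
    """Combined objective for one translation direction: synthetic -> 'real'."""
    fake_real = G_s2r(synth)                    # translate a synthetic hand image
    recon_synth = G_r2s(fake_real)              # translate it back to the synthetic domain

    # adversarial term: the translated image should fool the 'real'-domain discriminator
    d_out = D_real(fake_real)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # cycle-consistency: translating there and back should recover the input image
    cyc = F.l1_loss(recon_synth, synth)

    # geometric consistency: hand geometry (e.g. a silhouette or pose map) must survive translation
    geo = F.l1_loss(geometry(fake_real), geometry(synth))

    return adv + lambda_cyc * cyc + lambda_geo * geo
```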