    Crowd-sourced cadastral geospatial information: defining a workflow from unmanned aerial system (UAS) data to 3D building volumes using open-source applications

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies.
    The surveying field has been shaped over many decades by new inventions and improvements in technology. This has ensured that the profession remains one of high precision, with the employment of sophisticated technologies by Cadastral Experts. The use of Unmanned Aerial Systems (UAS) within surveying is not new. However, the standards, technologies, tools, and licenses developed by the open-source developer community have opened new possibilities for utilising UAS within surveying. UASs are constantly being improved to obtain high-quality imagery, so efforts were made to find novel ways to add value to the data. This thesis defines a workflow for deriving Cadastral Geospatial Information (Cadastral GI), in the form of three-dimensional (3D) building volumes, from the original UAS imagery. To achieve this, an investigation was carried out into how crowd-sourced UAS data can be uploaded to open online repositories, downloaded by Cadastral Experts, and then manipulated using open-source applications. The Cadastral Experts had to use multiple applications and convert the data through many formats to obtain the 3D building volumes as final results. Such a product can potentially improve the management of cadastral data by Cadastral Experts, City Managers, and National Mapping Agencies. Additionally, an ideal suite of tools is presented that can be used to store, manipulate, and share the 3D building volume data while facilitating the contribution of attribute data from the crowd.
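The end of such a workflow, turning 2D building footprints plus height estimates into simple 3D volumes, can be sketched in a few lines. The snippet below is a minimal illustration assuming shapely is available and that footprints and heights have already been derived from the UAS imagery; the thesis itself does not prescribe these particular libraries or data structures.

```python
# Minimal sketch: extrude a 2D building footprint into a 3D volume.
# The footprint coordinates and height are illustrative; a real workflow
# would derive them from UAS-based point clouds or orthophotos.
from shapely.geometry import Polygon

def extrude_footprint(footprint: Polygon, height: float):
    """Return the vertex rings of a prism: the footprint at z=0 and z=height."""
    ring = list(footprint.exterior.coords)
    base = [(x, y, 0.0) for x, y in ring]
    top = [(x, y, height) for x, y in ring]
    return base, top

footprint = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
base, top = extrude_footprint(footprint, height=6.5)
print("base ring:", base)
print("top ring: ", top)
```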

    A method for matching crowd-sourced and authoritative geospatial data

    A method for matching crowd-sourced and authoritative geospatial data is presented. A level of tolerance is defined as an input parameter, since some difference in the geometry representation of a spatial object is to be expected. The method generates matches between spatial objects using location information and lexical information, such as names and types, and verifies the consistency of matches using reasoning in qualitative spatial logic and description logic. We test the method by matching geospatial data from OpenStreetMap and the national mapping agencies of Great Britain and France. We also analyze how the level of tolerance affects the precision and recall of matching results for the same geographic area, using 12 different levels of tolerance within a range of 1 to 80 meters. The generated matches show potential for helping to enrich and update geospatial data.
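As a rough illustration of the core idea, the sketch below pairs objects whose locations fall within a distance tolerance and whose names are lexically similar. The tolerance value, similarity threshold, and data layout are assumptions for illustration; the paper's additional verification step using qualitative spatial logic and description logic is not reproduced here.

```python
# Sketch: match features by location (within a tolerance) and name similarity.
from difflib import SequenceMatcher
from math import hypot

def name_similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1] based on common subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(crowd, authoritative, tolerance_m=20.0, min_sim=0.7):
    """crowd/authoritative: lists of (name, (x, y)) in a metric CRS."""
    matches = []
    for c_name, (cx, cy) in crowd:
        for a_name, (ax, ay) in authoritative:
            if hypot(cx - ax, cy - ay) <= tolerance_m and \
               name_similarity(c_name, a_name) >= min_sim:
                matches.append((c_name, a_name))
    return matches

osm = [("St Mary's Church", (100.0, 200.0))]
nma = [("Saint Marys Church", (108.0, 195.0))]
print(match(osm, nma))  # expect one matched pair under these thresholds
```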

    PolyMerge: A Novel Technique aimed at Dynamic HD Map Updates Leveraging Polylines

    Currently, High-Definition (HD) maps are a prerequisite for the stable operation of autonomous vehicles. Such maps contain information about all the static road objects the vehicle must consider during navigation, such as road edges, road lanes, and crosswalks. To generate such an HD map, current approaches need to process pre-recorded environment data obtained from onboard sensors. However, recording such a dataset often requires considerable time and effort. In addition, every time the actual road environment changes, a new dataset must be recorded to generate an up-to-date HD map. This paper presents a novel approach that continuously generates or updates the HD map using onboard sensor data. Since there is no need to pre-record a dataset, updating the HD map can run in parallel with the main autonomous-vehicle navigation pipeline. The proposed approach uses the VectorMapNet framework to generate vector road-object instances from a sensor data scan. The PolyMerge technique then merges new instances into previous ones, mitigating detection errors and thereby generating or updating the HD map. The performance of the algorithm was confirmed by comparison with ground truth on the nuScenes dataset. Experimental results showed that the mean error for different levels of environment complexity was comparable to the VectorMapNet single-instance error.
    Comment: 6 pages, 9 figures
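A simplified sketch of the merge step is given below: a newly detected polyline is compared against an existing map instance and, if close enough, blended into it. The matching criterion (mean nearest-point distance) and the weighted-average merge are simplifying assumptions for illustration, not the exact PolyMerge procedure described in the paper.

```python
# Illustrative sketch of merging a newly detected polyline into an existing
# HD-map instance; thresholds and the blending rule are assumptions.
import numpy as np

def mean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean distance from each point of polyline a to its nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def merge(existing: np.ndarray, detected: np.ndarray,
          match_thresh=1.0, weight=0.8):
    """Blend a matched detection into the existing polyline, else signal new."""
    if mean_distance(detected, existing) <= match_thresh and \
       len(existing) == len(detected):
        return weight * existing + (1 - weight) * detected  # same instance
    return None  # no match: treat the detection as a new map instance

lane = np.array([[0.0, 0.0], [5.0, 0.1], [10.0, 0.0]])
scan = np.array([[0.1, 0.2], [5.0, 0.3], [9.9, 0.1]])
print(merge(lane, scan))
```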

    The devices, experimental scaffolds, and biomaterials ontology (DEB): a tool for mapping, annotation, and analysis of biomaterials' data

    The size and complexity of the biomaterials literature make systematic data analysis an excruciating manual task. A practical solution is creating databases and information resources. Implant design and biomaterials research can greatly benefit from an open database for systematic data retrieval. Ontologies are pivotal to knowledge-base creation, serving to represent and organize domain knowledge. To name but two examples, GO, the Gene Ontology, and ChEBI, the Chemical Entities of Biological Interest ontology, together with their associated databases, are central resources for their respective research communities. The creation of the devices, experimental scaffolds, and biomaterials ontology (DEB), an open resource for organizing information about biomaterials, their design, manufacture, and biological testing, is described. It was developed using text analysis to identify ontology terms from a biomaterials gold-standard corpus, systematically curated to represent the domain's lexicon. The topics covered were validated by members of the biomaterials research community. The ontology may be used for searching terms, performing annotations for machine-learning applications, standardized metadata indexing, and other cross-disciplinary data exploitation. The input of the biomaterials community to this effort to create data-driven, open-access research tools is encouraged and welcomed.
    Preprint
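One of the listed uses, annotating text with ontology terms, can be illustrated with a toy dictionary-based tagger. The term-to-identifier mapping below is hypothetical; the real DEB ontology provides curated terms, synonyms, and a hierarchy that a simple keyword scan does not capture.

```python
# Minimal sketch of ontology-based annotation: scan text for terms from a
# controlled vocabulary. DEB_TERMS is a tiny hypothetical stand-in for DEB.
import re

DEB_TERMS = {  # hypothetical term -> identifier mapping, for illustration only
    "hydrogel": "DEB:0000101",
    "scaffold": "DEB:0000102",
    "titanium": "DEB:0000103",
}

def annotate(text: str):
    """Return (offset, surface form, term id) for each vocabulary hit."""
    hits = []
    for term, term_id in DEB_TERMS.items():
        for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            hits.append((m.start(), m.group(0), term_id))
    return sorted(hits)

abstract = "A titanium scaffold coated with a collagen hydrogel was implanted."
for offset, surface, term_id in annotate(abstract):
    print(offset, surface, term_id)
```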

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data for distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.
    Comment: Published in IEEE Transactions on Geoscience and Remote Sensing
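The training setup can be sketched schematically: a per-pixel classifier is optimized with a cross-entropy loss against (possibly noisy) map-derived masks. The tiny network and random tensors below are placeholders assuming PyTorch; the paper adapts a state-of-the-art CNN architecture and trains on real OpenStreetMap-derived rasters.

```python
# Schematic training loop for aerial-image segmentation from noisy labels.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes=3):  # e.g. background / building / road
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):  # stand-in for epochs over map-labeled patches
    images = torch.rand(4, 3, 64, 64)          # aerial image patches
    labels = torch.randint(0, 3, (4, 64, 64))  # noisy OSM-derived masks
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```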

    High-Resolution Poverty Maps in Sub-Saharan Africa

    Up-to-date poverty maps are an important tool for policy makers but have, until now, been prohibitively expensive to produce. We propose a generalizable prediction methodology to produce poverty maps at the village level using geospatial data and machine learning algorithms. We tested the proposed method for 25 Sub-Saharan African countries and validated the results against survey data. The proposed method can increase the validity of both single-country and cross-country estimations, leading to higher precision in poverty maps of 44 Sub-Saharan African countries than was previously available. More importantly, our cross-country estimation enables the creation of poverty maps when it is not practical or cost-effective to field new national household surveys, as is the case in many low- and middle-income countries.
    Comment: Updated appendix
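In outline, the estimation pipeline regresses a survey-based poverty measure on village-level geospatial covariates and validates on held-out villages. The sketch below uses scikit-learn with synthetic data; the covariates, model choice, and data are illustrative assumptions rather than the paper's exact specification.

```python
# Sketch: predict a village-level poverty measure from geospatial covariates,
# then evaluate against held-out (survey-style) observations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# hypothetical covariates: night-time lights, road density, distance to city
X = rng.random((n, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```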