
    Formalising cartographic generalisation knowledge in an ontology to support on-demand mapping

    This thesis proposes that on-demand mapping - where the user can choose the geographic features to map and the scale at which to map them - can be supported by formalising, and making explicit, cartographic generalisation knowledge in an ontology. The aim was to capture the semantics of generalisation, in the form of declarative knowledge, in an ontology so that it could be used by an on-demand mapping system to make decisions about what generalisation algorithms are required to resolve a given map condition, such as feature congestion, caused by a change in scale. The lack of a suitable methodology for designing an application ontology was identified and remedied by the development of a new methodology that was a hybrid of existing domain ontology design methodologies. Using this methodology, an ontology was built that described not only the geographic features but also the concepts of generalisation, such as geometric conditions, operators and algorithms. A key part of the evaluation phase of the methodology was the implementation of the ontology in a prototype on-demand mapping system. The prototype system was used successfully to map road accidents and the underlying road network at three different scales. A major barrier to on-demand mapping is the need to automatically provide parameter values for generalisation algorithms. A set of measure algorithms was developed to identify the geometric conditions in the features caused by a change in scale. From this, a Degree of Generalisation (DoG) is calculated, which represents the “amount” of generalisation required. The DoG is used as an input to a number of bespoke generalisation algorithms. In particular, a road network pruning algorithm was developed that respected the relationship between accidents and road segments. The development of bespoke algorithms is not a sustainable solution, and a method for employing the DoG concept with existing generalisation algorithms is required. Consideration was given to how the ontology-driven prototype on-demand mapping system could be extended to use cases other than mapping road accidents, and a need for collaboration with domain experts on an ontology for generalisation was identified. Although further testing using different use cases is required, this work has demonstrated that an ontological approach to on-demand mapping has promise.
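
    The abstract does not show how a measured condition becomes a DoG or how the DoG then selects an operator. The following minimal Python sketch illustrates one way that pipeline could look; the congestion measure, thresholds and operator names are illustrative assumptions, not the ontology or values used in the thesis.

```python
# Hypothetical sketch of the Degree of Generalisation (DoG) idea: a measure
# quantifies a geometric condition (here, feature congestion), the DoG
# normalises the "amount" of generalisation needed, and an operator is chosen.
# All names and thresholds are assumptions for illustration only.

def congestion(feature_count: int, map_area_mm2: float) -> float:
    """Features per square millimetre of map space at the target scale."""
    return feature_count / map_area_mm2

def degree_of_generalisation(measure: float, acceptable: float, maximum: float) -> float:
    """Normalise a condition measure into a DoG in [0, 1]."""
    if measure <= acceptable:
        return 0.0
    return min(1.0, (measure - acceptable) / (maximum - acceptable))

def choose_operator(dog: float) -> str:
    """Map the DoG onto a generalisation operator (illustrative rules only)."""
    if dog == 0.0:
        return "no_action"
    if dog < 0.4:
        return "displacement"
    if dog < 0.8:
        return "typification"        # reduce feature count while keeping the pattern
    return "elimination_and_merge"

# Example: 500 accident points drawn in 2,000 mm^2 of map space.
dog = degree_of_generalisation(congestion(500, 2000.0), acceptable=0.1, maximum=1.0)
print(dog, choose_operator(dog))
```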

    Cartographic modelling for automated map generation


    Continuous and Adaptive Cartographic Generalization of River Networks

    The focus of our research is on a new automated smoothing method and its applications. Traditionally, the application of a smoothing method to a collection of polylines produces a new smoothed dataset. Although the new dataset was derived from the original dataset, it is stored independently. Since many smoothing methods are slow to execute, this is a valid trade-off. However, this greatly increases the data storage requirements for each new smoothing. A consequence of this approach is that interactive map systems can only offer maps at a discrete set of scales. It is desirable to have a method fast enough to support the reuse of a single base dataset for on-the-fly smoothing for the production of maps at any scale. We were able to create a framework for the automated smoothing of river networks based on the following major contributions:
    – A wavelet-based method for polyline smoothing and endpoint preservation
    – Inverse Mirror Periodic (IMP) representation of functions and signals, and dimensional wavelets
    – Smoothing of features that does not change abruptly between scales
    – Features are pruned in a continuous manner with respect to scale
    – River network connectedness is maintained for all scales
    – Reuse of a base geographic dataset for all scales
    – Design and implementation of an interactive map viewer for linear hydrographic features that renders in subsecond time
    We have created an interactive map that can smoothly zoom to any region. Numerical experiments show that our wavelet-based method produces cartographically appropriate smoothing for tributaries. The system is implemented to view hydrographic data, such as the USGS National Hydrography Dataset (NHD). The map demonstrates that a wavelet-based approach is well suited for basic generalization operations. It provides smoothing and pruning that is continuously dependent on map scale.
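
    As a rough illustration of the wavelet-smoothing idea above, the sketch below suppresses the finest detail coefficients of a polyline's coordinate signals with PyWavelets and pins the endpoints. It is an assumption-laden stand-in: the thesis's IMP representation, continuity across scales and pruning are not reproduced here.

```python
# Minimal wavelet-based polyline smoothing sketch (pip install PyWavelets numpy).
# Fine-detail coefficients of the x and y coordinate signals are zeroed and the
# endpoints are pinned; this is only an illustration, not the thesis's method.
import numpy as np
import pywt

def smooth_polyline(xy: np.ndarray, wavelet: str = "db4", drop_levels: int = 2) -> np.ndarray:
    """Smooth an (n, 2) polyline by zeroing the finest wavelet detail levels."""
    out = np.empty_like(xy, dtype=float)
    for dim in range(2):
        coeffs = pywt.wavedec(xy[:, dim], wavelet, mode="symmetric")
        for lvl in range(1, min(drop_levels, len(coeffs) - 1) + 1):
            coeffs[-lvl] = np.zeros_like(coeffs[-lvl])   # kill the finest details
        rec = pywt.waverec(coeffs, wavelet, mode="symmetric")
        out[:, dim] = rec[: xy.shape[0]]                 # trim padding, if any
    out[0], out[-1] = xy[0], xy[-1]                      # preserve endpoints
    return out

# Example: a noisy meander of 256 vertices.
t = np.linspace(0, 4 * np.pi, 256)
line = np.column_stack([t, np.sin(t) + 0.05 * np.random.randn(t.size)])
print(smooth_polyline(line, drop_levels=3)[:3])
```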

    An investigation into automated processes for generating focus maps

    The use of geographic information for mobile applications such as wayfinding has increased rapidly, enabling users to view information on their current position in relation to the neighbouring environment. This is due to the ubiquity of small devices such as mobile phones, coupled with location-finding devices that use the Global Positioning System. However, such applications are still not attractive to users because of the difficulty of viewing and identifying the details of the immediate surroundings that help users follow directions along a route. This results from a lack of presentation techniques to highlight the salient features (such as landmarks) among other unique features. Another problem is that, since such applications do not provide any eye-catching distinction between information about the region of interest along the route and the background information, users are not encouraged to focus on and engage with wayfinding applications. Although several approaches have previously been attempted to address these deficiencies by developing focus maps, such applications still need to be improved in order to provide users with a visually appealing presentation of information to assist them in wayfinding. The primary goal of this research is to investigate the processes involved in generating a visual representation that allows key features in an area of interest to stand out from the background in focus maps for wayfinding users. In order to achieve this, the automated processes in four key areas - spatial data structuring, spatial data enrichment, automatic map generalization and spatial data mining - have been thoroughly investigated by testing existing algorithms and tools. Having identified the gaps that need to be filled in these processes, the research developed new algorithms and tools in each area through thorough testing and validation. Thus, a new triangulation data structure is developed to retrieve the adjacency relationships between polygon features required for data enrichment and automatic map generalization. Further, a new hierarchical clustering algorithm is developed to group polygon features during data enrichment, as required by the automatic generalization process. In addition, two generalization algorithms for polygon merging are developed for generating a generalized background for focus maps, and finally a decision tree algorithm, C4.5, is customised for deriving salient features, including the development of a new framework to validate derived landmark saliency in order to improve the representation of focus maps.
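
    As a simplified illustration of the adjacency step that feeds data enrichment and polygon merging, the sketch below derives an adjacency graph with Shapely and merges polygons with a union. The thesis's own triangulation data structure, clustering and merging algorithms are not reproduced, and the test polygons are invented.

```python
# Hypothetical sketch: polygon adjacency as "shares a boundary of non-zero
# length", plus a simple union as a stand-in for background merging.
from itertools import combinations
from shapely.geometry import Polygon
from shapely.ops import unary_union

def adjacency(polygons: dict[str, Polygon]) -> set[tuple[str, str]]:
    """Pairs of polygon ids whose shared boundary has non-zero length."""
    pairs = set()
    for (ida, a), (idb, b) in combinations(polygons.items(), 2):
        if a.touches(b) and a.intersection(b).length > 0:
            pairs.add((ida, idb))
    return pairs

# Example: three unit squares in a row; the outer two are not adjacent.
polys = {
    "p1": Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
    "p2": Polygon([(1, 0), (2, 0), (2, 1), (1, 1)]),
    "p3": Polygon([(2, 0), (3, 0), (3, 1), (2, 1)]),
}
print(adjacency(polys))                        # {('p1', 'p2'), ('p2', 'p3')}
print(unary_union(list(polys.values())).area)  # merged background area: 3.0
```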

    Simplification of rivers based on the spatial reduction method

    This master's thesis focuses on the cartographic generalization of rivers using complete and partial collapse (spatial reduction) methods built on the straight skeleton data structure. The proposed method is designed for maps at large scales in the geographic sense and at medium scales in the cartographic sense (up to 1 : 100 000). The thesis treats river width as a stand-alone criterion for the generalization decision, and the presented solution adds a set of supplementary criteria that decide whether a river should be generalized. The thesis also addresses problematic situations that occur on rivers, such as islands, junctions, side arms and bifurcations. It further includes a proposed multi-phase generalization algorithm that uses the straight skeleton data structure. The algorithm is implemented in C++ in the Microsoft Visual Studio IDE and relies on the external libraries Qt and CGAL (Computational Geometry Algorithms Library). Results are stored in an ESRI geodatabase using Python 2.7 and the ArcPy library. Water areas from the ZABAGED dataset were chosen as test data. The achieved generalization results are presented on the test data at various scales and compared with the base maps of the Czech Republic. Keywords: digital cartography, cartography...
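
    The width criterion for the collapse decision can be sketched as simple scale arithmetic, shown below. The 0.3 mm minimum drawable width and the pure width test are assumptions for illustration; the thesis combines width with supplementary criteria and performs the collapse itself with a straight skeleton via CGAL, which is not reproduced here.

```python
# Hedged sketch of a width-based collapse decision. The 0.3 mm minimum drawable
# map width is an assumed cartographic threshold, not a value from the thesis.

def min_ground_width_m(scale_denominator: int, min_map_width_mm: float = 0.3) -> float:
    """Smallest river width (in metres) still drawable as an area at this scale."""
    return min_map_width_mm / 1000.0 * scale_denominator

def should_collapse(river_width_m: float, scale_denominator: int) -> bool:
    """Collapse the river polygon to its centreline if it is too narrow to draw."""
    return river_width_m < min_ground_width_m(scale_denominator)

# A 12 m wide stream: kept as an area at 1 : 10 000, collapsed at 1 : 100 000.
print(should_collapse(12.0, 10_000))    # False (threshold 3 m)
print(should_collapse(12.0, 100_000))   # True  (threshold 30 m)
```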

    GEOBIA 2016 : Solutions and Synergies., 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    The Application of Expert Systems to Small Scale Map Designs

    The increased availability of inexpensive computer mapping programs in recent years has led to a great increase in the number of map authors and the number of maps being produced, but it does not appear to have led to more widespread knowledge of cartographic design theory. The large number of poorly designed maps created by users of these computer systems indicates that there is a lack of knowledge of how to design maps. These poorly designed maps are not the fault of the computer programs, since most programs are capable of producing well-designed maps when used by someone knowledgeable in map design. Rather, the problem lies with map authors who are not skilled in cartographic design and who would probably never produce a map by conventional means, but would contract a cartographer to produce it. What is required are programs, usable by naive map authors, that are better able to produce reasonably well-designed maps, or at least maps which do not break the most fundamental rules of map design. The area of computer science devoted to producing programs that include knowledge of how an expert solves a problem is that of Expert Systems. An Expert System is essentially a program which includes a codified form of the rules that an expert uses to solve a problem; thus a cartographic design expert system would include the rules a cartographer uses when designing a map. This study examines the fields of artificial intelligence and expert systems to assess how they may best be applied to the map design problem. A comprehensive review of the application of expert systems in design, in mapping generally and in map design in particular is also provided. In order to develop an expert system, the problem or 'domain' must be defined in a relatively formal manner. A structure for describing geographic information and cartographic representation is developed, and a model of the cartographic design process for application in expert systems is also described. Based on the models developed, a functional specification for a cartographic design expert system for small scale maps is produced, with the rules required for each stage in the design process being set out. The development of an expert system, written in Prolog, incorporating these rules is then described in some detail. Details of how the Prolog language can be applied to a specific problem, colouring the political map, are also given. It has been found that, as long as realistic goals are set and the system is limited either in scale or in range of topics, it is possible to develop an operational cartographic design expert system. However, it must be recognised that a considerable amount of further development will be needed to bring such a system to market with the support structures and robustness that this entails.
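
    The thesis's worked example, colouring the political map, is implemented as Prolog rules. The underlying constraint, that no two adjacent regions share a colour, can be sketched as a small backtracking search; the Python version below is only an illustration of that rule, with an invented region graph and palette rather than anything taken from the thesis.

```python
# Backtracking map colouring: assign colours so that no two neighbouring
# regions match. The region graph and colour palette are invented examples.

def colour_map(adjacency: dict[str, list[str]], colours: list[str]) -> dict[str, str] | None:
    """Assign a colour to every region so that neighbours differ, or return None."""
    regions = list(adjacency)
    assignment: dict[str, str] = {}

    def backtrack(i: int) -> bool:
        if i == len(regions):
            return True
        region = regions[i]
        for colour in colours:
            if all(assignment.get(n) != colour for n in adjacency[region]):
                assignment[region] = colour
                if backtrack(i + 1):
                    return True
                del assignment[region]
        return False

    return assignment if backtrack(0) else None

# Example: four regions, where A borders every other region.
neighbours = {"A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "B"], "D": ["A"]}
print(colour_map(neighbours, ["red", "green", "blue", "yellow"]))
```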

    Cybergis-enabled remote sensing data analytics for deep learning of landscape patterns and dynamics

    Mapping landscape patterns and dynamics is essential to various scientific domains and many practical applications. The availability of large-scale and high-resolution light detection and ranging (LiDAR) remote sensing data provides tremendous opportunities to unveil complex landscape patterns and better understand landscape dynamics from a 3D perspective. LiDAR data have been applied to diverse remote sensing applications where large-scale landscape mapping is among the most important topics. While researchers have used LiDAR for understanding landscape patterns and dynamics in many fields, fully reaping its benefits and potential increasingly depends on advanced cyberGIS and deep learning approaches. In this context, the central goal of this dissertation is to develop a suite of innovative cyberGIS-enabled deep-learning frameworks for combining LiDAR and optical remote sensing data to analyze landscape patterns and dynamics with four interrelated studies. The first study demonstrates a high-accuracy land-cover mapping method by integrating 3D information from LiDAR with multi-temporal remote sensing data using a 3D deep-learning model. The second study combines a point-based classification algorithm and an object-oriented change detection strategy for urban building change detection using deep learning. The third study develops a deep learning model for accurate hydrological streamline detection using LiDAR, which has paved a new way of harnessing LiDAR data to map landscape patterns and dynamics at unprecedented computational and spatiotemporal scales. The fourth study resolves computational challenges in handling remote sensing big data and deep learning of landscape feature extraction and classification through a cutting-edge cyberGIS approach.
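
    As a loose illustration of the data fusion behind the first study, the sketch below stacks a LiDAR-derived height channel with optical bands and feeds them to a tiny fully convolutional classifier in PyTorch. The layer sizes, the single nDSM channel and the 2D architecture are assumptions for illustration and do not reproduce the dissertation's 3D model or its cyberGIS workflow.

```python
# Hypothetical sketch: fuse optical bands with a LiDAR normalized DSM (nDSM)
# channel and produce per-pixel land-cover class scores. Illustrative only.
import torch
import torch.nn as nn

class FusionSegNet(nn.Module):
    """Tiny fully convolutional net over stacked optical + LiDAR nDSM channels."""
    def __init__(self, optical_bands: int = 4, n_classes: int = 6):
        super().__init__()
        in_channels = optical_bands + 1            # +1 for the LiDAR nDSM channel
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, optical: torch.Tensor, ndsm: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([optical, ndsm], dim=1))

# Example: one 4-band optical tile plus its nDSM, both 128 x 128 pixels.
model = FusionSegNet()
scores = model(torch.rand(1, 4, 128, 128), torch.rand(1, 1, 128, 128))
print(scores.shape)   # torch.Size([1, 6, 128, 128])
```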