
    A computer science perspective on the bendsimplification algorithm

    The primary aim of this study was to evaluate whether the use of bends provides a better basis than point elimination for research on line structuring. These investigations were undertaken using Arc/Info 7.1.1. Comparative experimental results suggest that the bendsimplification algorithm may not be as widely applicable as the much simpler geometric filters, such as the Douglas-Peucker or Visvalingam algorithms. The paper therefore provides a brief review of these three algorithms. A more detailed conceptual and empirical evaluation of the bendsimplification system follows, highlighting some problems with implementing the system in Arc/Info. The paper then questions the value of over-coupling model- and image-oriented generalization processes within the black-box bendsimplification system. It suggests the types of parameters that could enhance the utility and usability of the Bendsimplify option within the Arc/Info (and perhaps also within the ArcView) environment and provides some pointers for further research. With respect to the main aim of the research, the evidence suggests that bendsimplification is less useful for line segmentation than Visvalingam's algorithm. Further research is needed to assess the value of the iterative bend elimination operator within bendsimplification.
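
    The Visvalingam algorithm compared above lends itself to a compact illustration. The sketch below is a minimal, unoptimised rendering of its effective-area idea; it is not the Arc/Info or bendsimplification code evaluated in the paper, and all names are illustrative.

```python
# Minimal sketch of Visvalingam's algorithm: repeatedly drop the vertex
# whose triangle with its two neighbours has the smallest "effective
# area". O(n^2) for clarity; practical versions use a priority queue.

def triangle_area(a, b, c):
    """Unsigned area of the triangle spanned by points a, b and c."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam(points, n_keep):
    """Reduce a polyline to n_keep vertices by effective-area elimination."""
    pts = list(points)
    while len(pts) > max(n_keep, 2):
        # Effective area of each interior vertex.
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        # Remove the vertex that contributes least to the line's shape.
        del pts[areas.index(min(areas)) + 1]
    return pts

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9)]
print(visvalingam(line, 4))
```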

    Cartographic Algorithms: Problems of Implementation and Evaluation and the Impact of Digitising Errors

    Cartographic generalisation remains one of the outstanding challenges in digital cartography and Geographical Information Systems (GIS). It is generally assumed that computerisation will lead to the removal of spurious variability introduced by the subjective decisions of individual cartographers. This paper demonstrates through an in‐depth study of a line simplification algorithm that computerisation introduces its own sources of variability. The algorithm, referred to as the Douglas‐Peucker algorithm in cartographic literature, has been widely used in image processing, pattern recognition and GIS for some 20 years. An analysis of this algorithm and a study of some implementations in wide use identify the presence of variability resulting from the subjective decisions of software implementors. Spurious variability in software complicates the processes of evaluation and comparison of alternative algorithms for cartographic tasks. No doubt, variability in implementation could be removed by rigorous study and specification of algorithms. Such future work must address the presence of digitising error in cartographic data. Our analysis suggests that it would be difficult to adapt the Douglas‐Peucker algorithm to cope with digitising error without altering the method. Copyright © 1991, Wiley Blackwell. All rights reserved.
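
    To make the sources of implementor variability concrete, here is a minimal recursive Douglas-Peucker sketch; it is not any of the implementations studied in the paper, and all names are illustrative. The comments mark two of the subjective choices (distance definition, tie-breaking) that can make outputs differ between otherwise faithful implementations.

```python
import math

def perp_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0.0:  # degenerate anchor segment: fall back to point distance
        return math.hypot(p[0] - a[0], p[1] - a[1])
    # Implementation choice: line distance vs. segment distance changes results.
    return abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / length

def douglas_peucker(points, tolerance):
    """Keep the endpoints; recurse on the farthest vertex if it exceeds tolerance."""
    if len(points) < 3:
        return list(points)
    dists = [perp_distance(p, points[0], points[-1]) for p in points[1:-1]]
    # Implementation choice: on ties, the first maximum wins here.
    split = dists.index(max(dists)) + 1
    if dists[split - 1] <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:split + 1], tolerance)
    right = douglas_peucker(points[split:], tolerance)
    return left[:-1] + right  # drop the duplicated split vertex

print(douglas_peucker([(0, 0), (1, 2), (2, 1.5), (3, 4), (4, 0)], 1.0))
```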

    The Douglas-Peucker algorithm for line simplification: Re-evaluation through visualization

    The primary aim of this paper is to illustrate the value of visualization in cartography and to indicate that tools for the generation and manipulation of realistic images are of limited value within this application. This paper demonstrates the value of visualization within one problem in cartography, namely the generalisation of lines. It reports on the evaluation of the Douglas-Peucker algorithm for line simplification. Visualization of the simplification process and of the results suggests that the mathematical measures of performance proposed by some other researchers are inappropriate, misleading and questionable.
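
    The exploratory, visual style of evaluation the paper advocates can be approximated by sweeping the tolerance parameter and inspecting each generalisation in turn, rather than reducing the comparison to a single numeric score. A sketch assuming the shapely library (not the software used in the paper), whose simplify method implements a Douglas-Peucker variant:

```python
from shapely.geometry import LineString

# Hypothetical test line; shapely is an assumption, not the paper's toolkit.
line = LineString([(0, 0), (1, 0.2), (2, -0.1), (3, 4), (4, 4.5), (5, 4.4), (6, 8)])

# Sweep the tolerance and inspect each result instead of trusting
# a single summary statistic.
for tol in (0.05, 0.1, 0.5, 1.0, 2.0):
    simplified = line.simplify(tol, preserve_topology=False)  # Douglas-Peucker variant
    print(f"tolerance {tol}: kept {len(simplified.coords)} of {len(line.coords)} vertices")
```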

    Collaboration on an Ontology for Generalisation

    To move beyond the current plateau in automated cartography we need greater sophistication in the process of selecting generalisation algorithms. This is particularly so in the context of machine comprehension. We also need to build on existing algorithm development instead of duplicating it. More broadly, we need to model the geographical context that drives the selection, sequencing and degree of application of generalisation algorithms. We argue that a collaborative effort is required to create and share an ontology for cartographic generalisation focused on supporting the algorithm selection process. The benefits of developing a collective ontology will be increased sharing of algorithms and support for on-demand mapping and generalisation web services.

    Trends and concerns in digital cartography

    CISRG discussion paper

    Constrained set-up of the tGAP structure for progressive vector data transfer

    A promising approach to transmitting a vector map from a server to a mobile client is to send a coarse representation first and then refine it incrementally. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to transmit well-generalised datasets, we propose a two-stage method: first, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints; second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The resulting sequence of gradual transformations is stored without geometrical redundancy in a structure that builds on the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested on land-cover data from the official German topographic dataset ATKIS at scale 1:50 000, with a target scale of 1:250 000. The test results allow us to conclude that the data at the lowest and intermediate LoDs are well generalised. With specialised heuristics, the optimisation method copes with large datasets, and the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: a coarse representation is sent first and refined until the requested LoD is reached.
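
    The tGAP structure itself is a topological hierarchy held in an object-relational database; the toy sketch below only illustrates the progressive-transfer idea described above, with hypothetical names throughout. The server records the merge steps produced during generalisation, and a client replays their inverses (splits) from the coarse map down to the requested LoD.

```python
# Toy sketch of progressive refinement, not the actual tGAP schema.
from dataclasses import dataclass

@dataclass
class MergeStep:
    lod: float    # scale denominator at which the merge applies (e.g. 250000)
    child_a: int  # face ids merged away...
    child_b: int
    parent: int   # ...into this coarser face

def increments_for(steps, requested_lod):
    """Refinement steps a client must replay below the coarsest map.

    steps are ordered fine -> coarse as built on the server; the client
    applies them coarse -> fine (reversed) as splits, stopping once the
    requested LoD is reached.
    """
    return [s for s in reversed(steps) if s.lod > requested_lod]

# Server-side sequence built while generalising 1:50 000 towards 1:250 000.
steps = [MergeStep(60000, 1, 2, 10), MergeStep(120000, 10, 3, 11), MergeStep(250000, 11, 4, 12)]
for step in increments_for(steps, 60000):
    print(f"split face {step.parent} back into {step.child_a} and {step.child_b}")
```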