
    Collaboration on an Ontology for Generalisation

    To move beyond the current plateau in automated cartography, we need greater sophistication in the process of selecting generalisation algorithms. This is particularly so in the context of machine comprehension. We also need to build on existing algorithm development instead of duplicating it. More broadly, we need to model the geographical context that drives the selection, sequencing and degree of application of generalisation algorithms. We argue that a collaborative effort is required to create and share an ontology for cartographic generalisation focused on supporting the algorithm selection process. The benefits of developing a collective ontology will be the increased sharing of algorithms and support for on-demand mapping and generalisation web services.

    Process Modelling, Web Services and Geoprocessing

    Process modelling has always been an important part of research in generalisation. In the early days this took the form of a static sequence of generalisation actions, but currently the focus is on modelling much more complex processes, capable of generalising geographic data into various maps according to specific user requirements. To channel the growing complexity of the processes required, better process models had to be developed. This chapter discusses several aspects of the problem of building such systems. As a system grows more complex, it becomes important to be able to reuse components that already exist. Web services have been used to encapsulate generalisation processes in a way that maximises their interoperability and therefore their reusability. However, for a system to discover and trigger such a service, the service needs to be formalised and described in a machine-understandable way, and the system needs knowledge of where and when to use such tools. This chapter therefore explores the requirements and potential approaches for designing and building such systems.
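    The machine-understandable service description called for above can be sketched as follows. The `ServiceDescription` fields and the `discover` helper are illustrative assumptions for this sketch, not an interface defined in the chapter:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDescription:
    """Machine-readable description of a generalisation web service:
    the operation it implements, the feature classes it accepts, and
    the scale range it is valid for (all fields are illustrative)."""
    name: str
    operation: str            # e.g. "simplify", "displace", "typify"
    feature_types: frozenset  # e.g. frozenset({"building", "road"})
    min_scale: int            # smallest scale denominator, e.g. 10_000
    max_scale: int            # largest scale denominator, e.g. 50_000

def discover(registry, operation, feature_type, scale):
    """Return the registered services applicable to a request."""
    return [s for s in registry
            if s.operation == operation
            and feature_type in s.feature_types
            and s.min_scale <= scale <= s.max_scale]
```

    A process model could then query such a registry at run time instead of hard-wiring a fixed algorithm sequence, which is the kind of reuse the chapter argues for.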

    Active Brownian Particles. From Individual to Collective Stochastic Dynamics

    We review theoretical models of individual motility as well as collective dynamics and pattern formation of active particles. We focus on simple models of active dynamics, with a particular emphasis on the nonlinear and stochastic dynamics of such self-propelled entities in the framework of statistical mechanics. Examples of such active units in complex physico-chemical and biological systems are chemically powered nano-rods, localized patterns in reaction-diffusion systems, motile cells and macroscopic animals. Based on the description of the individual motion of point-like active particles by stochastic differential equations, we discuss different velocity-dependent friction functions and the impact of various types of fluctuations, and calculate characteristic observables such as stationary velocity distributions and diffusion coefficients. Finally, we consider not only the free and confined individual active dynamics but also different types of interaction between active particles. The resulting collective dynamical behavior of large assemblies and aggregates of active units is discussed, and an overview of some recent results on spatiotemporal pattern formation in such systems is given. (161 pages; review accepted in Eur. Phys. J. Special Topics.)
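    The modelling approach summarised above, a stochastic differential equation for the velocity with a velocity-dependent friction, can be sketched in one dimension with a Rayleigh-type friction term, integrated by the Euler-Maruyama scheme. Parameter names and values here are illustrative, not taken from the review:

```python
import math
import random

def simulate_active_particle(alpha=1.0, beta=1.0, noise=0.1,
                             dt=1e-3, steps=10_000, v0=0.1, seed=0):
    """Euler-Maruyama integration of a 1-D active particle with a
    Rayleigh-type velocity-dependent friction:
        dv = (alpha - beta * v**2) * v * dt + sqrt(2 * noise) * dW
        dx = v * dt
    Energy intake (alpha) pumps slow motion, nonlinear damping (beta)
    limits it; for noise=0 the speed relaxes to sqrt(alpha / beta)."""
    rng = random.Random(seed)
    x, v = 0.0, v0
    sigma = math.sqrt(2.0 * noise * dt)
    for _ in range(steps):
        v += (alpha - beta * v * v) * v * dt + sigma * rng.gauss(0.0, 1.0)
        x += v * dt
    return x, v
```

    In the noise-free limit this reproduces the deterministic fixed-point speed; with noise, long runs sample the stationary velocity distribution that such models are used to compute.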

    Methodologies for the evaluation of generalised data derived with commercially available generalisation systems

    The paper investigates methodological questions on the analysis and evaluation of automatically generalised maps. The maps are produced with commercially available out-of-the-box generalisation systems, such that every system was tested by several persons on four test cases. The requirements on the generalised maps were described formally by cartographic constraints. In addition, manually generalised maps were provided to give further reference information for the testers. The analyses of the generalised maps are based on empirical and automated evaluation methods. The paper presents these evaluation methods in detail, covering their objectives, related research, how the methods are realised, and their expected outcomes. Possible interchanges and synergies between the evaluation methods are also described. The work published in this paper contributes to research on formal descriptions of cartographic requirements on generalised maps. It supports the development of methods for the situation- and context-dependent application of generalisation functionality and serves the evaluation of existing generalisation products, in order to derive future research and development potential.
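    A formal cartographic constraint of the kind described can be sketched as a predicate scored over the generalised features. The minimum-area threshold, the feature representation, and the scoring scheme below are hypothetical illustrations, not values from the paper:

```python
def evaluate_constraints(features, constraints):
    """Score each cartographic constraint against the generalised
    features; returns per-constraint satisfaction in [0, 1], where 1
    means the constraint is fully respected in the output map."""
    report = {}
    for name, check in constraints.items():
        results = [check(f) for f in features]
        report[name] = sum(results) / len(results) if results else 1.0
    return report

# Hypothetical example: buildings as dicts carrying their symbol area
# on the target map; legibility demands a minimum symbol area.
MIN_AREA_MM2 = 0.16  # illustrative legibility threshold

buildings = [{"area_mm2": 0.30}, {"area_mm2": 0.10}, {"area_mm2": 0.25}]
report = evaluate_constraints(
    buildings,
    {"minimum_area": lambda f: f["area_mm2"] >= MIN_AREA_MM2},
)
```

    Automated evaluation then reduces to running such a constraint set over each system's output and comparing the satisfaction scores across test cases.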

    Comparing image-based methods for assessing visual clutter in generalized maps

    Map generalization abstracts and simplifies geographic information to derive maps at smaller scales. The automation of map generalization requires techniques to evaluate the global quality of a generalized map. The quality and legibility of a generalized map are related to the complexity of the map, or the amount of clutter in it, i.e. the excessive amount of information and its disorganization. Measuring clutter in images is an active topic in computer vision research, and this paper compares some of the existing techniques from computer vision, applied to the evaluation of generalized maps. Four techniques from the literature are described and tested on a large set of maps generalized at different scales: edge density, subband entropy, quadtree complexity, and segmentation clutter. The results are analyzed against several criteria related to generalized maps: the identification of cluttered areas, the preservation of the global amount of information, the handling of occlusions and overlaps, the distinction of foreground from background, and the reduction of blank space.
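    The simplest of the four measures, edge density, can be sketched as follows: the clutter score is the fraction of pixels whose intensity gradient exceeds a threshold. This is a pure-Python finite-difference version, and the threshold value is an assumption:

```python
def edge_density(gray, threshold=0.1):
    """Edge-density clutter score for a grayscale map image, given as
    a 2-D list of floats in [0, 1]. Gradients are taken with forward
    differences; the score is the fraction of interior pixels whose
    gradient magnitude exceeds `threshold`. A near-blank map scores
    close to 0; a dense, cluttered map scores close to 1."""
    h, w = len(gray), len(gray[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = gray[y][x + 1] - gray[y][x]   # horizontal gradient
            gy = gray[y + 1][x] - gray[y][x]   # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges += 1
    return edges / ((h - 1) * (w - 1))
```

    Evaluating a generalized map then amounts to comparing its score against the source map's, or mapping the local score over tiles to identify the cluttered areas mentioned above.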

    A study on the state-of-the-art in automated map generalisation implemented in commercial out-of-the-box software

    This paper describes the set-up and the progress of the EuroSDR project that studies the state of the art in automated map generalisation implemented in commercial out-of-the-box software. The project started in October 2006 with a project team consisting of National Mapping Agencies (NMAs) and research institutes. From October 2006 until May 2007, four test cases from four different NMAs were selected, each consisting of a large-scale source data set, requirements for the smaller-scale output map, and symbolisation information. Much effort was put into specifying and harmonising requirements for the output maps. These requirements were defined as a set of constraints to be respected in the output maps. From June 2007 the project team tested the four test cases with four commercial out-of-the-box software systems: ArcGIS, Genesys, Change/Push/Typify and Clarity. The vendors of these systems performed parallel tests on the four test cases in which they were allowed to customise their systems. An evaluation methodology has been designed and partly implemented. Results are expected by the end of 2008.