Multi-dimensional modelling for the national mapping agency: a discussion of initial ideas, considerations, and challenges
The Ordnance Survey, the National Mapping Agency (NMA) for Great Britain, has recently
begun to research the possible extension of its 2-dimensional geographic information into a
multi-dimensional environment. Such a move creates a number of data creation and storage
issues which the NMA must consider. Many of these issues are highly relevant to all NMAs
and their customers alike, and are presented and explored here.
This paper offers a discussion of initial considerations which NMAs face in the creation of
multi-dimensional datasets. Such issues include assessing which objects should be mapped in
3 dimensions by a National Mapping Agency, what should be sensibly represented
dynamically, and whether resolution of multi-dimensional models should change over space.
This paper also offers some preliminary suggestions for the optimal creation method for any
future enhanced national height model for the Ordnance Survey. This discussion includes
examples of problem areas and issues in both the extraction of 3-D data and in the
topological reconstruction of such. 3-D feature extraction is not a new problem. However, the
degree of automation which may be achieved and the suitability of current techniques for
NMAs remain a largely uncharted research area, which this research aims to tackle.
The issues presented in this paper require immediate research, and if solved adequately
would mark a cartographic paradigm shift in the communication of geographic information –
and could signify the beginning of a new way in which NMAs both present information to
and interact with their customers in the future.
Techniques for augmenting the visualisation of dynamic raster surfaces
Despite their aesthetic appeal and condensed nature, dynamic raster surface representations, such as a temporal series of a landform or an attribute series of a socio-economic attribute of an area, are often criticised for a lack of effective information delivery and interactivity. In this work, we readdress some of the previously raised reasons for these limitations: the information-laden quality of surface datasets, the lack of spatial and temporal continuity in the original data, and the limited scope for real-time interactivity. We demonstrate with examples that the use of four techniques, namely the re-expression of the surfaces as a framework of morphometric features, spatial generalisation, morphing with graphic lag, and brushing, can augment the visualisation of dynamic raster surfaces in temporal and attribute series.
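Of the techniques listed, morphing between successive frames is the simplest to illustrate. The sketch below is a generic linear cross-dissolve between two raster grids, not the authors' implementation; the function name and the plain nested-list raster representation are assumptions for illustration.

```python
def morph_frames(raster_a, raster_b, n_steps):
    """Linearly interpolate between two equally sized rasters
    (nested lists of rows) to create intermediate frames, smoothing
    the transition between two states of a dynamic surface series."""
    frames = []
    for step in range(1, n_steps + 1):
        t = step / (n_steps + 1)  # interpolation fraction in (0, 1)
        frame = [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
                 for row_a, row_b in zip(raster_a, raster_b)]
        frames.append(frame)
    return frames
```

Inserting such interpolated frames into an animated series is one way to supply the temporal continuity the abstract says the original data lack.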
Weighted and metric surface networks - new insights and an interactive application for their generalisation in Tcl/Tk
The idea of characterising the different forms of natural topographic surfaces by a topological model based on their fundamental surface features has attracted many proposals. In this paper, a detailed discussion and new proposals on various issues related to the concept, generation, and visualisation of two graph-theoretic surface topology data structures - Weighted Surface Networks and their improved version, Metric Surface Networks - are presented. Also presented is an interactive Tcl/Tk application called the Surface Topology Toolkit, which has been developed to support the discussion of aspects of their generalisation and visualisation. The highlight of the Surface Topology Toolkit is a utility that allows arbitrary contraction, unlike the usual vertex-importance-based criterion. This paper proposes that effective automated surface topology modelling based on these surface networks requires (a) further research into the development of 'computing' algorithms that will accurately locate critical surface points, establish topological links, and check topological consistency, (b) transforming their 2D straight-line-graph-like appearance to 3D to improve visualisation and contraction, and (c) assessment of, and user awareness about, the effects of each type of contraction criterion on the topography.
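The contraction operations discussed act on graph structures. As a generic illustration only (not the Surface Topology Toolkit's actual WSN/MSN contraction, which must also respect weights and topological consistency), a single vertex-contraction step on an undirected graph stored as an adjacency dict of sets might look like:

```python
def contract_vertex(graph, v):
    """Remove vertex v from an undirected adjacency-dict graph and
    reconnect its neighbours to each other, preserving connectivity.
    Assumes no self-loops, so v never reappears in a neighbour set."""
    neighbours = graph.pop(v)
    for u in neighbours:
        graph[u].discard(v)              # drop the edge to v
        graph[u] |= (neighbours - {u})   # link v's other neighbours
    return graph
```

In a surface network the choice of *which* vertex (critical point) to contract is the interesting part; the abstract's point is that the toolkit lets the user pick arbitrarily rather than by an importance ranking.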
Historical collaborative geocoding
The latest developments in digitisation have provided large data sets that can
increasingly easily be accessed and used. These data sets often contain
indirect localisation information, such as historical addresses. Historical
geocoding is the process of transforming the indirect localisation information
to direct localisation that can be placed on a map, which enables spatial
analysis and cross-referencing. Many efficient geocoders exist for current
addresses, but they do not deal with the temporal aspect and are based on a
strict hierarchy (..., city, street, house number) that is hard or impossible
to use with historical data. Indeed, historical data are full of uncertainties
(temporal aspect, semantic aspect, spatial precision, confidence in the
historical source, ...) that cannot be resolved, as there is no way to go back
in time to check. We propose an open source, open data, extensible solution
for geocoding
check. We propose an open source, open data, extensible solution for geocoding
that is based on the building of gazetteers composed of geohistorical objects
extracted from historical topographical maps. Once the gazetteers are
available, geocoding an historical address is a matter of finding the
geohistorical object in the gazetteers that is the best match to the historical
address. The matching criteria are customisable and include several dimensions
(fuzzy semantic, fuzzy temporal, scale, spatial precision ...). As the goal is
to facilitate historical work, we also propose web-based user interfaces that
help geocode (one address or batch mode) and display over current or historical
topographical maps, so that they can be checked and collaboratively edited. The
system is tested on the city of Paris for the 19th-20th centuries, shows a high
return rate, and is fast enough to be used interactively.
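The abstract describes matching a historical address against gazetteer entries along several fuzzy dimensions. The following is a minimal sketch of such multi-criteria scoring; the field names, weights, and linear temporal decay are illustrative assumptions, not the paper's actual scoring model.

```python
from difflib import SequenceMatcher

def match_score(query_name, query_year, obj):
    """Score a geohistorical object (dict with hypothetical keys
    'name', 'start', 'end', 'precision_m') against a query."""
    # fuzzy semantic similarity of the names
    semantic = SequenceMatcher(None, query_name.lower(),
                               obj["name"].lower()).ratio()
    # fuzzy temporal fit: 1 inside the validity interval, decaying outside
    if obj["start"] <= query_year <= obj["end"]:
        temporal = 1.0
    else:
        gap = min(abs(query_year - obj["start"]), abs(query_year - obj["end"]))
        temporal = max(0.0, 1.0 - gap / 50.0)  # assumed 50-year decay
    # favour spatially precise objects
    precision = 1.0 / (1.0 + obj["precision_m"] / 100.0)
    # assumed weights for the combined score
    return 0.6 * semantic + 0.3 * temporal + 0.1 * precision

def best_match(query_name, query_year, gazetteer):
    """Geocoding then reduces to picking the best-scoring object."""
    return max(gazetteer, key=lambda o: match_score(query_name, query_year, o))
```

The real system would also return the match score so low-confidence results can be flagged for the collaborative checking the abstract describes.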
The Douglas-Peucker algorithm for line simplification: Re-evaluation through visualization
The primary aim of this paper is to illustrate the value of visualization in cartography and to indicate that tools for the generation and manipulation of realistic images are of limited value within this application. This paper demonstrates the value of visualization within one problem in cartography, namely the generalisation of lines. It reports on the evaluation of the Douglas-Peucker algorithm for line simplification. Visualization of the simplification process and of the results suggests that the mathematical measures of performance proposed by some other researchers are inappropriate, misleading and questionable.
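For reference, the Douglas-Peucker algorithm evaluated in this paper can be sketched as follows; this is the standard recursive formulation, not the authors' own code:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:           # degenerate chord: distance to a
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline: keep the point farthest from the chord
    joining the endpoints if it exceeds the tolerance, and recurse."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:             # all points within tolerance: drop them
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right          # avoid duplicating the split point
```

The global, endpoint-anchored nature of this recursion is exactly what the paper's visual evaluation probes: the retained points depend on the chord geometry, not on locally perceived shape.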
Line generalisation by repeated elimination of points
This paper presents a new approach to line generalisation which uses the concept of 'effective area' for progressive simplification of a line by point elimination. Two coastlines are used to compare the performance of this algorithm with that of the widely used Douglas-Peucker algorithm. The results from the area-based algorithm compare favourably with manual generalisation of the same lines. It is capable of achieving both imperceptible minimal simplifications and caricatural generalisations. By careful selection of cutoff values, it is possible to use the same algorithm for scale-dependent and scale-independent generalisations. More importantly, it offers scope for modelling cartographic lines as consisting of features within features, so that their geometric manipulation may be modified by application- and/or user-defined rules and weights. The paper examines the merits and limitations of the algorithm and the opportunities it offers for further research and progress in the field of line generalisation. © 1993 Maney Publishing
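The 'effective area' approach described here is commonly known as the Visvalingam-Whyatt algorithm. A naive sketch follows; it rescans every triangle area on each pass for clarity, whereas a practical implementation would maintain a priority queue and recompute only the two areas affected by each elimination.

```python
def triangle_area(a, b, c):
    """Area of the triangle a-b-c via the cross product."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam(points, n_keep):
    """Repeatedly eliminate the point whose triangle with its two
    neighbours has the smallest 'effective area', until only
    n_keep points remain."""
    pts = list(points)
    while len(pts) > max(n_keep, 2):
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        i_min = min(range(len(areas)), key=areas.__getitem__)
        del pts[i_min + 1]            # areas[i] describes pts[i + 1]
    return pts
```

Stopping at a point count, as above, gives a scale-independent ranking of points; stopping instead when the smallest effective area exceeds a cutoff gives the scale-dependent behaviour the abstract mentions.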