    The visualization and analysis of urban facility POIs using network kernel density estimation constrained by multi-factors

    The urban facility, one of the most important service providers, is usually represented in GIS applications by sets of points using the POI (Point of Interest) model, associated with certain human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location selection, and social recommendation. Kernel Density Estimation (KDE), an efficient spatial-statistics tool for facilitating these processes, plays an important role in spatial density evaluation, because the KDE method accounts for the decay of service impact with distance and enriches a simple input scatter of points into a smooth output density surface. However, traditional KDE is based on Euclidean distance, ignoring the fact that in an urban street network the service function of a POI operates over a network-constrained structure rather than in continuous Euclidean space. To address this, the study proposes a computational method for KDE on a network and adopts a new visualization method using a 3-D “wall” surface. Real conditional factors, such as traffic capacity, road direction, and facility difference, are also taken into account. The proposed method is applied to real POI data from Shenzhen, China, to depict the distribution characteristics of services under the impact of multiple factors.
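    As a rough illustration of the network-constrained idea (not the paper's exact method), the sketch below accumulates, at every street-graph node, a kernel that decays with shortest-path distance to each POI instead of Euclidean distance. It assumes a networkx graph whose edges carry a "length" attribute; the paper's multi-factor constraints (traffic capacity, road direction, facility difference) could enter as directed, re-weighted edges but are omitted here.

```python
# A minimal sketch of network-constrained KDE; graph layout and kernel choice
# are illustrative assumptions, not the paper's implementation.
import networkx as nx

def network_kde(graph, poi_nodes, bandwidth):
    """Accumulate a kernel density value at every node of the street graph."""
    density = {node: 0.0 for node in graph.nodes}
    for poi in poi_nodes:
        # Shortest network distances from this POI, cut off at the bandwidth.
        dists = nx.single_source_dijkstra_path_length(
            graph, poi, cutoff=bandwidth, weight="length")
        for node, d in dists.items():
            # Epanechnikov-style kernel: quadratic decay to zero at the bandwidth.
            density[node] += 0.75 * (1.0 - (d / bandwidth) ** 2)
    return density
```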

    A heuristic approach to resolving graphical point feature conflicts in large-scale maps

    This thesis deals with the automation of cartographic generalization of point features by displacement and selection in large-scale maps. The first part introduces the theoretical foundations of this process, with emphasis on the work of authors who have addressed its automation. The next part describes and analyses the ZABAGED data, from which the state map series is produced and on which the algorithm developed in the practical part of the thesis was tested. The main objective was to determine whether a solution based on sequential, intelligent testing of different displacement positions for overlapping point features is practical. Within this framework, the strategies by which the state space given by the possible positions of displaced points can be generated and searched are identified and described. These strategies were implemented over the ZABAGED dataset, tested, and compared, and a conclusion was drawn on the applicability of this approach in automatic generalization. Key words: digital cartography, cartographic generalization, generalization by selection, generalization by displacement, point features, heuristics. Department of Applied Geoinformatics and Cartography, Faculty of Science.
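    To make the "sequential, intelligent testing of displacement positions" concrete, here is a minimal backtracking sketch over a hypothetical candidate set of eight grid offsets per symbol; the actual generation and search strategies compared in the thesis, and the ZABAGED symbology, are not reproduced.

```python
# Backtracking search over candidate displaced positions for square symbols;
# the 8-offset candidate set and conflict test are illustrative assumptions.
from itertools import product

OFFSETS = [(dx, dy) for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)]

def conflict(a, b, size):
    """Axis-aligned square symbols conflict when centres are closer than size."""
    return abs(a[0] - b[0]) < size and abs(a[1] - b[1]) < size

def search(points, size, step, placed=None, i=0):
    """Return a conflict-free placement, or None if the state space has none."""
    placed = [] if placed is None else placed
    if i == len(points):
        return placed
    x, y = points[i]
    # Try the original position first, then the eight displaced candidates.
    for dx, dy in [(0, 0)] + OFFSETS:
        cand = (x + dx * step, y + dy * step)
        if all(not conflict(cand, p, size) for p in placed):
            result = search(points, size, step, placed + [cand], i + 1)
            if result is not None:
                return result
    return None                              # dead end: backtrack
```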

    An algorithm for point cluster generalization based on the Voronoi diagram

    This paper presents an algorithm for point cluster generalization. Four types of information are considered, statistical, thematic, topological, and metric, and measures are selected to describe each quantitatively: the number of points for statistical information, an importance value for thematic information, Voronoi neighbours for topological information, and the distribution range and relative local density for metric information. Based on these measures, an algorithm for point cluster generalization is developed. Firstly, the point cluster is triangulated and its border polygon is obtained. From the border polygon, pseudo points are added to the original point cluster to form a new point set, and a range polygon that encloses all original points is constructed. Secondly, the Voronoi polygons of the new point set are computed to obtain the so-called relative local density of each point. The selection probability of each point is then computed from its relative local density and importance value, and points are marked as ‘deleted’ according to their selection probabilities and Voronoi neighbouring relations. Thirdly, if the number of retained points does not yet satisfy the number computed by the Radical Law, the points marked as ‘deleted’ are physically removed to form a new point set and the second step is repeated; otherwise, the pseudo points and the points marked as ‘deleted’ are physically removed, yielding the generalized point cluster. Owing to the use of the Voronoi diagram, the algorithm is parameter-free and fully automatic. As our experiments show, it can be used for the generalization of point features arranged in clusters, such as thematic dot maps and control points on cartographic maps.
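    A hedged sketch of one selection round of such a Voronoi-driven scheme (numpy/scipy assumed) is given below. The paper's pseudo-point and range-polygon construction and its iterative, neighbour-aware deletion are simplified to a single ranking pass in which a point's Voronoi cell area stands in for its relative local density; unbounded cells are kept unconditionally in place of the border treatment.

```python
# Simplified Voronoi-based point selection; the single-pass ranking is an
# illustrative reduction of the paper's iterative algorithm.
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def radical_law(n_source, source_denom, target_denom):
    """Topfer's Radical Law: points to retain for a scale-denominator change."""
    return int(round(n_source * np.sqrt(source_denom / target_denom)))

def select_points(points, importance, n_keep):
    """Return indices of the n_keep points ranked most worth retaining."""
    vor = Voronoi(points)
    score = np.empty(len(points))
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:
            score[i] = np.inf        # unbounded cell: retain unconditionally
            continue
        area = ConvexHull(vor.vertices[region]).volume  # 2-D hull volume = area
        # Sparse points (large cells) and important points score higher.
        score[i] = area * importance[i]
    return np.sort(np.argsort(score)[::-1][:n_keep])
```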

    Visualization of implicit geographic information through map-like graphics

    A lot of user-generated information accumulated on the web is related to a place, yet the location is usually just one piece of information among many and receives no special attention. However, more and more geographic information is being collected by laymen and published on the web. Despite the technical possibilities offered by Web 2.0, users without expert knowledge are usually unable to produce legible and appealing maps. This problem exists because the user determines the content to be displayed without any check of whether the resulting map meets cartographic requirements. Furthermore, the user has no way to edit the standard map images, for example to hide objects irrelevant to the subject or to free up relevant objects by displacement. Especially when displaying POIs, overlaps between point symbols occur frequently. Despite the need, no established method for displacing point data exists, for various reasons. The focus of this work is therefore the development of methods for the displacement of point symbols. Voronoi diagrams are used as auxiliary structures, and sentiments are visualized as the user-generated information. For the design of the visualizations, relevant cartographic requirements are taken into account and evaluated with corresponding quality measures. For the depiction of sentiments, two further types of visualization are created in addition to point symbols: adaptation of given map symbols, and the representation of sentiments as continua. Displacement techniques for point symbols are designed and implemented. To determine the direction of displacement, two different heuristics are proposed and examined. Furthermore, a way to increase efficiency by partitioning the point set is shown. The designed point symbols are evaluated by means of a survey. Finally, the realized method for circular symbols of equal size is evaluated in three respects: the degree to which partitioning the point set reduces the number of iteration steps, the achieved reduction of the overlap area, and the change in the relative positions of the points.
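    As a minimal illustration of point-symbol displacement (numpy only), the sketch below resolves overlaps between equal-radius circular symbols by iterated pairwise repulsion along the centre-to-centre vector. The thesis's two Voronoi-based direction heuristics and its point-set partitioning for efficiency are deliberately not reproduced, since the abstract does not specify them.

```python
# Pairwise-repulsion displacement for equal-radius circle symbols; the
# direction heuristic here is an assumption, not the thesis's method.
import numpy as np

def displace(points, radius, max_iter=100, step=0.5):
    pts = np.asarray(points, dtype=float).copy()
    min_dist = 2.0 * radius                  # centres closer than this overlap
    for _ in range(max_iter):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                delta = pts[j] - pts[i]
                dist = float(np.hypot(*delta))
                if 0.0 < dist < min_dist:    # coincident centres are left as-is
                    # Push both symbols apart by a fraction of the overlap.
                    shift = step * (min_dist - dist) / dist * delta
                    pts[i] -= 0.5 * shift
                    pts[j] += 0.5 * shift
                    moved = True
        if not moved:                        # no overlaps left: done
            break
    return pts
```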

    Definition and representation of geospatial corpora for 3D indoor maps from point clouds

    Advisor: Prof. Dr. Daniel Rodrigues dos Santos. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências da Terra, Programa de Pós-Graduação em Ciências Geodésicas. Defended: Curitiba, 15/12/2022. Includes references: p. 149-156.
    Abstract: The term geospatial corpora denotes a set of geospatial data, systematized according to certain criteria so that it is representative of the space to be mapped. Based on probabilistic and combinatorial concepts, such corpora support analysing the collocability of a geospatial datum given the description of the environment of interest, seeking to reveal answers through statistical observation and the identification of usage patterns in a collection of digital samples, an approach drawn from Natural Language Processing, a subarea of Machine Learning. In the three-dimensional context, 3D geospatial corpora can be formed from sets of LiDAR point clouds and carry a high computational cost for storage, manipulation, and visualization; moreover, such data are neither structured nor semantic. To address this problem, a method for the 3D cartographic generalization of LiDAR point clouds using deep learning is proposed. The method is based on the naturalness hypothesis, centred on LiDAR point clouds, and on parsimony in the geometric description of indoor environments, and its contribution is four-fold. First, a set of simplification operators is defined from a statistical self-similarity analysis of the LiDAR point clouds. Second, a deep learning technique is used for semantic segmentation of the LiDAR point clouds. Third, the RANSAC algorithm is executed to fit planar surfaces. Finally, a parsimony-of-description-based strategy for aggregating planar surfaces into subspaces is investigated. The method was tested on six LiDAR point cloud datasets acquired with a terrestrial LiDAR system in static mode. The experimental results demonstrated that exploiting the naturalness hypothesis on LiDAR point clouds allowed indoor environments to be modelled successfully at LoD2. The simplification operators reduced the volume of the LiDAR data by around 86%, and their application combined with the aggregation step structured and reduced the data by around 94%.
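    Of the four steps above, only the RANSAC plane-fitting stage is generic enough to sketch without guessing at the thesis's specifics. A minimal numpy version, assuming the learned segmentation has already isolated the candidate points of one building element, might look like this.

```python
# A hedged RANSAC plane-fitting sketch for LiDAR points (numpy only); the
# iteration count and inlier threshold are illustrative defaults.
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.05, seed=None):
    """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = 0, None
    for _ in range(n_iters):
        # Sample three distinct points and form the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample: resample
            continue
        normal /= norm
        d = -normal @ p0
        # Count points within the orthogonal-distance threshold.
        count = int((np.abs(points @ normal + d) < threshold).sum())
        if count > best_count:
            best_count, best_plane = count, (normal, d)
    return best_plane
```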

    Theory of Spatial Similarity Relations and Its Applications in Automated Map Generalization

    Automated map generalization is a necessary technique for the construction of multi-scale vector map databases, which are crucial components of the spatial data infrastructure of cities, provinces, and countries. Nevertheless, it remains a dream, because many algorithms for map feature generalization are not parameter-free and therefore need human intervention. One major reason is that map generalization is a process of spatial similarity transformation in multi-scale map spaces, yet no theory exists to support such a transformation. This thesis focuses on the theory of spatial similarity relations in multi-scale map spaces, aiming to propose approaches and models that can be used to automate some relevant algorithms in map generalization. After a systematic review of existing achievements, including the definitions and features of similarity in various communities, a classification system of spatial similarity relations, the calculation models of similarity relations in psychology, computer science, music, and geography, and a number of raster-based approaches for calculating similarity degrees between images, the thesis makes the following contributions. First, the fundamental issues of spatial similarity relations are explored: (1) a classification system is proposed that classifies the objects processed by map generalization algorithms into ten categories; (2) Set Theory-based definitions of similarity, spatial similarity, and spatial similarity relations in multi-scale map spaces are given; (3) mathematical descriptions of the features of spatial similarity relations in multi-scale map spaces are addressed; (4) the factors that affect human judgments of spatial similarity relations are identified, and their weights are obtained by psychological experiments; and (5) a classification system for spatial similarity relations in multi-scale map spaces is proposed. Second, models that can calculate spatial similarity degrees for the ten types of objects in multi-scale map spaces are proposed, and their validity is tested by psychological experiments. Given a map (or an individual object, or an object group) and its generalized counterpart, the models can be used to calculate the spatial similarity degree between them. Third, the proposed models are used to solve problems in map generalization: (1) ten formulae are constructed that relate spatial similarity degree to map scale change in map generalization; (2) an approach based on spatial similarity degree is proposed that can determine when a map generalization system or algorithm should terminate while generalizing map objects, which may fully automate some relevant algorithms and therefore improve the efficiency of map generalization; and (3) an approach is proposed for calculating the distance tolerance of the Douglas-Peucker Algorithm so that the algorithm may become fully automatic. Nevertheless, the theory and approaches proposed in this study have two limitations that need further exploration.
    • More experiments should be done to improve the accuracy and adaptability of the proposed models and formulae. The new experiments should select more typical maps and map objects as samples, and recruit more subjects with different cultural backgrounds.
    • Whether the ten models/formulae for calculating spatial similarity degrees can feasibly be integrated into a single model/formula needs further investigation. In addition, it is important to identify other algorithms that, like the Douglas-Peucker Algorithm, are not parameter-free and are closely related to spatial similarity relations, and to explore approaches for calculating the parameters used in those algorithms with the help of the models and formulae proposed in this thesis.
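    The third contribution pairs the Douglas-Peucker Algorithm with an automatically derived distance tolerance. The thesis obtains that tolerance from spatial similarity degrees, a formula the abstract does not reproduce; the sketch below is therefore only a stand-in that couples a standard Douglas-Peucker implementation with the common "fixed drawing tolerance at map scale" rule of thumb, so the overall shape of a fully automatic run is visible.

```python
# Douglas-Peucker with a scale-derived tolerance. dp_tolerance is a
# placeholder rule of thumb, not the thesis's similarity-based formula.
import numpy as np

def dp_tolerance(scale_denominator, mm_on_map=0.3):
    """Ground tolerance in metres for a drawing tolerance given in map mm."""
    return scale_denominator * mm_on_map / 1000.0

def douglas_peucker(points, tol):
    """Classic recursive Douglas-Peucker simplification of a 2-D polyline."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    if seg_len == 0.0:            # closed chord: fall back to radial distance
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of each vertex to the chord (2-D cross product).
        dists = np.abs(seg[0] * (points[:, 1] - start[1])
                       - seg[1] * (points[:, 0] - start[0])) / seg_len
    idx = int(np.argmax(dists))
    if dists[idx] <= tol:
        return np.vstack([start, end])
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return np.vstack([left[:-1], right])   # drop the duplicated split vertex

# e.g. simplifying for a 1:50,000 target: douglas_peucker(line, dp_tolerance(50000))
```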

    Formalising cartographic generalisation knowledge in an ontology to support on-demand mapping

    This thesis proposes that on-demand mapping - where the user can choose the geographic features to map and the scale at which to map them - can be supported by formalising, and making explicit, cartographic generalisation knowledge in an ontology. The aim was to capture the semantics of generalisation, in the form of declarative knowledge, in an ontology so that it could be used by an on-demand mapping system to decide which generalisation algorithms are required to resolve a given map condition, such as feature congestion, caused by a change in scale. The lack of a suitable methodology for designing an application ontology was identified and remedied by the development of a new methodology that is a hybrid of existing domain ontology design methodologies. Using this methodology, an ontology was built that describes not only the geographic features but also the concepts of generalisation, such as geometric conditions, operators, and algorithms. A key part of the evaluation phase of the methodology was the implementation of the ontology in a prototype on-demand mapping system. The prototype system was used successfully to map road accidents and the underlying road network at three different scales. A major barrier to on-demand mapping is the need to provide parameter values for generalisation algorithms automatically. A set of measure algorithms was developed to identify the geometric conditions in the features caused by a change in scale. From this a Degree of Generalisation (DoG) is calculated, which represents the “amount” of generalisation required. The DoG is used as an input to a number of bespoke generalisation algorithms. In particular, a road network pruning algorithm was developed that respects the relationship between accidents and road segments. The development of bespoke algorithms is not a sustainable solution, and a method for employing the DoG concept with existing generalisation algorithms is required. Consideration was given to how the ontology-driven prototype on-demand mapping system could be extended to use cases other than mapping road accidents, and a need for collaboration with domain experts on an ontology for generalisation was identified. Although further testing with different use cases is required, this work has demonstrated that an ontological approach to on-demand mapping has promise.
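    The measure-to-DoG-to-parameter chain can be made concrete with a deliberately simplified example. Everything below is illustrative: the congestion measure, the linear DoG mapping, and the pruning fraction are placeholders, not the thesis's measure algorithms or its bespoke road-network pruning algorithm.

```python
# Hypothetical sketch of the Degree of Generalisation (DoG) pipeline:
# a measure detects a map condition, the DoG normalises it to [0, 1],
# and a generalisation algorithm consumes it as a parameter.

def congestion_measure(symbol_area, map_face_area):
    """Fraction of the map face covered by symbology (a toy measure)."""
    return min(symbol_area / map_face_area, 1.0)

def degree_of_generalisation(measure, acceptable=0.15):
    """0 while the condition is acceptable, rising linearly to 1 as it worsens."""
    if measure <= acceptable:
        return 0.0
    return min((measure - acceptable) / (1.0 - acceptable), 1.0)

def prune_fraction(dog, max_fraction=0.6):
    """Map the DoG to the share of road segments a pruning step may remove."""
    return dog * max_fraction

dog = degree_of_generalisation(congestion_measure(32.0, 100.0))
print(f"DoG = {dog:.2f}, prune up to {prune_fraction(dog):.0%} of segments")
```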