228 research outputs found

    Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

    Research on forest ecosystems receives high attention, especially nowadays with regard to sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also for commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive. For detailed analysis, trees have to be cut down, which is often undesirable. Terrestrial Laser Scanning (TLS) provides a particularly attractive tool here because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparatively high stand density and the many occlusions that result from it. The varying level of detail of TLS data poses a major challenge. We present two fully automatic methods, with complementary properties, to obtain skeletal structures from scanned trees. First, we explain a method that retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk; the trunk itself is determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method, which processes individual scans. Furthermore, we introduce a novel concept for managing TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents. A Principal Curve is computed from the 3D point set associated with each subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there is no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we cannot assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results, and we provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full tree skeletons that approximate the branching structure. The level of detail is mainly governed by the voxel space, and therefore smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. It is sensitive to noise in the boundary, but the results are very promising, and there are plenty of possibilities to enhance its robustness. Combining the strengths of both methods needs to be investigated further and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
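As a rough illustration of the first method's core idea (a voxel-space representation in which paths are traced from branch tips back to the trunk), the following is a minimal sketch, not the thesis implementation. The voxel size, the 26-connectivity, the lowest-voxel trunk seed and the unweighted breadth-first search are simplifying assumptions; the thesis determines the trunk separately from the 3D points and produces a proper 3D line graph.

```python
# Minimal sketch: voxelize a TLS point cloud, connect occupied voxels, and
# trace paths from branch-tip voxels back to a trunk-base seed to form a
# skeleton line graph. Point array, voxel size and seed choice are illustrative.
import numpy as np
from collections import deque

def voxelize(points, voxel_size=0.05):
    """Map each 3D point to an integer voxel index; return the set of occupied voxels."""
    idx = np.floor(points / voxel_size).astype(int)
    return {tuple(v) for v in idx}

def neighbors26(v):
    """26-connected neighborhood of a voxel index."""
    x, y, z = v
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx or dy or dz:
                    yield (x + dx, y + dy, z + dz)

def skeleton_paths(points, voxel_size=0.05):
    occupied = voxelize(points, voxel_size)
    # Assume the trunk base is near the lowest occupied voxel.
    base = min(occupied, key=lambda v: v[2])
    # Breadth-first search from the base records a predecessor for every reachable voxel.
    pred = {base: None}
    queue = deque([base])
    while queue:
        v = queue.popleft()
        for n in neighbors26(v):
            if n in occupied and n not in pred:
                pred[n] = v
                queue.append(n)
    # Voxels that never act as a predecessor of another voxel serve as branch tips.
    parents = set(pred.values()) - {None}
    tips = [v for v in pred if v not in parents]
    # Trace each tip back to the base; the union of these paths approximates the skeleton.
    paths = []
    for tip in tips:
        path, v = [], tip
        while v is not None:
            path.append(np.array(v) * voxel_size + voxel_size / 2.0)
            v = pred[v]
        paths.append(np.array(path))
    return paths
```

A weighted shortest-path search (for example Dijkstra over voxel-centre distances) would follow the branch geometry more faithfully than the unweighted search used in this sketch.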

    Hierarchical and Adaptive Filter and Refinement Algorithms for Geometric Intersection Computations on GPU

    Geometric intersection algorithms are fundamental to spatial analysis in Geographic Information Systems (GIS). This dissertation explores a high-performance computing solution for geometric intersection over very large spatial datasets using the Graphics Processing Unit (GPU). We have developed a hierarchical filter and refinement system for parallel geometric intersection operations involving large polygons and polylines by extending the classical filter-and-refine algorithm with efficient filters that leverage GPU computing. The inputs are two layers of large polygonal datasets, and the computations are spatial intersections on pairs of cross-layer polygons. These intersections are the compute-intensive spatial data analytic kernels in spatial join and map overlay operations in spatial databases and GIS. Efficient filters, such as the PolySketch, PolySketch++ and point-in-polygon filters, have been developed to reduce the refinement workload on GPUs. We also show the application of these filters in speeding up line segment intersections and point-in-polygon tests. Programming models such as CUDA and OpenACC have been used to implement the different versions of the Hierarchical Filter and Refine (HiFiRe) system. Experimental results show good performance of our filter and refinement algorithms. Compared to the standard R-tree filter, our filter technique can, on average, discard a further 76% of polygon pairs that have no segment intersection points. The PolySketch filter reduces, on average, 99.77% of the workload of finding line segment intersections. Compared to the existing Common Minimum Bounding Rectangle (CMBR) filter, which is applied to each cross-layer candidate pair, the workload after applying our PolySketch-based CMBR filter is on average 98% smaller. The execution time of our HiFiRe system on two shapefiles, USA Water Bodies (464K polygons) and USA Block Group Boundaries (220K polygons), is about 3.38 seconds on an NVidia Titan V GPU.
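As a plain, sequential illustration of the filter-and-refine pattern that HiFiRe parallelizes on GPUs, the sketch below prunes cross-layer polygon pairs with a cheap minimum-bounding-rectangle test and then runs exact segment-intersection tests only on the survivors. The polygon representation and the simple MBR filter are assumptions of this sketch; the PolySketch and PolySketch++ filters from the dissertation are not reproduced.

```python
# Simplified CPU illustration of filter-and-refine for cross-layer polygon
# intersection; polygons are lists of (x, y) vertices.
def mbr(poly):
    xs = [p[0] for p in poly]; ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def mbrs_overlap(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def segments_intersect(p, q, r, s):
    """Orientation-based test for segments pq and rs (general-position case only)."""
    def orient(a, b, c):
        v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s) and
            orient(r, s, p) != orient(r, s, q))

def edges(poly):
    return [(poly[i], poly[(i+1) % len(poly)]) for i in range(len(poly))]

def cross_layer_intersections(layer_a, layer_b):
    # Filter phase: cheap MBR test prunes pairs that cannot intersect.
    candidates = [(a, b) for a in layer_a for b in layer_b
                  if mbrs_overlap(mbr(a), mbr(b))]
    # Refinement phase: exact segment tests on the surviving pairs only.
    results = []
    for a, b in candidates:
        if any(segments_intersect(p, q, r, s)
               for p, q in edges(a) for r, s in edges(b)):
            results.append((a, b))
    return results
```

The filter phase does redundant, data-parallel work that maps naturally onto a GPU, while the expensive refinement runs only on the small fraction of pairs the filter cannot reject.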

    MPI-Vector-IO: Parallel I/O and Partitioning for Geospatial Vector Data

    Geospatial datasets are growing in size, complexity and heterogeneity, and high-performance systems are needed to analyze such data and produce actionable insights efficiently. For polygonal (vector) datasets, operations such as I/O, data partitioning, communication, and load balancing become challenging in a cluster environment. In this work, we present MPI-Vector-IO, a parallel I/O library designed on top of MPI-IO specifically for partitioning and reading irregular vector data formats such as Well Known Text. It makes MPI aware of spatial data and spatial primitives, and provides support for spatial data types embedded within collective computation and communication using the MPI message-passing library. These abstractions, together with parallel I/O support, are useful for parallel Geographic Information System (GIS) application development on HPC platforms.
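To make the partitioned-reading idea concrete, the sketch below uses mpi4py to read disjoint byte ranges of a Well Known Text file in parallel, extending each range by a small overlap so that a line split across a partition boundary is completed by the rank that owns its first byte. The file name, the fixed overlap size and this boundary-handling rule are assumptions of the sketch, not MPI-Vector-IO's actual API.

```python
# Minimal mpi4py sketch of partitioned reading of a line-oriented WKT file.
from mpi4py import MPI

OVERLAP = 1 << 16  # extra bytes read past the nominal chunk, assumed to exceed the longest line

def read_wkt_partition(path):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    fh = MPI.File.Open(comm, path, MPI.MODE_RDONLY)
    file_size = fh.Get_size()

    # Nominal, contiguous byte range owned by this rank.
    chunk = file_size // size
    start = rank * chunk
    end = file_size if rank == size - 1 else start + chunk

    # Read the owned range plus a small overlap so a geometry split across the
    # boundary can be completed by the rank that owns its first byte.
    length = min(end + OVERLAP, file_size) - start
    buf = bytearray(length)
    fh.Read_at(start, buf)
    fh.Close()

    text = bytes(buf).decode("utf-8", errors="replace")
    # Every rank except the first skips the partial line at the start of its
    # chunk; that line is owned (and completed) by the previous rank.
    pos = 0 if rank == 0 else text.find("\n") + 1
    geometries = []
    while pos < len(text):
        nl = text.find("\n", pos)
        if nl == -1:
            nl = len(text)
        if start + pos < end:          # the line begins inside the owned range
            geometries.append(text[pos:nl])
        pos = nl + 1
    return geometries
```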

    GIS-based modelling for fuel reduction using controlled burn in Australia: case study: Logan City, Queensland

    Bushfires are a long-standing threat and environmental problem in Australia, and planning to control them is very important for the Australian environment. One of the most effective ways to fight bushfire disasters is planning controlled burns in order to reduce the risk of unwanted bushfire events, and controlled burn management and planning has always been considered important by town planners. The aim of this study is to produce a tool for prioritizing burn blocks based on different criteria, giving planners a sound scientific basis for choosing the most important blocks on which to carry out controlled burns. The following research tasks were considered: (1) investigating criteria related to prescribed burn management and their usability in designing a model for analysing the long-term geospatial suitability of prescribed burns; (2) finding a suitable model for scoring blocks designated as long-term fuel-reduction prescribed burn blocks; and (3) testing the model in a pilot area. Several criteria for building a GIS-based multi-criteria analysis were studied and their importance weights were debated. The research methodology in this phase involved reviewing the literature and methods for determining weights, and drawing on expert opinion through interviews, small surveys and focus groups in a stakeholder organization to identify the most relevant and important criteria. Finally, the eleven most important criteria were chosen and compared with each other by interviewees to determine their importance weights. The developed model considers all the criteria selected in the criteria analysis phase that are usable for planning and prioritizing burn blocks, and it provides a basis for sound, robust decisions on which blocks are most suitable to burn from a long-term point of view. The GIS data used in the model were acquired from the pilot area's relevant authorities, and the model was built with ESRI's ArcGIS analysis tools and the ArcGIS Spatial Analyst extension. The Analytical Hierarchy Process (AHP) was used to combine the criteria weights and develop a unified value-based solution to the study's multi-criteria analysis problem, organized around the two main themes of 'Implementation' and 'Safety'. The model was tested on the Logan City area in southern Queensland, Australia, an administrative area for which all the criteria data had been prepared and acquired. Results: Because combining the final results by simple overlay can introduce bias (some blocks match the safety theme well but not the implementation theme, and vice versa), the results of the two themes were combined using an optimization methodology based on probabilistic principles to generate the final prioritized blocks. The usability of the model's results was tested by Logan City Council managers and Parks Department bushfire experts. The suitability of the blocks was very close to the experts' own assessments, they judged the model results fully satisfactory, and the block rankings produced by the model accorded with their practical perception from field visits and field knowledge. Overall, the tool created by this study gives decision makers a sound basis for deciding on long-term priorities when planning controlled burn activities.
Decision makers can use the model to obtain a long-term outlook on the budget and resources that need to be allocated to fuel-reduction controlled burns, which in turn facilitates short-term planning. In a controlled burn, patches or blocks that pose a risk to the environment and to people are selected and burned deliberately under safe, controlled conditions, ensuring that the ready-to-burn bark and tree canopy, the 'fuel load', is removed from the area before an uncontrolled bushfire can occur. This research investigates different approaches to building a spatial model that gives decision makers a rational justification for planning controlled burns over the long term, including finding a suitable model for scoring blocks designated as prescribed burn blocks. In the course of the research, it was first established how prescribed burn programs work, what characteristics a burn plan has, and how different criteria may contribute to the suitability of performing a prescribed burn; a model was then developed for this purpose. The model output is the set of blocks prioritized under the two main themes of 'Safety' and 'Implementation', which are combined to generate the final prioritization: the higher a block's rank, the higher its priority to be burned first in long-term planning. The model was tested in the Logan City area in South East Queensland, Australia. The outcome showed good agreement between the planners' suitability choices, which were based on field visits, and the prioritized blocks generated by the model; this agreement was investigated by gathering different decision makers' opinions on individual blocks and comparing them with the actual model output.
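The abstract states that the Analytical Hierarchy Process was used to combine criteria weights from pairwise comparisons. The following is a minimal sketch of that weighting step, not the study's actual model: it derives weights from a reciprocal pairwise comparison matrix using the geometric-mean approximation of the principal eigenvector and reports Saaty's consistency ratio. The 3x3 toy matrix is illustrative; the study's eleven criteria and the interviewees' judgements are not reproduced here.

```python
# Minimal AHP weighting sketch: weights from a pairwise comparison matrix
# plus a consistency check.
import numpy as np

# Saaty's random consistency indices for small matrices (standard tabulated values).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    # Geometric-mean approximation of the principal eigenvector.
    geo = np.prod(A, axis=1) ** (1.0 / n)
    weights = geo / geo.sum()
    # Consistency check: lambda_max estimated from A·w compared against w.
    lam_max = float(np.mean(A.dot(weights) / weights))
    ci = (lam_max - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n, 0.0) > 0 else 0.0
    return weights, cr

# Toy example: criterion 1 is moderately more important than 2 and strongly more than 3.
matrix = [[1.0,   3.0,   5.0],
          [1/3.0, 1.0,   2.0],
          [1/5.0, 1/2.0, 1.0]]
w, cr = ahp_weights(matrix)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgements are acceptably consistent.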

    Context-Based classification of objects in topographic data

    Large-scale topographic databases model real-world features as vector data objects, which can be point, line or area features. Each of these map objects is assigned to a descriptive class; for example, an area feature might be classed as a building, a garden or a road. Topographic data is subject to continual updates from cartographic surveys and ongoing quality improvement. One of the most important aspects of this is the assignment and verification of class descriptions for each area feature. These attributes can be added manually but, given the vast volume of data involved, automated techniques are desirable for classifying these polygons. Analogy is a key thought process that underpins learning and has been the subject of much research in the field of artificial intelligence (AI). An analogy identifies structural similarity between a well-known source domain and a less familiar target domain. In many cases, information present in the source can then be mapped to the target, yielding a better understanding of the latter. The solution of geometric analogy problems has been a fruitful area of AI research. We observe that there is a correlation between objects in geometric analogy problem domains and map features in topographic data. We describe two topographic area-feature classification tools that use descriptions of neighbouring features to identify analogies between polygons: content vector matching (CVM) and context structure matching (CSM). CVM and CSM classify an area feature by matching its neighbourhood context against those of analogous polygons whose class is known. Both classifiers were implemented and then tested on high-quality topographic polygon data supplied by Ordnance Survey (Great Britain). Area features were found to exhibit a high degree of variation in their neighbourhoods. CVM correctly classified 85.38% of the 79.03% of features it attempted to classify; the accuracy of CSM was 85.96% on the 62.96% of features it tried to identify. Thus, CVM can classify 25.53% more features than CSM, but is slightly less accurate. Both techniques excelled at identifying the feature classes that predominate in suburban data. Our structure-based classification approach may also benefit other types of spatial data, such as topographic line data, small-scale topographic data, raster data, architectural plans and circuit diagrams.
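The neighbourhood-matching idea behind a classifier such as CVM can be sketched as follows: describe each area feature by the histogram of its neighbours' classes and label an unknown feature with the class attached to the most similar known context, refusing to classify when nothing is similar enough. The data structures, the cosine similarity and the threshold are assumptions of this sketch, not the dissertation's algorithms.

```python
# Minimal neighbourhood-context classification sketch.
from collections import Counter
import math

def content_vector(neighbour_classes):
    """Histogram of the classes found in a polygon's neighbourhood."""
    return Counter(neighbour_classes)

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(unknown_neighbours, labelled_examples, threshold=0.8):
    """labelled_examples: list of (class_label, neighbour_classes) pairs."""
    target = content_vector(unknown_neighbours)
    best_label, best_score = None, 0.0
    for label, neighbours in labelled_examples:
        score = cosine(target, content_vector(neighbours))
        if score > best_score:
            best_label, best_score = label, score
    # Refuse to classify when no known context is similar enough; this mirrors
    # the abstract's distinction between features attempted and features classified.
    return best_label if best_score >= threshold else None

examples = [("building", ["road", "garden", "building"]),
            ("garden",   ["building", "building", "road"])]
print(classify(["garden", "road", "building"], examples))
```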

    Hydrological modelling of a large urban area with the SWMM stormwater model

    Stormwater modeling has a major role in preventing issues such as flash floods and urban water-quality problems. However, detailed modeling of large urban areas is time-consuming, as it typically involves model calibration based on highly detailed input data. Stormwater models with a lower spatial resolution would thus be valuable if their ability to provide realistic results could be demonstrated. This study proposes a methodology for rapid catchment delineation and stormwater management model (SWMM) parameterization in a large urban area, without calibration. The effect of spatial resolution on the accuracy of the modeling results is also discussed. A catchment delineation and SWMM parameterization is carried out for an urban area in the city of Lahti in southern Finland. GIS methods are used to process data covering large areas simultaneously, and literature values are used where no spatial data are available. To evaluate the parameterization results, the SWMM application is run using an hourly series of meteorological observations covering a period of four years. The catchment and subcatchment delineation can be carried out quickly and accurately with the methods developed in this work, although the process cannot be fully automated because of imperfections in the source data. Parameterization of the subcatchments, in contrast, is more challenging and involves greater uncertainties than the delineation. Even so, the SWMM application produces reasonable results compared with the literature and with other studies conducted in the same area, which suggests that even an uncalibrated SWMM model of low spatial resolution can be adequate for some stormwater modeling applications. Overall, the methods developed in this work provide a workable way to parameterize a SWMM model of a large urban area.
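As an illustration of the kind of uncalibrated, GIS-derived parameterization the abstract describes, the sketch below computes typical SWMM subcatchment parameters (area, percent imperviousness, characteristic width) from land-cover areas and an overland flow length. The land-cover classes, the area-divided-by-flow-length width heuristic and the input structure are assumptions of this sketch, not the thesis workflow.

```python
# Minimal sketch of deriving uncalibrated SWMM subcatchment parameters from GIS data.
def percent_impervious(landcover_areas_m2):
    """landcover_areas_m2: dict of land-cover class -> area within the subcatchment."""
    impervious_classes = {"roof", "asphalt", "paved"}          # assumed classification
    total = sum(landcover_areas_m2.values())
    imperv = sum(a for c, a in landcover_areas_m2.items() if c in impervious_classes)
    return 100.0 * imperv / total if total else 0.0

def characteristic_width(area_m2, flow_length_m):
    """SWMM's conceptual width, here estimated as area divided by overland flow length."""
    return area_m2 / flow_length_m if flow_length_m else 0.0

def subcatchment_parameters(sub):
    """sub: dict with 'landcover' (class -> m2), 'flow_length_m' and 'slope_pct'."""
    area_m2 = sum(sub["landcover"].values())
    return {
        "area_ha": area_m2 / 10_000.0,
        "pct_imperv": percent_impervious(sub["landcover"]),
        "width_m": characteristic_width(area_m2, sub["flow_length_m"]),
        "slope_pct": sub["slope_pct"],
    }

example = {"landcover": {"roof": 12_000.0, "asphalt": 8_000.0, "lawn": 30_000.0},
           "flow_length_m": 250.0, "slope_pct": 2.5}
print(subcatchment_parameters(example))
```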

    Dualgrid: a closed representation space for consistent spatial databases

    [Abstract] In the past decades, much effort has been devoted to the integration of spatial information within more traditional information systems. To support such integration, spatial data representation technology has been intensively improved, from conceptual and discrete models for data representation and query languages to indexing and visualization technologies and interoperability standards. As a result of all these efforts, Geographic Information Systems (GIS) are nowadays a widely used technology. Existing spatial database technology provides standardized data models and operations [OGC06], based on conceptually solid spatial algebras. However, translating such conceptual models into physical models suitable for implementation on computers, where only finite-precision representations of space can be used, is a difficult task. As a result, current implementations of physical models are generally severely limited when compared to their conceptual counterparts. They attempt to provide an implementation fulfilling the original conceptual algebras, but at the physical level they can no longer ignore the problems of robustness and topological correctness arising from the use of finite-precision numbers for representing spatial coordinates. This results in deceptive physical algebra implementations, because they break most of the properties of the conceptual algebra they rely on. More specifically, the physical models do not remain closed under the data types and operations of the algebra, and the solutions applied to address this problem, usually some kind of approximate result, do not fulfill the properties expected from the affected operation. The consequence is that the physical models fail to provide consistent implementations of the spatial operations. This makes the development of applications that rely on the properties of the conceptual model (e.g., spatial analysis applications) much more complex, if not impossible. Moreover, even the implementation of the physical model itself becomes more complex, as it can no longer rely on the theoretical basis of the conceptual model it is supposed to implement. The main goal of this research work is to provide a framework for developing spatial database extensions capable of fulfilling the key properties of the conceptual spatial algebra they implement. At the same time, the proposed framework meets the constraints imposed by today's real-world GIS applications in terms of performance and resource requirements, as well as interoperability with existing applications and standards. To achieve this goal, we first analyze the current state of the art in spatial information representation. The main focus is on the way the different approaches deal with the limitations imposed by computers and the effects these solutions have on the properties of the conceptual model they intend to implement. Second, we study the sources of these problems and propose a well-grounded physical model framework (called Dualgrid) that guarantees that implementations of spatial algebras keep their key properties from the perspective of the user application. We also provide an example of such an implementation and experimental results on how the framework solves the consistency and even the implementation problems of an existing and widely used spatial database extension.
Third, we revisit our framework and extend its properties (DualgridFF) so that it is able to meet the additional restrictions imposed by current spatial applications, tools and interoperability standards (OGC).
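The robustness problem the abstract describes can be made concrete with a small example, which is not Dualgrid itself: the intersection point of two segments, computed with floating-point coordinates, fails the exact collinearity predicate that defines it, whereas the same construction carried out with exact rational arithmetic remains consistent. The segment coordinates are arbitrary.

```python
# Finite precision breaks closure: the computed intersection of two segments
# is not, according to an exact predicate, on the line that produced it.
from fractions import Fraction

def intersection(p1, p2, p3, p4):
    """Intersection point of lines p1p2 and p3p4 (assumed non-parallel)."""
    d = (p2[0]-p1[0])*(p4[1]-p3[1]) - (p2[1]-p1[1])*(p4[0]-p3[0])
    t = ((p3[0]-p1[0])*(p4[1]-p3[1]) - (p3[1]-p1[1])*(p4[0]-p3[0])) / d
    return (p1[0] + t*(p2[0]-p1[0]), p1[1] + t*(p2[1]-p1[1]))

def on_line(a, b, p):
    """Exact collinearity predicate: cross product of (b - a) and (p - a)."""
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

float_pts = [(0.0, 0.0), (3.0, 3.0), (0.0, 1.0), (2.0, 0.0)]
exact_pts = [tuple(map(Fraction, p)) for p in float_pts]

x_float = intersection(*float_pts)
x_exact = intersection(*exact_pts)

# Floating point: the computed intersection is *not* on the second segment's line.
print(on_line(float_pts[2], float_pts[3], x_float))   # tiny non-zero residue
# Exact rationals: the same construction stays consistent (closure is preserved).
print(on_line(exact_pts[2], exact_pts[3], x_exact))   # exactly 0
```

Exact arithmetic restores consistency but is costly and does not interoperate with coordinate-based standards, which is the tension the Dualgrid framework addresses.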

    A Prototype Method for Storing Symbols for Multiple Maps in a Single Geodatabase Using ArcGIS Cartographic Representations

    ArcGIS 9.2 software, released in late 2006, introduced a new way for ESRI users to store symbology in the geodatabase. This new method, called cartographic representations, presents new challenges for those involved in producing high-quality maps from a GIS, including the development of new workflows that incorporate the new technology. The project used an existing geodatabase and a test set of hard-copy maps as a base from which to develop a prototype methodology for implementing cartographic representations. The main purpose of the project was to discover how feature symbols for multiple map products could be stored within a single geodatabase. In the course of the research, new techniques and functionality available with cartographic representations were evaluated against the standard ArcMap symbol management tools.