48 research outputs found

    A storage and access architecture for efficient query processing in spatial database systems

    Get PDF
    Due to the high complexity of objects and queries, and also due to extremely large data volumes, geographic database systems impose stringent requirements on their storage and access architecture with respect to efficient query processing. Performance-improving concepts such as spatial storage and access structures, approximations, object decompositions and multi-phase query processing have been suggested and analyzed as single building blocks. In this paper, we describe a storage and access architecture composed of the above building blocks in a modular fashion. Additionally, we incorporate a new ingredient into our architecture, the scene organization, for efficiently supporting set-oriented access in large-area region queries. An experimental performance comparison demonstrates that the concept of scene organization leads to considerable performance improvements for large-area region queries, by a factor of up to 150.
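
    The multi-phase query processing mentioned above follows the classical filter-and-refine pattern: a cheap test on object approximations discards most candidates before the expensive exact geometry test runs. The sketch below is a minimal Python illustration of that pattern; the function names and object layout are our own, not the paper's architecture, and the scene organization (clustering spatially adjacent objects for sequential reads) is not modeled here.

```python
# Minimal sketch of multi-phase (filter-and-refine) query processing:
# a cheap MBR (minimum bounding rectangle) filter pass followed by an
# exact refinement pass. All names are illustrative, not the paper's API.

def mbr_overlaps(a, b):
    """Cheap filter test on axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def region_query(query_mbr, objects, exact_test):
    # Phase 1 (filter): discard objects whose approximation cannot intersect.
    candidates = [o for o in objects if mbr_overlaps(o["mbr"], query_mbr)]
    # Phase 2 (refinement): run the expensive exact geometry test
    # only on the surviving candidates.
    return [o for o in candidates if exact_test(o["geometry"], query_mbr)]

objects = [
    {"mbr": (0, 0, 2, 2), "geometry": "poly_a"},
    {"mbr": (10, 10, 12, 12), "geometry": "poly_b"},
]
# A trivially permissive exact test keeps the example self-contained.
hits = region_query((1, 1, 3, 3), objects, lambda geom, q: True)
print(hits)  # only poly_a survives the MBR filter
```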

    Increased Functionality of Floodplain Mapping Automation: Utah Inundation Mapping System (UTIMS)

    Get PDF
    Flood plain mapping has become an increasingly important part of flood plain management. It employs mapping software and hydraulic calculation packages to efficiently map flood plains, and modelers often use automation software to reduce the time needed to develop the complex geometries required by hydraulic models. The Utah Inundation Mapping System (UTIMS) is designed to reduce the time required to develop complex geometries for use in flood plain mapping studies. The automated geometries developed by UTIMS include flood-specific river centerlines, bank lines, flow path lines, cross sections and areally averaged n-value polygons. UTIMS thus facilitates developing automated input to the US Army Corps of Engineers' HEC-RAS software. Results from HEC-RAS can be imported back into UTIMS for display and mapping. The user can also specify convergence criteria for the water surface profile at selected locations along the river and thus run UTIMS and HEC-RAS iteratively until the convergence criterion is met. UTIMS develops a new flood-specific geometry file for each iteration, enabling accurate modeling of the flood plain. Utilizing this robust and easy-to-operate software within the GIS environment, modelers can significantly reduce the time required to develop accurate flood plain maps. The time saved in developing the geometries allows modelers to spend more time on the actual modeling and on analyzing results, and can also mean faster turnaround and potential cost savings in flood plain modeling work. In this paper the authors describe UTIMS capabilities, compare them with other available software, and demonstrate the UTIMS flood plain automation process using a case study.
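
    The iterative UTIMS/HEC-RAS workflow described above amounts to a fixed-point iteration with a user-specified convergence tolerance. The sketch below shows only the shape of that loop; run_iteration is a toy stand-in, not the UTIMS or HEC-RAS interface.

```python
# Toy sketch of the iterate-until-convergence workflow: rebuild the
# flood-specific geometry, rerun the hydraulic model, and stop when the
# water surface elevation (WSE) at a monitored location settles down.

def run_iteration(wse):
    # Stand-in for one geometry build + hydraulic run; here it simply
    # relaxes the WSE toward a fixed point so the example is runnable.
    return 0.5 * (wse + 102.0)

def iterate_to_convergence(wse0, tol=0.01, max_iter=20):
    wse = wse0
    for i in range(max_iter):
        new_wse = run_iteration(wse)
        if abs(new_wse - wse) < tol:  # user-specified convergence criterion
            return new_wse, i + 1
        wse = new_wse  # rebuild flood-specific geometry and rerun
    raise RuntimeError("did not converge")

print(iterate_to_convergence(95.0))
```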

    Integrated stability mapping system for mines

    Get PDF
    The Integrated Stability Mapping System (ISMS) was developed as an engineering tool to quantify the geologic and geo-mechanical information of mines and to integrate the critical stability influence factors into an overall stability index for use in mine planning and support design. It is generally understood that inherent underground roof stability is determined by the interaction of the given geologic characteristics and the local stress influences. From this perspective, this dissertation establishes the need for an integrated stability mapping system by investigating traditional and current hazard mapping practices. To fulfill this need, computer-aided hazard mapping techniques and popular numerical methods for geo-mechanical analysis are reviewed. Then, an integrated stability mapping system incorporating geology hazard mapping, geologic structural feature impacts and advanced numerical stress analysis techniques into one solution is developed. The stability system is implemented inside the de facto standard drawing environment, AutoCAD, and is compatible with the widely used geology modeling software SurvCADD. This allows one to access numerous existing geologic data and mining information from present mine maps easily and directly. The LaModel stress calculation, a boundary element method integrated within the mapping system, can produce realistic and accurate stress and displacement analyses thanks to distinguishing features such as the laminated overburden model, consideration of the true topography and matching of actual irregular pillars. After the stability mapping system was developed, two case studies were performed to check for coding errors, verify calculation accuracy, and demonstrate the functionality and usefulness of the system. In the case studies, the composite stability index was compared with field observations, and a good correlation was found although only a few influence factors were considered. In conclusion, the dissertation suggests that the stability mapping system provides mining engineers with the ability to perform comprehensive, rapid and accurate multiple-factor stability mapping analysis. The resultant stability map can be a valuable guide to safer support design and better mine planning, and can ultimately increase the safety of mine design and reduce the injuries and fatalities associated with ground falls in underground mines.
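
    The notion of a composite stability index can be pictured as a weighted overlay of normalized influence-factor grids, as in the minimal sketch below. The factor names, weights and hazard threshold are assumptions for illustration only, not the formulas implemented in ISMS.

```python
# Minimal sketch of combining normalized stability influence factors
# (e.g. a roof geology rating and a stress rating) into one composite
# index on a mine-map grid. Weights and threshold are illustrative.
import numpy as np

# Each factor is a grid normalized to [0, 1], where 1 = most stable.
geology_rating = np.array([[0.9, 0.7], [0.4, 0.2]])
stress_rating = np.array([[0.8, 0.6], [0.5, 0.3]])

weights = {"geology": 0.6, "stress": 0.4}  # assumed weighting scheme

composite = (weights["geology"] * geology_rating
             + weights["stress"] * stress_rating)
hazard_zones = composite < 0.5  # flag cells below an assumed threshold
print(composite)
print(hazard_zones)
```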

    Hierarchical and Adaptive Filter and Refinement Algorithms for Geometric Intersection Computations on GPU

    Get PDF
    Geometric intersection algorithms are fundamental to spatial analysis in Geographic Information Systems (GIS). This dissertation explores a high-performance computing solution for geometric intersection over very large volumes of spatial data using the Graphics Processing Unit (GPU). We have developed a hierarchical filter and refinement system for parallel geometric intersection operations involving large polygons and polylines by extending the classical filter-and-refine algorithm with efficient filters that leverage GPU computing. The inputs are two layers of large polygonal datasets, and the computations are spatial intersections on pairs of cross-layer polygons. These intersections are the compute-intensive spatial data analytic kernels in spatial join and map overlay operations in spatial databases and GIS. Efficient filters, such as PolySketch, PolySketch++ and point-in-polygon filters, have been developed to reduce the refinement workload on GPUs. We also show the application of such filters in speeding up line segment intersections and point-in-polygon tests. Programming models such as CUDA and OpenACC have been used to implement the different versions of the Hierarchical Filter and Refine (HiFiRe) system. Experimental results show good performance of our filter and refinement algorithms. Compared to a standard R-tree filter, on average our filter technique can still discard 76% of the polygon pairs that do not have segment intersection points. The PolySketch filter reduces, on average, 99.77% of the workload of finding line segment intersections. Compared to the existing Common Minimum Bounding Rectangle (CMBR) filter applied to each cross-layer candidate pair, the workload after using the PolySketch-based CMBR filter is on average 98% smaller. The execution time of our HiFiRe system on two shapefiles, USA Water Bodies (464K polygons) and USA Block Group Boundaries (220K polygons), is about 3.38 seconds using an NVIDIA Titan V GPU.
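
    One of the filters named above, the point-in-polygon test, can be sketched with the classical ray-casting method. The sequential version below is for illustration only; the dissertation's CUDA/OpenACC implementations parallelize many such tests, and the function here is our own, not HiFiRe code.

```python
# Classical even-odd ray-casting point-in-polygon test: count how often
# a horizontal ray from the query point crosses the polygon boundary.

def point_in_polygon(px, py, vertices):
    """Return True if (px, py) is inside the polygon given as a vertex list."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Only edges that straddle the ray's y-level can be crossed.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```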

    Design and development of a system for vario-scale maps

    Get PDF
    Nowadays, there are many geo-information data sources available, such as maps on the Internet, in-car navigation devices and mobile apps. All datasets used in these applications are the same in principle, and face the same issues, namely: Maps of different scales are stored separately. With many separate fixed levels, a lot of information is the same, but still needs to be included, which leads to duplication. With much redundant data throughout the scales, features are represented again and again, which may lead to inconsistency. Currently available maps contain significantly more levels of detail (twenty map scales on average) than in the past. These levels must be created, but the optimal strategy to do so is not known. For every user's data request, a significant part of the data remains the same, but still needs to be included. This leads to more data transfer and slower response. The interactive Internet environment is not used to its full potential for user navigation: it is common to observe lagging, popping features or flickering of newly retrieved map-scale features while using the map. This research develops principles of variable-scale (vario-scale) maps to address these issues. The vario-scale approach is an alternative for obtaining and maintaining geographical datasets at different map scales. It is based on a specific topological structure called tGAP (topological Generalized Area Partitioning), which addresses the main open issues of current solutions for managing spatial datasets of different scales: data redundancy, inconsistency across map scales and dynamic transfer. The objective of this thesis is to design, develop and extend variable-scale data structures, expressed as the following research question: How to design and develop a system for vario-scale maps? To address this question, the research followed this outline: 1) investigate the state of the art in map generalization; 2) study the development of the vario-scale structure done so far; 3) propose techniques for generating better vario-scale map content; 4) implement strategies to process really massive datasets; 5) research smooth representation of map features and their impact on user interaction. Results of our research led to new functionality, which was realized in prototype developments and tested against real-world datasets. Throughout this research we have made the following main contributions to the design and development of a system for vario-scale maps. We have: studied past vario-scale development and identified the most urgent research needs; designed the concept of granularity and presented our strategy that changes in map content should be as small and as gradual as possible (e.g. using groups, maintaining the road network, supporting line feature representation); introduced line features into the solution and presented a fully automated generalization process that preserves road network features throughout all scales; proposed an approach to create a vario-scale data structure for massive datasets; demonstrated a method to generate an explicit 3D representation from the structure, which can provide a smoother user experience; developed a software prototype where a 3D vario-scale dataset can be used to its full potential; and conducted an initial usability test. All aspects together with the already developed functionality provide a more complete and more unified solution for vario-scale mapping.
Based on our research, the design and development of a system for vario-scale maps should be clearer now, and it is easier to identify the necessary steps to be taken towards an optimal solution. Our recommendations for future work are: One of the contributions has been the integration of road features in the structure and their automated generalization throughout the process; integrating more map features besides roads deserves attention. We have investigated how to deal with massive datasets which do not fit in the main memory of the computer; our experiments used datasets of one province or state, with records in the order of millions. To verify our findings, it will be interesting to process even bigger datasets, with records in the order of billions (a whole continent). We have introduced a representation where map content changes as gradually as possible, based on a process where: 1) explicit 3D geometry is generated from the structure; 2) a slice of the geometry is calculated; 3) the final map is constructed from the slice. How to integrate this in a server-client pipeline on the Internet is another point of further research. Our research focus has been mainly on one specific aspect of the concept at a time; bringing all aspects together, where integration, tuning and orchestration play an important role, is another interesting direction that deserves attention. Finally, more user testing should be carried out, including: 1) maps of sufficient cartographic quality; 2) a large testing region; and 3) the finest version of the visualization prototype.
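
    The gradual, vario-scale selection of map content can be pictured with a toy model of a tGAP-style structure: each face is valid for a range of generalization steps, and slicing at one step yields the map for that scale. The encoding below is a deliberate simplification, not the actual tGAP data structure.

```python
# Toy sketch of vario-scale map selection: every face is valid for a
# half-open range of generalization steps, so one stored structure can
# be sliced at any step to obtain a map for that scale without
# duplicating geometry across fixed scale levels.

faces = [
    # (face id, valid from step, valid until step)
    ("parcel_1", 0, 3),   # merged away early (detailed scales only)
    ("parcel_2", 0, 5),
    ("block_A", 3, 8),    # result of merging parcel_1 into a block
    ("region_X", 5, 10),  # coarse-scale aggregate
]

def map_at_step(step):
    """Select the faces forming the map content at this generalization step."""
    return [fid for fid, lo, hi in faces if lo <= step < hi]

print(map_at_step(1))  # detailed map: ['parcel_1', 'parcel_2']
print(map_at_step(6))  # generalized map: ['block_A', 'region_X']
```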

    Reasoning with Mixed Qualitative-Quantitative Representations of Spatial Knowledge

    Get PDF
    Drastic transformations in human settlements are caused by extreme events. As a consequence, descriptions of an environment struck by an extreme event, based on spatial data collected before the event, suddenly become unreliable. On the other hand, time-critical actions taken in response to extreme events require up-to-date spatial information. Traditional methods for spatial data collection cannot provide updated information rapidly enough, calling for the development of new data collection methods. Reports provided by actors involved in the response operations can be considered an alternative source of spatial information. Indeed, reports often convey spatial descriptions of the environment. The extraction of spatial descriptions from such reports can serve a fundamental role in updating existing information, which is usually maintained within, and by means of, Geographic Information Systems. However, spatial information conveyed by human reports has qualitative characteristics that strongly differ from the quantitative nature of spatial information stored in Geographic Information Systems. Methodologies for integrating qualitative and quantitative spatial information are required in order to exploit human reports for updating existing descriptions of spatial knowledge. Although a significant amount of research has been carried out on how to represent and reason on qualitative and quantitative information separately, relatively little work exists on techniques to combine the two methodologies. The work presented in this thesis extends previous work by introducing a hybrid reasoning system, able to deal with mixed qualitative-quantitative representations of spatial knowledge, that combines techniques developed separately for qualitative spatial reasoning and quantitative data analysis. The system produces descriptions of the spatial extent of entities that have been modified by the event (such as collapsed buildings) or that did not exist before the event (such as fires or ash clouds). Furthermore, qualitative descriptions are produced for all entities in the environment. The former descriptions allow the information interpreted from human reports to be overlaid on a map, while the latter trigger warning messages to people involved in decision-making operations. Three main system functionalities are investigated in this work: the first translates qualitative information into quantitative descriptions; the second translates quantitative information into qualitative relations; and the third performs inference with information given partly qualitatively and partly quantitatively, boosting the spatial knowledge the system is able to produce.
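
    The first two functionalities, translating between qualitative relations and quantitative descriptions, can be illustrated with a toy sketch. The four-sector directional model and the box-shaped admissible regions below are simplifying assumptions for illustration, not the calculus used in the thesis.

```python
# Toy sketch of two translation directions: quantitative -> qualitative
# (coordinates to a directional relation) and qualitative -> quantitative
# (a relation to a coarse admissible region for the described entity).
import math

def direction_relation(ref, target):
    """Classify target relative to ref into north/east/south/west sectors."""
    angle = math.degrees(math.atan2(target[1] - ref[1], target[0] - ref[0]))
    if -45 <= angle < 45:
        return "east"
    if 45 <= angle < 135:
        return "north"
    if -135 <= angle < -45:
        return "south"
    return "west"

def admissible_region(ref, relation, radius):
    """Coarse quantitative reading of 'target is <relation> of ref':
    a bounding box on the corresponding side of the reference point."""
    x, y = ref
    boxes = {
        "north": (x - radius, y, x + radius, y + radius),
        "south": (x - radius, y - radius, x + radius, y),
        "east": (x, y - radius, x + radius, y + radius),
        "west": (x - radius, y - radius, x, y + radius),
    }
    return boxes[relation]

print(direction_relation((0, 0), (2, 5)))        # 'north'
print(admissible_region((0, 0), "north", 10.0))  # (-10.0, 0, 10.0, 10.0)
```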

    Overview of database projects

    Get PDF
    The use of entity-based and object-oriented data modeling techniques for managing Computer Aided Design (CAD) data is explored.
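
    As a rough illustration of the object-oriented modeling approach, shared attributes can live in a base CAD entity class that concrete drawing entities specialize. The classes below are hypothetical, not a schema from the report.

```python
# Hypothetical object-oriented model for CAD data: a base entity class
# carries shared attributes, and concrete drawing entities extend it.
from dataclasses import dataclass

@dataclass
class CadEntity:
    layer: str
    color: str = "white"

@dataclass
class Line(CadEntity):
    start: tuple = (0.0, 0.0)
    end: tuple = (0.0, 0.0)

@dataclass
class Circle(CadEntity):
    center: tuple = (0.0, 0.0)
    radius: float = 1.0

drawing = [Line("walls", end=(5.0, 0.0)), Circle("columns", radius=0.3)]
for entity in drawing:
    print(type(entity).__name__, entity.layer)
```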

    Computer Graphics: A Study Guide (in English).

    Get PDF
    Professional work with graphics is an essential skill for the future IT specialist. This study guide covers the core topics of programming in AutoCAD. Each chapter contains exercises and self-check questions. The appendix provides assignments for students' independent work. The guide is recommended for students of the university's computer science programmes taking the "Computer Graphics" course.

    Methods to process low-level CAD plans and create Building Information Models (BIM)

    Get PDF
    This dissertation examines in depth the detection of semantic elements in CAD building floor plans in order to create Building Information Models (BIM). It can be divided into two main parts. The first part describes a set of algorithms for semi-automatically obtaining semantic elements from CAD architectural floor plans: walls, doors and windows, staircases, and closed rooms. The second part studies the management of the topological information of the semantic elements detected with the algorithms from the first part. We propose a topologically correct model which includes 2D information (a topology graph) and 3D information (obtained by means of an algorithm called triple extrusion). The information in this model combines geometric, topological and semantic elements, and allows CityGML models to be obtained. (Thesis, Universidad de Jaén, Departamento de Informática, defended 17 December 201)
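
    The 3D side of the proposed model can be pictured by extruding a detected 2D room footprint between floor and ceiling heights, as in the sketch below. This is a plain single extrusion for illustration only, not the thesis's triple extrusion algorithm, and all names are ours.

```python
# Toy sketch: extrude a detected 2D room footprint into a 3D prism
# (floor polygon, ceiling polygon, and one wall face per edge).

def extrude_room(footprint, floor_z, ceiling_z):
    floor = [(x, y, floor_z) for x, y in footprint]
    ceiling = [(x, y, ceiling_z) for x, y in footprint]
    walls = []
    n = len(footprint)
    for i in range(n):
        a, b = footprint[i], footprint[(i + 1) % n]
        # Each wall is a vertical quad spanning one footprint edge.
        walls.append([(a[0], a[1], floor_z), (b[0], b[1], floor_z),
                      (b[0], b[1], ceiling_z), (a[0], a[1], ceiling_z)])
    return {"floor": floor, "ceiling": ceiling, "walls": walls}

room = extrude_room([(0, 0), (4, 0), (4, 3), (0, 3)], 0.0, 2.7)
print(len(room["walls"]))  # 4 wall faces
```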