
    A PostgreSQL/PostGIS Implementation for the Sightseeing Tour Planning Problem

    This article discusses a procedure for finding the best multi-stop route for a sightseeing tour through a road network. The procedure involves building a database containing the nodes and road network in PostgreSQL, calculating the shortest distance between each pair of nodes using the pgDijkstra module, and solving the tour problem using a function written in PL/pgSQL. The function was developed based on the Nearest Insertion Algorithm for solving the Travelling Salesman Problem. The algorithm inserts a sightseeing attraction (node) at the best position in the existing route, i.e. between the pair of consecutive nodes that yields the minimum difference between the total tour time before and after the new node is inserted. The test results show that the function can solve the problem within an acceptable runtime for a web application for up to 22 destination nodes. It is concluded that the whole procedure is suitable for developing a Web GIS application that solves the sightseeing tour planning problem.
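    The insertion rule described above can be sketched in a few lines. This is an illustrative Python version, not the paper's PL/pgSQL function; the `dist` lookup is assumed to hold the precomputed shortest travel times between node pairs (e.g., obtained via pgDijkstra).

```python
def insert_cheapest(route, node, dist):
    """Insert `node` between the consecutive pair of route nodes whose
    total tour time grows the least, and return the new route."""
    best_pos, best_increase = 1, float("inf")
    for i in range(len(route) - 1):
        a, b = route[i], route[i + 1]
        # Cost increase of detouring a -> node -> b instead of going a -> b.
        increase = dist[a][node] + dist[node][b] - dist[a][b]
        if increase < best_increase:
            best_pos, best_increase = i + 1, increase
    return route[:best_pos] + [node] + route[best_pos:]
```

    Repeating this step for every remaining attraction yields the complete nearest-insertion tour.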

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but the data are often not easily found when needed, and are sometimes collected or purchased multiple times. In short, the best government data are not always organized and managed efficiently to support decision making in a timely and cost-effective manner. National mapping agencies and the various departments responsible for collecting different types of geospatial data cannot continue for long to operate as they did a few years ago, like people living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, openly available remotely sensed data, and multi-source information vital to decision making, as well as new web-accessible services that are sometimes provided at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, we need to choose the right tools to reach the above-mentioned goals.

    As we know, data collection is the most costly part of mapping and of establishing a geographic information system. This is not only because of the cost of the data collection task itself, but also because of the damage caused by delay and by the time it takes to deliver the information needed for decision making from the field to the user's hands. In fact, the time a project consumes for the collection, processing, and presentation of geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, or environmental monitoring, assuming, of course, that all the necessary information from the existing sources is delivered directly to the user's computer. The best description of a good GIS project optimization or improvement is a methodology that reduces time and cost while increasing data and service quality (meaning accuracy, up-to-dateness, completeness, consistency, suitability, information content, integrity, integration capability, and fitness for use, as well as the user's specific needs and conditions, which must be addressed with special attention). Each of these issues must be addressed individually, and at the same time the whole solution must be provided in a global manner considering all the criteria.

    This thesis first discusses the problem we are facing and what needs to be done to establish a National Spatial Data Infrastructure (NSDI), including its definition and related components. It then surveys available open source software solutions covering the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free, or commercial and proprietary. It is important to note that making this distinction requires a clear specification for the categorization, since it is often difficult to determine from a legal point of view which class a given package belongs to; the various terms therefore need to be clarified. Within each of these two global groups, a further classification is made according to the functionalities and the GIScience applications the packages are made for. Building on the outcome of the second chapter, which presents the technical process for selecting suitable and reliable software according to the users' needs and the required components, Chapter 3 elaborates on the details of the GeoNode software as the best candidate tool to take on the responsibilities stated above. Chapter 4 discusses the globally available open source data against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage) according to the metadata supplied with the datasets, by means of bibliographic review, technical documentation, and web search engines. Chapter 5 introduces further data quality concepts and defines a set of protocols for evaluating all datasets according to the tasks for which a mapping organization is, in general, responsible towards users in different disciplines, such as reconnaissance, city planning, topographic mapping, transportation, environmental monitoring, and disaster management.

    In Chapter 6, all the data quality assessment protocols are applied to the pre-filtered, proposed datasets. In the final scores and ranking, each dataset receives a value corresponding to its quality according to the rules defined in the previous chapter. In the last step, a weight vector is derived from questions answered by the user with reference to the project at hand, in order to finalize the most appropriate selection of free and open source data. This data quality preference is defined by identifying a set of weight vectors, which are then applied to the quality matrix to obtain the final quality scores and ranking. The chapter ends with a section presenting the use of the datasets in various projects, such as the "Early Impact Analysis" and the "Extreme Rainfall Detection System (ERDS) - version 2" performed by ITHACA. Finally, the conclusions discuss the important criteria and future trends in GIS software, and present recommendations.
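    The scoring step in Chapter 6 amounts to applying the user-derived weight vector to the dataset quality matrix. A minimal sketch of that computation, with purely illustrative numbers and criteria, might look as follows.

```python
import numpy as np

# Each row scores one candidate dataset against the quality criteria,
# e.g. (accuracy, completeness, up-to-dateness); values are illustrative.
quality = np.array([
    [0.9, 0.7, 0.8],   # dataset A
    [0.6, 0.9, 0.5],   # dataset B
])
# Weight vector derived from the user's answers about the project at hand.
weights = np.array([0.5, 0.3, 0.2])

scores = quality @ weights            # one final quality score per dataset
ranking = np.argsort(scores)[::-1]    # indices of the datasets, best first
```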

    Heterogeneous sensor database framework for the sensor observation service: integrating remote and in-situ sensor observations at the database backend

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic service systems such as geo-web services. Data therefore have to be pulled from different databases and transferred over the web before they are fused and processed on the service middleware. This places a massive and unnecessary communication and processing load on the service, especially when retrieving large raster coverage data. In this research, we therefore propose a database model for heterogeneous sensor types that enables geo-scientific processing and spatial analytics involving remote and in-situ sensor observations at the database level of the Sensor Observation Service (SOS). This approach reduces communication and workload on the geospatial web service, and also makes query requests from the user end far more flexible. The challenging task is hence to develop a heterogeneous sensor database model that enables geoprocessing and spatial analytics at the database level, and to integrate it with the geo-web services so as to reduce communication and workload on the service while making query requests from the client end more flexible through the use of SQL statements.
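    The core idea, fusing the observations inside the database so that only the result crosses the wire, can be hinted at with a single SQL statement. This is a hedged sketch: the table and column names are hypothetical, and it assumes a PostGIS backend with raster support rather than any specific SOS implementation.

```python
import psycopg2

# Join an in-situ observation table with a raster coverage table and
# sample the coverage at each station's location, all inside the database.
SQL = """
SELECT s.sensor_id,
       s.value                  AS in_situ_value,
       ST_Value(c.rast, s.geom) AS coverage_value
FROM   in_situ_observations s
JOIN   raster_coverages     c ON ST_Intersects(c.rast, s.geom)
WHERE  s.obs_time = %s;
"""

with psycopg2.connect("dbname=sos") as conn, conn.cursor() as cur:
    cur.execute(SQL, ("2024-01-01 12:00:00",))
    rows = cur.fetchall()  # only the fused values cross the wire, not the raster
```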

    Improving Data Acquisition Processes for Geospatial Building Information Applications

    This study presents different technologies for processing geospatial building information in 2D models and discusses the problems posed by the enormous growth in data volume and in the availability of GIS software. The problems arise from collecting data from multiple sources (e.g., mobile devices, websites, sensors, computers, GPS, or WFS) with different context problems (e.g., missing data, data formats, invalid values) and from inefficient pre-processing pipelines for examining the complex structure of spatial datasets. There is thus a need for a system that can manage such data automation issues: a data processing pipeline and a data model for geospatial datasets. This process allows faster examination and visualization of the map to detect patterns. We present different GIS tools with various functionalities for handling geometric objects and introduce efficient data acquisition processing for these platforms. We conduct several experiments with these GIS applications to explore their possibilities and capabilities in terms of performance. The study analyzes the workflows for data collection, integration, and spatial data processing based on different formats, tools, and methods. The thesis studies and combines many techniques from GIS technologies to improve practices for software development teams and geospatial management systems. Data acquisition and integration apply these techniques to achieve better optimization based on the tool experiments and the user perspective. The findings provide a foundation for future work towards a standard methodology for working with geospatial applications in file conversion, loading, processing, and exporting.
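    A pre-processing pipeline of the kind discussed above can be sketched with GeoPandas. The file names and cleaning rules below are illustrative only, and the sketch assumes the source files carry a coordinate reference system.

```python
import geopandas as gpd

def load_and_clean(path: str, target_crs: str = "EPSG:4326") -> gpd.GeoDataFrame:
    gdf = gpd.read_file(path)                 # reads Shapefile, GeoJSON, GPKG, ...
    gdf = gdf[gdf.geometry.notna()]           # drop records with missing geometry
    gdf["geometry"] = gdf.geometry.buffer(0)  # crude repair of invalid polygons
    return gdf.to_crs(target_crs)             # normalise coordinate systems

buildings = load_and_clean("buildings.shp")         # hypothetical input file
buildings.to_file("buildings.gpkg", driver="GPKG")  # export to a single format
```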

    Geospatial Data Modeling to Support Energy Pipeline Integrity Management

    Several hundred thousand miles of energy pipelines span the whole of North America, responsible for carrying the natural gas and liquid petroleum that power the continent's homes and economies. These pipelines, so crucial to everyday goings-on, are closely monitored by their operating companies to ensure they perform safely and smoothly. Events like earthquakes, erosion, and extreme weather, as well as human factors like vehicle traffic and construction, all pose threats to pipeline integrity. As such, there is a tremendous need to measure and surface useful, actionable data for each region of interest, and operators often use computer-based decision support systems (DSS) to analyze and allocate resources for active and potential hazards. We designed and implemented a geospatial data service, REST API for Pipeline Integrity Data (RAPID), to improve the amount and quality of data available to a DSS. More specifically, RAPID, built with a spatial database and the Django web framework, allows third-party software to manage and query an arbitrary number of geographic data sources through one centralized REST API. Here, we focus on the process and peculiarities of creating RAPID's model and query interface for pipeline integrity management; this contribution describes the design, implementation, and validation of that model, which builds on existing geospatial standards.
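    To illustrate the kind of model and query such a service builds on, here is a hedged GeoDjango sketch; the class, the field names, and the distance value are hypothetical, not taken from the RAPID codebase.

```python
from django.contrib.gis.db import models

class HazardFeature(models.Model):
    """One geographic feature pulled from a third-party data source."""
    source = models.CharField(max_length=100)  # originating data source
    kind = models.CharField(max_length=50)     # e.g. 'erosion', 'construction'
    geom = models.GeometryField(srid=4326)     # point, line, or polygon

# A REST endpoint could then answer "which hazards lie near this pipeline
# segment?" with a spatial lookup such as:
#   HazardFeature.objects.filter(geom__dwithin=(segment.geom, 0.01))
```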

    A reference data access service in support of emergency management

    In the field of natural disaster recovery and reduction and of emergency management, georeferenced information is strongly needed. In my personal experience, gained over a three-year period at ITHACA, during a shorter period at GFDRR Labs, and through work done indirectly with UN-WFP, when a natural disaster occurs the experts in geomatics are often asked to answer questions such as: where did it occur? How many people have been affected? How many infrastructures have been damaged, and to what extent? How large is the economic loss? Geomatics can answer all these questions, or give significant help in directing operations towards the answers. The goal can be reached both by using base reference data, of the kind usually contained in classic cartography, and by exploiting value-added information coming from satellite and aerial data processing, classic surveys, and GPS acquisition in the field.

    Towards a digital mine: a spatial database for accessing historical geospatial data on mining and related activities

    A Research Report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2016. Countries around the world are recognising the importance of geospatial data in answering questions related to spatially varying industries such as mining (both ongoing and discontinued activities). This is becoming increasingly evident as countries such as Canada, Australia, and the United Kingdom work towards establishing Abandoned Mine Lands (AML) inventories. However, the increasing need for data on mining activities is not paralleled by an increase in the availability of such data. The aim of this research is therefore to design a database for accessing historical and current geospatial data that can be used to support research and environmental management efforts, as well as decision making at all levels. A user needs survey was conducted using two sampling methods: convenience sampling, used with the Wits Digital Mine Project (WDMP) group members, and snowball sampling, used with respondents from institutions and organisations outside the university. The data were then categorised so as to make analysis easier and allow the data to be evaluated on the same basis. An evaluation of the data collected showed that although the WDMP requires different types of data (spatial and non-spatial), the data feed into each other, and as such it is important to have a central repository in which to store them. Further investigation also showed that there is a wealth of data on current mining activities, but not so much on historical mining activities. Although data on mining activities exist, access to these data is hindered by various factors such as copyright restrictions, data costs, and discrepancies in the data request process. The outcome of this research is a physical PostgreSQL database (with PostGIS) and a version mounted on an online platform (GeoServer). The database can be queried in PostgreSQL using select statements or visualised by establishing a connection with QGIS; alternatively, it may be accessed through GeoServer. The database is expected to be of use to at least all members of the WDMP and the stakeholders involved in the project. It can be used for baseline studies and as a basis for the framework used to analyse, remedy, and predict future challenges in the mining industry. Moreover, the database can act as a central repository for all data produced by the WDMP.
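    The "select statement" access route mentioned above might look like the following sketch; the table and column names are illustrative, not the actual WDMP schema.

```python
import psycopg2

# Fetch historical mines inside a bounding box around Johannesburg.
SQL = """
SELECT name, mine_type, year_closed
FROM   historical_mines
WHERE  ST_Within(geom, ST_MakeEnvelope(27.9, -26.3, 28.1, -26.1, 4326));
"""

with psycopg2.connect("dbname=wdmp") as conn, conn.cursor() as cur:
    cur.execute(SQL)
    for name, mine_type, year_closed in cur.fetchall():
        print(name, mine_type, year_closed)
```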

    Domain-Extendable 3D City Models – Management, Visualization, and Interaction

    Domain-extendable semantic 3D city models are complex mappings and inventories of the urban environment which can be utilized as an integrative information backbone to facilitate a range of application fields such as urban planning, environmental simulation, disaster management, and energy assessment. Today, more and more countries and cities worldwide are creating their own 3D city models based on the CityGML specification, an international standard issued by the Open Geospatial Consortium (OGC) that provides an open data model and XML-based format for describing the relevant urban objects with regard to their 3D geometry, topology, semantics, and appearance. It notably provides a flexible and systematic extension mechanism called the "Application Domain Extension" (ADE), which allows third parties to dynamically extend the existing CityGML definitions with additional information models from different application domains, so that the extended or newly introduced geographic object types can be represented within a common framework. However, due to the resulting large size and high model complexity, the practical utilization of country-wide CityGML datasets poses a tremendous challenge for setting up an application system that supports efficient data storage, analysis, management, interaction, and visualization. These requirements have been partly met by the existing free 3D geo-database solution '3D City Database (3DCityDB)', which offers a rich set of functionalities for dealing with the standard CityGML data models but lacked support for CityGML ADEs. The key motivation of this thesis is to develop a reliable approach for extending the existing database solution to support the efficient management, visualization, and interaction of large geospatial datasets of arbitrary CityGML ADEs. Emphasis is first placed on answering the question of how to dynamically extend the relational database schema by parsing and interpreting the XML schema files of an ADE and creating new database tables accordingly. Based on a comprehensive survey of the related work, a new graph-based framework is proposed which uses typed and attributed graphs to semantically represent the object-oriented data models of CityGML ADEs, and utilizes graph transformation systems to automatically generate compact table structures extending the 3DCityDB. The transformation process is performed by applying a series of fine-grained graph transformation rules which allow users to declaratively describe the complex mapping rules, including the optimization concepts employed in the development of the 3DCityDB database schema.

    The second major contribution of this thesis is the development of a new multi-level system which serves as a complete and integrative platform for facilitating the various analysis, simulation, and modification operations on complex-structured 3D city models based on CityGML and 3DCityDB. It introduces an additional application level based on a so-called 'app concept' that allows a light-weight web application to be constructed, reaching a good balance between the high data model complexity and the specific application requirements of the end users. Each application can easily be built on top of a developed 3D web client whose functionalities go beyond efficient 3D geo-visualization and interactive exploration, and which also allows collaborative modification and analysis of 3D city models by taking advantage of cloud computing technology. This multi-level system, along with the extended 3DCityDB, has been successfully utilized and evaluated in many practical projects.
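    The graph-to-schema idea can be hinted at with a deliberately simplified sketch: an ADE class becomes a typed, attributed node, and a mapping rule turns it into a table extending the 3DCityDB. The real framework applies graph transformation systems; the names and the one-rule mapping below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class AdeClass:
    """One typed, attributed node of the ADE model graph."""
    name: str
    attributes: dict[str, str]  # attribute name -> SQL type

def to_create_table(cls: AdeClass, prefix: str = "ade_") -> str:
    """Toy transformation rule: map an ADE class to a table that
    references the (assumed) 3DCityDB cityobject table."""
    cols = ",\n  ".join(f"{a} {t}" for a, t in cls.attributes.items())
    return (f"CREATE TABLE {prefix}{cls.name.lower()} (\n"
            f"  id BIGINT PRIMARY KEY REFERENCES cityobject(id),\n  {cols}\n);")

solar = AdeClass("SolarRoof", {"module_area": "NUMERIC", "efficiency": "NUMERIC"})
print(to_create_table(solar))
```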