
    An SDI for the GIS-education at the UGent Geography Department

    The UGent Geography Department (GD) (ca. 200 students; 10 professors) has been teaching GIS since the mid-1990s. Since then, GIS has evolved from Geographic Information Systems to GIScience to GIServices, implying that a GIS specialist nowadays has to deal with more than desktop GIS alone. Knowledge of the interaction between the different components of an SDI (spatial data, technologies, laws and policies, people and standards) is crucial for a graduating Master's student. For its GIS education, the GD had until recently been using datasets from different sources, stored in a non-centralized system. In conformity with the INSPIRE Directive and the Flemish SDI Decree, the GD aims to set up its own SDI using free and open source software components, to improve the management, user-friendliness, copyright protection and centralization of datasets, as well as knowledge of state-of-the-art SDI structure and technology. The central part of the system is a PostGIS database in which both staff and students can create and share information stored in a multitude of tables and schemas. A web-based application facilitates upper-level management of the database for administrators and staff members. Exercises in various courses not only focus on accessing and handling data from the SDI through common GIS applications such as QuantumGIS or GRASS, but also aim at familiarizing students with the set-up of widely used SDI elements such as WMS, WFS and WCS services. The (dis)advantages of the new SDI will be tested in a case study in which the workflow of a typical ‘GIS Applications’ exercise is elaborated. By solving an optimal-location problem, students interact in various ways with geographic data. A comparison is made between the situation before and after the implementation of the SDI.
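
    The abstract names WMS, WFS and WCS services among the SDI elements students set up. Purely as a hedged illustration, not taken from the paper (the endpoint URL, layer name and bounding box below are hypothetical), a student exercise might query such a WMS from Python with the OWSLib package:

    # A minimal sketch: listing layers and requesting a map image from a
    # WMS endpoint. URL, layer name and bounding box are placeholders.
    from owslib.wms import WebMapService

    wms = WebMapService("https://sdi.example.ugent.be/wms", version="1.1.1")

    # List the layers the service advertises in its capabilities document.
    for name, layer in wms.contents.items():
        print(name, "-", layer.title)

    # Request a rendered map for one (hypothetical) layer.
    img = wms.getmap(
        layers=["flanders_landuse"],
        styles=[""],
        srs="EPSG:31370",  # Belgian Lambert 72
        bbox=(20000, 150000, 260000, 250000),
        size=(800, 400),
        format="image/png",
    )
    with open("landuse.png", "wb") as f:
        f.write(img.read())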

    Survey on geographic visual display techniques in epidemiology: Taxonomy and characterization

    Much work has been done on the topic of Geographic Visual Display (GVD), with differing objectives and approaches. Some studies compare traditional cartography techniques (the traditional form of GVD, without Human-Computer Interaction (HCI)) to modern GIS, also known as geo-visualization; other literature differentiates and highlights the commonalities among the features and architectures of different GVD tools (from layers and clusters to dots, colour and more). Furthermore, with the emergence of more advanced tools that support data exploration, a few studies evaluate how those tools handle complex and multivariate spatio-temporal data. Several test the usability and interactivity of tools against users' needs or preferences, some develop frameworks that address users' concerns across a wide array of tasks, and others demonstrate how these tools can stimulate the visual thought process and support decision making or event prediction among decision-makers. This paper surveys and categorizes these research articles into two categories: Traditional Cartography (TC) and Geo-visualization (G). It classifies each category by the techniques and tasks that contribute to the significance of data representation in GVD, develops perspectives on each area, and evaluates trends in GVD techniques. Suggestions and ideas on mechanisms that could improve and diversify GVD techniques are provided at the end of this survey.

    Geospatial Data Management Research: Progress and Future Directions

    Without geospatial data management, today's challenges in big data applications such as earth observation, geographic information system/building information modeling (GIS/BIM) integration, and 3D/4D city planning cannot be solved. Furthermore, geospatial data management plays a connecting role between data acquisition, data modelling, data visualization, and data analysis. It enables the continuous availability of geospatial data and the replicability of geospatial data analysis. In the first part of this article, five milestones of geospatial data management research are presented that were achieved during the last decade. The first one reflects advancements in BIM/GIS integration at the data, process, and application levels. The second milestone presents theoretical progress by introducing topology as a key concept of geospatial data management. In the third milestone, 3D/4D geospatial data management is described as a key concept for city modelling, including subsurface models. Progress in the modelling and visualization of massive geospatial features on web platforms is the fourth milestone, which includes discrete global grid systems as an alternative geospatial reference framework. The intensive use of geosensor data sources is the fifth milestone, which opens the way to parallel data storage platforms supporting data analysis on geosensors. In the second part of this article, five future directions of geospatial data management research are presented that have the potential to become key research fields of geospatial data management in the next decade. Geo-data science will have the task of extracting knowledge from unstructured and structured geospatial data and of bridging the gap between modern information technology concepts and the geo-related sciences. Topology is presented as a powerful and general concept for analyzing GIS and BIM data structures and spatial relations that will be of great importance in emerging applications such as smart cities and digital twins. Data-streaming libraries and “in-situ” geo-computing on objects executed directly on the sensors will revolutionize geo-information science and bridge geo-computing with geospatial data management. Advanced geospatial data visualization on web platforms will enable the representation of dynamically changing geospatial features or moving objects’ trajectories. Finally, geospatial data management will support big geospatial data analysis, and graph databases are expected to experience a revival on top of parallel and distributed data stores supporting big geospatial data analysis.
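
    As a concrete aside (my illustration, not the article's), topological predicates of the kind the second milestone refers to can be exercised with the Shapely library:

    # A minimal sketch, not from the article, of topology as a concept for
    # analysing spatial relations between features. Requires Shapely.
    from shapely.geometry import Polygon, Point

    parcel_a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
    parcel_b = Polygon([(4, 0), (8, 0), (8, 4), (4, 4)])  # shares an edge with A
    well = Point(2, 2)

    # Topological predicates describe relations independent of exact geometry.
    print(parcel_a.touches(parcel_b))     # True: shared boundary, no interior overlap
    print(parcel_a.intersects(parcel_b))  # True: they share at least a boundary
    print(parcel_a.contains(well))        # True: the point lies in A's interior
    print(parcel_a.overlaps(parcel_b))    # False: interiors do not intersect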

    Big Data Computing for Geospatial Applications

    The convergence of big data and geospatial computing has brought forth challenges and opportunities for Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, while also demonstrating opportunities for using big data in geospatial applications. Crucial to the advancements highlighted in this book is the integration of computational thinking and spatial thinking, and the transformation of abstract ideas and models into concrete data structures and algorithms.

    A Tutorial on Geographic Information Systems: A Ten-year Update

    This tutorial provides a foundation on geographic information systems (GIS) as they relate to, and are part of, the IS body of knowledge. The tutorial serves as a ten-year update to an earlier CAIS tutorial (Pick, 2004). During the decade, GIS has expanded with a wider and deeper range of applications in government and industry, widespread consumer use, and an emerging importance in business schools and for IS. In this paper, we provide background on the key ideas and concepts of GIS, spatial analysis, and the latest trends, as well as on the status of and opportunities for incorporating GIS, spatial analysis, and locational decision making into IS research and into teaching in business and IS curricula.

    Enriching the Digital Library Experience: Innovations With Named Entity Recognition and Geographic Information System Technologies

    Digital libraries are seeking innovative ways to share their resources and enhance the user experience. To this end, numerous openly available technologies can be exploited. For this project, named entity recognition (NER) technology was applied to a subset of the Documenting the American South (DocSouth) digital collections. Personal and location names were hand-annotated to produce a gold standard, and GATE, a text engineering tool, was run under two conditions: a defaults baseline and a test run that included gazetteers built from DocSouth's Colonial and State Records collection. Overall, GATE's performance is promising, and numerous strategies for improvement are discussed. Next, the derived location annotations were georeferenced and stored in a geodatabase through automated processes, and a prototype for a web-based map search was developed using the Google Maps API. This project showcases innovations coupling automated NER with GIS technologies, and strongly supports further investment in applying these techniques across DocSouth and other digital libraries.
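
    The project itself used GATE with custom gazetteers; as a hedged sketch only, a comparable extract-and-georeference pipeline could be put together with spaCy and geopy (substitute libraries chosen here for illustration):

    # A hedged sketch of an NER + georeferencing pipeline analogous to the
    # one described. The project used GATE and a geodatabase; spaCy and
    # geopy stand in for illustration. Requires the en_core_web_sm model.
    import spacy
    from geopy.geocoders import Nominatim

    nlp = spacy.load("en_core_web_sm")
    geolocator = Nominatim(user_agent="docsouth-demo")  # identify the client

    text = "The regiment marched from Raleigh toward Chapel Hill in 1863."
    doc = nlp(text)

    # Keep entities tagged as geopolitical entities or locations.
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]

    for place in places:
        hit = geolocator.geocode(place)
        if hit:
            # Coordinates like these could be stored in a geodatabase and
            # served through a web map, as in the project's prototype.
            print(place, "->", (hit.latitude, hit.longitude))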

    A Data-driven, High-performance and Intelligent CyberInfrastructure to Advance Spatial Sciences

    In the field of Geographic Information Science (GIScience), we have witnessed an unprecedented data deluge brought about by the rapid advancement of high-resolution data observing technologies. For example, with the advancement of Earth Observation (EO) technologies, a massive amount of EO data, including remote sensing data and other sensor observations about earthquakes, climate, oceans, hydrology, volcanoes, glaciers, etc., is being collected on a daily basis by a wide range of organizations. In addition to observation data, human-generated data, including microblogs, photos, consumption records, evaluations, unstructured webpages and other Volunteered Geographic Information (VGI), are incessantly generated and shared on the Internet. Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible. Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making. This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS. It comprises three parts. The first part focuses on developing methodologies to help public researchers find appropriate open geospatial datasets efficiently and effectively from millions of records provided by thousands of organizations scattered around the world; machine learning and semantic search methods are utilized in this research. The second part develops an interoperable and replicable geoprocessing service by synthesizing a high-performance computing (HPC) environment, the core spatial statistics/analysis algorithms from the widely adopted open source Python Spatial Analysis Library (PySAL), and the rich datasets acquired in the first part. The third part studies optimization strategies for feature data transmission and visualization, intended to solve performance issues in transmitting large feature data over the Internet and visualizing them on the client (browser) side. Taken together, the three parts constitute an endeavor towards the methodological improvement and implementation practice of a data-driven, high-performance and intelligent CI to advance the spatial sciences.
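
    The dissertation names PySAL as the source of its core spatial statistics. A minimal sketch of one representative statistic, global spatial autocorrelation via Moran's I, might look as follows (synthetic data; the modern PySAL family packages libpysal and esda are assumed):

    # A minimal sketch of the kind of spatial statistic PySAL provides:
    # Moran's I on a synthetic 10x10 lattice. Assumes libpysal and esda.
    import numpy as np
    from libpysal.weights import lat2W
    from esda.moran import Moran

    w = lat2W(10, 10, rook=True)        # rook-contiguity weights on the lattice

    rng = np.random.default_rng(42)
    y = rng.random(100)                 # one synthetic value per lattice cell

    mi = Moran(y, w)
    print("Moran's I:", round(mi.I, 4))
    print("pseudo p-value:", mi.p_sim)  # from random permutations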

    An Agent-Based Variogram Modeller: Investigating Intelligent, Distributed-Component Geographical Information Systems

    Geo-Information Science (GIScience) is the field of study that addresses substantive questions concerning the handling, analysis and visualisation of spatial data. Geo-Information Systems (GIS), including software, data acquisition and organisational arrangements, are the key technologies underpinning GIScience. A GIS is normally tailored to the service it is supposed to perform. However, there is often a need for a function that is not supported by the GIS tool being used. The normal solution in these circumstances is to look for another tool that can provide the service, and often for an expert to use that tool. This is expensive, time consuming and certainly stressful for the geographical data analyst. On the other hand, GIS is often used in conjunction with other technologies to form a geocomputational environment. One of the complex tools in geocomputation is geostatistics. One of its functions is to provide the means to determine the extent of spatial dependencies within geographical data and processes. Spatial datasets are often large and complex. Currently, agent systems are being integrated into GIS to offer flexibility and allow better data analysis. The thesis looks into the current application of agents within the GIS community, determining whether they are used to represent data or processes, or to act as a service. It seeks to prove the applicability of the agent-oriented paradigm as a service-based GIS, with the potential to provide greater interoperability and reduce resource requirements (human and tools). In particular, analysis was undertaken to determine the need to introduce enhanced features to agents in order to maximise their effectiveness in GIS. This was achieved by addressing the complexity of software agent design and implementation for the GIS environment and by suggesting possible solutions to the problems encountered. The characteristics and features of software agents (which include the dynamic binding of plans to software agents in order to tackle the levels of complexity and range of contexts) were examined, alongside a discussion of current GIScience, the application of agent technology to GIS, and agents as entities, objects and processes. These concepts and their functionalities in GIS are then analysed and discussed. The extent of agent functionality, an analysis of the gaps, and the use of these technologies to express a distributed service providing an agent-based GIS framework are then presented. Thus, a general agent-based framework for GIS and a novel agent-based architecture for a specific part of GIS, the variogram, were devised to examine the applicability of the agent-oriented paradigm to GIS. An examination of the current mechanisms for constructing variograms and their underlying processes and functions was undertaken; these processes were then embedded into a novel agent architecture for GIS. Once a successful software agent implementation had been achieved, the corresponding tool was tested and validated, internally for code errors and externally to determine its functional requirements and whether it enhances the GIS process of dealing with data. Thereafter, it is compared with other known service-based GIS agents and its advantages and disadvantages analysed.
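
    For background on the computation the thesis wraps in agents, a minimal numpy sketch of the empirical semivariogram, gamma(h) = (1 / 2|N(h)|) * sum of (z_i - z_j)^2 over point pairs in lag bin h, might look like this (synthetic data; this is not the thesis's implementation):

    # A minimal sketch of empirical semivariogram estimation on synthetic
    # data; not the thesis's agent-based implementation.
    import numpy as np

    def empirical_variogram(coords, values, n_lags=10, max_dist=None):
        # Pairwise distances and squared value differences, each pair once.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)
        d, sq = d[iu], sq[iu]

        max_dist = max_dist or d.max()
        edges = np.linspace(0, max_dist, n_lags + 1)
        lags, gammas = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (d > lo) & (d <= hi)
            if mask.any():
                lags.append(d[mask].mean())
                gammas.append(sq[mask].mean() / 2.0)  # semivariance
        return np.array(lags), np.array(gammas)

    rng = np.random.default_rng(0)
    pts = rng.random((200, 2)) * 100                      # sample locations
    z = np.sin(pts[:, 0] / 20) + rng.normal(0, 0.1, 200)  # correlated field
    lags, gammas = empirical_variogram(pts, z)
    print(np.column_stack([lags, gammas]))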

    Crowdsourcing Crisis Management Platforms: A Privacy and Data Protection Risk Assessment and Recommendations

    Over the last few years, crowdsourcing has expanded rapidly, allowing citizens to connect with each other and governments to connect with the public at large, to coordinate disaster response work, to map political conflicts, to acquire information quickly, and to participate in issues that affect the day-to-day life of citizens. As emerging tools and technologies offer huge potential for quick and timely response during crises, crisis responders do draw support from these tools and techniques. The ‘Guiding Principles’ of the Sendai Framework for Disaster Risk Reduction 2015-2030 identify that ‘disaster risk reduction requires a multi-hazard approach and inclusive risk-informed decision-making (RIDM) based on the open exchange and dissemination of disaggregated data, including by sex, age and disability, as well as on easily accessible, up-to-date, comprehensible, science-based, non-sensitive risk information, complemented by traditional knowledge’. Addressing Priority Actions 1 and 2, this PhD research aims to identify various risks and present recommendations for the RIDM process in the form of a general privacy and data protection risk assessment and recommendations for crowdsourcing crisis management. It includes legal, ethical and technical recommendations.

    Experimental measurement system and neural network model to simulate photovoltaic modules

    To simulate the I-V curve of photovoltaic modules, the use of a multilayer perceptron is proposed. The contributions of different input parameters to this simulation have been evaluated, namely the angle of incidence, the atmospheric clearness index and the spectral distribution of the radiation. (Doctoral thesis defended on 30 September 2011.) The objective of this thesis is the development of a methodology for the measurement, characterization and simulation of photovoltaic modules that may be useful to researchers and engineers in the field of photovoltaic solar technology. For the measurement part, a new I-V curve measurement system for photovoltaic modules has been developed. For the characterization and simulation part, a model based on neural networks is proposed that allows these curves to be extrapolated to different real operating conditions. The proposed measurement system solves the problems detected in the methods currently in use. In particular, the most important deficiency addressed in this thesis is the problem of obtaining the values of the two parameters that make up these curves, namely current and voltage, simultaneously. The proposed system is based on the use of a four-quadrant electronic load and two digital multimeters synchronized with a waveform generator that produces a square signal to trigger both multimeters. This synchronization method ensures that the voltage and current measurements are taken simultaneously; this is not guaranteed by other previously used methods that synchronize via GPIB. In addition, a proposal is made to use XML schemas as the format for the data recorded in photovoltaic laboratories. This format could contribute to a standardization of the data used for the characterization of photovoltaic modules by different measurement laboratories, facilitating the exchange of information between them.
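
    The abstract does not state the network's exact architecture. A hedged sketch of the modelling idea, a multilayer perceptron mapping operating conditions to a point on the I-V curve, could be written with scikit-learn (the features, architecture and data below are synthetic illustrations, not the thesis's):

    # A hedged sketch: an MLP regressor mapping operating conditions to a
    # module current. Features, architecture and data are synthetic stand-ins.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([
        rng.uniform(0, 80, n),     # angle of incidence (degrees)
        rng.uniform(0.2, 0.8, n),  # atmospheric clearness index
        rng.uniform(1.0, 2.0, n),  # spectral-distribution proxy
        rng.uniform(0, 40, n),     # module voltage (V) at which current is evaluated
    ])
    # Synthetic current response standing in for measured I-V data.
    y = 8 * X[:, 1] * np.cos(np.radians(X[:, 0])) * (1 - (X[:, 3] / 45) ** 8)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(mlp.score(X_te, y_te), 3))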