8 research outputs found

    A Web based geospatial application for disaster preparedness in Uganda

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. The power of geospatial technologies in first-world countries is increasingly being harnessed to manage disasters; the same is not true for Uganda, a disaster-prone country in East Africa. For regions on the slopes of Mt. Rwenzori in the west and Mt. Elgon in the east, every year brings new challenges in disaster management. When disasters happen in Uganda, government agencies and humanitarian organizations come to the aid of the affected: relief is provided and rehabilitation activities are carried out, but little usable geographic information (GI) about the affected areas remains (Farthing & Ware, 2010). If disaster preparedness and management depend on accurate analysis and mapping of the vulnerability and susceptibility of communities to risk (Office of the Prime Minister, 2010), and the country faces data gaps and challenges (NEMA/UBOS, n.d.), how can disaster managers carry out vulnerability assessments of communities? This thesis examines the possibility of giving disaster managers the means to assess the vulnerability of communities to risk by suggesting a solution for data acquisition and by developing a web application based on interoperable modular components as a platform on which this analysis can be performed.

    Integrating spatial and spectral information for automatic feature identification in high-resolution remotely sensed images

    This research used image objects, instead of pixels, as the basic unit of analysis in high-resolution imagery. Thus, not only spectral radiance and texture were used in the analysis, but also spatial context. Furthermore, the automated identification of attributed objects is potentially useful for integrating remote sensing with a vector-based GIS.

    A study area in Morgantown, WV was chosen as a site for the development and testing of automated feature extraction methods with high-resolution data. In the first stage of the analysis, edges were identified using texture. Experiments with simulated data indicated that a linear operator identified curved and sharp edges more accurately than square-shaped operators. Areas with edges that formed a closed boundary were used to delineate sub-patches. In the region-growing step, the similarities of all adjacent sub-patches were examined using a multivariate Hotelling T² test that draws on the classes' covariance matrices. Sub-patches that were not sufficiently dissimilar were merged to form image patches.

    Patches were then classified into seven classes: Building, Road, Forest, Lawn, Shadowed Vegetation, Water, and Shadow. Six classification methods were compared: the pixel-based ISODATA and maximum likelihood approaches, field-based ECHO, and region-based maximum likelihood using patch means, a divergence index, and patch probability density functions (pdfs). Classification with the divergence index showed the lowest accuracy, a kappa index of 0.254. The highest accuracy, 0.783, was obtained from classification using the patch pdf. This classification also produced a visually pleasing product, with well-delineated objects and without the distracting salt-and-pepper effect of isolated misclassified pixels. The accuracies of classification with the patch mean, pixel-based maximum likelihood, ISODATA and ECHO were 0.735, 0.687, 0.610, and 0.605, respectively.

    Spatial context was used to generate aggregate land cover information. An Urbanized Rate Index, defined as the percentage of Building and Road area within a local window, was used to segment the image. Five summary land cover classes were identified from the Urbanized Rate segmentation and the image object classification: high Urbanized Rate with large building sizes, intermediate Urbanized Rate with intermediate building sizes, low Urbanized Rate with small building sizes, Forest, and Water.
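The merge criterion in the region-growing step can be sketched as a generic two-sample Hotelling T² test; this version pools the covariances of the two patches and uses the standard F approximation, which is an illustrative stand-in rather than the thesis's exact formulation:

```python
import numpy as np
from scipy.stats import f

def hotelling_t2_merge(a, b, alpha=0.05):
    """Two-sample Hotelling T^2 test on the pixel vectors of two adjacent
    sub-patches (rows = pixels, columns = spectral bands). Returns True when
    the patches are NOT significantly different at level alpha, i.e. when a
    region-growing step may merge them."""
    n1, n2, p = len(a), len(b), a.shape[1]
    diff = a.mean(axis=0) - b.mean(axis=0)
    # Pooled covariance matrix of the two patches
    s_pooled = ((n1 - 1) * np.cov(a, rowvar=False) +
                (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    # Convert T^2 to an F statistic with (p, n1 + n2 - p - 1) degrees of freedom
    f_stat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    p_value = 1.0 - f.cdf(f_stat, p, n1 + n2 - p - 1)
    return bool(p_value > alpha)  # not sufficiently dissimilar -> merge
```

Patches drawn from clearly different spectral distributions fail the merge test, while similar neighbours pass it.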

    NCResSys: A Geospatial Modeling Information System for the Identification of Potential Municipal Water Supply Reservoir Locations across the State of North Carolina

    The primary objective of this research is the development of a comprehensive geospatial information system utilizing digital spatial datasets and technologies for the purpose of modeling potential municipal water supply reservoir sites across the State of North Carolina. To achieve this primary goal, a computational information system, NCResSys, has been designed by applying principles of software engineering, hydrology, and computational geography to conduct hydrologic and terrain analysis across the State in a spatially explicit Geographic Information System (GIS) environment. Potential reservoir sites are assessed based on the locational characteristics of physical storage and capacity. The terrain analysis component of this research examines the physical landscape of water storage capacity through a series of recursive algorithms for processing high-volume LiDAR DEM data. This research combines approaches from multiple disciplines in a computational environment to contribute value-added support to the decision-making process for public water supply planning, development, and management. Master of Arts.
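The storage-capacity side of such terrain analysis can be illustrated with a much-simplified sketch (not the NCResSys implementation): a flood fill over a toy DEM that collects the cells connected to a candidate dam site lying below a trial pool elevation and sums the impounded volume:

```python
from collections import deque

def pool_storage(dem, dam_cell, pool_elev, cell_area=1.0):
    """Illustrative reservoir-storage sketch: starting from a candidate dam
    cell, flood-fill 4-connected DEM cells whose elevation is below a trial
    pool elevation. Returns (flooded_cells, storage_volume), where volume is
    the summed depth of water over each flooded cell times its area."""
    rows, cols = len(dem), len(dem[0])
    seen = {dam_cell}
    stack = deque([dam_cell])
    volume, flooded = 0.0, []
    while stack:
        r, c = stack.pop()
        if dem[r][c] >= pool_elev:
            continue  # cell sits above the pool surface; do not spread
        flooded.append((r, c))
        volume += (pool_elev - dem[r][c]) * cell_area
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return flooded, volume
```

Sweeping `pool_elev` over a range of trial elevations yields a stage-storage curve for the candidate site, which is the kind of locational storage characteristic the abstract describes.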

    Object Oriented Geographic Databases

    This thesis demonstrates the use of object-oriented geographic databases. It builds on geographic information systems, describing their basic properties and their applications in practice. The thesis presents the main properties and methods of the object-oriented approach and demonstrates them on geographic information systems. The concepts of an object-oriented raster and an object-oriented vector are introduced, together with methods for storing the individual elements in an object-oriented database. The proposed method of storing data in a geographic object-oriented database is supported by a case study.
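The "object-oriented vector" idea described above can be sketched minimally: each geographic feature is an object bundling identity, geometry, attributes and behaviour, rather than a bare coordinate record. The class names below are illustrative, not taken from the thesis:

```python
class Feature:
    """Base class: every stored feature carries an identifier and attributes."""
    def __init__(self, fid, attributes=None):
        self.fid = fid
        self.attributes = attributes or {}

class PointFeature(Feature):
    def __init__(self, fid, x, y, attributes=None):
        super().__init__(fid, attributes)
        self.x, self.y = x, y

class PolylineFeature(Feature):
    def __init__(self, fid, vertices, attributes=None):
        super().__init__(fid, attributes)
        self.vertices = vertices  # list of (x, y) tuples

    def length(self):
        # Sum of straight-line segment lengths along the polyline
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(self.vertices, self.vertices[1:]))
```

An object database would persist such objects whole, so that geometry, attributes and methods like `length()` stay together instead of being split across relational tables.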

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations established from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies its proposed ideas by implementing an ontology describing the land use ‘residential area’ in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. Despite current technological limitations, ontologies are a promising way forward in how we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
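The incremental inference the abstract delegates to a description-logic reasoner can be caricatured in plain Python: classify urban blocks from their buildings, then districts from their blocks. This is a toy rule-based stand-in for OWL reasoning in Protégé; the class names and thresholds are illustrative assumptions:

```python
def classify_block(buildings, dwelling_ratio=0.5):
    """A block counts as residential when most of its buildings are dwellings."""
    dwellings = sum(1 for b in buildings if b["type"] == "dwelling")
    return "ResidentialBlock" if dwellings / len(buildings) > dwelling_ratio else "OtherBlock"

def classify_district(blocks, block_ratio=0.5):
    """A district counts as a residential area when most of its blocks are
    residential -- a coarser description derived from a more detailed one,
    in the spirit of the middle-out approach described above."""
    residential = sum(1 for blk in blocks if classify_block(blk) == "ResidentialBlock")
    return "ResidentialArea" if residential / len(blocks) > block_ratio else "OtherArea"
```

A real ontology expresses these rules declaratively as class definitions and lets the reasoner derive membership, but the layering of inferences is the same.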

    The Development of a bi-level geographic information systems (GIS) database model for informal settlement upgrading

    Bibliography: leaves 348-369. Existing urban GIS models face several limitations. Firstly, these models tend to be single-scale in nature: they are usually designed to operate at either the metropolitan or the local level. Secondly, they are generally designed to cater only for the needs of the formal and environmental sectors of the city system. These models do not cater for the "gaps" of data that exist in digital cadastres throughout the world. In developed countries, these gaps correspond to areas of physical decay or economic decline; in developing countries, they correspond to informal settlement areas. In this thesis, a new two-scale urban GIS database model, termed the "Bi-level model", is proposed. This model has been specifically designed to address these gaps in the digital cadastre. Furthermore, the model addresses the shortcomings of current informal settlement upgrading models by providing mechanisms for community participation and project management, creating linkages to formal and environmental sectoral models, and co-ordinating initiatives at a global level. The Bi-level model comprises a metropolitan-level database component and a series of local-level database components. These components are inter-linked through bi-directional database warehouse connections. While the model requires Internet connectivity to achieve its full potential across a metropolitan region, it recognises the need for community participation-based methods at the local level. Members of the community are directly involved in capturing and entering informal settlement data into the local-level database.

    A process-oriented data model for fuzzy spatial objects

    The complexity of the natural environment, with its polythetic and dynamic character, requires appropriate new methods to represent it in GISs, not least because in the past there has been a tendency to force reality into sharp and static objects. A more generalized spatio-temporal data model is required to deal with the fuzziness and dynamics of objects. This need is the motivation behind the research reported in this thesis. In particular, the objective of this research was to develop a spatio-temporal data model for objects with fuzzy spatial extent. The thesis discusses three aspects related to achieving this objective: identification of fuzzy objects, detection of dynamic changes in fuzzy objects, and representation of objects and their dynamics in a spatio-temporal data model.

    For the identification of fuzzy objects, a six-step procedure was proposed to extract objects from field observation data: sampling, interpolation, classification, segmentation, merging and identification. The uncertainties involved in these six steps were investigated and their effect on the mapped objects was analyzed. Three fuzzy object models were proposed to represent fuzzy objects in different application contexts. The concepts of conditional spatial extent, conditional boundary and transition zones of fuzzy objects were put forward and formalized based upon the formal data structure (FDS). In this procedure, uncertainty was transferred from the thematic to the geometric aspects of objects, i.e. existential uncertainty was converted to extensional uncertainty. The spatial effect of thematic uncertainty was expressed by the relationship between the uncertainty of a cell belonging to the spatial extent of an object and the uncertainty of the cell belonging to classes.

    To detect dynamic changes in fuzzy objects, a method was proposed to identify objects and their state transitions from fuzzy spatial extents (regions) at different epochs. Similarity indicators of fuzzy regions were calculated based upon the overlap between regions at consecutive epochs. Different combinations of indicator values imply different relationships between regions. Regions that were very similar represent consecutive states of one object; by linking the regions, the historic lifelines of objects are built automatically. The relationships between regions then become relationships or interactions between objects, expressed in terms of processes such as shift, merge or split. By comparing the spatial extents of objects at consecutive epochs, change in objects was detected. The uncertainty of the change was analyzed through a series of change maps at different certainty levels, which can provide decision makers with more accurate information about change.

    For the third, and last, aspect, a process-oriented spatio-temporal data model was proposed to represent the change and interaction of objects. The model was conceptually designed based upon the formalized representation of states and processes of objects and was represented by a star-styled extended entity-relationship diagram, which I have called the Star Model. The conceptual design of the Star Model was translated into a relational logical design, since many commercial relational database management systems are available. A prototype of the process-oriented spatio-temporal data model was implemented in ArcView, based upon the case of Ameland. The user interface and queries of the prototype were developed using Avenue, the programming language of ArcView.

    The procedure for identification of fuzzy objects, which extracts fuzzy object data from field observations, unifies the existing field-oriented and object-oriented approaches. A generalized object concept - the object with fuzzy spatial extent - has therefore been developed. This concept links the object-oriented and field-oriented characteristics of natural phenomena. The objects have conditional boundaries, representing their object characteristics; the interiors of the objects have field properties, representing their gradual and continuous distribution. Furthermore, the concept can handle both fuzzy and crisp objects. Fuzzy objects have fuzzy transition or boundary zones, in which conditional boundaries may be defined, whereas crisp objects can be considered a special case with sharp boundaries. Beyond that, both the boundary-oriented and the pixel-oriented approach to object extraction can use this generalized object concept, since the uncertainties of objects are expressed in formal data structures (FDSs), which are applicable to either approach.

    The proposed process-oriented spatio-temporal data model is a general one, from which other models can be derived. It can support analysis and queries of time-series data from varying perspectives through location-oriented, time-oriented, feature-oriented and process-oriented queries, in order to understand the behavior of dynamic spatial complexes of natural phenomena. Multiple strands of time can also be generated in the Star Model, each representing the (spatio-temporal) lifeline of an object. The model can represent dynamic processes affecting the spatial and thematic aspects of individual objects and object complexes. Because the model explicitly stores change (process) relative to time, procedures for answering queries about temporal relationships, as well as analytical tasks comparing different sequences of change, are facilitated. The research findings in this thesis contribute theoretically and practically to the development of spatio-temporal data models for objects with fuzzy spatial extent.
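The epoch-linking step can be sketched as follows. Here fuzzy regions are membership grids in [0, 1] and the similarity indicator is a fuzzy Jaccard index, an illustrative stand-in for the thesis's overlap-based indicators; the threshold is likewise an assumption:

```python
import numpy as np

def fuzzy_overlap(m1, m2):
    """Overlap similarity of two fuzzy regions given as membership grids:
    sum of cell-wise minima over sum of cell-wise maxima (fuzzy Jaccard)."""
    inter = np.minimum(m1, m2).sum()
    union = np.maximum(m1, m2).sum()
    return inter / union if union > 0 else 0.0

def link_states(epochs, threshold=0.5):
    """Link regions at consecutive epochs into object lifelines whenever
    their overlap similarity exceeds the threshold. `epochs` is a list of
    lists of membership grids; returns (epoch, region, epoch+1, region)
    link tuples from which lifelines can be chained."""
    links = []
    for t in range(len(epochs) - 1):
        for i, r1 in enumerate(epochs[t]):
            for j, r2 in enumerate(epochs[t + 1]):
                if fuzzy_overlap(r1, r2) > threshold:
                    links.append((t, i, t + 1, j))
    return links
```

One region linking to two at the next epoch would indicate a split, two linking to one a merge, and a high-similarity chain the consecutive states of a single shifting object.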

    Efficient Algorithms to Compute Hierarchical Summaries from Big Data Streams

    Many data stream applications have hierarchical data: time, geographic locations, product information, clickstreams, server logs, and IP addresses. A hierarchical summary of such voluminous data offers multiple advantages, including compactness, quick understanding, and abstraction. The goal of this thesis is to design algorithmic approaches for summarizing hierarchical data streams.

    First, this thesis provides a theoretical analysis of the benchmark hierarchical heavy hitters algorithms and uncovers their shortcomings, such as high worst-case memory requirements, slow updates, and the coverage problem. To address these shortcomings, this thesis proposes efficient algorithms which offer deterministic estimation accuracy using O(η/ε) worst-case memory and O(η) worst-case time per item, where ε ∈ [0,1] is a user-defined parameter and η is a small constant derived from the data. The proposed hierarchical heavy hitters algorithms are shown to improve significantly over existing algorithms, both theoretically and empirically.

    Next, this thesis introduces a new concept called hierarchically correlated heavy hitters, which differs from existing hierarchical summarization techniques. The thesis provides a formal definition of the proposed concept and compares it with existing hierarchical summarization approaches, both at the definition level and empirically. It also proposes an efficient hierarchy-aware algorithm for computing hierarchically correlated heavy hitters. The proposed algorithm offers deterministic estimation accuracy using O(η/(ε_p · ε_s)) worst-case memory and O(η) worst-case time per item, where η is as defined previously and ε_p ∈ [0,1], ε_s ∈ [0,1] are further user-defined parameters.

    Finally, the thesis proposes a special hierarchical data structure and algorithm to summarize spatiotemporal data. It can be used to extract interesting and useful patterns from high-speed spatiotemporal data streams at multiple spatial and temporal granularities. Theoretical and empirical analyses are provided, showing that the proposed data structure is very efficient in terms of data storage and query response: it updates a single item in O(1) time and responds to a point query in O(1) time. Importantly, the memory requirement of the proposed data structure is independent of the size of the data and depends only on the user-supplied parameters ψ⃗ and φ⃗.

    In summary, this thesis provides a general framework consisting of a set of algorithms and data structures to compute hierarchical summaries of big data streams. All of the proposed algorithms exploit a lattice structure built from the hierarchical attributes of the data to compute different hierarchical summaries, which can be used to address various data analytic issues in many emerging applications.
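The notion being summarized can be illustrated with a brute-force, non-streaming sketch: every prefix of an item's hierarchy path (e.g. octets of an IP address) is credited with the item, and prefixes reaching a frequency threshold are reported. The thesis's contribution is computing such summaries in a single pass with bounded memory; also, the standard hierarchical heavy hitters definition additionally discounts counts already covered by descendant heavy hitters, which this sketch omits:

```python
from collections import Counter

def hierarchical_heavy_hitters(items, phi):
    """Exact, non-streaming sketch of heavy hitter prefixes over a hierarchy.
    Each item is a tuple representing its path from the root, e.g. the
    octets ("192", "168", "0", "1"). Every ancestor prefix is credited with
    the item, and prefixes with frequency >= phi * N are reported."""
    counts = Counter()
    for path in items:
        for depth in range(1, len(path) + 1):
            counts[path[:depth]] += 1  # credit each ancestor prefix
    n = len(items)
    return {prefix: c for prefix, c in counts.items() if c >= phi * n}
```

A streaming algorithm replaces the exact `Counter` over all prefixes with a bounded-size synopsis per hierarchy level, trading exactness for the deterministic error guarantees described above.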