36 research outputs found

    Geospatial Standards for Web-enabled Environmental Models

    Serving geographic information via standardized Web services has been widely accepted as a useful approach. Web-enabled environmental models simulating real-world phenomena are, however, rare. Such models predict observations traditionally served by geospatial Web services compliant with well-defined standards. Using standardized Web services could support the decoupling of models, the comparison of similar models, and automatic integration into existing geospatial workflows. Modeling experts face several open issues when migrating existing environmental computer models to the Web. The selection of the Web service interface depends on the input parameters required for the successful execution of the computer model. Losing control over the execution of the models, and consequently also confidence in model results, can be addressed to a certain extent by using translucent and standardized workflow languages. Mechanisms and open problems for the implementation of geospatial Web service compositions are discussed. Two scenarios, one on oil spills and one on exposure to air pollution, illustrate the impact of unconfigured model parameters on standard-compliant spatial data clients.
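    As a concrete illustration of the standardized invocation discussed above, the following minimal Python sketch submits an Execute request to a hypothetical OGC Web Processing Service (WPS) endpoint. The endpoint URL, process identifier and input names are assumptions made for illustration; they are not taken from the paper.

```python
# Minimal sketch: invoking a hypothetical OGC WPS 1.0.0 process via a
# key-value-pair Execute request. The endpoint URL, process identifier
# and inputs are illustrative assumptions, not taken from the paper.
import requests

WPS_ENDPOINT = "https://example.org/wps"  # hypothetical service URL

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "oilSpillDrift",  # hypothetical model process
    "datainputs": "windSpeed=12.5;windDirection=270;duration=PT6H",
}

response = requests.get(WPS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()

# The service returns an XML ExecuteResponse; a real client would parse
# the ProcessOutputs element instead of printing the raw payload.
print(response.text[:500])
```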

    Sensor web geoprocessing on the grid

    Recent standardisation initiatives in the fields of grid computing and geospatial sensor middleware provide an exciting opportunity for the composition of large-scale geospatial monitoring and prediction systems from existing components. Sensor middleware standards are paving the way for the emerging sensor web, which is envisioned to make millions of geospatial sensors and their data publicly accessible by providing discovery, tasking and query functionality over the internet. In a similar fashion, concurrent development is taking place in the field of grid computing, whereby the virtualisation of computational and data storage resources using middleware abstraction provides a framework to share computing resources. Sensor web and grid computing share a common vision of world-wide connectivity, and in their current form they are both realised using web services as the underlying technological framework. The integration of sensor web and grid computing middleware using open standards is expected to facilitate interoperability and scalability in near real-time geoprocessing systems. The aim of this thesis is to develop an appropriate conceptual and practical framework in which open standards in grid computing, sensor web and geospatial web services can be combined as a technological basis for the monitoring and prediction of geospatial phenomena in the earth systems domain, to facilitate real-time decision support. The primary topic of interest is how real-time sensor data can be processed on a grid computing architecture. This is addressed by creating a simple typology of real-time geoprocessing operations with respect to grid computing architectures. A geoprocessing system exemplar of each geoprocessing operation in the typology is implemented using contemporary tools and techniques, which provides a basis from which to validate the standards frameworks and highlight issues of scalability and interoperability. It was found that it is possible to combine standardised web services from each of these domains, despite issues of interoperability resulting from differences in web service style and security between specifications. A novel integration method for the continuous processing of a sensor observation stream is suggested, in which a perpetual processing job is submitted as a single continuous compute job. Although this method was found to be successful, two key challenges remain: a mechanism for consistently scheduling real-time jobs within an acceptable time-frame must be devised, and the trade-off between efficient grid resource utilisation and processing latency must be balanced. The lack of actual implementations of distributed geoprocessing systems built using sensor web and grid computing has hindered the development of standards, tools and frameworks in this area. This work contributes to the small number of existing implementations in this field by identifying potential workflow bottlenecks in such systems and gaps in the existing specifications. Furthermore, it sets out a typology of real-time geoprocessing operations that is anticipated to facilitate the development of real-time geoprocessing software.
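    The "perpetual processing job" idea above can be sketched as follows: a single long-running job repeatedly pulls new observations from a Sensor Observation Service (SOS) and processes them. This is only a schematic Python sketch; the endpoint, observed property and temporal-filter syntax are illustrative assumptions, and the thesis's actual implementation runs on grid middleware rather than a simple loop.

```python
# Minimal sketch of the perpetual-processing-job pattern: one long-running
# job that repeatedly pulls new observations from a Sensor Observation
# Service (SOS) and processes them. Endpoint, observed property and the
# temporal-filter syntax are illustrative assumptions.
import time
import requests

SOS_ENDPOINT = "https://example.org/sos"  # hypothetical SOS URL
POLL_INTERVAL_S = 30  # trade-off: processing latency vs. resource usage

def fetch_observations(since: str) -> str:
    """Request observations newer than `since` (ISO 8601) via GetObservation."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "observedProperty": "waterLevel",  # hypothetical phenomenon
        "temporalFilter": f"om:phenomenonTime,{since}/now",
    }
    resp = requests.get(SOS_ENDPOINT, params=params, timeout=30)
    resp.raise_for_status()
    return resp.text  # XML observation collection

def main() -> None:
    last_poll = "2024-01-01T00:00:00Z"
    while True:  # the job never terminates: a single continuous compute job
        observations = fetch_observations(last_poll)
        # ... geoprocessing of the new observations would happen here ...
        last_poll = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    main()
```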

    RADGIS - an improved architecture for runtime-extensible, distributed GIS applications

    A number of GIS architectures and technologies have emerged recently to facilitate the visualisation and processing of geospatial data over the Web. The work presented in this dissertation builds on these efforts and undertakes to overcome some of the major problems with traditional GIS client architectures, including application bloat, lack of customisability, and lack of interoperability between GIS products. In this dissertation we describe how a new client-side GIS architecture was developed and implemented as a proof-of-concept application called RADGIS, which is based on open standards and emerging distributed component-based software paradigms. RADGIS reflects the current shift in development focus from Web browser-based applications to customised clients, based on open standards, that make use of distributed Web services. While much attention has been paid to exposing data on the Web, there is growing momentum towards providing “value-added” services. A good example of this is the tremendous industry interest in the provision of location-based services, which is discussed as a special use-case of our RADGIS architecture. Thus, in the near future client applications will not simply be used to access data transparently, but will also become facilitators for the location-transparent invocation of local and remote services. This flexible architecture ensures that data can be stored and processed independently of the location of the client that wishes to view or interact with it. Our RADGIS application enables content developers and end-users to create and/or customise GIS applications dynamically at runtime through the incorporation of GIS services. This gives the client application the flexibility to accommodate changing levels of expertise or user requirements. These GIS services are implemented as components that execute locally on the client machine, or as remote CORBA Objects or EJBs. Assembly and deployment of these components is achieved using a specialised XML descriptor. This XML descriptor is written in a markup language that we developed specifically for this purpose, called DGCML, which contains deployment information, as well as a GUI specification and links to an XML-based help system that can be merged with the RADGIS client application’s existing help system. Thus, no additional requirements are imposed on object developers by the RADGIS architecture, i.e. there is no need to rewrite existing objects, since DGCML acts as a runtime-customisable wrapper, allowing existing objects to be utilised by RADGIS. While the focus of this thesis has been on overcoming the above-mentioned problems with traditional GIS applications, the work described here can also be applied in a much broader context, especially in the development of highly customisable client applications that are able to integrate Web services at runtime.
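    The DGCML syntax itself is not reproduced in this abstract, so the following Python sketch invents a plausible descriptor structure (component, gui and help elements) purely for illustration, to show how a runtime could read such a descriptor to decide which components to load locally and which to bind remotely.

```python
# Minimal sketch: reading a hypothetical DGCML deployment descriptor.
# The element and attribute names below are invented for illustration;
# the actual DGCML schema is defined in the dissertation.
import xml.etree.ElementTree as ET

DESCRIPTOR = """\
<dgcml>
  <component name="BufferService" kind="local" class="gis.BufferTool"/>
  <component name="Geocoder" kind="remote" uri="corbaloc::host:2809/Geocoder"/>
  <gui menu="Tools" label="Buffer..."/>
  <help href="buffer-help.xml"/>
</dgcml>
"""

root = ET.fromstring(DESCRIPTOR)
for comp in root.findall("component"):
    if comp.get("kind") == "local":
        print(f"load local class {comp.get('class')} as {comp.get('name')}")
    else:
        print(f"bind remote object {comp.get('uri')} as {comp.get('name')}")
```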

    Acquisition and Declarative Analytical Processing of Spatio-Temporal Observation Data

    A generic framework for spatio-temporal observation data acquisition and declarative analytical processing has been designed and implemented in this thesis. Its main contributions may be summarized as follows: 1) generalization of a data acquisition and dissemination server, with great applicability in many scientific and industrial domains, providing flexibility in the incorporation of different technologies for data acquisition, data persistence and data dissemination; 2) definition of a new hybrid logical-functional paradigm to formalize a novel data model for the integrated management of entity and sampled data; 3) definition of a novel spatio-temporal declarative data analysis language for the previous data model; 4) definition of a data warehouse data model supporting observation data semantics, including application of the above language to the declarative definition of observation processes executed during observation data load; and 5) a column-oriented, parallel and distributed implementation of the spatial analysis declarative language. The huge amount of data to be processed demands the exploitation of current multi-core hardware architectures and multi-node cluster infrastructures.

    The POESIA approach to data and service integration on the semantic Web

    Advisor: Claudia Bauzer Medeiros. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. This approach is centered on two types of semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help data integration and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal, in a real application in the domain of agricultural planning, analyzing the benefits and scalability problems of current semantic Web technology when faced with large volumes of data.

    Automatic negotiation of multi-party contracts in agricultural supply chain

    Advisor: Edmundo Roberto Mauro Madeira. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: An agricultural supply chain comprises several kinds of actors that establish a complex net of relationships. These relationships may range from ad hoc and short-lasting to highly structured and long-lasting. This kind of chain has particularities such as strict regulations and cultural influences, and is of considerable economic and social importance. Contracts are the natural way of expressing relationships among members of a chain; thus, contracts and the activity of negotiating them are of major importance within a supply chain. This thesis proposes a model for agricultural supply chains that seamlessly integrates their main features, including their structure and their dynamics. Specifically, the thesis proposes a multi-party contract format and a negotiation protocol that builds such contracts. Multi-party contracts are important in this context because several actors of a supply chain may build alliances comprising mutual rights and obligations; a set of bilateral contracts is not well suited to such a purpose. The thesis also presents an implementation of the negotiation protocol that builds on Web services and a workflow engine (YAWL).

    An Agent-Based Variogram Modeller: Investigating Intelligent, Distributed-Component Geographical Information Systems

    Geo-Information Science (GIScience) is the field of study that addresses substantive questions concerning the handling, analysis and visualisation of spatial data. Geo-Information Systems (GIS), including software, data acquisition and organisational arrangements, are the key technologies underpinning GIScience. A GIS is normally tailored to the service it is supposed to perform. However, there is often the need for a function that is not supported by the GIS tool being used. The usual solution in these circumstances is to look for another tool that can perform the service, and often for an expert to use that tool. This is expensive, time consuming and stressful for the geographical data analyst. On the other hand, GIS is often used in conjunction with other technologies to form a geocomputational environment. One of the complex tools in geocomputation is geostatistics; one of its functions is to provide the means to determine the extent of spatial dependencies within geographical data and processes. Spatial datasets are often large and complex. Currently, agent systems are being integrated into GIS to offer flexibility and allow better data analysis. The thesis examines the current application of agents within the GIS community, determining whether they are used to represent data or processes, or to act as services. It then sets out to demonstrate the applicability of an agent-oriented paradigm as a service-based GIS, with the possibility of providing greater interoperability and reducing resource requirements (human and tools). In particular, analysis was undertaken to determine the need to introduce enhanced features to agents in order to maximise their effectiveness in GIS. This was achieved by addressing the complexity of software agent design and implementation for the GIS environment and by suggesting possible solutions to the problems encountered. The software agent characteristics and features (which include the dynamic binding of plans to software agents in order to tackle the levels of complexity and range of contexts) were examined, alongside a discussion of current GIScience and the applications of agent technology to GIS: agents as entities, objects and processes. These concepts and their functionality within GIS are then analysed and discussed. The extent of agent functionality, an analysis of the gaps, and the use of these technologies to express a distributed service providing an agent-based GIS framework are then presented. Thus, a general agent-based framework for GIS and a novel agent-based architecture for a specific part of GIS, the variogram, were devised to examine the applicability of the agent-oriented paradigm to GIS. An examination of the current mechanisms for constructing variograms and their underlying processes and functions was undertaken, and these processes were then embedded into a novel agent architecture for GIS. Once a successful software agent implementation had been achieved, the corresponding tool was tested and validated, internally for code errors and externally to determine its functional requirements and whether it enhances the GIS process of dealing with data. Thereafter, it is compared with other known service-based GIS agents and its advantages and disadvantages analysed.
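    Since the variogram is the concrete geostatistical computation that the agent architecture wraps, a short sketch of the empirical semivariogram may help fix ideas: for each distance bin h, gamma(h) = 1/(2|N(h)|) * sum of (z_i - z_j)^2 over pairs of observations separated by roughly h. The NumPy sketch below is a generic illustration, not the thesis implementation; the binning choices are arbitrary.

```python
# Minimal sketch: empirical semivariogram estimation, the core computation
# that an agent-based variogram modeller automates. Generic illustration,
# not the thesis implementation.
#   gamma(h) = 1 / (2 |N(h)|) * sum_{(i,j) in N(h)} (z_i - z_j)^2
# where N(h) is the set of observation pairs separated by roughly h.
import numpy as np

def empirical_semivariogram(coords, values, n_bins=10):
    """coords: (n, 2) array of locations; values: (n,) observed attribute."""
    diff = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))          # pairwise distances
    sq = (values[:, None] - values[None, :]) ** 2      # squared differences

    iu = np.triu_indices(len(values), k=1)             # each pair counted once
    d, s = dists[iu], sq[iu]

    edges = np.linspace(0.0, d.max(), n_bins + 1)
    idx = np.digitize(d, edges[1:-1])                  # lag-bin index per pair
    centers = 0.5 * (edges[:-1] + edges[1:])
    gamma = np.array([s[idx == b].mean() / 2.0 if (idx == b).any() else np.nan
                      for b in range(n_bins)])
    return centers, gamma

# Usage on a synthetic spatially correlated field
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
z = np.sin(pts[:, 0] / 20.0) + rng.normal(0.0, 0.1, size=200)
lags, gamma = empirical_semivariogram(pts, z)
```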

    Automatic Geospatial Data Conflation Using Semantic Web Technologies

    Duplicated geospatial data collection and maintenance is an extensive problem across Australian government organisations. This research examines how Semantic Web technologies can be used to automate the geospatial data conflation process. It presents a new approach in which OWL ontologies generated from the output data models, together with geospatial data represented as RDF triples, serve as the basis for the solution, while SWRL rules serve as the core mechanism for automating the geospatial data conflation process.
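    A minimal sketch of the "geospatial data as RDF triples" step, using the Python rdflib library, might look like the following. The namespace, class and property names are invented for illustration; the thesis generates its OWL ontologies from the output data models, and the conflation logic itself is expressed in SWRL rules rather than in Python.

```python
# Minimal sketch: representing a geospatial feature as RDF triples, the
# representation step described in the abstract. Namespace, class and
# property names here are invented for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/geo#")  # hypothetical ontology namespace

g = Graph()
g.bind("ex", EX)

road = EX["road_42"]
g.add((road, RDF.type, EX.Road))
g.add((road, EX.name, Literal("Main Street")))
g.add((road, EX.asWKT,
       Literal("LINESTRING(115.85 -31.95, 115.86 -31.96)",
               datatype=EX.wktLiteral)))

# A rule layer (SWRL in the thesis) would then match candidate duplicates,
# e.g. features of the same type with matching names and nearby geometries.
print(g.serialize(format="turtle"))
```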

    Statistical modelling of species distributions using presence-only data: A semantic and graphical approach using the tree of life

    Understanding the mechanisms that determine and differentiate the establishment of organisms in space is an old and fundamental question in ecology. The emergence of life’s spatial patterns is guided by the confluence of three forces: environmental filtering, which unbalances the probability of establishment for organisms given their evolutionary adaptations to local environmental conditions; biological interactions, which restrict their establishment according to the presence (or absence) of other organisms; and the diversification of organisms’ strategies (traits) to migrate and adapt to changing environments. The main hypothesis of this research is that the accumulated knowledge of biodiversity occurrences, the species taxonomic classification and geospatial environmental data can be integrated into a unified modelling framework to characterise the joint effect of these three forces and thus contribute more general, accurate and statistically sound species distribution models (SDMs). The first part of this thesis describes the design and implementation of a knowledge engine capable of synthesising and integrating environmental geospatial data, taxonomic relationships and species occurrences. It uses semantic queries to instantiate complex data structures, represented as networks of concepts (knowledge graphs). Local taxonomic trees, distributed over a hierarchical spatial system of regular lattices, are used as knowledge graphs to perform data synthesis, geoprocessing and transformations. The implementation uses efficient call-by-need evaluation that facilitates spatial and scale analysis on large datasets. The second part of the thesis corresponds to the statistical specification and implementation of two modelling frameworks for species distribution models (one for single species and another for multiple species). These models are designed for presence-only observations obtained from the knowledge engine. Their common specification is that presence-only observations are the joint effect of two latent processes: one that defines the species presence (ecological suitability), and another that defines the probability of being sampled (sampling effort). The single-species framework uses an informative sample, chosen by the modeller, to account for the sampling effort. Three modelling strategies are proposed for accounting for the joint effect of the ecological and sampling processes (independent processes, a common spatial random effect, and correlated processes). The three models were compared to the maximum entropy model (MaxEnt), a popular algorithm used in SDMs; in all cases, at least one model showed better predictive performance than MaxEnt. The multi-species modelling framework generalises the single-species framework into a joint species distribution model for presence-only data. The specification is a multilevel hierarchical logistic model with a single spatial random effect common to all species of interest. The sampling effort is modelled using a complementary sample, obtained from complementary observations of the taxa of interest using a regional taxonomic tree. The model was tested against simulated data; all simulated parameters were covered by the credible intervals of the posterior sampling. A case study in eastern Mexico is presented as an application of the model, and the results obtained were consistent with macroecological theory. The model proved effective in removing the bias and noise introduced by the sampling effort; this effect was particularly striking in urban areas, where sampling intensity is greatest. The research presented here provides an interdisciplinary approach for modelling joint species distributions, aided by the automated selection of biological, spatial and environmental context.
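    A hedged sketch of the presence-only structure described above, a Bernoulli observation layer driven by latent ecological suitability and sampling effort with a shared spatial random effect, can be written as follows. The notation only approximates the thesis specification; the exact links, priors and effort construction are those of the thesis.

```latex
% Sketch of the two-latent-process, presence-only structure; notation is
% illustrative and only approximates the thesis specification.
\begin{align*}
  y_{ij} &\sim \mathrm{Bernoulli}(p_{ij})
    && \text{presence record for species } j \text{ at site } i \\
  \operatorname{logit}(p_{ij}) &=
    \underbrace{\mathbf{x}_i^{\top}\boldsymbol{\beta}_j}_{\text{ecological suitability}}
    + \underbrace{\mathbf{w}_i^{\top}\boldsymbol{\gamma}}_{\text{sampling effort}}
    + S(s_i) \\
  S(\cdot) &\sim \mathcal{GP}\bigl(0,\, k(\cdot,\cdot)\bigr)
    && \text{spatial random effect shared by all species}
\end{align*}
```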

    Proceedings of the International Workshop "Innovation Information Technologies: Theory and Practice": Dresden, Germany, September 06-10, 2010

    This International Workshop is a high-quality seminar providing a forum for the exchange of scientific achievements between research communities of different universities and research institutes in the area of innovative information technologies. It continues the series of Russian-German Workshops previously organized by the universities in Dresden, Karlsruhe and Ufa. The workshop was arranged in 9 sessions covering the major topics: Modern Trends in Information Technology, Knowledge Based Systems and Semantic Modelling, Software Technology and High Performance Computing, Geo-Information Systems and Virtual Reality, System and Process Engineering, Process Control and Management, and Corporate Information Systems.