
    Exchanging uncertainty: interoperable geostatistics?

    Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but doing so raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. The interpolation service is thus extremely flexible: it supports a range of observation types and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities, which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis.
We present a solution to this problem: a family of XML schemata that enable the description of a full range of uncertainty types, from simple statistics, such as the kriging mean and variance, through to probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS), we demonstrate a prototype moving towards a truly interoperable geostatistical software architecture.
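As a rough illustration of the kind of per-location uncertainty description such schemata enable, the sketch below encodes a kriging prediction (mean and variance) as a small XML fragment and reads it back. The element names are invented for illustration and are not the actual UncertML vocabulary; only Python's standard library is used.

```python
# Sketch: encoding a kriging prediction (mean + variance) as an
# UncertML-style XML fragment. Element names are illustrative only,
# not the real UncertML schema.
import xml.etree.ElementTree as ET

def encode_gaussian(mean: float, variance: float) -> str:
    """Serialise a per-location Gaussian prediction to an XML string."""
    dist = ET.Element("GaussianDistribution")
    ET.SubElement(dist, "mean").text = str(mean)
    ET.SubElement(dist, "variance").text = str(variance)
    return ET.tostring(dist, encoding="unicode")

def decode_gaussian(xml_text: str) -> tuple:
    """Recover (mean, variance) from the fragment above."""
    dist = ET.fromstring(xml_text)
    return float(dist.findtext("mean")), float(dist.findtext("variance"))

fragment = encode_gaussian(12.4, 0.81)
print(fragment)                    # <GaussianDistribution>...</GaussianDistribution>
print(decode_gaussian(fragment))   # (12.4, 0.81)
```

The round trip is the point: a client that understands the shared schema can consume the interpolation result, including its uncertainty, without knowing which geostatistical method produced it.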

    Automatic processing, quality assurance and serving of real-time weather data

    Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored to the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when it is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
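To make the "query at an uninstrumented location" idea concrete, here is a minimal interpolation sketch using inverse-distance weighting. The systems described above use proper geostatistical methods (e.g. kriging, which also yields a principled prediction variance); IDW is shown only because it fits in a few lines, and the station coordinates and readings are invented.

```python
# Minimal inverse-distance-weighting (IDW) interpolation sketch.
# Real deployments use kriging, which additionally provides a
# prediction variance; IDW is shown purely for illustration.
import math

def idw(stations, query, power=2.0):
    """Interpolate a value at `query` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, value in stations:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return value  # query coincides with a station: exact value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Invented temperature readings (degrees C) at three stations.
stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 11.0)]
# All three stations are equidistant from (0.5, 0.5), so this is
# simply their mean, 11.0.
print(idw(stations, (0.5, 0.5)))
```

Communicating the uncertainty of such an estimate, rather than the bare number, is exactly what a standard like UncertML is meant to enable.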

    Development of grid frameworks for clinical trials and epidemiological studies

    E-Health initiatives such as electronic clinical trials and epidemiological studies require access to and usage of a range of both clinical and other data sets. Such data sets are typically only available across many heterogeneous domains, where a plethora of often legacy-based or in-house/bespoke IT solutions exist. Considerable efforts and investments are being made across the UK to upgrade the IT infrastructures of the National Health Service (NHS), such as the National Programme for IT in the NHS (NPfIT) [1]. However, largely independent and non-interoperable IT solutions currently exist across hospitals, trusts, disease registries and GP practices – this includes security as well as more general compute and data infrastructures. Grid technology allows issues of distribution and heterogeneity to be overcome; however, the clinical trials domain places special demands on security and data that the Grid community has hitherto not satisfactorily addressed. These challenges are often common across many studies and trials, hence the development of a re-usable framework for the creation and subsequent management of such infrastructures is highly desirable. In this paper we present the challenges in developing such a framework and outline initial scenarios and prototypes developed within the MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project [2].

    Interoperability in the OpenDreamKit Project: The Math-in-the-Middle Approach

    OpenDreamKit, the "Open Digital Research Environment Toolkit for the Advancement of Mathematics", is an H2020 EU Research Infrastructure project that aims at supporting, over the period 2015–2019, the ecosystem of open-source mathematical software systems. From this, OpenDreamKit will deliver a flexible toolkit enabling research groups to set up Virtual Research Environments customised to meet the varied needs of research projects in pure mathematics and its applications. An important step in the OpenDreamKit endeavor is to foster interoperability between a variety of systems, ranging from computer algebra systems over mathematical databases to front-ends. This is the mission of the integration work package (WP6). We report on experiments and future plans with the Math-in-the-Middle approach. This information architecture consists of a central mathematical ontology that documents the domain and fixes a joint vocabulary, combined with specifications of the functionalities of the various systems. Interaction between systems can then be enriched by pivoting off this information architecture.
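The "pivoting" idea can be caricatured in a few lines: each system maps its own names onto shared ontology concepts, so any pair of systems can interoperate without pairwise translators. The system names and vocabulary entries below are invented for illustration; the real project uses a formal, curated mathematical ontology rather than a Python dictionary.

```python
# Toy sketch of pivoting through a central vocabulary. Each system
# declares its own names for shared ontology concepts; translation
# between any two systems goes via the concept, never system-to-system.
# All names here are invented for illustration.
ONTOLOGY = {"integer_factorization", "polynomial_gcd"}

SYSTEM_VOCAB = {
    "SystemA": {"factor": "integer_factorization", "gcd": "polynomial_gcd"},
    "SystemB": {"factorint": "integer_factorization", "poly_gcd": "polynomial_gcd"},
}

def translate(name, source, target):
    """Map a source-system name to the target system via the ontology."""
    concept = SYSTEM_VOCAB[source][name]
    assert concept in ONTOLOGY, f"unknown concept: {concept}"
    inverse = {c: n for n, c in SYSTEM_VOCAB[target].items()}
    return inverse[concept]

print(translate("factor", "SystemA", "SystemB"))  # factorint
```

With n systems, this needs n mappings to the central vocabulary instead of n(n-1) pairwise translators, which is the architectural payoff of the Math-in-the-Middle approach.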

    Ensuring Interoperable Digital Object Management Metadata in Scotland: Report of the SLIC-funded CMS Metadata Interoperability Project: Findings, Conclusions, and Guidelines for Best Practice

    As in other parts of the developed world, digital resources are being created in ever increasing numbers by a growing range of archives, libraries, museums, and other organisations in the Scottish Common Information Environment (SCIE). Interoperability in respect of the often complex metadata required to manage digital materials is a prerequisite of providing seamless and long-term access to distributed resources for users, optimising resource re-usability, and maximising value from scarce funding and staffing resources. Recognising this, SLIC funded the CMS Metadata Interoperability Project to survey the Scottish scene, research and analyse the issues, identify a 'safe path' towards ensuring interoperability in the area, and formulate guidelines for best practice as a basis for implementing it. This report summarises the results of the study under four headings: 1. Study Findings and Conclusions. 2. Guidelines for Best Practice: National SCIE-wide Actions. 3. Guidelines for Best Practice: Institution or Sub-SCIE Group Actions. 4. Appendices (including lists of participants and references, and a glossary). The study concludes that a prescriptive approach to ensuring interoperability of digital object metadata in the SCIE is both difficult and inadvisable and proposes instead: 1. The development of an informed 'interoperability consciousness' in key staff as the best route forward, with the guidelines provided in the body of the report, the associated support website, the OSIAF infrastructure, and relevant training programmes as key mechanisms. 2. Strengthening this through the publication and dissemination of a series of advisory notes on a range of key interoperability issues. These would be indicative rather than prescriptive, but would have the authority of the OSIAF-backed Cultural Technical Group (CTG) behind them. It also proposes the creation of a Scottish Metadata Registry as a tool to encourage, enhance, and support interoperability in this important area.

    Knowledge Management with Multi-Agent System in BI Systems Integration


    Developing Feature Types and Related Catalogues for the Marine Community - Lessons from the MOTIIVE project.

    MOTIIVE (Marine Overlays on Topography for annex II Valuation and Exploitation) is a project funded as a Specific Support Action (SSA) under the European Commission Framework Programme 6 (FP6) Aeronautics and Space Programme. The project started in September 2005 and finished in October 2007. The objective of MOTIIVE was to examine the methodology and cost benefit of using non-proprietary data standards. Specifically, it considered the harmonisation requirements between the INSPIRE data component ‘elevation’ (terrestrial, bathymetric and coastal) and the INSPIRE marine thematic data for ‘sea regions’, ‘oceanic spatial features’ and ‘coastal zone management areas’. This was examined in the context of the requirements for interoperable information systems as needed to realise the objectives of GMES for ‘global services’. The work draws particular conclusions on the realisation of Feature Types (ISO 19109) and Feature Type Catalogues (ISO 19110) in this respect. More information on MOTIIVE can be found at www.motiive.net.

    Requirements for a registry of electronic licences

    Purpose: The paper presents a brief history of electronic licensing initiatives before considering current practices for managing licences to electronic resources. The intention was to obtain a detailed understanding of the requirements for a registry of electronic licences that would enable usage terms and conditions to be presented to end-users at the point of use. Approach: Two extensive focus groups were held, each comprising representatives from the main stakeholder groups. These structured events considered existing and ongoing issues and approaches towards licence management and investigated a range of ‘use cases’ in which potential uses for a licence registry were outlined and discussed. Findings: The results form part of a requirements gathering and analysis process that will inform the development of a registry of electronic licences. This work forms part of the JISC-funded Registry of Electronic Licences (RELI) project. The paper finds that there are many complexities when dealing with electronic licences, such as licence specificity, licence interpretation, definitions of authorised users and dissemination of usage terms and conditions. Implications: These issues and others are considered, and their impact on a subsequent registry of electronic licences is discussed. It is clear from the findings that there is a real and immediate need for a licence registry. Originality: The paper provides a rich picture of the concerns and practices adopted both when managing licences and when ensuring conformance with licences to electronic resources. The findings have enabled the scope of a licence registry to be determined. The registry is currently under development.

    Cross-Platform Text Mining and Natural Language Processing Interoperability - Proceedings of the LREC2016 conference

    No abstract available
