
    Design of the shared Environmental Information System (SEIS) and development of a web-based GIS interface

    Chapter 5. The Shared Environmental Information System (SEIS) is a collaborative initiative of the European Commission (EC) and the European Environment Agency (EEA) that aims to establish, together with the Member States, an integrated and shared EU-wide environmental information system. SEIS represents the European vision of environmental information interoperability. It is a set of high-level principles and workflow processes that organise the collection, exchange, and use of environmental data and information in order to:
    • modernise the way in which information required by environmental legislation is made available to Member States or EC instruments;
    • streamline reporting processes and repeal overlapping or obsolete reporting obligations;
    • stimulate similar developments in international conventions;
    • standardise according to INSPIRE where possible; and
    • introduce the SDI (spatial data infrastructure) principle EU-wide.
    SEIS is a system and workflow of operations that offers the technical capabilities needed to meet these expectations. In that respect, SEIS shows the way and sets up the workflow effectively, in a standardised way (e.g., INSPIRE), to:
    • collect data from spatial databases, in situ sensors, statistical databases, Earth-observation readings (e.g., EOS, GMES) and marine observations, using standard data-transfer protocols (ODBC, SOS, FTP, etc.);
    • harmonise the collected data (including data checks and integrity checks) according to best practices proven to perform well, following the INSPIRE Directive 2007/2/EC Annexes I, II and III, plus the INSPIRE Implementing Rules for data not specified in those Annexes;
    • harmonise the collected data according to WISE (Water Information System for Europe) or Ozone Web;
    • process and aggregate the harmonised data so as to extract information in a format understandable by wider audiences (e.g., Eurostat, environmental indicators);
    • document information to fulfil national reporting obligations towards EU bodies (e.g., the JRC, EEA, DG ENV, Eurostat); and
    • store and publish information for authorised end-users (e.g., citizens, institutions).
    This paper presents the development and integration of the SEIS-Malta Geoportal. The first section outlines the EU Regulations on the INSPIRE and Aarhus Directives. The second covers the architecture and implementation of the SEIS-Malta Geoportal. The third discusses the results and the successful implementation of the Geoportal.
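    As a toy illustration of the collect/harmonise/aggregate workflow outlined in this abstract, the following Python sketch runs a few air-quality readings through the three steps. The station names, record layout and unit conversion are invented for illustration; they are not part of SEIS or of the SEIS-Malta Geoportal.

```python
# Hypothetical sketch of the SEIS collect -> harmonise -> aggregate steps.
# Station names, field names and the mg/m3 -> ug/m3 conversion are
# illustrative assumptions, not taken from the paper.

def collect(raw_records):
    """'Collect' step: gather readings, dropping records with no value."""
    return [r for r in raw_records if r.get("value") is not None]

def harmonise(records):
    """'Harmonise' step: normalise units and field names (data check included)."""
    out = []
    for r in records:
        value = r["value"]
        if r.get("unit") == "mg/m3":
            # convert to a common unit (ug/m3); round to damp float noise
            value = round(value * 1000, 6)
        out.append({"station": r["station"], "pollutant": r["pollutant"],
                    "value_ugm3": value})
    return out

def aggregate(records):
    """'Process/aggregate' step: derive a mean indicator per station."""
    totals = {}
    for r in records:
        totals.setdefault(r["station"], []).append(r["value_ugm3"])
    return {s: sum(v) / len(v) for s, v in totals.items()}

raw = [
    {"station": "MT01", "pollutant": "O3", "value": 0.06, "unit": "mg/m3"},
    {"station": "MT01", "pollutant": "O3", "value": 0.08, "unit": "mg/m3"},
    {"station": "MT02", "pollutant": "O3", "value": None, "unit": "mg/m3"},
]
indicators = aggregate(harmonise(collect(raw)))
print(indicators)  # → {'MT01': 70.0}
```

    The point of the sketch is only the pipeline shape: each stage consumes the previous stage's output, so a new source or a new indicator slots in without touching the other steps.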

    Initial experiences in developing e-health solutions across Scotland

    The MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project is a collaborative effort between e-Science, clinical and ethical research centres across the UK, including the universities of Oxford, Glasgow, Nottingham and Leicester and Imperial College. The project started in September 2005 and is due to run for three years. The primary goal of VOTES is to develop a reusable Grid framework through which a multitude of clinical trials and epidemiological studies can be supported. The National e-Science Centre (NeSC) at the University of Glasgow is developing the Scottish components of this framework. This paper presents initial experiences in developing this framework and in accessing and using existing data sets, services and software across the NHS in Scotland.

    Development of grid frameworks for clinical trials and epidemiological studies

    E-Health initiatives such as electronic clinical trials and epidemiological studies require access to, and usage of, a range of both clinical and other data sets. Such data sets are typically only available across many heterogeneous domains, where a plethora of often legacy-based or in-house/bespoke IT solutions exist. Considerable efforts and investments are being made across the UK to upgrade the IT infrastructure of the National Health Service (NHS), such as the National Programme for IT in the NHS (NPfIT) [1]. However, currently independent and largely non-interoperable IT solutions exist across hospitals, trusts, disease registries and GP practices; this includes security as well as more general compute and data infrastructures. Grid technology allows issues of distribution and heterogeneity to be overcome; however, the clinical-trials domain places special demands on security and data which the Grid community has hitherto not satisfactorily addressed. These challenges are often common across many studies and trials, hence the development of a reusable framework for the creation and subsequent management of such infrastructures is highly desirable. In this paper we present the challenges in developing such a framework and outline initial scenarios and prototypes developed within the MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project [2].

    An SDI for the GIS-education at the UGent Geography Department

    The UGent Geography Department (GD) (ca. 200 students; 10 professors) has been teaching GIS since the mid-1990s. Since then, GIS has evolved from Geographic Information Systems to GIScience to GIServices, implying that a GIS specialist nowadays has to deal with more than just desktop GIS. Knowledge of the interaction between the different components of an SDI (spatial data, technologies, laws and policies, people and standards) is crucial for a graduating Master's student. For its GIS education, the GD has until recently been using datasets from different sources, stored in a non-centralised system. In conformity with the INSPIRE Directive and the Flemish SDI Decree, the GD aims to set up its own SDI using free and open-source software components, to improve the management, user-friendliness, copyright protection and centralisation of datasets, and to build knowledge of state-of-the-art SDI structure and technology. The central part of the system is a PostGIS database in which both staff and students can create and share information stored in a multitude of tables and schemas. A web-based application facilitates upper-level management of the database for administrators and staff members. Exercises in various courses not only focus on accessing and handling data from the SDI through common GIS applications such as Quantum GIS or GRASS, but also aim at familiarising students with the set-up of widely used SDI elements such as WMS, WFS and WCS services. The advantages and disadvantages of the new SDI will be tested in a case study in which the workflow of a typical 'GIS Applications' exercise is elaborated. By solving an optimal-location problem, students interact in various ways with geographic data. A comparison is made between the situation before and after the implementation of the SDI.
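    To make the WFS element of such an SDI concrete, the snippet below builds a standard OGC WFS 2.0 GetFeature request in key-value-pair encoding, as a student exercise might. The endpoint URL and layer name are hypothetical; only the request parameters follow the WFS standard.

```python
# Sketch of a WFS 2.0 GetFeature request in KVP encoding.
# The host "sdi.example.ugent.be" and layer "gd:land_use" are invented.
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, count=100):
    """Build a WFS 2.0 GetFeature URL asking for up to `count` features."""
    params = {
        "service": "WFS",          # mandatory OGC service parameter
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,    # the feature type (layer) to query
        "count": count,            # WFS 2.0 limit on returned features
    }
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url("https://sdi.example.ugent.be/wfs", "gd:land_use")
print(url)
```

    The same pattern covers WMS GetMap or WCS GetCoverage requests: only the `service`, `request` and layer parameters change.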

    Extracting, Transforming and Archiving Scientific Data

    It is becoming common to archive research datasets that are not only large but also numerous. In addition, their corresponding metadata and the software required to analyse or display them need to be archived. Yet the manual curation of research data can be difficult and expensive, particularly in very large digital repositories; hence the importance of models and tools for automating digital-curation tasks. The automation of these tasks faces three major challenges: (1) research data and data sources are highly heterogeneous; (2) future research needs are difficult to anticipate; and (3) data is hard to index. To address these problems, we propose the Extract, Transform and Archive (ETA) model for managing and mechanizing the curation of research data. Specifically, we propose a scalable strategy for addressing the research-data problem, ranging from the extraction of legacy data to its long-term storage. We review some existing solutions and propose novel avenues of research.
    Comment: 8 pages, Fourth Workshop on Very Large Digital Libraries, 201
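    The three stages named by the ETA model can be sketched as a minimal pipeline. This is our own illustration, not the paper's implementation: the record layout, the in-memory "archive" and the choice of SHA-256 content identifiers are all assumptions.

```python
# Illustrative Extract -> Transform -> Archive pipeline.
# The archive is a plain dict keyed by a content-derived identifier;
# a real repository would use durable storage and richer metadata.
import hashlib
import json

def extract(source):
    """Extract: pull raw items from a legacy source (here, a list)."""
    return list(source)

def transform(items):
    """Transform: attach the metadata needed for later indexing."""
    return [{"payload": it, "format": "text/plain", "length": len(it)}
            for it in items]

def archive(records, store):
    """Archive: file each record under a SHA-256 of its canonical JSON."""
    for rec in records:
        blob = json.dumps(rec, sort_keys=True).encode()
        store[hashlib.sha256(blob).hexdigest()] = rec
    return store

store = archive(transform(extract(["reading-1", "reading-42"])), {})
print(len(store))  # → 2
```

    Content-derived identifiers make the archiving step idempotent: re-ingesting the same record simply overwrites the same key.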

    Application of the GeoUML Tools for the Production and Validation of Inspire Datasets

    The structure of INSPIRE datasets is oriented towards the exchange of data, not towards its storage and manipulation in a database; a data transformation is therefore required. This paper analyses the possibility of using, in this context, the tools developed by the SpatialDBGroup at Politecnico di Milano for creating and validating spatial databases. The scenario considered is the following:
    - an organisation (the data provider) wants to provide WFS and GML conformant to the INSPIRE specifications (services and data);
    - this organisation hosts geodata related to one or more INSPIRE themes in a spatial relational database, called here the Source Database;
    - in order to facilitate the production of INSPIRE-compliant GML data, the organisation implements a new "INSPIRE-structured" spatial database, called here the INSPIRE Database;
    - a Transformation Procedure is created which extracts the data from the Source Database and loads it into the INSPIRE Database;
    - the INSPIRE Database is validated, also using topological operators, in order to identify gaps in topological constraints.
    We assume that both the Source Database and the INSPIRE Database are SQL based and that their physical schemas have been generated by the GeoUML Catalogue tool from the corresponding conceptual schemas, called SCSOURCE and SCINSPIRE. In this scenario the availability of the conceptual schemas suggests different areas where the tools can provide great benefit:
    1. creation of the GeoUML specification SCINSPIRE, automatic generation of the corresponding physical SQL structure, and validation of the INSPIRE Database with respect to the specification;
    2. (semi-)automatic generation of the Transformation Procedure using a set of correspondence rules between elements of SCSOURCE and SCINSPIRE;
    3. automatic generation of the WFS configuration from SCINSPIRE.
    In this paper we describe the work which has already been done and the research directions we are following in order to address these points.
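    A Transformation Procedure of the kind described above is, at its core, a set of correspondence rules mapping source attributes onto INSPIRE-structured ones. The toy below uses SQLite in place of a spatial RDBMS, and every schema, table and identifier name is invented for illustration; it is not the GeoUML tooling itself.

```python
# Toy Transformation Procedure: copy rows from a "Source Database" table
# into an "INSPIRE-structured" table, applying one correspondence rule
# per attribute. SQLite stands in for the spatial database, and all
# names (src_roads, tn_road_link, 'IT.example.') are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Source Database physical schema (as if generated from SCSOURCE)
cur.execute("CREATE TABLE src_roads (road_id INTEGER, road_name TEXT, geom TEXT)")
cur.executemany("INSERT INTO src_roads VALUES (?, ?, ?)",
                [(1, "Via Roma", "LINESTRING(0 0, 1 1)"),
                 (2, "Corso Milano", "LINESTRING(1 1, 2 0)")])

# INSPIRE Database physical schema (as if generated from SCINSPIRE)
cur.execute("""CREATE TABLE tn_road_link
               (inspire_id TEXT, geographical_name TEXT, centreline_geometry TEXT)""")

# The Transformation Procedure: one correspondence rule per attribute,
# including construction of a namespaced INSPIRE identifier
cur.execute("""INSERT INTO tn_road_link
               SELECT 'IT.example.' || road_id, road_name, geom FROM src_roads""")

print(cur.execute("SELECT COUNT(*) FROM tn_road_link").fetchone()[0])  # → 2
```

    In the real scenario the validation step would then run against the INSPIRE Database, checking the topological constraints that plain SQL cannot express.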

    Theory and Practice of Data Citation

    Citations are the cornerstone of knowledge propagation and the primary means of assessing the quality of research, as well as of directing investments in science. Science is increasingly becoming "data-intensive": large volumes of data are collected and analyzed to discover complex patterns through simulations and experiments, and most scientific reference works have been replaced by online curated datasets. Yet, given a dataset, there is no quantitative, consistent and established way of knowing how it has been used over time, who contributed to its curation, what results it has yielded, or what value it has. The development of a theory and practice of data citation is fundamental for treating data as first-class research objects with the same relevance and centrality as traditional scientific products. Many works in recent years have discussed data citation from different viewpoints: illustrating why data citation is needed, defining principles and outlining recommendations for data-citation systems, and providing computational methods for addressing specific issues of data citation. The current panorama is many-faceted, and an overall view that brings together the diverse aspects of this topic is still missing. Therefore, this paper aims to describe the lay of the land for data citation, both from the theoretical (the why and what) and the practical (the how) angle.
    Comment: 24 pages, 2 tables, pre-print accepted in Journal of the Association for Information Science and Technology (JASIST), 201