566 research outputs found

    D.2.1.2 First integrated Grid infrastructure

    COLEG: Collaborative Learning Environment within Grid

    The principal function of CSCL environments is to provide their various users (students, teachers, tutors) with the best activities and the best tools at the best time, according to their needs. If a CSCL system is a collection of activities or learning processes, its functionality can be decomposed into a number of autonomous functions that can then be carried out separately, as autonomous applications, using Web/Grid service technology. Emerging Grid-based technologies are increasingly being adopted to improve education and provide better services for learning. These services are offered to students who, regardless of their computer systems, can collaborate to improve their cognitive and social skills. This article presents COLEG (COllaborative Learning Environment within Grid), which aims to employ the capacities offered by the Grid to give the various actors the full power of learning, collaboration and communication in an adaptable, heterogeneous and dynamic setting.
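
    As a rough illustration of the decomposition described above, the sketch below registers a learning-activity selector as an autonomous function that could later be exposed as a Web/Grid service. All names are hypothetical; this is not COLEG's actual interface.

        # Minimal sketch: CSCL functionality decomposed into autonomous,
        # separately deployable functions (hypothetical names).
        from dataclasses import dataclass
        from typing import Callable, Dict

        @dataclass
        class Activity:
            name: str
            tool: str      # tool recommended for the activity
            audience: str  # "student", "teacher", or "tutor"

        # Each function is registered independently, so each entry could
        # later be deployed as a separate Web/Grid service.
        SERVICES: Dict[str, Callable[[str], Activity]] = {}

        def service(name: str):
            def register(fn: Callable[[str], Activity]):
                SERVICES[name] = fn
                return fn
            return register

        @service("recommend_activity")
        def recommend_activity(user_role: str) -> Activity:
            # Toy stand-in for "the best activity with the best tool
            # at the best time according to their needs".
            if user_role == "teacher":
                return Activity("author-quiz", "quiz-editor", user_role)
            return Activity("peer-discussion", "shared-whiteboard", user_role)

        print(SERVICES["recommend_activity"]("student"))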

    Developing Feature Types and Related Catalogues for the Marine Community - Lessons from the MOTIIVE project.

    MOTIIVE (Marine Overlays on Topography for annex II Valuation and Exploitation) is a project funded as a Specific Support Action (SSA) under the European Commission Framework Programme 6 (FP6) Aeronautics and Space Programme. The project started in September 2005 and finished in October 2007. The objective of MOTIIVE was to examine the methodology and cost benefit of using non-proprietary data standards. Specifically, it considered the harmonisation requirements between the INSPIRE data component ‘elevation’ (terrestrial, bathymetric and coastal) and INSPIRE marine thematic data for ‘sea regions’, ‘oceanic spatial features’ and ‘coastal zone management areas’. This was examined in the context of the requirements for interoperable information systems as required to realise the objectives of GMES for ‘global services’. The work draws particular conclusions on the realisation of Feature Types (ISO 19109) and Feature Type Catalogues (ISO 19110) in this respect. More information on MOTIIVE can be found at www.motiive.net.
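
    To make the ISO 19109/19110 terminology concrete, here is a toy sketch of a feature type and a feature type catalogue; the feature types shown are invented examples, not MOTIIVE deliverables.

        # Toy sketch of the ISO 19109/19110 idea: a feature type defines
        # named attributes; a catalogue collects the definitions so that
        # a community can share them. Example types are invented.
        from dataclasses import dataclass
        from typing import Dict

        @dataclass(frozen=True)
        class FeatureType:
            name: str
            definition: str
            attributes: Dict[str, str]  # attribute name -> value type

        catalogue: Dict[str, FeatureType] = {}

        def register(ft: FeatureType) -> None:
            catalogue[ft.name] = ft

        register(FeatureType(
            name="SeaRegion",
            definition="A named area of sea used for thematic reporting.",
            attributes={"geometry": "GM_Surface", "name": "CharacterString"},
        ))
        register(FeatureType(
            name="Elevation",
            definition="Terrestrial, bathymetric or coastal elevation coverage.",
            attributes={"extent": "GM_Envelope", "verticalDatum": "CharacterString"},
        ))
        print(sorted(catalogue))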

    Geospatial Web Services, Open Standards, and Advances in Interoperability: A Selected, Annotated Bibliography

    This paper is designed to help GIS librarians and information specialists follow developments in the emerging field of geospatial Web services (GWS). When built using open standards, GWS permit users to dynamically access, exchange, deliver, and process geospatial data and products on the World Wide Web, no matter what platform or protocol is used. Standards and specifications pertaining to geospatial ontologies, geospatial Web services, and interoperability are discussed in this bibliography. Finally, a selected, annotated list of bibliographic references by experts in the field is presented.

    IRS-III: A broker-based approach to semantic Web services

    A factor limiting the take-up of Web services is that all tasks associated with the creation of an application, for example finding, composing, and resolving mismatches between Web services, have to be carried out by a software developer. Semantic Web services, a combination of semantic Web and Web service technologies, promise to alleviate these problems. In this paper we describe IRS-III, a framework for creating and executing semantic Web services, which takes a semantic broker-based approach to mediating between service requesters and service providers. We describe the overall approach and the components of IRS-III from an ontological and architectural viewpoint. We then illustrate our approach through an application in the eGovernment domain.
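
    The broker pattern can be sketched minimally as below; the goal names and the API are illustrative assumptions, not the actual IRS-III interface.

        # Hypothetical sketch of broker-based mediation: providers register
        # capabilities against goals, requesters ask the broker to achieve
        # a goal, and the broker selects a matching provider.
        from typing import Callable, Dict

        class Broker:
            def __init__(self) -> None:
                self._providers: Dict[str, Callable[..., object]] = {}

            def register(self, goal: str, provider: Callable[..., object]) -> None:
                # A semantic broker would match goals against ontology
                # descriptions and mediate mismatches; this toy version
                # matches on the goal name alone.
                self._providers[goal] = provider

            def achieve(self, goal: str, **inputs: object) -> object:
                if goal not in self._providers:
                    raise LookupError(f"no provider registered for goal {goal!r}")
                return self._providers[goal](**inputs)

        # e.g. an eGovernment change-of-address service
        broker = Broker()
        broker.register("change-of-address",
                        lambda citizen, address: f"{citizen} moved to {address}")
        print(broker.achieve("change-of-address", citizen="Ada", address="10 Mill Ln"))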

    Towards optimising distributed data streaming graphs using parallel streams

    Modern scientific collaborations have opened up the opportunity of solving complex problems that involve multidisciplinary expertise and large-scale computational experiments. These experiments usually involve large amounts of data that are located in distributed data repositories running various software systems, and managed by different organisations. A common strategy to make the experiments more manageable is executing the processing steps as a workflow. In this paper, we look into the implementation of fine-grained data-flow between computational elements in a scientific workflow as streams. We model the distributed computation as a directed acyclic graph where the nodes represent the processing elements that incrementally implement specific subtasks. The processing elements are connected in a pipelined streaming manner, which allows task executions to overlap. We further optimise the execution by splitting pipelines across processes and by introducing extra parallel streams. We identify performance metrics and design a measurement tool to evaluate each enactment. We conducted experiments to evaluate our optimisation strategies with a real-world problem in the Life Sciences, EURExpress-II. The paper presents our distributed data-handling model, the optimisation and instrumentation strategies, and the evaluation experiments. We demonstrate linear speed-up and argue that this use of data streaming to enable both overlapped pipelining and parallelised enactment is a generally applicable optimisation strategy.
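
    A minimal sketch of the pipelined streaming idea follows (an illustration of the general technique, not the paper's implementation): processing elements run concurrently and exchange items through bounded queues, so executions of successive stages overlap.

        # Pipelined streaming: each processing element runs in its own
        # thread and streams results downstream through a bounded queue.
        import threading
        from queue import Queue

        SENTINEL = object()  # marks end of stream

        def element(fn, inbox: Queue, outbox: Queue) -> None:
            # Incrementally consume items and emit results downstream.
            while (item := inbox.get()) is not SENTINEL:
                outbox.put(fn(item))
            outbox.put(SENTINEL)

        def run_pipeline(source, *stages):
            queues = [Queue(maxsize=8) for _ in range(len(stages) + 1)]
            for fn, inbox, outbox in zip(stages, queues, queues[1:]):
                threading.Thread(target=element, args=(fn, inbox, outbox),
                                 daemon=True).start()
            for item in source:
                queues[0].put(item)
            queues[0].put(SENTINEL)
            while (result := queues[-1].get()) is not SENTINEL:
                yield result

        # Two overlapping stages; extra parallel streams would replicate
        # a slow stage and fan items out across the replicas.
        print(list(run_pipeline(range(5), lambda x: x * 2, lambda x: x + 1)))
        # -> [1, 3, 5, 7, 9]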

    Providing packages of relevant ATM information: An ontology-based approach

    ATM information providers publish reports and notifications of different types using standardized information exchange models. For a typical information user, e.g., an aircraft pilot, only a fraction of the published information is relevant for a particular task. Filtering out irrelevant information from different information sources is in itself a challenging task, yet it is only a first step in providing relevant information, with further challenges concerning maintenance, auditability, availability, integration, comprehensibility, and traceability. This paper presents the Semantic Container approach, which employs ontology-based faceted information filtering and allows for the packaging of filtered information and associated metadata in semantic containers, thus facilitating reuse of filtered information at different levels. The paper formally defines an abstract model of ontology-based information filtering and the structure of semantic containers, their composition, versioning, discovery, and replicated physical allocation. The paper further discusses different usage scenarios, the role of semantic containers in SWIM, an architecture for a semantic container management system, and a proof-of-concept prototype. Finally, the paper discusses a blockchain-based notary service to realize tamper-proof version histories for semantic containers.
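
    As an illustration only (not the paper's system), the sketch below bundles facet-filtered items together with the metadata describing the filter, so the filtered package can be reused and versioned.

        # Toy "semantic container": filtered items packaged with the
        # facet metadata that produced them. Field names are invented.
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class SemanticContainer:
            facets: Dict[str, str]  # e.g. {"region": "LOWW", "type": "NOTAM"}
            items: List[dict]
            version: int = 1

        def filter_by_facets(reports: List[dict], **facets: str) -> SemanticContainer:
            # Ontology-based filtering would reason over concept
            # hierarchies; this toy version matches facet values literally.
            matching = [r for r in reports
                        if all(r.get(k) == v for k, v in facets.items())]
            return SemanticContainer(facets=dict(facets), items=matching)

        reports = [
            {"type": "NOTAM", "region": "LOWW", "text": "runway 16 closed"},
            {"type": "METAR", "region": "EGLL", "text": "wind 240 at 12kt"},
        ]
        pilot_package = filter_by_facets(reports, type="NOTAM", region="LOWW")
        print(len(pilot_package.items), pilot_package.facets)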