
    Analysis of current middleware used in peer-to-peer and grid implementations for enhancement by catallactic mechanisms

    This deliverable describes the work done in task 3.1, Middleware analysis: Analysis of current middleware used in peer-to-peer and grid implementations for enhancement by catallactic mechanisms, from work package 3, Middleware Implementation. The document is divided into four parts: the introduction with application scenarios and middleware requirements, the CATNETS middleware architecture, an evaluation of existing middleware toolkits, and conclusions. The work defines requirements for grid and peer-to-peer middleware architectures and analyses their suitability for a prototypical implementation of the Catallaxy. A middleware architecture for implementing the Catallaxy in application-layer networks is presented.

    Proof-of-Concept Application - Annual Report Year 1

    In this document the Cat-COVITE application for use in the CATNETS project is introduced and motivated. Furthermore, an introduction to the catallactic middleware and Web Services Agreement (WS-Agreement) concepts is given as a basis for the future work. Requirements for the use of Cat-COVITE within catallactic systems are analysed. Finally, the integration of the Cat-COVITE application and the catallactic middleware is described.
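    To make the WS-Agreement concepts mentioned above a little more concrete, the following hypothetical Python sketch models the basic shape of an agreement (a context identifying the parties, service description terms, and guarantee terms with service-level objectives). The field names and example values are simplified assumptions for illustration only; they are not the WS-Agreement XML schema and not taken from the Cat-COVITE integration.

        # Illustrative sketch only: a simplified, hypothetical model of the
        # WS-Agreement structure (context, service description terms, guarantee
        # terms). It does not reproduce the actual WS-Agreement XML schema.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class GuaranteeTerm:
            name: str
            service_level_objective: str   # e.g. "response_time < 2s"
            penalty: str = ""              # consequence if the objective is violated

        @dataclass
        class ServiceDescriptionTerm:
            name: str
            description: str               # what the provider agrees to deliver

        @dataclass
        class Agreement:
            name: str
            initiator: str                 # agreement context: who requests the service
            responder: str                 # agreement context: who provides the service
            service_terms: List[ServiceDescriptionTerm] = field(default_factory=list)
            guarantee_terms: List[GuaranteeTerm] = field(default_factory=list)

        # Invented example: a query service offered under a response-time guarantee.
        offer = Agreement(
            name="covite-query-offer",
            initiator="application client",
            responder="catallactic middleware provider",
            service_terms=[ServiceDescriptionTerm("query", "execute a distributed search")],
            guarantee_terms=[GuaranteeTerm("latency", "response_time < 2s")],
        )
        print(offer.guarantee_terms[0].service_level_objective)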

    Development of a Grid Enabled Occupational Data Environment

    The GEODE project is developing user-oriented Grid-based services, accessible via a portal, for social scientists who require and use 'occupational information' within their research. There are many complexities associated with social scientists’ use of data on individual occupations. These arise, for example, from the availability of numerous alternative occupational classifications and the use of different occupational definitions across countries. This paper describes how the GEODE project is developing an online service which acts as a facility supporting access to numerous occupational information resources. This is achieved through an integrated Grid service which uses a Globus Toolkit 4 infrastructure and OGSA-DAI (Data Access and Integration) middleware to provide the necessary data indexing and matching services, accessed through a user-oriented front-end portal (using GridSphere). The paper discusses issues in the implementation and organization of these services.
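    To illustrate the kind of indexing and matching such a service performs, the following hypothetical Python sketch maps free-text job titles onto one occupational classification and then translates between two schemes via lookup tables. The codes, table contents, and function names are invented for illustration and are not taken from the GEODE services.

        # Hypothetical sketch of occupational-code matching, loosely in the spirit
        # of the indexing/matching services described above. All codes and
        # mappings below are invented for illustration.

        # Map free-text job titles to a (fictional) national classification.
        TITLE_TO_NATIONAL = {
            "primary school teacher": "2314",
            "software developer": "2512",
        }

        # Translate (fictional) national codes to a (fictional) international scheme.
        NATIONAL_TO_INTERNATIONAL = {
            "2314": "ISCO-233x",
            "2512": "ISCO-251x",
        }

        def match_occupation(job_title: str) -> dict:
            """Return the national and international codes for a job title, if known."""
            national = TITLE_TO_NATIONAL.get(job_title.strip().lower())
            international = NATIONAL_TO_INTERNATIONAL.get(national) if national else None
            return {"title": job_title, "national": national, "international": international}

        print(match_occupation("Software Developer"))
        # {'title': 'Software Developer', 'national': '2512', 'international': 'ISCO-251x'}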

    A Framework for Bioprofile Analysis over Grid

    An important trend in modern medicine is towards the individualisation of healthcare, tailoring care to the needs of the individual. This makes it possible, for example, to personalise diagnosis and treatment to improve outcomes. However, the benefits of this can only be fully realised if healthcare and ICT resources are exploited (e.g. to provide access to relevant data, analysis algorithms, knowledge and expertise). Potentially, grid can play an important role in this by allowing the sharing of resources and expertise to improve the quality of care. The integration of grid and the new concept of bioprofile represents a new topic in the healthgrid for the individualisation of healthcare. A bioprofile represents a personal dynamic "fingerprint" that fuses together a person's current and past bio-history, biopatterns and prognosis. It combines not just data, but also analysis and predictions of future or likely susceptibility to disease, such as brain diseases and cancer. The creation and use of bioprofiles require the support of a number of healthcare and ICT technologies and techniques, such as medical imaging, electrophysiology and related facilities, analysis tools, data storage and computation clusters. The need to share clinical data, storage and computation resources between different bioprofile centres creates not only local problems, but also global problems. Existing ICT technologies are inappropriate for bioprofiling because of the difficulties in the use and management of heterogeneous IT resources at different bioprofile centres. Grid, as an emerging resource-sharing concept, fulfils the needs of bioprofiling in several aspects, including the discovery, access, monitoring and allocation of distributed bioprofile databases, computation resources, bioprofile knowledge bases, etc. However, the challenge of how to integrate grid and bioprofile technologies in order to offer an advanced distributed bioprofile environment to support individualised healthcare remains.
    The aim of this project is to develop a framework for one of the key meta-level bioprofile applications: bioprofile analysis over grid to support individualised healthcare. Bioprofile analysis is a critical part of bioprofiling (i.e. the creation, use and update of bioprofiles). Analysis makes it possible, for example, to extract markers from data for diagnosis and to assess an individual's health status. The framework provides a basis for a "grid-based" solution to the challenge of "distributed bioprofile analysis" in bioprofiling. The main contributions of the thesis are fourfold:
    A. An architecture for bioprofile analysis over grid. The design of a suitable architecture is fundamental to the development of any ICT system. The architecture creates a means for the categorisation, determination and organisation of core grid components to support the development and use of grid for bioprofile analysis;
    B. A service model for bioprofile analysis over grid. The service model proposes a service design principle, a service architecture for bioprofile analysis over grid, and a distributed EEG analysis service model. The service design principle addresses the main service design considerations behind the service model, in the aspects of usability, flexibility, extensibility, reusability, etc. The service architecture identifies the main categories of services and outlines an approach to organising services to realise the functionalities required by distributed bioprofile analysis applications. The EEG analysis service model demonstrates the utilisation and development of services to enable bioprofile analysis over grid;
    C. Two grid test-beds and a practical implementation of EEG analysis over grid. The two grid test-beds, the BIOPATTERN grid and PlymGRID, are built on existing grid middleware tools. They provide essential experimental platforms for research in bioprofiling over grid. The work here demonstrates how resources, grid middleware and services can be utilised, organised and implemented to support distributed EEG analysis for the early detection of dementia. The distributed electroencephalography (EEG) analysis environment can be used to support a variety of research activities in EEG analysis;
    D. A scheme for organising multiple (heterogeneous) descriptions of individual grid entities for knowledge representation of grid. The scheme solves the compatibility and adaptability problems in managing heterogeneous descriptions (i.e. descriptions using different languages and schemas/ontologies) for the collaborative representation of a grid environment at different scales. It underpins the concept of bioprofile analysis over grid in the aspect of knowledge-based global coordination between the components of bioprofile analysis over grid.
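    As a purely illustrative sketch of the kind of marker extraction mentioned above, the snippet below computes the relative power in one EEG frequency band with NumPy. The band limits, threshold-free output, and synthetic signal are assumptions for illustration and do not reflect the project's actual analysis services.

        # Illustrative sketch only: extracting a simple spectral "marker"
        # (relative alpha-band power) from one EEG channel. The band limits and
        # the example signal are assumptions, not the project's pipeline.
        import numpy as np

        def relative_band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
            """Fraction of total spectral power that falls between lo and hi Hz."""
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
            power = np.abs(np.fft.rfft(signal)) ** 2
            band = (freqs >= lo) & (freqs <= hi)
            return float(power[band].sum() / power.sum())

        fs = 256.0                                  # sampling rate in Hz
        t = np.arange(0, 4.0, 1.0 / fs)             # 4 seconds of synthetic data
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise

        alpha = relative_band_power(eeg, fs, 8.0, 13.0)
        print(f"relative alpha power: {alpha:.2f}")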

    SIMDAT


    Grid-based semantic integration of heterogeneous data resources: implementation on a HealthGrid

    The semantic integration of geographically distributed and heterogeneous data resources still remains a key challenge in Grid infrastructures. Today's mainstream Grid technologies hold the promise to meet this challenge in a systematic manner, making data applications more scalable and manageable. The thesis conducts a thorough investigation of the problem, the state of the art, and the related technologies, and proposes an Architecture for Semantic Integration of Data Sources (ASIDS) addressing the semantic heterogeneity issue. It defines a simple mechanism for the interoperability of heterogeneous data sources in order to extract or discover information regardless of their different semantics. The constituent technologies of this architecture include the Globus Toolkit (GT4) and OGSA-DAI (Open Grid Services Architecture Data Access and Integration), alongside other web services technologies such as XML (Extensible Markup Language). To show this, the ASIDS architecture was implemented and tested in a realistic setting by building an exemplar application prototype on a HealthGrid (pilot implementation). The study followed an empirical research methodology and was informed by extensive literature surveys and a critical analysis of the relevant technologies and their synergies. The two literature reviews, together with the analysis of the technology background, have provided a good overview of the current Grid and HealthGrid landscape, produced some valuable taxonomies, explored new paths by integrating technologies, and, more importantly, illuminated the problem and guided the research process towards a promising solution. Yet the primary contribution of this research is an approach that uses contemporary Grid technologies for integrating heterogeneous data resources that have semantically different data fields (attributes). It has been practically demonstrated (using a prototype HealthGrid) that discovery in semantically integrated distributed data sources can be feasible using mainstream Grid technologies, which have been shown to have some significant advantages over non-Grid based approaches.
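    A minimal sketch of the underlying idea, assuming a hand-written mapping between source-specific attribute names and a shared target schema, is shown below: records from two sources with semantically different field names are renamed into one schema so that discovery can run over both. The sources, field names, and mapping are invented for illustration and are not the ASIDS implementation or its mechanism.

        # Hypothetical sketch: reconciling semantically different field names from
        # two data sources into a shared schema before querying. The mappings are
        # invented for illustration.
        FIELD_MAPS = {
            "hospital_a": {"pat_id": "patient_id", "dob": "birth_date", "dx": "diagnosis"},
            "clinic_b":   {"PatientNo": "patient_id", "BirthDate": "birth_date", "Diag": "diagnosis"},
        }

        def to_shared_schema(source: str, record: dict) -> dict:
            """Rename a record's fields according to the source's mapping."""
            mapping = FIELD_MAPS[source]
            return {mapping[k]: v for k, v in record.items() if k in mapping}

        rows_a = [{"pat_id": "A17", "dob": "1956-02-11", "dx": "dementia"}]
        rows_b = [{"PatientNo": "B03", "BirthDate": "1961-07-30", "Diag": "dementia"}]

        unified = [to_shared_schema("hospital_a", r) for r in rows_a] + \
                  [to_shared_schema("clinic_b", r) for r in rows_b]

        # Discovery across both sources now works on one set of attribute names.
        print([r for r in unified if r["diagnosis"] == "dementia"])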

    Supporting Quality of Service in Scientific Workflows

    While workflow management systems have been utilized in enterprises to support businesses for almost two decades, the use of workflows in scientific environments was fairly uncommon until recently. Nowadays, scientists use workflow systems to conduct scientific experiments, simulations, and distributed computations. However, most scientific workflow management systems have not been built using existing workflow technology; rather, they have been designed and developed from scratch. Due to the lack of generality of early scientific workflow systems, many domain-specific workflow systems have been developed. Generally speaking, those domain-specific approaches lack common acceptance and tool support and offer lower robustness compared to business workflow systems.
    In this thesis, the use of the industry standard BPEL, a workflow language for modeling business processes, is proposed for the modeling and execution of scientific workflows. Due to the widespread use of BPEL in enterprises, a number of stable and mature software products exist. The language is expressive (Turing-complete) and not restricted to specific applications. BPEL is well suited for the modeling of scientific workflows, but existing implementations of the standard lack important features that are necessary for the execution of scientific workflows. This work presents components that extend an existing implementation of the BPEL standard and eliminate the identified weaknesses. The components thus provide the technical basis for the use of BPEL in academia. The particular focus is on so-called non-functional (Quality of Service) requirements. These requirements include scalability, reliability (fault tolerance), data security, and the cost of executing a workflow. From a technical perspective, the workflow system must be able to interface with the middleware systems that are commonly used by the scientific workflow community to allow access to heterogeneous, distributed resources (especially Grid and Cloud resources). The major components cover exactly these requirements:
    Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.
    Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
    Cost-Aware, Data-Flow-Aware Scheduler: The majority of scientific workflow systems only take the performance and utilization of resources into account when scheduling workflow steps. The presented workflow system goes beyond that. By defining preference values for the weighting of cost against the anticipated workflow execution time, workflow users may influence the resource selection process (see the sketch after this list). The developed multiobjective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.
    Security Extensions: Because it supports various encryption, signature and authentication mechanisms (e.g., the Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data.
    Furthermore, this work identifies the need to equip workflow developers with workflow modeling tools that can be used intuitively. This dissertation presents two modeling tools that support users with different needs. The first tool, DAVO (Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO is a software tool that offers extensibility and customizability for different application domains. These features are used in the implementation of the second tool, the SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for the quick and intuitive development of BPEL workflows based on predefined components.
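    The cost-and-time weighting mentioned in the scheduler item above could look roughly like the following Python sketch, in which a single user-defined preference weight trades off estimated monetary cost against estimated execution time when picking a resource for a workflow step. The normalisation, scoring rule, and candidate resources are assumptions for illustration, not the scheduling algorithm actually developed in the thesis.

        # Illustrative sketch of a weighted cost/time resource-selection step, in
        # the spirit of the scheduler described above. The scoring rule and the
        # candidate resources are invented for illustration.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Resource:
            name: str
            est_time_s: float   # estimated execution time of the step on this resource
            est_cost: float     # estimated monetary cost of running the step there

        def pick_resource(candidates: List[Resource], cost_weight: float) -> Resource:
            """Choose a resource by a weighted sum of normalised cost and time.

            cost_weight = 1.0 optimises purely for cost, 0.0 purely for speed.
            """
            max_t = max(r.est_time_s for r in candidates)
            max_c = max(r.est_cost for r in candidates)
            def score(r: Resource) -> float:
                return cost_weight * (r.est_cost / max_c) + (1 - cost_weight) * (r.est_time_s / max_t)
            return min(candidates, key=score)

        pool = [Resource("cluster-node", 120.0, 0.0), Resource("cloud-vm", 45.0, 0.40)]
        print(pick_resource(pool, cost_weight=0.8).name)   # favours the free cluster node
        print(pick_resource(pool, cost_weight=0.1).name)   # favours the faster cloud VM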