
    A Semantic-Agent Framework for PaaS Interoperability

    Suchismita Hoare, Na Helian, and Nathan Baddoo, 'A Semantic-Agent Framework for PaaS Interoperability', in Proceedings of the IEEE International Conference on Cloud and Big Data Computing, Toulouse, France, 18-21 July 2016. DOI: 10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0126. © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Cloud Platform as a Service (PaaS) is poised for wider adoption by its relevant stakeholders, especially Cloud application developers. Despite this, the service model is still plagued by several adoption inhibitors, one of which is the lack of interoperability between the proprietary application infrastructure services of public PaaS solutions. Although there has been some progress on the general PaaS interoperability issue through solutions focused primarily on API compatibility and platform-agnostic application design models, interoperability specific to the differentiated services provided by existing public PaaS providers, and the resulting disparity owing to those services' semantics, has not yet been addressed effectively. The literature indicates that this dimension of PaaS interoperability is awaiting evolution in the state of the art. This paper proposes the initial system design of a PaaS interoperability (IntPaaS) framework, to be developed through the integration of semantic and agent technologies, to enable transparent interoperability between incompatible PaaS services. This will involve uniform description through semantic annotation of PaaS provider services using the OWL-S ontology, creating a knowledge base that enables software agents to automatically search for suitable services to support Cloud-based Greenfield application development. The rest of the paper discusses the identified research problem along with the proposed solution to address it.

    Submitted Version
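
    The core mechanism described above, semantic annotation of provider services against the OWL-S ontology to build a searchable knowledge base, can be pictured with a small RDF sketch. This is a minimal illustration rather than the paper's implementation: it assumes the Python rdflib package, and the IntPaaS namespace, service identifier and category property are hypothetical placeholders.

```python
# Minimal sketch of annotating a PaaS provider service with OWL-S style
# profile properties. Assumes the third-party rdflib package; all URIs,
# the service identifier and the "category" property are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

OWLS = Namespace("http://www.daml.org/services/owl-s/1.2/Profile.owl#")
INTPAAS = Namespace("http://example.org/intpaas#")        # hypothetical namespace

g = Graph()
svc = INTPAAS["ExampleDataStoreService"]                  # hypothetical service ID

g.add((svc, RDF.type, OWLS.Profile))
g.add((svc, OWLS.serviceName, Literal("Managed relational data store")))
g.add((svc, OWLS.textDescription,
       Literal("Database offered as a public PaaS application infrastructure service")))
g.add((svc, INTPAAS.category, Literal("data-store")))     # term agents could match on

print(g.serialize(format="turtle"))
```

    Agents querying such a knowledge base could then match a developer requirement (for example, "data-store") against the annotated profiles rather than against provider-specific API names.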

    Personalizable Service Discovery in Pervasive Systems

    Today, telecom providers face changing challenges. To stay ahead of the competition and provide market-leading offerings, carriers need to enable a global ecosystem of third-party independent application developers to deliver converged services. This is the aim of leveraging an open, standards-based service delivery platform. Identifying and coping with those challenges is the main target of the EU-funded project IST DAIDALOS II, and a central point in satisfying changing user needs is the provision of a well-working, user-friendly and personalized service discovery. This paper describes our work in the project on a middleware within a framework for pervasive service usage. We have designed an architecture for it that enables full transparency to the user, grants high compatibility and extensibility through a modular and pluggable design, and allows for interoperability with most known service discovery protocols. Our Multi-Protocol Service Discovery and the Four Phases Service Filtering concept, which enables personalization, should allow for the best possible results in service discovery.
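
    The four-phase filtering idea can be illustrated as a pipeline of successively narrower filters over the set of discovered services. The sketch below is only indicative: the phase names, the service record and the preference model are invented for illustration and do not reproduce the DAIDALOS II design.

```python
# Illustrative four-stage filtering pipeline over discovered services.
# The phase names and the data model are hypothetical, not the project's spec.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    attributes: dict = field(default_factory=dict)

def phase_syntactic(services, query):
    # Phase 1: keep services whose advertised name matches the query term.
    return [s for s in services if query.lower() in s.name.lower()]

def phase_context(services, location):
    # Phase 2: keep services reachable in the user's current context (location here).
    return [s for s in services if s.attributes.get("location") == location]

def phase_preferences(services, prefs):
    # Phase 3: keep services that satisfy hard, personalized user preferences.
    return [s for s in services
            if all(s.attributes.get(k) == v for k, v in prefs.items())]

def phase_ranking(services, key="quality"):
    # Phase 4: rank the remaining candidates so the best match comes first.
    return sorted(services, key=lambda s: s.attributes.get(key, 0), reverse=True)

def discover(services, query, location, prefs):
    services = phase_syntactic(services, query)
    services = phase_context(services, location)
    services = phase_preferences(services, prefs)
    return phase_ranking(services)
```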

    Grid service discovery with rough sets

    Copyright [2008] IEEE. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Brunel University's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

    The computational grid is evolving as a service-oriented computing infrastructure that facilitates resource sharing and large-scale problem solving over the Internet. Service discovery becomes an issue of vital importance in utilising grid facilities. This paper presents ROSSE, a Rough sets based search engine for grid service discovery. Building on rough set theory, ROSSE is novel in its capability to deal with uncertainty of properties when matching services. In this way, ROSSE can discover the services that are most relevant to a service query from a functional point of view. Since functionally matched services may have distinct non-functional properties related to Quality of Service (QoS), ROSSE introduces a QoS model to further filter matched services with their QoS values to maximise user satisfaction in service discovery. ROSSE is evaluated in terms of its accuracy and efficiency in discovery of computing services.
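
    The rough-set treatment of uncertain properties can be sketched by splitting candidate services into a lower approximation (definite matches) and an upper approximation (possible matches), then applying a QoS filter to the survivors. The property model, scoring and threshold below are hypothetical simplifications, not ROSSE's actual algorithm.

```python
# Toy illustration of rough-set style matching followed by QoS filtering.
# The data model and QoS handling are hypothetical simplifications.
def rough_match(query_props, services):
    """Split services into a lower approximation (every queried property is
    advertised and matches) and an upper approximation (nothing contradicts
    the query, but some queried properties are missing, i.e. uncertain)."""
    lower, upper = [], []
    for svc in services:
        advertised = svc["properties"]
        if all(advertised.get(p) == v for p, v in query_props.items()):
            lower.append(svc)
        elif all(p not in advertised or advertised[p] == v
                 for p, v in query_props.items()):
            upper.append(svc)
    return lower, upper

def qos_filter(candidates, min_qos):
    """Keep functionally matched services whose QoS score meets the user's floor,
    best first."""
    return sorted((s for s in candidates if s.get("qos", 0) >= min_qos),
                  key=lambda s: s["qos"], reverse=True)

services = [
    {"name": "solverA", "properties": {"cpu": "x86", "mpi": "yes"}, "qos": 0.9},
    {"name": "solverB", "properties": {"cpu": "x86"}, "qos": 0.7},  # 'mpi' uncertain
]
lower, upper = rough_match({"cpu": "x86", "mpi": "yes"}, services)
print(qos_filter(lower + upper, min_qos=0.8))   # only solverA survives
```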

    Incremental Integration of Microservices in Cloud Applications

    Microservices have recently appeared as a new architectural style that is native to the cloud. The high availability and agility of the cloud demand that organizations migrate to or design microservices, promoting the building of applications as a suite of small and cohesive services (microservices) that are independently developed, deployed and scaled. Current cloud development approaches do not support the incremental integration needed for microservice platforms, and the agility of getting new functionality out to customers is consequently affected by the lack of support for integration design and for the automation of development and deployment tasks. This paper presents an approach for the incremental integration of microservices that allows developers to specify and design microservice integration, and provides mechanisms with which to automatically obtain the implementation code for business logic and interoperation among microservices, along with deployment and architectural reconfiguration scripts specific to the cloud environment in which the microservice will be deployed.
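
    One way to picture the automation described above is a declarative integration specification from which deployment artefacts are generated. The sketch below is illustrative only: the specification format, service names and the Docker Compose target are assumptions, not the paper's tooling.

```python
# Sketch: generate a Docker Compose deployment fragment from a small,
# hypothetical integration specification. Real tooling would also emit
# interoperation code and cloud-specific reconfiguration scripts.
integration_spec = {
    "orders":  {"image": "shop/orders:1.2",  "depends_on": ["catalog"]},
    "catalog": {"image": "shop/catalog:2.0", "depends_on": []},
}

def to_compose(spec):
    lines = ["services:"]
    for name, cfg in spec.items():
        lines.append(f"  {name}:")
        lines.append(f"    image: {cfg['image']}")
        if cfg["depends_on"]:
            lines.append("    depends_on:")
            lines.extend(f"      - {dep}" for dep in cfg["depends_on"])
    return "\n".join(lines)

print(to_compose(integration_spec))
```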

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to connote the one that exists between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
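
    The service-oriented perspective, with stakeholders as agents offering services to one another under service level agreements in a marketplace, can be rendered as a toy example. Everything below (names, SLA fields, selection rule) illustrates the general idea only and is not the paper's conceptual architecture.

```python
# Toy marketplace in which stakeholder agents publish service offers with a
# simplified SLA, and a consumer agent picks one. All names and fields are
# hypothetical illustrations of the service-oriented, agent-based view.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str            # agent acting for a scientific stakeholder
    service: str             # e.g. "sequence-analysis"
    sla_max_latency_s: int   # simplified service level agreement term
    price: float

class Marketplace:
    def __init__(self):
        self.offers = []

    def publish(self, offer):
        self.offers.append(offer)

    def select(self, service, max_latency_s):
        """Return the cheapest offer that honours the requested SLA, if any."""
        ok = [o for o in self.offers
              if o.service == service and o.sla_max_latency_s <= max_latency_s]
        return min(ok, key=lambda o: o.price, default=None)

m = Marketplace()
m.publish(Offer("lab-A-agent", "sequence-analysis", 60, 2.0))
m.publish(Offer("lab-B-agent", "sequence-analysis", 30, 3.5))
print(m.select("sequence-analysis", max_latency_s=45))   # lab-B-agent's offer
```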

    A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework

    The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems make efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is ‘hidden’ from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data, and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semi-automated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduce manual curation efforts to periodically check for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (e.g. reagents, software tools). Although the ultimate goal of automation was to reduce the workload of the curators, it has resulted in valuable analytic by-products that address accessibility, use and citation of resources, and these can now be shared with resource owners and the larger scientific community.
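
    The update-checking part of such a pipeline can be sketched as a periodic job that hashes each registered resource page and flags changes for curator review. This is a minimal illustration only; the state file, URL handling and notion of "changed" are assumptions, and NIF's DISCO services are far richer than this.

```python
# Minimal sketch of automated update checking for registered resources:
# a resource is flagged when the hash of its landing page differs from the
# previously recorded one. The state file and workflow are hypothetical.
import hashlib
import json
import urllib.request
from pathlib import Path

STATE = Path("resource_hashes.json")        # hypothetical local state store

def check_for_update(resource_url):
    data = urllib.request.urlopen(resource_url, timeout=30).read()
    digest = hashlib.sha256(data).hexdigest()
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    changed = state.get(resource_url) != digest
    state[resource_url] = digest
    STATE.write_text(json.dumps(state, indent=2))
    return changed                          # True -> queue for curator review
```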

    Forum Session at the First International Conference on Service Oriented Computing (ICSOC03)

    The First International Conference on Service Oriented Computing (ICSOC) was held in Trento, December 15-18, 2003. The focus of the conference, Service Oriented Computing (SOC), is the new emerging paradigm for distributed computing and e-business processing that has evolved from object-oriented and component computing to enable the building of agile networks of collaborating business applications distributed within and across organizational boundaries. Of the 181 papers submitted to the ICSOC conference, 10 were selected for the forum session, which took place on December 16th, 2003. The papers were chosen based on their technical quality, originality, relevance to SOC, and for being best suited to a poster presentation or a demonstration. This technical report contains the 10 papers presented during the forum session at the ICSOC conference. In particular, the last two papers in the report were submitted as industrial papers.

    DYAMAND: dynamic, adaptive management of networks and devices

    Consumer devices are increasingly "smart" and hence offer services that can interwork with and/or be controlled by others. However, the full exploitation of the inherent opportunities this offers is hindered by a number of limitations. First of all, the interface towards the device might be vendor- and even device-specific, implying that extra effort is needed to support a specific device. Standardization efforts try to avoid this problem, but within a certain standard ecosystem the level of interoperability can vary (i.e. devices carrying the same standard logo are not necessarily interoperable). Secondly, different application domains (e.g. multimedia vs. energy management) today have their own standards, thus limiting trans-sector innovation because of the additional effort required to integrate devices from traditionally different domains into novel applications. In this paper, we discuss the basic components of current so-called service discovery protocols (SDPs) and present our DYAMAND (DYnamic, Adaptive MAnagement of Networks and Devices) framework. We position this framework as a middleware layer between applications and discoverable/controllable devices, and hence aim to provide the necessary tool to overcome the (intra- and inter-domain) interoperability gaps previously sketched. Thus, we believe it can act as a catalyst enabling trans-sector innovation.
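
    The middleware idea, one plugin per service discovery protocol behind a single application-facing interface, can be sketched as below. The plugin names and device model are invented for illustration; DYAMAND's actual API is not reproduced here.

```python
# Sketch of a plugin-style discovery layer: one plugin per service discovery
# protocol, aggregated behind one interface. Names and the device model are
# hypothetical; this is not DYAMAND's real API.
from abc import ABC, abstractmethod

class DiscoveryPlugin(ABC):
    @abstractmethod
    def discover(self):
        """Return devices found via one specific protocol (e.g. UPnP, mDNS)."""

class FakeUPnPPlugin(DiscoveryPlugin):
    def discover(self):
        # A real plugin would send SSDP searches; this stub returns a fixed result.
        return [{"name": "media-renderer", "protocol": "upnp"}]

class DiscoveryManager:
    """Hides protocol specifics: applications see one merged device list."""
    def __init__(self, plugins):
        self.plugins = plugins

    def discover_all(self):
        devices = []
        for plugin in self.plugins:
            devices.extend(plugin.discover())
        return devices

manager = DiscoveryManager([FakeUPnPPlugin()])
print(manager.discover_all())
```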