
    THE USE OF STANDARDS IN HELIO

    HELIO [8] is a project funded under the FP7 programme for the discovery and analysis of heliophysics data. During its development, standards and common frameworks were adopted in three main areas of the project: query services, processing services, and the security infrastructure. After a first, proprietary implementation of the security service, it was proposed to move it to a standard security framework in order to simplify the enforcement of security across the different sites. As the HELIO front end is built with Spring and the TAVERNA server (the HELIO workflow engine) has a security framework compatible with Spring, it was decided to move the CIS to Spring Security [2]. HELIO has two different processing services: one is a generic processing service called HELIO Processing Services (HPS); the other, the Context Service (CTX), runs specific IDL procedures. The CTX implements the UWS [4] interface from the IVOA [5], a standard interface for job submission used in the helio- and astrophysics communities. In its final release, the HPS will expose a UWS-compliant interface. Finally, some of the HELIO services perform queries; to simplify the implementation and use of these services, a single query interface (the HELIO Query Interface) has been designed for all of them. The use of these solutions for security, execution, and query allows for an easier implementation of the original HELIO architecture and a simpler deployment of the services.
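    The UWS interface mentioned above follows an asynchronous create/run/poll cycle. The following is a minimal client sketch of that cycle; the service URL and the job parameter name are placeholders, not values taken from the abstract.

```python
import time
import requests

# Hypothetical base URL of a UWS-compliant job list (e.g. the CTX service);
# the real HELIO endpoint and parameter names are assumptions.
BASE = "https://example.org/ctx/jobs"

def submit_and_wait(params, poll_interval=5.0):
    """Create a UWS job, start it, and poll its phase until it reaches a terminal state."""
    # Creating a job: a POST to the job list; UWS services answer with a
    # redirect pointing at the newly created job resource.
    resp = requests.post(BASE, data=params, allow_redirects=False)
    resp.raise_for_status()
    job_url = resp.headers["Location"]

    # Move the job from PENDING to EXECUTING.
    requests.post(f"{job_url}/phase", data={"PHASE": "RUN"}).raise_for_status()

    # Poll the phase resource until the job finishes.
    while True:
        phase = requests.get(f"{job_url}/phase").text.strip()
        if phase in ("COMPLETED", "ERROR", "ABORTED"):
            return phase, f"{job_url}/results"
        time.sleep(poll_interval)

if __name__ == "__main__":
    # "PROCEDURE" is a placeholder parameter name used only for illustration.
    phase, results_url = submit_and_wait({"PROCEDURE": "plot_goes"})
    print(phase, results_url)
```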

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or certain calculations performed, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources to serve a large number of customers, using a multi-tenant multiplexing model to offer on-demand self-service over broad network access. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art in cloud computing security and subsequently addressing the problem of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment.
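    One ingredient of such a trusted launch is checking that the VM image offered by the provider matches a reference measurement held by the client. The sketch below illustrates only that image-integrity check under simplified assumptions; the thesis's full protocol additionally attests the host through the TPM, which is omitted here, and the file names are placeholders.

```python
import hashlib
import hmac

def image_digest(path, chunk_size=1 << 20):
    """SHA-256 digest of a VM image file, streamed in chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path, expected_digest):
    """Compare the image digest against the client-held reference in constant time."""
    return hmac.compare_digest(image_digest(path), expected_digest)

if __name__ == "__main__":
    # 'expected' would come from the CS client's own records, established when the
    # image was prepared; both the value and the path below are placeholders.
    expected = "0" * 64
    verdict = verify_image("generic-vm.img", expected)
    print("launch allowed" if verdict else "launch refused")
```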

    Flexible programmable networking: A reflective, component-based approach

    The need for programmability and adaptability in networking systems is becoming increasingly important. More specifically, the challenge lies in the ability to add services rapidly, and to deploy, configure and reconfigure them as easily as possible. Such demand is creating a considerable shift in the way networks are expected to operate in the future. This is the main aim of the programmable networking research community, and in our project we are investigating a component-based approach to the structuring of programmable networking software. Our intention is to apply the notion of components, component frameworks and reflection ubiquitously, thus accommodating all the different elements that comprise a programmable networking system.
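    As an illustration of the general idea (not the project's actual component model, which the abstract does not detail), a reflective component framework lets components be wired through named receptacles, inspected, and re-bound at runtime. All names below are placeholders.

```python
class Component:
    def __init__(self, name, impl):
        self.name = name
        self.impl = impl      # object implementing the component's service
        self.bindings = {}    # receptacle name -> connected Component

    def bind(self, receptacle, other):
        """Connect, or later reconfigure, a receptacle to point at another component."""
        self.bindings[receptacle] = other

    def introspect(self):
        """Reflective view of what the component provides and how it is wired."""
        return {
            "provides": [m for m in dir(self.impl) if not m.startswith("_")],
            "bound_to": {r: c.name for r, c in self.bindings.items()},
        }

# Placeholder forwarding implementations used only for the demo.
class PlainForwarder:
    def forward(self, packet):
        return packet

class TruncatingForwarder:
    def forward(self, packet):
        return packet[:16]  # stand-in for some alternative processing step

if __name__ == "__main__":
    router = Component("router", object())
    router.bind("next_hop", Component("forwarder", PlainForwarder()))
    # Runtime reconfiguration: swap in a different forwarding component.
    router.bind("next_hop", Component("forwarder-v2", TruncatingForwarder()))
    print(router.introspect())
```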

    A look at cloud architecture interoperability through standards

    Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues: how components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must adopt universal strategies regarding standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level. The corresponding modelling standards and integration solutions are analysed.

    A peer-to-peer infrastructure for resilient web services

    This work is funded by GR/M78403 "Supporting Internet Computation in Arbitrary Geographical Locations" and GR/R51872 "Reflective Application Framework for Distributed Architectures", and by Nuffield Grant URB/01597/G "Peer-to-Peer Infrastructure for Autonomic Storage Architectures". This paper describes an infrastructure for the deployment and use of Web Services that are resilient to the failure of the nodes that host those services. The infrastructure presents a single interface that provides mechanisms for users to publish services and to find hosted services. The infrastructure supports the autonomic deployment of services and the brokerage of hosts on which services may be deployed. Once deployed, services are autonomically managed in a number of aspects including load balancing, availability, failure detection and recovery, and lifetime management. Services are published and deployed with associated metadata describing the service type. This same metadata may be used subsequently by interested parties to discover services. The infrastructure uses peer-to-peer (P2P) overlay technologies to abstract over the underlying network to deploy and locate instances of those services. It takes advantage of the P2P network to replicate directory services used to locate service instances (for using a service), Service Hosts (for deployment of services) and Autonomic Managers which manage the deployed services. The P2P overlay network is itself constructed using novel Web Services-based middleware and a variation of the Chord P2P protocol, which is self-managing.
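    To make the overlay-based lookup concrete, the sketch below shows a much-simplified Chord-style lookup: node identifiers and service names are hashed onto the same ring, and a query is answered by the first node clockwise from the key. This is illustrative only, it does not reproduce the paper's Web Services middleware or its Chord variant, and the node names are placeholders.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # size of the identifier ring (real deployments use far larger rings)

def ring_id(value):
    """Hash a node name or service name onto the identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % M

def lookup(service_name, node_names):
    """Return the node whose identifier is the successor of the service key."""
    key = ring_id(service_name)
    ring = sorted((ring_id(n), n) for n in node_names)
    ids = [i for i, _ in ring]
    idx = bisect_left(ids, key) % len(ring)   # wrap around past the highest id
    return ring[idx][1]

if __name__ == "__main__":
    nodes = ["host-a.example.org", "host-b.example.org", "host-c.example.org"]
    print(lookup("resilient-weather-service", nodes))
```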