5,738 research outputs found

    Phenomenology Tools on Cloud Infrastructures using OpenStack

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of "real" physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations. Comment: 25 pages, 12 figures; information on memory usage included, as well as minor modifications. Version to appear in EPJ.
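
    As a rough illustration of the kind of automated provisioning this abstract describes, the sketch below boots a worker VM on an OpenStack cloud with the openstacksdk Python library and passes a cloud-init script that would install the phenomenology toolchain. The cloud name, image, flavor, network, and script file are placeholders, not values from the paper.

```python
# Minimal sketch: boot an OpenStack VM to host phenomenology tools.
# Assumes a clouds.yaml entry named "pheno-cloud" and existing image,
# flavor, and network; all names here are illustrative placeholders.
import base64
import openstack

conn = openstack.connect(cloud="pheno-cloud")

image = conn.compute.find_image("ubuntu-22.04")      # base OS image
flavor = conn.compute.find_flavor("m1.medium")       # CPU/RAM size
network = conn.network.find_network("private")       # tenant network

# cloud-init user data automating installation of the codes/tools
with open("install_pheno_tools.sh", "rb") as f:
    user_data = base64.b64encode(f.read()).decode()

server = conn.compute.create_server(
    name="pheno-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    user_data=user_data,
)
server = conn.compute.wait_for_server(server)        # block until ACTIVE
print(server.id, server.status)
```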

    NFV Based Gateways for Virtualized Wireless Sensors Networks: A Case Study

    Virtualization enables the sharing of the same wireless sensor network (WSN) by multiple applications. However, in heterogeneous environments, virtualized wireless sensor networks (VWSNs) raise new challenges, such as the need for on-the-fly, dynamic, elastic and scalable provisioning of gateways. Network Functions Virtualization (NFV) is an emerging paradigm that can certainly aid in tackling these new challenges. It leverages standard virtualization technology to consolidate special-purpose network elements on top of commodity hardware. This article presents a case study on NFV-based gateways for VWSNs. In the study, a VWSN gateway provider operates and manages an NFV-based infrastructure. We use two different brands of wireless sensors. The NFV infrastructure makes possible the dynamic, elastic and scalable deployment of gateway modules in this heterogeneous VWSN environment. The prototype, built with OpenStack as the platform, is described.
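
    The abstract does not give implementation details, but the on-demand, per-brand gateway deployment it describes could look roughly like the sketch below, which boots a brand-specific gateway module as a VM on an OpenStack-managed NFV infrastructure. The image, flavor, and network names, the brand-to-image mapping, and the deploy_gateway helper are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: deploy a VWSN gateway module for one sensor brand
# as a VM on an NFV infrastructure managed with OpenStack.
import openstack

GATEWAY_IMAGES = {
    "brand_a": "vwsn-gateway-brand-a",   # gateway image for sensor brand A
    "brand_b": "vwsn-gateway-brand-b",   # gateway image for sensor brand B
}

def deploy_gateway(conn, sensor_brand: str, app_id: str):
    """Spin up a gateway VM for one sensor brand, on demand."""
    image = conn.compute.find_image(GATEWAY_IMAGES[sensor_brand])
    flavor = conn.compute.find_flavor("m1.small")
    net = conn.network.find_network("vwsn-mgmt")
    server = conn.compute.create_server(
        name=f"gw-{sensor_brand}-{app_id}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": net.id}],
    )
    return conn.compute.wait_for_server(server)

if __name__ == "__main__":
    conn = openstack.connect(cloud="vwsn-cloud")
    deploy_gateway(conn, "brand_a", "app42")
```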

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or calculations made, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers through a multi-tenant multiplexing model that offers on-demand self-service over a broad network. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure for generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment.
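
    The abstract only outlines the protocol, but its core idea, checking host and image integrity before allowing a launch, can be shown in a simplified sketch. The expected PCR values, the quote handling, and the helper names below are assumptions for illustration; they are not the thesis's actual message formats or protocol steps.

```python
# Simplified illustration of a trusted-launch check (not the thesis protocol):
# compare TPM-reported platform measurements (PCRs) of the candidate host and
# the hash of the VM image against known-good values before launching.
# Verification of the TPM quote signature is elided for brevity.
import hashlib

EXPECTED_PCRS = {0: "a3f1...", 7: "9c04..."}   # known-good host measurements (placeholders)
EXPECTED_IMAGE_SHA256 = "d2c5..."              # digest of the approved VM image (placeholder)

def host_is_trusted(reported_pcrs: dict) -> bool:
    """True if every expected PCR matches what the host's TPM reported."""
    return all(reported_pcrs.get(i) == v for i, v in EXPECTED_PCRS.items())

def image_is_untampered(image_bytes: bytes) -> bool:
    """True if the VM image hash matches the client's expected digest."""
    return hashlib.sha256(image_bytes).hexdigest() == EXPECTED_IMAGE_SHA256

def allow_launch(reported_pcrs: dict, image_bytes: bytes) -> bool:
    # In a real protocol the PCR quote would be signed by the TPM's
    # attestation key and verified against a trusted authority first.
    return host_is_trusted(reported_pcrs) and image_is_untampered(image_bytes)
```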

    HIL: designing an exokernel for the data center

    We propose a new exokernel-like layer to allow mutually untrusting physically deployed services to efficiently share the resources of a data center. We believe that such a layer offers not only efficiency gains, but may also enable new economic models, new applications, and new security-sensitive uses. A prototype (currently in active use) demonstrates that the proposed layer is viable, and can support a variety of existing provisioning tools and use cases. Partial support for this work was provided by the MassTech Collaborative Research Matching Grant Program, National Science Foundation awards 1347525 and 1149232, as well as several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.or
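
    To make the idea of such a layer more concrete, here is a hypothetical client sketch in which a tenant allocates bare-metal nodes and an isolated network and then hands the nodes to its own provisioning tool. The REST endpoints, field names, and base URL are invented for illustration and are not HIL's actual API.

```python
# Hypothetical client for an exokernel-like data-center isolation layer.
import requests

BASE = "http://isolation-layer.example.org/api"   # placeholder endpoint

def allocate(project: str, n_nodes: int):
    # Reserve nodes for this project; mutually untrusting tenants each
    # get exclusive control over their own allocation.
    nodes = [requests.post(f"{BASE}/projects/{project}/nodes").json()
             for _ in range(n_nodes)]
    # Create an isolated layer-2 network and attach every node to it,
    # after which any existing provisioning tool can image the nodes.
    net = requests.post(f"{BASE}/projects/{project}/networks",
                        json={"name": f"{project}-net"}).json()
    for node in nodes:
        requests.post(f"{BASE}/nodes/{node['id']}/attach",
                      json={"network": net["id"]})
    return nodes, net
```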

    Scaling Virtualized Smartphone Images in the Cloud

    One of the goals of this Bachelor's thesis was to deploy the Android-x86 smartphone platform in a cloud environment and to find out whether the chosen instance type is sufficient for running a virtualized smartphone platform and how much load it can bear. The work used an Amazon M1 Small instance, which was sufficient to deploy the virtualized Android platform but performed worse than the mobile phone on which the tests were run. The M1 Medium instance type was more suitable and showed better results than the phone. Load tests were carried out with the Tsung tool to see how many concurrent users an instance can handle; for these tests a Tomcat server was installed on the Dalvik instance. After testing a single instance, we added Elastic Load Balancing and the Amazon Auto Scaling tool. The former distributed the load between instances; the Auto Scaling tool was used to apply horizontal scaling to our Android-x86 instances. When CPU usage stayed above 60% for longer than one minute, an instance identical to the previous one was created and subsequent load was sent to it, and this procedure was repeated as needed up to a maximum of ten instances. Our implementation had setbacks: the Elastic Load Balancer timed out after 60 seconds, so we did not receive responses to all of the requests sent. Writing and compiling the file sent to the server were expensive operations and therefore did not all finish within 60 seconds. The tests run with the Load Balancer did not give us enough data to conclude whether the virtualized Android smartphone platform scales well or poorly.

    In this thesis we deployed a smartphone image in an Amazon EC2 instance and ran stress tests on it to find out how many users one instance can bear and how scalable it is. We measured how long a method takes to run on a physical Android device and in a cloud instance. We deployed CyanogenMod and Dalvik for a single instance and used Tsung for stress testing. For those tests we also set up a Tomcat server on the Dalvik instance that would take an incoming file, compile it with Java, wrap its class file into dex, a Dalvik executable file, and execute it with Dalvik. Three instances formed a Tsung cluster that sent load to a Dalvik Virtual Machine instance. For scaling we used the Amazon Auto Scaling tool and an Elastic Load Balancer that divided the incoming load between the instances.
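
    The thesis used the Amazon tooling of its time, but the scaling rule it describes (add one identical instance when average CPU exceeds 60% for a minute, up to ten instances) can be expressed roughly as follows with today's boto3 SDK. The group name, policy name, and alarm name are placeholders.

```python
# Sketch of the scale-out rule described above, expressed with boto3.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

GROUP = "android-x86-group"   # Auto Scaling group of Android-x86 instances

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,            # add one identical instance
)

cloudwatch.put_metric_alarm(
    AlarmName="android-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
    Period=60,                      # one-minute evaluation window
    EvaluationPeriods=1,
    Threshold=60.0,                 # 60% CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
# The group's MaxSize (set when the group is created) caps growth at 10.
```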

    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables.
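
    To make the amortization claim concrete, here is a back-of-the-envelope calculation under assumed numbers: a 100 KB decoder (within the 30-130 KB range quoted above) shared by archives of different sizes. The file counts are illustrative, not from the paper.

```python
# Back-of-the-envelope amortization of one archived decoder.
DECODER_KB = 100   # assumed decoder size, within the quoted 30-130 KB range

for files_sharing_codec in (10, 1_000, 100_000):
    per_file = DECODER_KB / files_sharing_codec
    print(f"{files_sharing_codec:>7} files -> {per_file:.3f} KB of decoder per file")

# e.g. 1,000 archived files sharing one codec cost only 0.1 KB of decoder each.
```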