37 research outputs found

    RAMP: RDMA Migration Platform

    Remote Direct Memory Access (RDMA) can be used to implement a shared-storage abstraction or a shared-nothing abstraction for distributed applications. We argue that the shared-storage abstraction is overkill for loosely coupled applications and that the shared-nothing abstraction does not leverage all the benefits of RDMA. In this thesis, we propose an alternative abstraction for such applications based on a shared-on-demand architecture, and present the RDMA Migration Platform (RAMP). RAMP is a lightweight coordination service for building loosely coupled distributed applications. This thesis describes the RAMP system, its programming model, and its operations, and evaluates the performance of RAMP using microbenchmarks. Furthermore, we illustrate RAMP's load balancing capabilities with a case study of a loosely coupled application that uses RAMP to rebalance a partition skew under load.
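
    As an illustrative aside, the sketch below models the shared-on-demand idea in Python: a region of memory has exactly one owner at a time, and ownership (and, in RAMP, the data itself) migrates to whichever worker acquires it. The Region class and its acquire/release methods are invented for this example and are not RAMP's actual API.

        import threading

        class Region:
            """A memory region owned by exactly one worker at a time."""
            def __init__(self, data: bytearray):
                self.data = data
                self.owner = None
                self._lock = threading.Lock()

            def acquire(self, worker: str) -> bytearray:
                # In RAMP the region's memory would migrate to `worker` over
                # RDMA on demand; here we only model exclusive ownership.
                self._lock.acquire()
                self.owner = worker
                return self.data

            def release(self) -> None:
                self.owner = None
                self._lock.release()

        region = Region(bytearray(b"partition-0"))
        buf = region.acquire("worker-a")  # worker-a now owns the region
        buf[:0] = b"hot:"                 # mutate while owned
        region.release()                  # region may now migrate elsewhere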

    From Cloud to Edge: Seamless Software Migration at the Era of the Web of Things

    This work was supported by INAIL within the BRIC/2018 framework, ID 11, Project MAC4PRO ("Smart maintenance of industrial plants and civil structures via innovative monitoring technologies and prognostic approaches"). The Web of Things (WoT) standard recently promoted by the W3C constitutes a promising approach to devising interoperable IoT systems able to cope with the heterogeneity of software platforms and devices. The WoT architecture envisages interconnected IoT scenarios characterized by a multitude of Web Things (WTs) that interact according to well-defined software interfaces; at the same time, it assumes static allocations of WTs to hosting devices, and it does not cope with the intrinsic dynamicity of IoT environments in terms of time-varying network and computational loads. In this paper, we extend the WoT paradigm for cloud-edge continuum deployments, hence supporting dynamic orchestration and mobility of WTs among the available computational resources. Unlike state-of-the-art Mobile Edge Computing (MEC) approaches, we heavily exploit the W3C WoT, and specifically its capability to standardize the software interfaces of the WTs, in order to propose the concept of a Migratable WoT (M-WoT), in which WTs are seamlessly allocated to hosts according to their dynamic interactions. Three main contributions are proposed in this paper. First, we describe the architecture of the M-WoT framework, focusing on the stateful migration of WTs and on the management of the WT handoff process. Second, we rigorously formulate the WT allocation as a multi-objective optimization problem, and propose a graph-based heuristic. Third, we describe a container-based implementation of M-WoT and a twofold evaluation, through which we assess the performance of the proposed migration policy in a distributed edge computing setup and in a real-world IoT monitoring scenario. (Aguzzi C.; Gigli L.; Sciullo L.; Trotta A.; Di Felice M.)
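
    As a hedged illustration of the allocation problem, the following toy Python heuristic greedily co-locates Web Things that interact heavily, subject to per-host capacity. The interaction weights, capacities, and greedy rule are invented for this sketch; the paper formulates a rigorous multi-objective optimization with a graph-based heuristic.

        # Toy greedy placement in the spirit of M-WoT's allocation problem:
        # co-locate Web Things that interact heavily, within host capacity.
        interactions = {("wt1", "wt2"): 10, ("wt2", "wt3"): 1, ("wt1", "wt3"): 4}
        capacity = {"host-a": 2, "host-b": 2}

        placement = {}
        # Visit WT pairs in descending interaction weight, preferring the
        # host already holding the peer so heavy edges stay local.
        for (a, b), _w in sorted(interactions.items(), key=lambda kv: -kv[1]):
            for wt in (a, b):
                if wt in placement:
                    continue
                peer = b if wt == a else a
                preferred = placement.get(peer)
                hosts = [preferred] if preferred else []
                hosts += [h for h in capacity if h != preferred]
                for h in hosts:
                    if h and capacity[h] > 0:
                        placement[wt] = h
                        capacity[h] -= 1
                        break

        print(placement)  # {'wt1': 'host-a', 'wt2': 'host-a', 'wt3': 'host-b'}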

    Migration of networks

    Master's thesis in Informatics Engineering, Universidade de Lisboa, Faculdade de Ciências, 2021. The way computational resources are managed, specifically those in big data centers, has been evolving in the last few years. One of the big stepping stones for this was the emergence of server virtualization technologies that, given their ability to decouple software from the hardware, allowed big data center operators to rent out their resources, which, in turn, represented an interesting business opportunity for both the operators and their potential customers. This new concept of renting computational resources is called cloud computing. Furthermore, with the possibility that later arose of live-migrating virtual machines, be it by customer request (for example, to move a service closer to the target consumer) or by provider decision (for example, to carry out scheduled rack maintenance without downtime), this new paradigm presented strong arguments in comparison with traditional hosting solutions. Today, most cloud applications are of considerable size and complexity. This complexity results in a strong dependency between the system elements and the communication infrastructure that lies underneath. This strong network dependency greatly limits the flexibility and mobility of the virtual machines (VMs). The dependency is mainly due to the reduced flexibility of current network management and control, turning the VM migration process into a long and error-prone procedure. From a network's perspective, however, software-defined networks (SDNs) [34] provide tools and mechanisms that can go a long way toward mitigating this limitation. SDN proposes the separation of the forwarding infrastructure from the control plane as a way to tackle the flexibility problem. Recently, several network virtualization solutions were proposed (e.g., VMware NSX [5], Microsoft AccelNet [21] and Google Andromeda [2]), all built on the logical centralization offered by an SDN. However, while allowing for network virtualization, none of these platforms addresses the problem of migrating the virtual networks, which limits their functionality. The goal of this dissertation is to implement and evaluate network migration solutions using SDNs. These solutions should allow for the migration of a network element (a virtual switch), chosen by the user, transparently, both for the services that are actively using the network and for the SDN applications that control the network. The challenge is to migrate the virtual element's state in a consistent manner, whilst not affecting the normal operation of the network. With that in mind, we implemented and evaluated three different migration approaches (freeze and copy, move, and clone), and discuss their respective advantages and disadvantages. It is relevant to mention that the cloning approach we implemented and evaluated is incorporated as a module of the network virtualization platform Sirius.
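
    To make the freeze-and-copy approach concrete, here is a minimal Python sketch: pause the source switch, snapshot its flow table, install the snapshot at the destination, then retire the source. The class and function names are invented; the dissertation implements this against real SDN virtual switches, and the pause is precisely why freeze and copy is the most disruptive of the three approaches.

        import copy

        class VirtualSwitch:
            def __init__(self, name):
                self.name = name
                self.flow_table = {}   # match -> action
                self.frozen = False

        def freeze_and_copy(src: VirtualSwitch, dst: VirtualSwitch):
            src.frozen = True                         # stop packet processing
            snapshot = copy.deepcopy(src.flow_table)  # consistent snapshot
            dst.flow_table = snapshot                 # install at destination
            # Here the SDN controller would repoint links/ports to dst.
            src.flow_table.clear()

        s1, s2 = VirtualSwitch("s1"), VirtualSwitch("s2")
        s1.flow_table[("10.0.0.1", "10.0.0.2")] = "forward:port2"
        freeze_and_copy(s1, s2)
        assert s2.flow_table and not s1.flow_table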

    Code management automation for Erlang remote actors

    Distributed Erlang provides mechanisms for spawning actors remotely through its remote spawn BIF. However, for remote spawn to function properly, the node hosting the spawned actor must share the same codebase as the node launching the actor. This assumption turns out to be too strong for various distributed settings. We propose a higher-level framework for the remote spawn of side-effect-free actors, abstracting from and automating codebase migration and management.
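
    As a conceptual analogue (in Python rather than Erlang), the sketch below serializes a side-effect-free function's code and rebuilds it on a "remote" side that has no access to the sender's codebase. Distributed Erlang's remote spawn ships a call rather than code; the proposed framework automates shipping the required modules, which this sketch only gestures at.

        import marshal, types

        def square(x):      # pure function: safe to ship
            return x * x

        payload = marshal.dumps(square.__code__)   # "send" over the wire

        # --- receiving node: no access to the sender's source code ---
        code = marshal.loads(payload)
        remote_square = types.FunctionType(code, {"__builtins__": __builtins__})
        print(remote_square(7))  # 49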

    Structural and environmental monitoring with the Web of Things

    Structural health and environmental monitoring have recently been benefiting from advances in the digital industry. Thanks to the emergence of the Internet of Things (IoT) paradigm, monitoring systems are increasing their functionality and reducing development costs. However, they are affected by strong fragmentation in the solutions proposed and the technologies employed. This stalls the overall benefits of adopting IoT frameworks or devices, since it limits the reusability and portability of the chosen platform. As in other IoT contexts, the structural health and environmental monitoring domain suffers from the negative effects of what is called an interoperability problem. Recently, the World Wide Web Consortium (W3C) has joined the race to define a standard for IoT, unifying different solutions under a single paradigm. This shift in the industry is called the Web of Things, or WoT for short. Together with other W3C technologies of the Semantic Web, the Web of Things unifies different protocols and data models thanks to a descriptive, machine-understandable document called the Thing Description. This work explores how this new paradigm can improve the quality of structural health and environmental monitoring applications. The goal is to provide a monitoring infrastructure based solely on WoT and Semantic Web technologies. The architecture is then tested and applied to two concrete use cases taken from the industrial structural monitoring and smart farming domains. Finally, this thesis proposes a layered structure for organizing the knowledge design of the two applications and evaluates the results obtained.
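
    For readers unfamiliar with the Thing Description, the following is a minimal TD for a hypothetical vibration sensor, written as a Python dict (TDs are JSON-LD documents). The sensor name, URL, and property are invented; "@context", "title", "securityDefinitions", "properties", and "forms" are standard TD vocabulary.

        import json

        td = {
            "@context": "https://www.w3.org/2019/wot/td/v1",
            "title": "BridgeVibrationSensor",          # invented example name
            "security": ["nosec_sc"],
            "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
            "properties": {
                "acceleration": {
                    "type": "number",
                    "unit": "m/s2",
                    "forms": [{"href": "http://sensor.local/props/acceleration"}],
                }
            },
        }
        print(json.dumps(td, indent=2))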

    Deletion of content in large cloud storage systems

    This thesis discusses the practical implications and challenges of providing secure deletion of data in cloud storage systems. Secure deletion is a desirable functionality to some users, but a requirement to others. The term secure deletion describes the practice of deleting data in such a way that it cannot be reconstructed later, even by forensic means. This work discusses the practice of secure deletion as well as existing methods that are used today. When moving from traditional on-site data storage to cloud services, these existing methods are no longer applicable. For this reason, it presents the concept of cryptographic deletion and points out the challenge behind implementing it in a practical way. A discussion of related work in the areas of data encryption and cryptographic deletion shows that a research gap exists in applying cryptographic deletion in an efficient, practical way to cloud storage systems. The main contribution of this thesis, the Key-Cascade method, solves this issue by providing an efficient data structure for managing large numbers of encryption keys. Secure deletion is practiced today by individuals and organizations who need to protect the confidentiality of data after it has been deleted. It is mostly achieved by means of physical destruction or overwriting in local hard disks or large storage systems. However, these traditional methods of overwriting data or destroying media are not suited to large, distributed, and shared cloud storage systems. The known concept of cryptographic deletion describes storing encrypted data in an untrusted storage system, while keeping the key in a trusted location. Given that the encryption is effective, secure deletion of the data can now be achieved by securely deleting the key. Whether encryption is an acceptable protection mechanism must be decided either by legislators or by the customers themselves; this depends on whether cryptographic deletion is done to satisfy legal requirements or customer requirements. The main challenge in implementing cryptographic deletion lies in the granularity of the delete operation. Storage encryption providers today either require deleting the master key, which deletes all stored data, or require expensive copy and re-encryption operations. In the literature, a few constructions can be found that provide optimized key management. The contributions of this thesis, found in the Key-Cascade method, expand on those findings and describe data structures and operations for implementing efficient cryptographic deletion in a cloud object store. This thesis discusses the conceptual aspects of the Key-Cascade method as well as its mathematical properties. In order to enable production use of a Key-Cascade implementation, it presents multiple extensions to the concept. These extensions improve performance and usability and enable frictionless integration into existing applications. With SDOS, the Secure Delete Object Store, a working implementation of the concepts and extensions is given. Its design as an API proxy is unique among existing cryptographic deletion systems and allows integration into existing applications without the need to modify them. The results of performance evaluations conducted with SDOS show that cryptographic deletion is feasible in practice. With MCM, the Micro Content Management system, this thesis also presents a larger demonstrator system for SDOS. MCM provides insight into how SDOS can be integrated into and deployed as part of a cloud data management application.
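
    As a hedged sketch of cryptographic deletion (a flat, single-level scheme, not the Key-Cascade structure itself), the Python example below wraps each object's data key under a master key held in trusted storage; securely deleting the wrapped key renders the object's ciphertext unrecoverable without touching the bulk data. It assumes the third-party cryptography package.

        from cryptography.fernet import Fernet

        master = Fernet(Fernet.generate_key())  # lives in trusted storage
        wrapped_keys, objects = {}, {}          # untrusted object store

        def put(name: str, data: bytes):
            k = Fernet.generate_key()           # fresh per-object data key
            objects[name] = Fernet(k).encrypt(data)
            wrapped_keys[name] = master.encrypt(k)  # keep only the wrapped key

        def get(name: str) -> bytes:
            k = master.decrypt(wrapped_keys[name])
            return Fernet(k).decrypt(objects[name])

        def secure_delete(name: str):
            del wrapped_keys[name]  # key gone => ciphertext is noise

        put("report.pdf", b"confidential")
        assert get("report.pdf") == b"confidential"
        secure_delete("report.pdf")  # objects["report.pdf"] remains, unreadable

    Key-Cascade generalizes this idea to a multi-level key hierarchy, so the trusted store stays small and deletes remain cheap even with very large numbers of objects.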

    A novel architecture to virtualise a hardware-bound trusted platform module

    Security and trust are particularly relevant in modern softwarised infrastructures, such as cloud environments, as applications are deployed on platforms owned by third parties, are publicly accessible on the Internet, and can share the hardware with other tenants. Traditionally, operating systems and applications have leveraged hardware tamper-proof chips, such as Trusted Platform Modules (TPMs), to implement security workflows, such as remote attestation, and to protect sensitive data against software attacks. This approach does not easily translate to the cloud environment, wherein the isolation provided by the hypervisor makes it impractical to leverage the hardware root of trust in the virtual domains. Moreover, the scalability needs of the cloud often collide with the scarce hardware resources and inherent limitations of TPMs. For this reason, existing implementations of virtual TPMs (vTPMs) are based on TPM emulators. Although more flexible and scalable, this approach is less secure: each vTPM is vulnerable to software attacks both at the virtualised and hypervisor levels. In this work, we propose a novel design for vTPMs that provides a binding to an underlying physical TPM; the new design, akin to a virtualisation extension for TPMs, extends the latest TPM 2.0 specification. We minimise the number of required additions to the TPM data structures and commands so that they do not require a new, non-backwards-compatible version of the specification. Moreover, we support migration of vTPMs among TPM-equipped hosts, as this is considered a key feature in a highly virtualised environment. Finally, we propose a flexible approach to vTPM object creation that protects vTPM secrets either in hardware or software, depending on the required level of assurance.
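
    As a conceptual illustration (not the paper's TPM 2.0 extensions), the Python sketch below shows the binding-and-migration idea: vTPM state is kept encrypted under a key usable only by the host's physical TPM, and migration re-wraps it for the destination's TPM. Fernet keys stand in for TPM-held keys; real designs use TPM wrapping and duplication primitives.

        from cryptography.fernet import Fernet

        src_tpm = Fernet(Fernet.generate_key())  # stands in for source TPM key
        dst_tpm = Fernet(Fernet.generate_key())  # stands in for dest TPM key

        vtpm_state = b"vTPM NVRAM + seeds + keys"
        bound_blob = src_tpm.encrypt(vtpm_state)  # only source host can unwrap

        def migrate(blob: bytes) -> bytes:
            # Unwrap inside the trusted source boundary, re-wrap for dst.
            return dst_tpm.encrypt(src_tpm.decrypt(blob))

        blob_at_dst = migrate(bound_blob)
        assert dst_tpm.decrypt(blob_at_dst) == vtpm_state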

    Single system image: A survey

    Single system image (SSI) is a computing paradigm in which a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined, and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of the technology adoption literature.

    An Integrated Modeling Framework for Managing the Deployment and Operation of Cloud Applications

    Cloud computing can help Software as a Service (SaaS) providers take advantage of the sheer number of cloud benefits, such as agility, continuity, cost reduction, autonomy, and easy management of resources. To reap the benefits, SaaS providers should create their applications to utilize the cloud platform capabilities. However, this is a daunting task. First, it requires a full understanding of the service offerings from different providers, and the meta-data artifacts required by each provider to configure the platform to efficiently deploy, run, and manage the application. Second, it involves complex decisions that are specified by different stakeholders. Examples include financial decisions (e.g., selecting a platform that reduces costs), architectural decisions (e.g., partitioning the application to maximize scalability), and operational decisions (e.g., distributing modules to ensure availability and porting the application to other platforms). Finally, while each stakeholder may conduct a certain type of change to address a specific concern, the impact of a change may span multiple models and influence the decisions of several stakeholders. These factors motivate the need for: (i) a new architectural view model that focuses on service operation and reflects the cloud stakeholder perspectives, and (ii) a novel framework that facilitates providing holistic as well as partial architectural views, and generating the required platform artifacts by fragmenting the model into artifacts that can be easily modified separately. This PhD research devises a novel architecture framework, "The 5+1 Architectural View Model", for cloud applications, in which each view corresponds to a different perspective on cloud application deployment. The architectural framework is realized as a cloud modeling framework, called "StratusML", which consists of a modeling language that uses layers to specify the cloud configuration space, and a transformation engine to generate the configuration space artifacts. The usefulness and practical applicability of StratusML to model multi-cloud and multi-tenant applications have been demonstrated through a representative domain example. Moreover, to automate the framework evolution as new concerns and cloud platforms emerge, this research also introduces a novel schema-matching technique, called "Liberate". Liberate supports the process of domain model creation, evolution, and transformation. Liberate helps solve the vendor lock-in problem by reducing the manual effort required to map complex correspondences between cloud schemas whose domain concepts do not share linguistic similarities. The evaluation of Liberate shows its superiority in the cloud domain over existing schema matching approaches.
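
    As a toy illustration of name-independent schema matching (emphatically not Liberate's algorithm), the Python sketch below pairs fields across two cloud schemas by the types of their sample values rather than by field names, which share no linguistic similarity. The schemas and the scoring rule are invented.

        def signature(values):
            # Characterize a field by the Python types of its sample values.
            return {type(v).__name__ for v in values}

        aws   = {"InstanceType": ["m5.large", "t3.micro"], "MinCount": [1, 2]}
        azure = {"vmSize": ["D2s_v3", "B1ls"], "capacity": [1, 3]}

        matches = {}
        for a_field, a_vals in aws.items():
            best = max(azure,
                       key=lambda z: len(signature(a_vals) & signature(azure[z])))
            matches[a_field] = best

        print(matches)  # {'InstanceType': 'vmSize', 'MinCount': 'capacity'}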