18 research outputs found

    Techniques to Protect Confidentiality and Integrity of Persistent and In-Memory Data

    Get PDF
    Today computers store and analyze valuable and sensitive data. As a result, we need to protect this data against confidentiality and integrity violations, which can result in the illicit release, loss, or modification of users' and organizations' sensitive data such as personal media content or client records. Existing techniques for protecting confidentiality and integrity are either inefficient or vulnerable to malicious attacks. In this thesis we present two techniques, Guardat and ERIM, to efficiently and robustly protect persistent and in-memory data. To protect the confidentiality and integrity of persistent data, clients specify per-file policies to Guardat declaratively, concisely, and separately from code. Guardat enforces policies by mediating I/O in the storage layer. In contrast to prior techniques, this protects against accidental or malicious circumvention of higher software layers. We present the design and a prototype implementation, and demonstrate that Guardat efficiently enforces example policies in a web server. To protect the confidentiality and integrity of in-memory data, ERIM isolates sensitive data using Intel Memory Protection Keys (MPK), a recent x86 extension for partitioning the address space. However, MPK by itself does not protect against malicious attacks. We prevent malicious attacks by combining MPK with call gates to trusted entry points and ahead-of-time binary inspection. In contrast to existing techniques, ERIM efficiently protects frequently used session keys of web servers, an in-memory reference monitor's private state, and managed runtimes from native libraries. These use cases result in high switch rates on the order of 10^5–10^6 switches/s. Our experiments demonstrate less than 1% runtime overhead per 100,000 switches/s, thus outperforming existing techniques.
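    The ahead-of-time binary inspection that makes MPK robust hinges on finding every occurrence of the permission-switching WRPKRU instruction in executable code. Below is a minimal sketch of that scanning step only; ERIM's real inspection additionally verifies that each occurrence sits inside a trusted call gate, which this sketch does not attempt:

```python
# Minimal sketch of the ahead-of-time binary inspection idea behind ERIM:
# scan an executable's bytes for occurrences of WRPKRU (opcode 0f 01 ef),
# the instruction that changes MPK permissions. ERIM's real inspection also
# validates that each occurrence lies inside a trusted call gate; here we
# only locate candidates.
import sys

WRPKRU = bytes.fromhex("0f01ef")  # x86 opcode bytes for WRPKRU

def find_wrpkru(path: str) -> list[int]:
    """Return byte offsets of every WRPKRU opcode sequence in the file."""
    data = open(path, "rb").read()
    offsets, start = [], 0
    while (i := data.find(WRPKRU, start)) != -1:
        offsets.append(i)
        start = i + 1  # step by one byte so overlapping matches are found
    return offsets

if __name__ == "__main__":
    for off in find_wrpkru(sys.argv[1]):
        print(f"WRPKRU candidate at offset {off:#x}")
```

    Because x86 instructions are variable-length, the opcode bytes can also appear spanning other instructions, which is why the scan advances one byte at a time rather than skipping past each match.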

    Service-oriented models for audiovisual content storage

    No full text
    What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report looks at how content is created and moves into and out of storage; the storage-service value networks and architectures found now and expected in the future; what data transfer to and from an audiovisual archive looks like; which transfer protocols to use; and a summary of security and interface issues.

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    Get PDF
    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the user-contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
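    The slice concept, a virtual network that the user sees as a real physical network but that maps to a logical partition of the physical resources, can be pictured with a small data model. The field names below are invented for illustration and are not the FEDERICA taxonomy:

```python
# Illustrative data model of a "slice": a virtual network that a user sees
# as physical, but which maps to a logical partition of the infrastructure.
# All names are hypothetical; FEDERICA defines its own taxonomy.
from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    name: str
    physical_host: str   # infrastructure node this virtual instance runs on
    role: str            # e.g. "software-router" or "end-node"

@dataclass
class VirtualLink:
    endpoints: tuple[str, str]  # names of the two VirtualNodes it connects
    bandwidth_mbps: int         # share of the underlying physical link

@dataclass
class Slice:
    owner: str
    nodes: list[VirtualNode] = field(default_factory=list)
    links: list[VirtualLink] = field(default_factory=list)
```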

    Cloud computing enhancements and private cloud management

    Get PDF
    The objective of this project is to implement a private cloud in a small datacenter network using the MAAS server-provisioning tool and the OpenStack cloud-computing platform, leaving it ready to be interconnected with an experimental SDN network. The private cloud and the network will serve the telecommunications group's undergraduate and post-graduate labs and will be used both as a production network and as a test bed for new research, with the cloud integrating several available computing resources in order to maximize the computation power available for research tasks.
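    As a rough sketch of what provisioning on such a private cloud looks like, the following uses the openstacksdk Python client to boot one VM; the cloud name, image, flavor, and network are placeholders for this particular deployment:

```python
# Minimal sketch: boot one VM on an OpenStack private cloud using the
# openstacksdk client. "mycloud" must be defined in clouds.yaml; the image,
# flavor and network names below are placeholders for the deployment.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("lab-net")

server = conn.compute.create_server(
    name="lab-node-1",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until ACTIVE
print(server.name, server.status)
```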

    Enhancing the Programmability of Cloud Object Storage

    Get PDF
    In a world that is increasingly dependent on technology, digital data is generated at an unprecedented scale. This makes companies that require large storage space, such as Netflix or Dropbox, use cloud object storage solutions, mainly thanks to their built-in characteristics such as simplicity, scalability, and high availability. However, cloud object stores face three main challenges: 1) Flexible management of multi-tenant workloads. Commonly, cloud object stores are multi-tenant systems, meaning that all tenants share the same system resources, which could lead to interference problems. Furthermore, it is complex to manage heterogeneous storage policies at massive scale. 2) Data self-management. Cloud object stores themselves do not offer much flexibility regarding data self-management by tenants. Typically, they are rigid, which prevents tenants from handling the specific requirements of their objects. 3) Elastic computation close to the data. Placing computations close to the data can be useful to reduce data transfers, but the challenge is how to achieve elasticity in those computations without provoking resource contention and interference in the storage layer. In this thesis, we present three novel research contributions that solve these challenges. Firstly, we introduce the first Software-defined Storage (SDS) architecture for cloud object stores that separates the control plane from the data plane, allowing multi-tenant workloads to be managed in a flexible and dynamic way, for example by applying different bandwidth service levels to different tenants. Secondly, we design a novel policy abstraction called the microcontroller, which transforms common objects into smart objects, enabling tenants to programmatically manage their behavior; for example, a content-level access-control microcontroller attached to a specific object can filter its content depending on who is accessing it. Finally, we present the first elastic data-driven serverless computing platform, which mitigates the resource-contention problem of placing computation close to the data.
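    The microcontroller abstraction can be pictured as a per-object hook on the store's GET path. The interface below is hypothetical (the thesis defines its own abstraction); it only illustrates the content-level access-control example mentioned above:

```python
# Hypothetical sketch of a "microcontroller": a per-object policy hook that
# the object store runs on its GET path. The interface is illustrative;
# the thesis defines its own abstraction.

class AccessControlMicrocontroller:
    """Content-level access control: keep only lines a requester may see."""

    def __init__(self, allowed_prefixes: dict[str, str]):
        # maps requester -> line prefix that requester is allowed to read
        self.allowed_prefixes = allowed_prefixes

    def on_get(self, requester: str, content: bytes) -> bytes:
        prefix = self.allowed_prefixes.get(requester)
        if prefix is None:
            return b""  # unknown requester: return nothing
        kept = [line for line in content.splitlines()
                if line.startswith(prefix.encode())]
        return b"\n".join(kept)

# Usage: the store would invoke the hook before returning object data.
mc = AccessControlMicrocontroller({"alice": "public:", "bob": ""})
data = b"public: hello\nsecret: key=42"
print(mc.on_get("alice", data))  # b'public: hello'
```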

    Autonomic Management of Cloud Virtual Infrastructures

    Get PDF
    The new model of interaction suggested by cloud computing has seen significant diffusion over the last years thanks to its capability of providing customers with the illusion of an infinite amount of reliable resources. Nevertheless, the challenge of efficiently managing a large collection of virtual computing nodes has only partially moved from the customer's private datacenter to the larger provider's infrastructure that we generally address as "the cloud". A lot of effort, in both academia and industry, is therefore concentrated on policies for the efficient and autonomous management of virtual infrastructures. The research on this topic is further encouraged by the diffusion of cheap and portable sensors and the availability of almost ubiquitous Internet connectivity, which are constantly creating large flows of information about the environment we live in. The need for fast and reliable mechanisms to process these considerable volumes of data has inevitably pushed the evolution from the initial scenario of a single (private or public) cloud towards cloud interoperability, giving birth to several forms of collaboration between clouds. Efficient resource management is further complicated in these heterogeneous environments, making autonomous administration more and more desirable. In this thesis, we initially focus on the challenges of autonomic management in a single-cloud scenario, considering the benefits and shortcomings of centralized and distributed solutions and proposing an original decentralized model. Later in this dissertation, we face the challenge of autonomic management in large interconnected cloud environments, where the movement of virtual resources across the infrastructure nodes is further complicated by the intrinsic heterogeneity of the scenario and by the higher-latency links between datacenters. Accordingly, we focus on the cost model for executing distributed data-intensive applications on multiple clouds, and we propose different management policies that leverage cloud interoperability.
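    A toy version of such a cost model, with all prices and names invented: choose the cloud that minimizes compute cost plus the cost of moving the application's data to it. Even this simple form shows how data movement can outweigh cheaper compute:

```python
# Toy multi-cloud cost model (all prices invented): choose the cloud that
# minimizes compute cost plus the cost of moving the input data to it.

def placement_cost(cloud: str, cpu_hours: float, data_gb: float,
                   compute_price: dict[str, float],
                   transfer_price: dict[str, float]) -> float:
    return cpu_hours * compute_price[cloud] + data_gb * transfer_price[cloud]

def best_placement(cpu_hours, data_gb, compute_price, transfer_price):
    return min(compute_price,
               key=lambda c: placement_cost(c, cpu_hours, data_gb,
                                            compute_price, transfer_price))

compute_price = {"cloud-a": 0.05, "cloud-b": 0.03}   # $/CPU-hour
transfer_price = {"cloud-a": 0.00, "cloud-b": 0.09}  # $/GB moved in
# 100 CPU-hours over 200 GB of data: the data already resides at cloud-a,
# so its pricier compute still wins (5.0 vs 21.0).
print(best_placement(100, 200, compute_price, transfer_price))  # cloud-a
```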

    A generic artifact-driven approach for provisioning, configuring, and managing infrastructure resources in the cloud

    Get PDF
    Provisioning, configuration, and management of infrastructure resources in the cloud is difficult due to the diverse APIs offered by cloud providers. Because approaches for a common API are still at an early stage and may not be broadly accepted, individual artifacts can be used to interact with different providers. These artifacts require generic properties to describe the configuration of infrastructure resources, and they combine them with provider-specific information supplied by the user. Such generic properties are determined in this thesis by examining the infrastructure offerings of 14 different providers. The artifacts can be made available in public repositories, similar to the configuration management scripts originating in the DevOps community. However, trust in their good nature is a challenge because, in contrast to configuration management scripts, they are executed in a shared management environment. To control and restrict the actions they perform in this shared environment, a method to confine their execution has been developed, with the Linux security module Tomoyo chosen as its foundation. A policy associated with each artifact describes the artifact's permissions in detail. The artifacts are used in the context of the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA), an emerging standard supported by a number of industry partners. This standard allows modelling a topology of resources to be provisioned at a provider; each infrastructure resource, such as a virtual machine, gets an artifact assigned for provisioning purposes. Based on this standard, two simple tools as well as artifacts for four providers were developed. They show the viability of this artifact-driven approach.
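    One way to picture an artifact that combines generic properties with provider-specific information is the following sketch; the interface and all names are invented for illustration and are not the thesis's actual design:

```python
# Illustrative sketch of an artifact that provisions a virtual machine by
# merging generic properties (provider-independent) with provider-specific
# ones supplied by the user. Interface and names are invented.
from abc import ABC, abstractmethod

class ProvisioningArtifact(ABC):
    # Generic properties one might find across providers: cores, memory,
    # disk size, operating system image.
    GENERIC_KEYS = {"cores", "memory_mb", "disk_gb", "os_image"}

    @abstractmethod
    def provision(self, generic: dict, provider_specific: dict) -> str:
        """Create the resource; return a provider-side identifier."""

class ExampleProviderArtifact(ProvisioningArtifact):
    def provision(self, generic, provider_specific):
        unknown = set(generic) - self.GENERIC_KEYS
        if unknown:
            raise ValueError(f"not generic properties: {unknown}")
        request = {**generic, **provider_specific}  # specific wins on clash
        # A real artifact would now call the provider's API with `request`.
        return f"vm-{abs(hash(frozenset(request.items()))) % 10**6}"

vm_id = ExampleProviderArtifact().provision(
    {"cores": 2, "memory_mb": 2048, "disk_gb": 20, "os_image": "debian-12"},
    {"region": "eu-west"},
)
print(vm_id)
```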

    Research and improvement of administration algorithms for the "Navi" network database

    Get PDF
    The work is published in accordance with the rector's order No. 580/od of 29.12.2020, "On placing higher-education qualification works in the NAU repository". Project supervisor: Cand. Sc. (Eng.), Associate Professor Mykola Mykhailovych Protsenko. Long before the advent of computerized databases, humanity already had a need to store information in a structured form. At a time when computers either did not exist at all or were only just entering the market, databases of a kind existed in written or physical form, for example data archives, reference centres, libraries, ledgers, and telephone directories. Since there is now a need to store huge amounts of information while taking up as little storage space and as few resources as possible, databases are an integral stage in the development of ways to simplify people's lives. A database is both a tool and the very subject of data collection, showing the state of certain objects and their relationships within a certain subject area.

    Trustworthy Knowledge Planes For Federated Distributed Systems

    Full text link
    In federated distributed systems, such as the Internet and the public cloud, the constituent systems can differ in their configuration and provisioning, resulting in significant impacts on the performance, robustness, and security of applications. Yet these systems lack support for distinguishing such characteristics, resulting in uninformed service selection and poor inter-operator coordination. This thesis presents the design and implementation of a trustworthy knowledge plane that can determine such characteristics about autonomous networks on the Internet. A knowledge plane collects the state of network devices and participants. Using this state, applications infer whether a network possesses some characteristic of interest. The knowledge plane uses attestation to attribute state descriptions to the principals that generated them, thereby making the results of inference more trustworthy. Trustworthy knowledge planes enable applications to establish stronger assumptions about their network operating environment, resulting in improved robustness and reduced deployment barriers. We have prototyped the knowledge plane and associated devices. Experience with deploying analyses over production networks demonstrates that knowledge planes impose low cost and can scale to support Internet-scale networks.
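    The trust step can be pictured as: accept a state description for inference only if a signature attributes it to a known principal. The sketch below substitutes HMAC with pre-shared keys for the hardware-backed attestation a real knowledge plane would use; all names and values are invented:

```python
# Minimal sketch of attributing state descriptions to principals before
# inference. Real attestation uses hardware roots of trust and public-key
# certificates; HMAC with pre-shared keys stands in for that machinery.
import hmac, hashlib

principal_keys = {"as64496-router1": b"pre-shared-demo-key"}  # invented

def attest(principal: str, state: bytes) -> bytes:
    return hmac.new(principal_keys[principal], state, hashlib.sha256).digest()

def accept_state(principal: str, state: bytes, tag: bytes) -> bool:
    key = principal_keys.get(principal)
    if key is None:
        return False  # unknown principal: state is not trustworthy
    expected = hmac.new(key, state, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

state = b'{"link": "r1-r2", "loss": 0.001}'
tag = attest("as64496-router1", state)
assert accept_state("as64496-router1", state, tag)
assert not accept_state("as64496-router1", state + b"!", tag)  # tampered
```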

    Cloud Service Broker

    Get PDF
    Master's in Computer and Telematics Engineering. Throughout the history of computer systems, experts have been reshaping IT infrastructure to improve the efficiency of organizations by enabling shared access to computational resources. The advent of cloud computing has sparked a new paradigm, providing better hosting and service delivery over the Internet. It offers advantages over traditional solutions by providing ubiquitous, scalable, and on-demand access to shared pools of computational resources. Over the course of the last years, we have seen new market players offering cloud services at competitive prices and with different Service Level Agreements. With the unprecedented and increasing adoption of cloud computing, cloud providers are on the lookout for the creation and offering of new, value-added services for their customers. Market competitiveness, numerous service options, and varied business models led to gradual entropy: mismatched cloud terminology was introduced, and incompatible APIs locked users in to specific cloud service providers. Billing and charging became fragmented when consuming cloud services from multiple vendors. An entity recommending cloud providers and acting as an intermediary between the cloud consumer and providers would harmonize this interaction. This dissertation proposes and implements a Cloud Service Broker focused on assisting and encouraging developers to run their applications on the cloud. Developers can easily describe their applications, and an intelligent algorithm will recommend cloud offerings that best suit the application requirements. In this way, users are aided in deploying, managing, monitoring, and migrating their applications in a cloud of clouds. A single API orchestrates the whole process in tandem with truly decoupled cloud managers. Users can also interact with the Cloud Service Broker through a Web portal, a command-line interface, and client libraries.
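    The recommendation step can be pictured as filtering offerings by an application's hard requirements and ranking the survivors, here simply by price. The fields and offerings below are invented for illustration:

```python
# Toy sketch of a broker's recommendation step (fields and offerings are
# invented): filter cloud offerings by hard application requirements,
# then rank the feasible ones by monthly price.

def recommend(requirements: dict, offerings: list[dict]) -> list[dict]:
    feasible = [o for o in offerings
                if o["cores"] >= requirements["min_cores"]
                and o["memory_gb"] >= requirements["min_memory_gb"]
                and requirements["region"] in o["regions"]]
    return sorted(feasible, key=lambda o: o["price_month"])

offerings = [
    {"provider": "A", "cores": 4, "memory_gb": 8,
     "regions": {"eu", "us"}, "price_month": 35.0},
    {"provider": "B", "cores": 2, "memory_gb": 4,
     "regions": {"eu"}, "price_month": 12.0},
]
reqs = {"min_cores": 2, "min_memory_gb": 4, "region": "eu"}
for offer in recommend(reqs, offerings):
    print(offer["provider"], offer["price_month"])  # B 12.0, then A 35.0
```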