
    Single system image: A survey

    Single system image is a computing paradigm in which a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of technology adoption literature.

    Privacy in cloud computing

    Master's thesis in Information Security, presented to the Universidade de Lisboa, Faculdade de Ciências, 2010. The cloud computing paradigm is progressively integrating itself into the Information Technology industry and is seen by many experts as the next big shift in that industry. This integration implies considerable changes in the security schemes used to keep private the confidential information that companies entrust to cloud providers, and it requires a very high level of trust in the provider. When moving to the cloud, a company relinquishes control over its data to the cloud provider: the data is processed on hardware owned by the provider, over which the client has no control. Companies that handle sensitive data (e.g., medical or financial records) have to weigh this problem when considering a move to the cloud. In this work, we demonstrate how a malicious administrator with access to the cloud provider's hardware can violate the privacy of the data that clients entrust to the provider. Our objective is to offer a detailed analysis of attack strategies that a malicious administrator can use to break the privacy of cloud clients, as well as of the efficacy shown against these attacks by protection mechanisms that have already been proposed for the cloud. We also hope that this work draws the attention of the research community to the security problems that currently exist in the cloud and, at the same time, serves as motivation for prompt action to find solutions to these problems.

    Enhancing the programmability and energy efficiency of storage in HPC and virtualized environments

    International Mention in the doctoral degree. A decade ago computing systems hit a clock and power ceiling that placed the energy challenge among the most relevant issues in High Performance Computing (HPC). Motivated by the fact that computation is increasingly becoming cheaper than data movement in terms of power, our work studies and optimizes data movement across different levels of the software stack. We propose novel methodologies for analyzing, modeling, and optimizing the energy efficiency of data movement. More precisely, we propose methodologies to enhance the understanding of power consumption in the software I/O stack, and to optimize I/O energy efficiency in the operating system's I/O stack, low-level CPU device drivers, and virtualized environments. Our experimental results show that, through an understanding of the different operating system layers and their interaction, it is possible to develop novel coordination techniques that optimize energy consumption and increase the performance of I/O workloads. First, we develop a methodology for data collection, power and performance characterization, and modeling of power usage in the I/O stack. Our work presents a detailed study of power and energy usage across all system components during various I/O-intensive workloads. We propose a data-gathering methodology that combines software- and hardware-based instrumentation in order to study I/O data movement, and we develop novel power prediction models employing data analysis techniques. Second, this thesis presents novel CPU-level optimizations that improve the energy efficiency of I/O workloads. We address two issues present in modern processors: thermal imbalance causing performance variation, and an inefficient use of CPU resources during I/O workloads. We develop novel techniques for power optimization and thermal efficiency through cross-layer coordination of CPU and I/O management. Third, we focus on optimizing data sharing among virtual domains. We refer to this as virtualized data sharing, which differs from existing solutions mainly by coordinating data flows through the software I/O stack. We develop a virtualized data sharing solution that reduces data movement among virtual environments, introducing new abstractions and mechanisms to coordinate storage I/O more efficiently. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee: Laurent Lefevre (president), Arturo González Escriban (member).
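    As a minimal illustration of the kind of software-based power instrumentation this abstract describes, the sketch below measures the package energy consumed by an I/O workload through the Linux powercap (Intel RAPL) sysfs counters. The sysfs path, the wrap-around handling, and the dd workload are assumptions for illustration, not the thesis's actual tooling.

        # Estimate package energy consumed by an I/O workload via Intel RAPL
        # counters exposed through powercap sysfs. Paths and the workload
        # command are illustrative; reading the counters may require root.
        import subprocess, time

        RAPL = "/sys/class/powercap/intel-rapl:0"  # package 0 (assumed present)

        def read_uj(name):
            with open(f"{RAPL}/{name}") as f:
                return int(f.read())

        def measure(cmd):
            max_uj = read_uj("max_energy_range_uj")   # counter wrap-around limit
            e0, t0 = read_uj("energy_uj"), time.time()
            subprocess.run(cmd, check=True)           # run the I/O workload
            e1, t1 = read_uj("energy_uj"), time.time()
            delta = (e1 - e0) % max_uj                # tolerate one wrap-around
            joules, secs = delta / 1e6, t1 - t0
            return joules, secs, joules / secs        # energy, time, avg. power

        if __name__ == "__main__":
            # Hypothetical workload: sequential write of 1 GiB with dd
            j, s, w = measure(["dd", "if=/dev/zero", "of=/tmp/testfile",
                               "bs=1M", "count=1024", "oflag=direct"])
            print(f"energy={j:.1f} J  time={s:.1f} s  avg power={w:.1f} W")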

    Evaluating Host Intrusion Detection Systems

    Host Intrusion Detection Systems (HIDSs) are critical tools needed to provide in-depth security to computer systems. Quantitative metrics for HIDSs are necessary for comparing HIDSs or determining the optimal operational point of a HIDS. While HIDSs and Network Intrusion Detection Systems (NIDSs) differ greatly, similar evaluations have been performed on both types of IDSs by assessing metrics associated with the classification algorithm (e.g., true positives, false positives). This dissertation motivates the necessity of additional characteristics to better describe the performance and effectiveness of HIDSs. The proposed additional characteristics are the ability to collect data where an attack manifests (visibility), the ability of the HIDS to resist attacks in the event of an intrusion (attack resiliency), the ability to detect attacks in a timely manner (efficiency), and the ability of the HIDS to avoid interfering with the normal functioning of the system under supervision (transparency). For each characteristic, we propose corresponding quantitative evaluation metrics. To measure the effect of visibility on the detection of attacks, we introduce the probability of attack manifestation and metrics related to data quality (i.e., the relevance of the data with respect to the attack to be detected). The metrics were applied empirically to evaluate filesystem data, which is the data source for many HIDSs. To evaluate attack resiliency we introduce the probability of subversion, which we estimate by measuring the isolation between the HIDS and the system under supervision. Additionally, we provide methods to evaluate time delays for efficiency and performance overhead for transparency. The proposed evaluation methods are then applied to compare two HIDSs. Finally, we show how to integrate the proposed measurements into a cost framework. First, mapping functions are established to link operational costs of the HIDS with the metrics proposed for efficiency and transparency. Then we show how the number of attacks detected by the HIDS depends not only on detection accuracy, but also on the evaluation results for visibility and attack resiliency.
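    The interplay between visibility, attack resiliency, and detection accuracy can be illustrated with a small numerical sketch. The formula and all parameter values below are hypothetical assumptions for illustration only; they are not the dissertation's cost framework.

        # Illustrative combination of HIDS characteristics into an expected
        # detection count. All parameter values are hypothetical.

        def expected_detections(n_attacks, p_manifest, p_subversion, tpr):
            """Attacks detected = attacks that manifest in the monitored data
            (visibility), while the HIDS itself survives (attack resiliency),
            and the classifier flags them (true positive rate)."""
            return n_attacks * p_manifest * (1.0 - p_subversion) * tpr

        if __name__ == "__main__":
            for name, p_man, p_sub, tpr in [
                ("HIDS A (in-host, high visibility)",   0.90, 0.30, 0.95),
                ("HIDS B (isolated, lower visibility)", 0.75, 0.05, 0.95),
            ]:
                d = expected_detections(100, p_man, p_sub, tpr)
                print(f"{name}: ~{d:.0f} of 100 attacks detected")

    The toy numbers show the trade-off the abstract hints at: stronger isolation lowers the probability of subversion but may also reduce visibility, so neither characteristic can be evaluated in isolation.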

    Advancing Operating Systems via Aspect-Oriented Programming

    Operating system kernels are among the most complex pieces of software in existence today. Maintaining the kernel code and developing new functionality is increasingly complicated, since the amount of required features has risen significantly, leading to side effects that can be introduced inadvertently by changing a piece of code that belongs to a completely different context. Software developers try to modularize their code base into separate functional units. Some of the functionality or “concerns” required in a kernel, however, does not fit into the given modularization structure; this code may then be spread over the code base and its implementation tangled with code implementing different concerns. These so-called “crosscutting concerns” are especially difficult to handle since a change in a crosscutting concern implies that all relevant locations spread throughout the code base have to be modified. Aspect-Oriented Software Development (AOSD) is an approach to handle crosscutting concerns by factoring them out into separate modules. The “advice” code contained in these modules is woven into the original code base according to a pointcut description, a set of interaction points (joinpoints) with the code base. To be used in operating systems, AOSD requires tool support for the prevalent procedural programming style as well as support for weaving aspects. Many interactions in kernel code are dynamic, so in order to implement non-static behavior and improve performance, a dynamic weaver that deploys and undeploys aspects at system runtime is required. This thesis presents an extension of the “C” programming language to support AOSD. Based on this, two dynamic weaving toolkits – TOSKANA and TOSKANA-VM – are presented to permit dynamic aspect weaving in the monolithic NetBSD kernel as well as in a virtual-machine and microkernel-based Linux kernel running on top of L4. Based on TOSKANA, applications for this dynamic aspect technology are discussed and evaluated. The thesis closes with a view on an aspect-oriented kernel structure that maintains coherency and handles crosscutting concerns using dynamic aspects while enhancing development methods through the use of domain-specific programming languages.
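    TOSKANA weaves aspects into compiled C kernel code; as a language-neutral illustration of the advice, joinpoint, and runtime deploy/undeploy ideas only, here is a hedged Python sketch. The class names and mechanism are assumptions for illustration and bear no relation to the TOSKANA API.

        # Conceptual illustration of dynamic aspect weaving: advice functions are
        # attached to (and detached from) a joinpoint at runtime by wrapping the
        # original function. Purely illustrative; not how TOSKANA weaves C code.
        import functools

        class Aspect:
            def __init__(self, before=None, after=None):
                self.before, self.after = before, after
                self._original, self._owner, self._name = None, None, None

            def deploy(self, owner, name):
                """Weave advice around owner.<name> at runtime."""
                self._owner, self._name = owner, name
                self._original = getattr(owner, name)
                @functools.wraps(self._original)
                def woven(*args, **kwargs):
                    if self.before: self.before(*args, **kwargs)
                    result = self._original(*args, **kwargs)
                    if self.after: self.after(result, *args, **kwargs)
                    return result
                setattr(owner, name, woven)

            def undeploy(self):
                """Remove the advice, restoring the original joinpoint."""
                setattr(self._owner, self._name, self._original)

        # Hypothetical object standing in for a kernel subsystem
        class VFS:
            def open(self, path): return f"fd:{path}"

        vfs = VFS()
        trace = Aspect(before=lambda path: print(f"[trace] open({path})"))
        trace.deploy(vfs, "open")   # aspect active
        vfs.open("/etc/passwd")
        trace.undeploy()            # aspect removed at runtime
        vfs.open("/etc/passwd")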

    Platform as a service integration for scientific computing using DIRAC

    The demand for computing resources required by researchers grows every day, and this computing capacity coexists with the ever-growing volume of data generated today. These researchers demand a High Performance Computing (HPC) service that allows their simulations to be executed in a way that delocalizes the resources, so that they can access as many as possible in the most convenient and secure manner. On the other hand, universities are connected to research centres through networks whose speed and reliability make it possible to run scientific computing jobs. The computing capacity available at universities ranges from computer rooms used for teaching, laboratories, and similar facilities, to clusters of computers belonging to research groups. Using grid and cloud technologies, these heterogeneous computational resources could be reused by researchers to run simulations, adding computing capacity to what already exists and delocalizing resources across different places around the planet. The objective of this thesis is to adapt the DIRAC distributed computing framework, developed for the LHCb project at CERN, for use by several user communities based on cloud and big data technologies. This framework would rely on centralized software repositories that provide the software needed so that, through cloud environments, researchers' applications can be executed anywhere on the planet in a scalable way, exploiting both dedicated and non-dedicated resources, and thus evaluating the execution of this platform for scientific computing. The work begins with requirements gathering, followed by a basic integration process. Subsequently, the scientific software employed is optimized for cloud environments, adapting it to virtualized infrastructures. To this end, a statistical study, as close as possible to production environments, is needed in order to determine and create the adapted infrastructures, thereby avoiding performance loss within the resources. The final case is to use the virtualized technologies, adapting the architectures created, to build systems that allow the submission of jobs that require large amounts of data, in the big data domain, in a distributed way.
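    For context, user job submission in DIRAC is typically done through its Python API. The following minimal sketch assumes an installed and configured DIRAC client with a valid proxy; it uses the commonly documented Job/Dirac classes, but exact method names vary between DIRAC releases, so treat it as an approximation rather than the thesis's integration code, and the executable name is a hypothetical placeholder.

        # Minimal DIRAC job submission sketch. Assumes a working DIRAC client
        # and proxy; the Job/Dirac calls follow the public API but may differ
        # slightly between DIRAC versions.
        from DIRAC.Core.Base import Script
        Script.parseCommandLine()  # initialize the DIRAC environment

        from DIRAC.Interfaces.API.Job import Job
        from DIRAC.Interfaces.API.Dirac import Dirac

        job = Job()
        job.setName("scientific-simulation-test")
        job.setExecutable("run_simulation.sh", arguments="input.dat")  # hypothetical script
        job.setCPUTime(3600)

        dirac = Dirac()
        result = dirac.submitJob(job)      # returns a DIRAC S_OK/S_ERROR dict
        if result["OK"]:
            print("Submitted job with ID:", result["Value"])
        else:
            print("Submission failed:", result["Message"])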

    DEPENDABILITY BENCHMARKING OF NETWORK FUNCTION VIRTUALIZATION

    Network Function Virtualization (NFV) is an emerging networking paradigm that aims to reduce costs and time-to-market, improve manageability, and foster competition and innovative services. NFV exploits virtualization and cloud computing technologies to turn physical network functions into Virtualized Network Functions (VNFs), which are implemented in software and run as Virtual Machines (VMs) on commodity hardware located in high-performance data centers, namely Network Function Virtualization Infrastructures (NFVIs). The NFV paradigm relies on cloud computing and virtualization technologies to provide carrier-grade services, i.e., services that are highly reliable and available, with fast and automatic failure recovery mechanisms. The availability of many virtualization solutions for NFV raises the question of which virtualization technology should be adopted in order to fulfill the requirements described above. Currently, there are limited solutions for analyzing, in quantitative terms, the performance and reliability trade-offs, which are important concerns for the adoption of NFV. This thesis deals with the assessment of the reliability and performance of NFV systems. It proposes a methodology, which includes context, measures, and faultloads, to conduct dependability benchmarks in NFV, according to the general principles of dependability benchmarking. To this aim, a fault injection framework has been designed and implemented for the virtualization technologies used as case studies in this thesis. This framework is used to conduct an extensive experimental campaign in which we compare two candidate virtualization technologies for NFV adoption: the commercial, hypervisor-based virtualization platform VMware vSphere, and the open-source, container-based virtualization platform Docker. These technologies are assessed in the context of a high-availability, NFV-oriented IP Multimedia Subsystem (IMS). The analysis of the experimental results reveals that i) fault management mechanisms are crucial in NFV in order to provide accurate failure detection and start the subsequent failover actions, and ii) fault injection proves to be a valuable way to introduce uncommon scenarios in the NFVI, which can be fundamental to providing a highly reliable service in production.
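    As a rough illustration of container-level crash-fault injection of the kind discussed above (not the thesis's framework), the sketch below kills a containerized VNF with the standard Docker CLI and times how long it takes the restart policy or orchestrator to bring the container back. The container name is a hypothetical placeholder.

        # Crude crash-fault injector for a containerized VNF using the Docker CLI.
        # The container name is a hypothetical placeholder.
        import subprocess, time

        CONTAINER = "ims-sip-proxy"  # hypothetical VNF container

        def container_running(name):
            out = subprocess.run(
                ["docker", "ps", "--filter", f"name={name}", "--format", "{{.Names}}"],
                capture_output=True, text=True).stdout
            return name in out.split()

        def inject_crash_and_time_recovery():
            subprocess.run(["docker", "kill", CONTAINER], check=True)  # inject fault
            t0 = time.time()
            while not container_running(CONTAINER):  # wait for the restart policy
                time.sleep(0.1)                      # or orchestrator to recover it
            return time.time() - t0

        if __name__ == "__main__":
            print(f"service recovered in {inject_crash_and_time_recovery():.2f} s")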

    Matching distributed file systems with application workloads

    Modern storage systems have a large number of configurable parameters, distributed over many layers of abstraction. The number of combinations of these parameters that can be altered to create an instance of such a system is enormous. In practice, many of these parameters are never altered; instead, default values intended to support generic workloads and access patterns are used. As systems become larger and evolve to support different workloads, the appropriateness of using default parameters in this way comes into question. This thesis examines the implications of changing some of these parameters and explores the effects these changes have on performance. As part of that work, multiple contributions have been made, including a structured method to create and evaluate different storage configurations, the choice of appropriate access sizes for the evaluation, the selection of representative cloud workloads and the capture of storage traces for further analysis, the extraction of the workloads' storage characteristics, the creation of logical partitions of the distributed file system used for the optimization, the creation of heterogeneous storage pools within the homogeneous system, and the mapping and evaluation of the chosen workloads onto the examined configurations.
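    A simplified sketch of the workload-to-configuration matching idea: extract a few storage characteristics from an I/O trace and pick the closest candidate configuration. The trace format, the extracted characteristics, and the configuration names are illustrative assumptions, not the thesis's actual method.

        # Toy workload-to-configuration matcher. Trace rows: (op, size_bytes),
        # e.g. ("read", 4096). Characteristics and configs are illustrative.
        from dataclasses import dataclass

        @dataclass
        class Profile:
            read_ratio: float       # fraction of operations that are reads
            avg_request_kib: float  # mean request size in KiB

        def characterize(trace):
            reads = sum(1 for op, _ in trace if op == "read")
            sizes = [size for _, size in trace]
            return Profile(reads / len(trace), sum(sizes) / len(sizes) / 1024)

        # Hypothetical storage pool configurations and the profile they favour
        CONFIGS = {
            "ssd-small-objects": Profile(read_ratio=0.9, avg_request_kib=8),
            "hdd-large-streams": Profile(read_ratio=0.5, avg_request_kib=1024),
        }

        def match(profile):
            def distance(target):
                return (abs(profile.read_ratio - target.read_ratio)
                        + abs(profile.avg_request_kib - target.avg_request_kib) / 1024)
            return min(CONFIGS, key=lambda name: distance(CONFIGS[name]))

        if __name__ == "__main__":
            trace = [("read", 4096)] * 90 + [("write", 4096)] * 10
            print(match(characterize(trace)))  # -> "ssd-small-objects"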