
    Deduplication-based Energy Efficient Storage System in Cloud Environment

    In cloud computing, companies usually use high-end storage systems to guarantee the I/O performance of virtual machines (VMs). These storage systems consume a great deal of energy to deliver that performance. In this paper, we propose EEVS, a deduplication-based energy-efficient storage system for VM storage. We first investigate VM image files containing common operating systems and find that they hold many redundant data blocks, which add energy cost to VM storage. In EEVS, we therefore design an online deduplication mechanism that removes this redundant data without service interruption, whereas traditional deduplication is used only for offline backup. Based on this design, we implement EEVS on an existing cloud platform. Since deduplication needs considerable computing resources, we also design a deduplication selection algorithm that minimizes storage energy consumption for a given set of VMs under limited deduplication resources. Experimental results in a para-virtualized EEVS deployment show that energy consumption is reduced by up to 66% with negligible performance degradation.
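
    As a rough illustration of the block-level deduplication the paper builds on, the sketch below splits a VM image into fixed-size blocks and stores each unique block once, keyed by its fingerprint. This is a generic Python sketch, not EEVS's actual code; the 4 KB block size and the SHA-256 fingerprint are assumptions.

        import hashlib

        BLOCK_SIZE = 4096  # assumed block size; the paper does not specify EEVS's chunking

        def deduplicate(image_path, store):
            """Split a VM image into fixed-size blocks, storing each unique block once.

            `store` maps SHA-256 fingerprints to block payloads; the returned
            recipe is the list of fingerprints needed to rebuild the image."""
            recipe = []
            with open(image_path, "rb") as f:
                while block := f.read(BLOCK_SIZE):
                    fp = hashlib.sha256(block).hexdigest()
                    store.setdefault(fp, block)  # write the payload only on first sight
                    recipe.append(fp)
            return recipe

    The ratio of unique fingerprints in `store` to recipe length approximates the space (and, per the paper, energy) saved by removing redundant blocks.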

    HEC: Collaborative Research: SAM^2 Toolkit: Scalable and Adaptive Metadata Management for High-End Computing

    The increasing demand for exabyte-scale storage capacity by high-end computing applications requires a higher level of scalability and dependability than current file and storage systems provide. This proposal addresses file systems research on metadata management for scalable cluster-based parallel and distributed file storage systems in the HEC environment. It aims to develop a scalable and adaptive metadata management (SAM2) toolkit that extends the features of, and fully leverages the peak performance promised by, state-of-the-art cluster-based parallel and distributed file storage systems used by the high-performance computing community. There is a large body of research on scaling data movement and management; however, the need to scale the attributes of cluster-based file systems and I/O, that is, metadata, has been underestimated. Understanding the characteristics of metadata traffic, and applying appropriate load-balancing, caching, prefetching, and grouping mechanisms to metadata management accordingly, will lead to high scalability. It is anticipated that by appropriately plugging the scalable and adaptive metadata management components into state-of-the-art cluster-based parallel and distributed file storage systems, one could potentially increase the performance of applications and file systems, and help translate the high peak performance such systems promise into real application performance improvements. The project involves the following components:
    1. Develop multi-variable forecasting models to analyze and predict file metadata access patterns.
    2. Develop scalable and adaptive file name mapping schemes using the duplicative Bloom filter array technique to enforce load balance and increase scalability.
    3. Develop decentralized, locality-aware metadata grouping schemes to facilitate bulk metadata operations such as prefetching.
    4. Develop an adaptive cache coherence protocol using a distributed shared object model for client-side and server-side metadata caching.
    5. Prototype the SAM2 components into the state-of-the-art parallel virtual file system PVFS2 and a distributed storage data caching system, set up an experimental framework for a DOE CMS Tier 2 site at the University of Nebraska-Lincoln, and conduct benchmark, evaluation, and validation studies.
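
    Component 2 above names a duplicative Bloom filter array for file-name-to-server mapping. As a hedged sketch of the general idea (not the SAM2 implementation), each metadata server advertises a Bloom filter of the names it holds; a lookup probes every filter and returns candidate servers, with false positives resolved at the server itself. The filter size and hash count below are arbitrary assumptions.

        import hashlib

        class BloomFilter:
            """Plain Bloom filter: k hash probes into an m-bit array."""
            def __init__(self, m=8192, k=4):
                self.m, self.k, self.bits = m, k, bytearray(m // 8)

            def _probes(self, key):
                for i in range(self.k):
                    h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.m

            def add(self, key):
                for p in self._probes(key):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, key):
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(key))

        class BloomFilterArray:
            """One filter per metadata server. A miss in every filter means the
            name is definitely absent; hits are candidates to confirm."""
            def __init__(self, n_servers):
                self.filters = [BloomFilter() for _ in range(n_servers)]

            def record(self, server_id, path):
                self.filters[server_id].add(path)

            def lookup(self, path):
                return [i for i, f in enumerate(self.filters) if path in f]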

    A survey and classification of software-defined storage systems

    The exponential growth of digital information is imposing increasing scale and efficiency demands on modern storage infrastructures. As infrastructure complexity increases, so does the difficulty in ensuring quality of service, maintainability, and resource fairness, raising unprecedented performance, scalability, and programmability challenges. Software-Defined Storage (SDS) addresses these challenges by cleanly disentangling control and data flows, easing management, and improving control functionality of conventional storage systems. Despite its momentum in the research community, many aspects of the paradigm are still unclear, undefined, and unexplored, leading to misunderstandings that hamper the research and development of novel SDS technologies. In this article, we present an in-depth study of SDS systems, providing a thorough description and categorization of each plane of functionality. Further, we propose a taxonomy and classification of existing SDS solutions according to different criteria. Finally, we provide key insights about the paradigm and discuss potential future research directions for the field. This work was financed by the Portuguese funding agency FCT (Fundação para a Ciência e a Tecnologia) through national funds, the PhD grant SFRH/BD/146059/2019, the project ThreatAdapt (FCT-FNR/0002/2018), the LASIGE Research Unit (UIDB/00408/2020), and co-funded by FEDER, where applicable.
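
    The control/data-flow split that defines SDS can be pictured with a toy sketch: a control plane installs policies out of band, and a data-plane stage on the I/O path merely enforces whatever was installed. This is an illustrative abstraction, not any surveyed system's API; the per-tenant bandwidth cap is an invented example policy.

        class ControlPlane:
            """Holds storage policies and pushes them to data-plane stages;
            it never sits on the I/O path itself."""
            def __init__(self):
                self.policies = {}

            def set_policy(self, tenant, max_mbps):
                self.policies[tenant] = max_mbps

            def configure(self, stage):
                stage.policies = dict(self.policies)

        class DataPlaneStage:
            """Sits on the I/O path and enforces the installed policies."""
            def __init__(self):
                self.policies = {}

            def admit(self, tenant, current_mbps):
                cap = self.policies.get(tenant)
                return cap is None or current_mbps <= cap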

    Doctor of Philosophy

    In the past few years, we have seen a tremendous increase in the amount of digital data being generated. By 2011, storage vendors had shipped 905 PB of purpose-built backup appliances. By 2013, the number of objects stored in Amazon S3 had reached 2 trillion. Facebook had stored 20 PB of photos by 2010. All of this requires efficient storage solutions. To improve space efficiency, compression and deduplication are widely used. Compression works by identifying repeated strings and replacing them with more compact encodings, while deduplication partitions data into fixed-size or variable-size chunks and removes duplicate blocks. While these two approaches have greatly improved space efficiency, they still have limitations. First, traditional compressors are limited in their ability to detect redundancy across a large range, since they search for redundant data at a fine granularity (the string level). For deduplication, metadata embedded in an input file changes more frequently than the data itself, which introduces unnecessary unique chunks and degrades deduplication. In addition, cloud storage systems suffer from unpredictable and inefficient performance because of interference among different types of workloads. This dissertation proposes techniques to improve the effectiveness of traditional compressors and deduplication at improving space efficiency, and a new I/O scheduling algorithm to improve performance predictability and efficiency for cloud storage systems. The common idea is to exploit similarity. To improve the effectiveness of compression and deduplication, similarity in content is used to transform an input file into a compression- or deduplication-friendly format. We propose Migratory Compression, a generic data transformation that identifies similar data at a coarse granularity (the block level) and groups similar blocks together. It can be used as a preprocessing stage for any traditional compressor. We find that metadata has a large impact on reducing the benefit of deduplication. To isolate the impact of metadata, we propose separating metadata from data; three approaches are presented for use cases with different constraints. For the commonly used tar format, we propose Migratory Tar: a data transformation and a new tar format that deduplicates better. We also present a case study in which deduplication reduces the storage consumed by disk images while still achieving high performance in image deployment. Finally, we apply the same principle of exploiting similarity to I/O scheduling, preventing interference between random and sequential workloads and leading to efficient, consistent, and predictable performance for sequential workloads and high disk utilization.
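
    To make the Migratory Compression idea concrete, here is a hedged Python sketch under assumed parameters: each block gets a cheap similarity feature (the minimum hash over short byte windows, a MinHash-style heuristic), blocks are reordered so similar ones sit adjacent, and a standard compressor runs over the reordered stream; the permutation is kept so the original order can be restored. The dissertation's actual feature scheme and block size may differ.

        import hashlib
        import zlib

        BLOCK = 4096  # assumed block size

        def feature(block, shingle=8):
            """Minimum hash over all `shingle`-byte windows; similar blocks
            tend to share their minimum (MinHash intuition)."""
            return min(hashlib.md5(block[i:i + shingle]).digest()
                       for i in range(max(1, len(block) - shingle)))

        def migratory_compress(data):
            """Group similar blocks together, then compress the reordered stream."""
            blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
            order = sorted(range(len(blocks)), key=lambda i: feature(blocks[i]))
            reordered = b"".join(blocks[i] for i in order)
            return zlib.compress(reordered), order  # `order` is the restore recipe

    Because a conventional compressor like zlib only finds matches within a bounded window, moving similar blocks next to each other lets it exploit redundancy it would otherwise never see.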

    Leveraging content properties to optimize distributed storage systems

    Cloud service providers, social networks, and data-management companies are witnessing a tremendous increase in the amount of data they receive every day. All this data creates new opportunities to expand human knowledge in fields like healthcare, urban planning, and human behavior, and to improve offered services such as search and recommendation. It is not by accident that many academics, as well as the public media, refer to our era as the Big Data era. But these huge opportunities come with the requirement for better data management systems that, on one hand, can safely accommodate this huge and constantly increasing volume of data and, on the other, serve it in a timely and useful manner so that applications can benefit from processing it. This document focuses on these two Big Data challenges. In more detail, we study (i) backup storage systems as a means to safeguard data against a number of factors that may render them unavailable, and (ii) data placement strategies on geographically distributed storage systems, with the goal of reducing user-perceived latencies while using network and storage resources efficiently. Throughout our study, data is placed at the centre of our design choices, as we try to leverage content properties for both placement and efficient storage.
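
    As a toy illustration of the latency-aware placement goal above (the thesis's actual strategies are content-aware and more involved), the sketch below greedily picks the replica sites with the lowest measured round-trip time to the requesting user that still have free capacity; `rtt_ms` and `capacity` are assumed inputs.

        def place_replicas(replicas_needed, rtt_ms, capacity):
            """Greedy latency-first placement.

            rtt_ms:   site -> measured round-trip time to the user (ms)
            capacity: site -> remaining free replica slots (mutated here)"""
            chosen = []
            for site, _ in sorted(rtt_ms.items(), key=lambda kv: kv[1]):
                if len(chosen) == replicas_needed:
                    break
                if capacity.get(site, 0) > 0:
                    capacity[site] -= 1
                    chosen.append(site)
            return chosen

        # Example: place_replicas(2, {"eu": 20, "us": 90, "asia": 140},
        #                         {"eu": 1, "us": 3, "asia": 5}) -> ["eu", "us"]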

    On Personal Storage Systems: Architecture and Design Considerations

    Increasingly, end users demand larger amounts of online storage space to store their personal information. This challenge motivates researchers to devise novel personal storage infrastructures. In this thesis, we focus on two popular personal storage architectures: Personal Clouds (centralized) and social storage systems (decentralized). In our view, despite their growing popularity among users and researchers, there remain critical aspects of these systems still to be addressed. In Part I of this dissertation, we examine various aspects of the internal operation and performance of several Personal Clouds. Concretely, we first contribute by unveiling the internal structure of a global-scale Personal Cloud, namely UbuntuOne (U1), including its architecture, its metadata service, and its data storage interactions. Moreover, we provide a back-end analysis of U1 that includes a study of the storage workload, user behavior, and the performance of the U1 metadata store. We also suggest improvements to U1 (storage optimizations, user behavior detection, and security) that can benefit similar systems. From an external viewpoint, we actively measure various Personal Clouds through their REST APIs to characterize their QoS, such as transfer speed, variability, and failure rate. We also demonstrate that combining open APIs and free accounts may lead to abuse by malicious parties, which motivates us to propose countermeasures to limit the impact of abusive applications in this scenario. In Part II of this thesis, we study the storage QoS of social storage systems in terms of data availability, load balancing, and transfer times. Our main interest is to understand how intrinsic phenomena, such as the dynamics of users and the structure of their social relationships, limit the storage QoS of these systems, and to research novel mechanisms that ameliorate these limitations. Finally, we design and evaluate a hybrid architecture that combines user resources and cloud storage to enhance the QoS achieved by a social storage system, letting users find the right balance between control over their data and QoS.
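
    A minimal sketch of the kind of active REST-API measurement described above: time a single authenticated GET and report the effective transfer speed. The endpoint, auth scheme, and use of the third-party `requests` library are assumptions, not any specific Personal Cloud's API.

        import time
        import requests  # third-party HTTP client: pip install requests

        def measure_download(url, token):
            """Time one GET and return the effective speed in MiB/s."""
            t0 = time.monotonic()
            r = requests.get(url, headers={"Authorization": f"Bearer {token}"},
                             timeout=60)
            r.raise_for_status()
            elapsed = time.monotonic() - t0
            return len(r.content) / elapsed / 2**20

    Repeating such probes over time and across accounts is what exposes the speed variability, and the free-account abuse potential, that the thesis reports.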

    Secure Data Sharing in Cloud Computing: A Comprehensive Review

    Get PDF
    Cloud computing is an emerging technology that relies on sharing computing resources. Sharing data within a group is not secure when the cloud provider cannot be trusted. The fundamental difficulties for cloud providers in distributed computing are data security, sharing, resource scheduling, and energy consumption. A key-aggregate cryptosystem can be used to secure private and public data in the cloud: a single constant-size aggregate key grants decryption rights over a flexible choice of ciphertexts in cloud storage. Virtual machine (VM) provisioning effectively enables cloud providers to make full use of their available resources and obtain higher profits. Methods for sharing data among group members in cloud storage must therefore be secure, flexible, and efficient. Data corrupted in cloud data centers can be recovered using regenerating codes. Security is provided by many techniques, such as forward security, backward security, key-aggregate cryptosystems, and encryption and re-encryption. Energy consumption is reduced using energy-efficient VM scheduling in multi-tenant data centers.
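
    The review mentions recovering corrupted data with regenerating codes. A true regenerating code minimizes repair bandwidth across data centers; as a much simpler stand-in that still shows single-failure recovery, here is a RAID-4-style XOR parity sketch (an illustrative assumption, not the reviewed scheme).

        def xor_blocks(blocks):
            """XOR equal-length byte blocks together."""
            out = bytearray(len(blocks[0]))
            for b in blocks:
                for i, byte in enumerate(b):
                    out[i] ^= byte
            return bytes(out)

        def encode(data_blocks):
            """Store k equal-length data blocks plus one XOR parity block."""
            return data_blocks + [xor_blocks(data_blocks)]

        def recover(stripe, lost_index):
            """Rebuild any single lost block as the XOR of the survivors."""
            survivors = [b for i, b in enumerate(stripe) if i != lost_index]
            return xor_blocks(survivors)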

    Three Temporal Perspectives on Decentralized Location-Aware Computing: Past, Present, Future

    During the past four decades, due to miniaturization, computing devices have become ubiquitous and pervasive. Today, the number of objects connected to the Internet is increasing at a rapid pace, and this trend does not seem to be slowing down. These objects, which can be smartphones, vehicles, or any kind of sensor, generate large amounts of data that are almost always associated with a spatio-temporal context. The amount of this data is often so large that processing it requires a distributed system involving the cooperation of several computers. The ability to process these data is important for society. For example, the data collected during car journeys already makes it possible to avoid traffic jams or to organize carpools. Another example: in the near future, maintenance interventions on the road network will be planned with data collected by gyroscopes that detect potholes. The application domains are therefore numerous, as are the problems associated with them. The articles that make up this thesis deal with systems that share two key characteristics: a spatio-temporal context and a decentralized architecture. In addition, the systems described in these articles revolve around three temporal perspectives: the present, the past, and the future. Systems associated with the present perspective enable a very large number of connected objects to communicate in near real time, according to a spatial context. Our contributions in this area enable this type of decentralized system to be scaled out on commodity hardware, i.e., to adapt as the volume of data arriving in the system increases. Systems associated with the past perspective, often referred to as trajectory indexes, provide access to the large volumes of spatio-temporal data collected by connected objects. Our contributions in this area make it possible to handle particularly dense trajectory datasets, a problem that had not been addressed previously. Finally, systems associated with the future perspective rely on past trajectories to predict the trajectories that connected objects will follow. Our contributions predict the trajectories followed by connected objects with a previously unmet granularity. Although they involve different domains, these contributions are structured around the common denominators of the underlying systems, which opens the possibility of dealing with these problems more generically in the near future.
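
    As a hedged sketch of the trajectory-indexing theme in the past perspective (not the thesis's actual index structure), the snippet below buckets trajectory points into a spatio-temporal grid so that a point query only scans one cell; the cell size and time-slot length are arbitrary assumptions.

        from collections import defaultdict

        CELL_DEG = 0.01  # grid cell of roughly 1 km at the equator (assumed)
        SLOT_S = 300     # 5-minute time buckets (assumed)

        class GridIndex:
            """Bucket (object, lat, lon, t) points by space-time cell."""
            def __init__(self):
                self.cells = defaultdict(list)

            @staticmethod
            def _key(lat, lon, t):
                return (int(lat / CELL_DEG), int(lon / CELL_DEG), int(t / SLOT_S))

            def insert(self, obj_id, lat, lon, t):
                self.cells[self._key(lat, lon, t)].append((obj_id, lat, lon, t))

            def query(self, lat, lon, t):
                """All points recorded in the same space-time cell."""
                return list(self.cells.get(self._key(lat, lon, t), []))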