
    Enhanced Cauchy Matrix Reed-Solomon Codes and Role-Based Cryptographic Data Access for Data Recovery and Security in Cloud Environment

    Ensuring proper authorization in computer systems is a significant challenge, particularly with the rise of open systems and dispersed platforms such as the cloud. Role-Based Access Control (RBAC) has been widely adopted in cloud server applications due to its popularity and versatility. Computer forensic investigations play a crucial role when access to data stored in the cloud must be authorized in order to collect evidence against offenders. As cloud service providers may not always be reliable, data confidentiality should be ensured within the system, and a proper revocation procedure is essential for managing users whose credentials have expired. With the increasing scale and distribution of storage systems, component failures have become more common, making fault tolerance a critical concern. In response, a secure data-sharing system has been developed that enables secure key distribution and data sharing for dynamic groups using role-based access control and AES encryption. Data recovery involves storing redundant data to withstand a certain level of data loss, and the erasure code method is employed to secure data across distributed systems. Erasure coding techniques such as Reed-Solomon codes can significantly reduce data storage costs while maintaining resilience against disk failures. In light of this, there is growing interest from academia and industry in developing innovative coding techniques for cloud storage systems. The research goal is to create a new coding scheme that enhances the efficiency of Reed-Solomon coding using the sophisticated Cauchy matrix to achieve fault tolerance
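
    The abstract does not detail the construction, so the following is only a minimal sketch of how Cauchy-matrix Reed-Solomon encoding typically works, assuming GF(2^8) with the common polynomial 0x11d; the evaluation points, (k, m) parameters and function names are illustrative, not taken from the thesis.

```python
# Sketch: Cauchy-matrix Reed-Solomon encoding over GF(2^8).
# Field polynomial, evaluation points and (k, m) are illustrative assumptions.

def build_gf_tables(poly=0x11d):
    """Log/antilog tables for GF(2^8) with generator 2 (primitive for 0x11d)."""
    exp, log = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & 0x100:
            x ^= poly
    for i in range(255, 512):
        exp[i] = exp[i - 255]
    return exp, log

EXP, LOG = build_gf_tables()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[255 - LOG[a]]

def cauchy_matrix(k, m):
    """m x k Cauchy matrix A[i][j] = 1 / (x_i + y_j) with disjoint x_i, y_j."""
    xs, ys = range(k, k + m), range(0, k)
    return [[gf_inv(x ^ y) for y in ys] for x in xs]

def encode(data_blocks, m):
    """Take k equal-length data blocks and produce m parity blocks."""
    k, size = len(data_blocks), len(data_blocks[0])
    A = cauchy_matrix(k, m)
    parity = [bytearray(size) for _ in range(m)]
    for i in range(m):
        for j in range(k):
            coef, block = A[i][j], data_blocks[j]
            for b in range(size):
                parity[i][b] ^= gf_mul(coef, block[b])
    return [bytes(p) for p in parity]

# Example: 4 data blocks plus 2 parity blocks.
parity = encode([b"abcd", b"efgh", b"ijkl", b"mnop"], m=2)
```

    Because every square submatrix of a Cauchy matrix is nonsingular, any k of the resulting k + m fragments suffice to reconstruct the data, which is the fault-tolerance property the abstract targets.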

    The Mojette erasure code: applications in networks and in the Cloud

    In this work, I present the value of the Mojette erasure code for fault-tolerant distributed storage architectures. In general, the coding approach halves the volume of stored data compared with the standard replication approach, which copies the data as many times as the number of failures assumed. More specifically, the Mojette erasure code offers the performance required for reading and writing hot data, i.e. data that is accessed very frequently. This I/O performance makes it possible, for example, to run virtual machines on data distributed by the RozoFS file system. In addition, I review my contributions in the field of self-organized networks, both P2P and mobile ad hoc, presenting the P2PWeb and MP-OLSR protocols respectively. This work is the result of 5 doctoral supervisions and 3 major collaborative projects
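
    The factor-of-two claim can be made concrete with a back-of-the-envelope comparison of storage overheads; the parameters below are illustrative assumptions, not the actual RozoFS or Mojette configuration.

```python
# Illustrative storage-overhead comparison: replication vs. erasure coding.
# The (k, failures) values are assumptions chosen to show the factor of two.

def replication_overhead(failures):
    # One extra full copy per tolerated failure.
    return 1.0 + failures

def erasure_overhead(k, failures):
    # k + m fragments, each 1/k of the original size, with m = failures.
    return (k + failures) / k

f = 2
print(replication_overhead(f))            # 3.0x the original data volume
print(erasure_overhead(k=4, failures=f))  # 1.5x, i.e. half the replicated volume
```

    With two tolerated failures, replication stores three times the original data while a (4, 2) code stores 1.5 times, i.e. half the replicated volume.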

    Community computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-186). In this thesis we lay the foundations for a distributed, community-based computing environment that taps the resources of a community to better perform tasks that are computationally hard, economically prohibitive, or physically inconvenient, and that one individual is unable to accomplish efficiently. We introduce community coding, where information systems meet social networks, to tackle some of the challenges in this new paradigm of community computation. We design algorithms and protocols and build system prototypes to demonstrate the power of community computation to better deal with reliability, scalability and security issues, which are the main challenges in many emerging community-computing environments, in several application scenarios such as community storage, community sensing and community security. For example, we develop a community storage system based upon a distributed P2P (peer-to-peer) storage paradigm, in which we take an array of small, periodically accessible, individual computers/peer nodes and create a secure, reliable and large distributed storage system. The goal is for each node to act as if it has immediate access to a pool of information larger than it could hold itself, and into which it can contribute new content in a manner that is both open and secure. Such a contributory and self-scaling community storage system is particularly useful where reliable infrastructure is not readily available, in that it facilitates easy ad-hoc construction and easy portability. In another application scenario, we develop a novel framework for community sensing with a group of image sensors. The goal is to present a set of novel tools in which software, rather than humans, examines the collection of images sensed by a group of image sensors to determine what is happening in the field of view. We also present several design principles concerning community security. In one application example, we present a community-based email spam detection approach to deal with email spam more efficiently. by Fulu Li. Ph.D

    Efficient data reliability management of cloud storage systems for big data applications

    Cloud service providers are consistently striving to provide efficient and reliable service for their clients' Big Data storage needs. Replication is a simple and flexible method to ensure the reliability and availability of data. However, it is not an efficient solution for Big Data, which routinely scales to terabytes and petabytes. Hence erasure coding is gaining traction despite its shortcomings. Deploying erasure coding in cloud storage confronts several challenges, such as encoding/decoding complexity, load balancing, excessive resource consumption due to data repair, and read latency. This thesis addresses several of these challenges. Even though data durability and availability should not be compromised for any reason, clients' requirements on read performance (access latency) may vary with the nature of the data and its access patterns. Access latency is an important metric, and the acceptable latency range can be recorded in the client's SLA. Several proactive recovery methods for erasure codes are proposed in this research to reduce the resource consumption caused by recovery. Also, a novel cache-based solution is proposed to mitigate the access latency issue of erasure coding
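
    The repair and degraded-read costs that motivate the proactive recovery and caching work can be illustrated with simple arithmetic; the RS(6, 3) and 3-way replication parameters below are common defaults chosen for illustration, not figures from the thesis.

```python
# Back-of-the-envelope comparison of RS(k, m) erasure coding and 3-way
# replication. Fragment size and code parameters are illustrative assumptions.

def rs_repair(k, m, fragment_mb):
    storage_overhead = (k + m) / k               # stored bytes per user byte
    bytes_read = k * fragment_mb                 # k surviving fragments rebuild one lost fragment
    amplification = bytes_read / fragment_mb     # read amplification = k
    return storage_overhead, bytes_read, amplification

def replication_repair(copies, fragment_mb):
    # One surviving copy rebuilds (or serves) the lost one directly.
    return copies, fragment_mb, 1.0

print(rs_repair(k=6, m=3, fragment_mb=64))           # (1.5, 384, 6.0)
print(replication_repair(copies=3, fragment_mb=64))  # (3, 64, 1.0)
```

    Erasure coding halves the storage footprint but reads k times more data than replication to repair or serve a lost fragment, which is exactly the trade-off the proactive recovery and caching techniques aim to soften.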

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols that allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not been considered for dynamic RDC schemes in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring small, constant client storage. The main challenge that has to be overcome is to reduce the client-server communication during updates under an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC schemes are useful to detect server misbehavior, but have no provisions to recover damaged data. Thus, in practice, they should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, can repair it by retrieving data from healthy servers, so that the reliability level is maintained. Previously, RDC has been investigated for replication-based and erasure coding-based distributed storage systems; however, it has not been investigated for network coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network coding-based distributed storage systems to ensure data remain intact when faced with data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers. The setting considered so far outsources the storage of the data, but the data owner is still heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed, in which the data owner fully outsources both the data storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who needs to expend a significant amount of computation and communication; thus, it is very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable server-side repair and minimize the load on the client side. Version control systems (VCS) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure that all data versions are retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest decrease in performance compared to a regular (non-secure) SVN system
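
    The schemes named above (R-DPDP, RDC-NC, RDC-SR, ERDC-SR, RDC-AVCS) rely on more sophisticated constructions such as homomorphic tags, but the basic audit idea behind remote data checking can be sketched with a simple keyed-tag spot check. Everything below, including the tag construction and the function names, is an illustrative assumption, not the dissertation's protocols.

```python
# Minimal spot-check style remote data audit: the client keeps only a key
# (constant client storage), outsources blocks plus per-block MAC tags, and
# periodically challenges random blocks. Real RDC/PDP schemes use compact
# homomorphic proofs instead of returning whole blocks.
import hmac, hashlib, os, random

BLOCK = 4096

def split(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def tag(key, index, block):
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def outsource(key, data):
    """Client: compute one tag per block, hand blocks and tags to the server."""
    blocks = split(data)
    return blocks, [tag(key, i, b) for i, b in enumerate(blocks)]

def challenge(n_blocks, sample=10):
    return random.sample(range(n_blocks), min(sample, n_blocks))

def respond(blocks, tags, indices):
    """Server: return the challenged blocks together with their stored tags."""
    return [(i, blocks[i], tags[i]) for i in indices]

def verify(key, response):
    """Client: recompute tags; any mismatch exposes a misbehaving server."""
    return all(hmac.compare_digest(tag(key, i, b), t) for i, b, t in response)

key = os.urandom(32)
blocks, tags = outsource(key, os.urandom(1 << 16))
assert verify(key, respond(blocks, tags, challenge(len(blocks))))
```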

    Enabling Private Real-Time Applications by Exploiting the Links Between Erasure Coding and Secret Sharing Mechanisms

    A huge amount of personal data is shared in real time by online users, increasingly using mobile devices and (unreliable) wireless channels. There is a large industry effort in the aggregation and analysis of this data to provide personalised services, and a corresponding research effort to enable the processing of such data in a secure and privacy-preserving way. Secret sharing is a mechanism that allows private data sharing, revealing the information only to a select group. A parallel research effort has been invested in addressing the performance of real-time mobile communication on lossy wireless channels, commonly improved by using erasure codes. In this thesis, we bring together the theoretically related fields of secret sharing and erasure coding to provide a rich source of solutions to the two problem areas. Our aim is to enable solutions that deliver the required performance level while being efficient and implementable. The thesis has the following contributions. We evaluate the applicability of a new class of Maximum Distance Separable (MDS) erasure codes to the transmission of real-time content to mobile devices and demonstrate that the systematic code outperforms the non-systematic variant with regard to computational complexity and buffer size requirements, making it practical for mobile devices. We propose a new Layered secret sharing scheme for real-time data sharing in Online Social Networks (OSNs). The proposed scheme enables automated profile sharing in OSN groups with fine-grained privacy control, via a multi-secret sharing scheme comprising layered shares, and it does not require reliance on a trusted third party. Compared to independent sharing of specific profile attributes (e.g. text, images or video), the scheme does not leak any information about what is shared, including the number of attributes, and it introduces relatively small computation and communication overhead. Finally, we investigate the links between MDS codes and secret sharing schemes, motivated by the inefficiency of the commonly used Shamir scheme. We derive the theoretical links between MDS codes and secret sharing schemes and propose a novel MDS code based construction method for strong ramp schemes. This allows the use of existing efficient implementations of MDS codes for secret sharing and secure computing applications. We demonstrate that strong ramp schemes deliver a significant reduction of processing time and communication overhead compared to the Shamir scheme
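
    As a point of reference for the efficiency argument, the baseline Shamir scheme can be sketched in a few lines over a prime field; the field size and (t, n) parameters below are illustrative, and this sketch is not the MDS-code-based strong ramp construction proposed in the thesis.

```python
# Minimal (t, n) Shamir secret sharing over a prime field. The thesis argues
# that strong ramp schemes built from MDS codes are far more efficient for
# bulk data; this only illustrates the baseline they are compared against.
import random

P = 2**127 - 1  # a Mersenne prime; the choice of field is an assumption

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

    Note the source of the inefficiency: every share is as large as the secret, whereas a ramp scheme spreads the secret over several shares, which is why the MDS-code view pays off for real-time data.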

    On the design and optimization of heterogeneous distributed storage systems

    Over the last decade, users’ storage demands have been growing exponentially year over year. Besides demanding more storage capacity and more data reliability, today users also demand the possibility to access their data from any location and from any device. These new needs encourage users to move their personal data (e.g., E-mails, documents, pictures, etc.) to online storage services such as Gmail, Facebook, Flickr or Dropbox. Unfortunately, these online storage services are built upon expensive large datacenters that only a few big enterprises can afford.
To reduce the costs of these large datacenters, a new wave of online storage services has recently emerged that integrates storage resources from different small datacenters, or even integrates user storage resources into the provider's storage infrastructure. However, the storage resources that compose these new storage infrastructures are highly heterogeneous, which poses a challenging problem for storage system designers: how to design reliable and efficient distributed storage systems over heterogeneous storage infrastructures? This thesis provides an analysis of the main problems that arise when one aims to answer this question, and it provides different tools to optimize the design of heterogeneous distributed storage systems. The contribution of this thesis is threefold. First, we provide a novel framework to analyze the effects that data redundancy has on the storage and communication costs of distributed storage systems. Given a generic redundancy scheme, the presented framework can predict the average storage costs and the average communication costs of a storage system deployed over a specific storage infrastructure. Second, we analyze the impact that data redundancy has on data availability and retrieval times. For a given redundancy and a heterogeneous storage infrastructure, we provide a set of algorithms to determine the expected data availability and expected retrieval times. Third, we design different data assignment policies for different storage scenarios. We differentiate between scenarios where the entire storage infrastructure is managed by the same organization, and scenarios where different parties contribute their storage resources. The aims of our assignment policies are: (i) to minimize the required redundancy, (ii) to guarantee fairness among all parties, and (iii) to encourage different parties to contribute their local storage resources to the system
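
    One quantity such a framework must predict, the expected data availability of a (k, n) redundancy scheme spread over nodes with unequal availabilities, can be computed with a short dynamic program. The sketch below is a generic illustration with made-up node availabilities, not the algorithms developed in the thesis.

```python
# Probability that at least k of n independently failing fragments are
# reachable, given heterogeneous per-node availabilities p_i.
# The availability values below are illustrative, not measured data.

def data_availability(node_avail, k):
    # dp[j] = probability that exactly j of the nodes seen so far are up
    dp = [1.0] + [0.0] * len(node_avail)
    for p in node_avail:
        for j in range(len(dp) - 1, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= (1 - p)
    return sum(dp[k:])

# A (k=4, n=7) code spread over nodes of varying quality:
print(data_availability([0.99, 0.95, 0.90, 0.85, 0.80, 0.70, 0.60], k=4))
```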

    Static Web content distribution and request routing in a P2P overlay

    Collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. In order to cope with the limitations introduced by the traditional client-server networking model, most popular Web-based services have employed separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally being adopted to optimize the content delivery latencies perceived by Web clients. Because of this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further adds to the resources required from the originating server, as content is also accessed on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet
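
    The abstract describes the overlay only at the design level; as a generic illustration of how request routing is commonly done in P2P overlays, the sketch below uses consistent hashing over a ring of peer identifiers. The peer names, hash function and parameters are assumptions, and this is not the specific mechanism proposed in the thesis.

```python
# Consistent-hashing lookup on a ring of peers: a common building block for
# P2P request routing. Generic illustration with assumed peer names.
import bisect, hashlib

def h(value):
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, peers, vnodes=64):
        # Each peer gets several virtual points on the ring to even out load.
        self.points = sorted((h(f"{p}#{i}"), p) for p in peers for i in range(vnodes))
        self.keys = [pt for pt, _ in self.points]

    def route(self, url):
        """Return the peer responsible for caching/serving this URL."""
        i = bisect.bisect(self.keys, h(url)) % len(self.points)
        return self.points[i][1]

ring = Ring(["peerA", "peerB", "peerC"])
print(ring.route("http://example.org/index.html"))
```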