
    Enhanced Cauchy Matrix Reed-Solomon Codes and Role-Based Cryptographic Data Access for Data Recovery and Security in Cloud Environment

    Get PDF
In computer systems, ensuring proper authorization is a significant challenge, particularly with the rise of open systems and dispersed platforms such as the cloud. Role-Based Access Control (RBAC) has been widely adopted in cloud server applications due to its popularity and versatility. Computer forensic investigations play a crucial role when granting authorized access to data stored in the cloud for collecting evidence against offenders. As cloud service providers may not always be reliable, data confidentiality should be ensured within the system. Additionally, a proper revocation procedure is essential for managing users whose credentials have expired. With the increasing scale and distribution of storage systems, component failures have become more common, making fault tolerance a critical concern. In response, a secure data-sharing system has been developed that enables secure key distribution and data sharing for dynamic groups using role-based access control and AES encryption. Data recovery involves storing redundant data to withstand a certain level of data loss. To secure data across distributed systems, the erasure code method is employed. Erasure coding techniques such as Reed-Solomon codes can significantly reduce data storage costs while maintaining resilience against disk failures. In light of this, there is growing interest from academia and industry in developing innovative coding techniques for cloud storage systems. The research goal is to create a new coding scheme that enhances the efficiency of Reed-Solomon coding by using a Cauchy matrix to achieve fault tolerance.
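    A hedged sketch of the Cauchy-matrix idea the abstract builds on (not the paper's actual scheme): the code below constructs a Cauchy matrix over GF(2^8), every square submatrix of which is invertible, the property a Reed-Solomon-style MDS erasure code needs from its coding matrix. The field polynomial 0x11D, the generator, and the block counts are illustrative assumptions.

```python
# Illustrative only: a Cauchy coding matrix over GF(2^8).
# Assumed field polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x11D), for which
# 2 is a primitive element (a common Reed-Solomon choice, not the paper's).

EXP = [0] * 512   # antilog table, doubled so products need no reduction
LOG = [0] * 256   # log table (LOG[0] is never used)

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8); a full encoder would apply this to data blocks."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_inv(a: int) -> int:
    return EXP[255 - LOG[a]]   # a * a^-1 = EXP[255] = 1

def cauchy_matrix(m: int, k: int) -> list[list[int]]:
    """m x k matrix with C[i][j] = 1 / (x_i XOR y_j). The x and y sets are
    disjoint, so the denominator is never zero, and every square submatrix
    of a Cauchy matrix is invertible -- the MDS property."""
    xs = range(k, k + m)   # x_i values
    ys = range(0, k)       # y_j values
    return [[gf_inv(xi ^ yj) for yj in ys] for xi in xs]

C = cauchy_matrix(m=2, k=4)   # 2 parity rows over 4 data blocks
```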

    Service-oriented models for audiovisual content storage

    No full text
What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report examines how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; the kinds of data transfer expected to and from an audiovisual archive; the transfer protocols to use; and a summary of security and interface issues.

    DDEAS: Distributed Deduplication System with Efficient Access in Cloud Data Storage

    Get PDF
Cloud storage service is one of the vital functions of cloud computing, helping cloud users outsource massive volumes of data without upgrading their devices. However, cloud data storage offered by Cloud Service Providers (CSPs) faces data-redundancy problems. The data deduplication technique aims to eliminate redundant data segments and keep a single instance of a data set, even if the same data set is owned by any number of users. Since data blocks are distributed among multiple individual servers, the user needs to download each block of a file before reconstructing it, which reduces system efficiency. We propose a server-level data recovery module in the cloud storage system to improve file access efficiency and reduce network bandwidth utilization. In the proposed method, erasure coding is used to store blocks in distributed cloud storage, and MD5 (Message Digest 5) is used for data integrity. Executing the recovery algorithm helps the user fetch the file directly without downloading each block from the cloud servers. The proposed scheme improves the time efficiency of the system and gives quick access to the stored data, and thus consumes less network bandwidth.
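    As a rough sketch of the block-level deduplication and MD5-integrity idea described above (the block size, in-memory store, and all names are assumptions for illustration, not the paper's design):

```python
# Toy dedup store: one copy per unique block, indexed by its fingerprint;
# the MD5 fingerprint doubles as the integrity tag, as in the abstract.
import hashlib

BLOCK_SIZE = 4096              # assumed fixed block size
store: dict[str, bytes] = {}   # fingerprint -> single stored instance

def put(data: bytes) -> list[str]:
    """Split data into blocks and store each unique block exactly once."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.md5(block).hexdigest()
        store.setdefault(fp, block)   # dedup: already-known blocks are skipped
        recipe.append(fp)
    return recipe                     # the file is now a list of fingerprints

def get(recipe: list[str]) -> bytes:
    """Rebuild the file, checking every block against its MD5 tag."""
    out = bytearray()
    for fp in recipe:
        block = store[fp]
        if hashlib.md5(block).hexdigest() != fp:
            raise ValueError("integrity check failed for block " + fp)
        out += block
    return bytes(out)
```

    Note that MD5 is adequate as a corruption check but is not collision-resistant, so an adversarial setting would call for SHA-256 or similar.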

    Network Coding for Distributed Cloud, Fog and Data Center Storage

    Get PDF

    Opportunistic Networks: Present Scenario- A Mirror Review

    Get PDF
An Opportunistic Network (OPPNET) is a form of Delay Tolerant Network (DTN) and is regarded as an extension of the Mobile Ad Hoc Network. OPPNETs are designed to operate especially in environments beset by issues such as high error rates, intermittent connectivity, high delay, and the absence of a defined route between source and destination nodes. OPPNETs work on the principle of a "store-and-forward" mechanism, with intermediate nodes performing the routing from node to node: an intermediate node stores a message in its memory until a suitable node comes into communication range to carry the message toward the destination. OPPNETs also face challenges such as high delay and latency, node energy efficiency, security, and high error rates. The aim of this paper is to survey the routing protocols available to date for OPPNETs and classify them in terms of their performance. The paper also gives a quick review of the mobility models and simulation tools available for OPPNET simulation.
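    A toy sketch of the store-and-forward principle described above (all names hypothetical; real OPPNET routing protocols decide far more selectively which contact to use):

```python
# Each node buffers messages it cannot deliver and hands them to whatever
# node comes into range -- naive, epidemic-style forwarding.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.buffer = []   # messages stored while no suitable contact exists

    def receive(self, msg: dict):
        if msg["dst"] == self.name:
            print(f"{self.name}: delivered {msg['payload']!r}")
        else:
            self.buffer.append(msg)   # store until a forwarding opportunity

    def contact(self, other: "Node"):
        """Called when `other` enters communication range."""
        for msg in list(self.buffer):
            other.receive(msg)
            self.buffer.remove(msg)

a, b, c = Node("A"), Node("B"), Node("C")
a.receive({"dst": "C", "payload": "hello"})   # A has no route to C yet
a.contact(b)   # B stores and carries the message...
b.contact(c)   # ...and delivers it on meeting C
```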

    LEGOStore: A Linearizable Geo-Distributed Store Combining Replication and Erasure Coding

    Full text link
We design and implement LEGOStore, an erasure coding (EC) based linearizable data store over geo-distributed public cloud data centers (DCs). For such a data store, the confluence of the following factors opens up opportunities for EC to be latency-competitive with replication: (a) the necessity of communicating with remote DCs to tolerate entire DC failures and implement linearizability; and (b) the emergence of DCs near most large population centers. LEGOStore employs an optimization framework that, for a given object, carefully chooses among replication and EC, as well as among various DC placements, to minimize overall costs. To handle workload dynamism, LEGOStore employs a novel agile reconfiguration protocol. Our evaluation using a LEGOStore prototype spanning 9 Google Cloud Platform DCs demonstrates the efficacy of our ideas. We observe cost savings ranging from moderate (5-20%) to significant (60%) over baselines representing the state of the art while meeting tail latency SLOs. Our reconfiguration protocol is able to transition key placements in 3 to 4 inter-DC RTTs (< 1 s in our experiments), allowing for agile adaptation to dynamic conditions.
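    To make the replication-versus-EC trade-off concrete, a back-of-envelope storage comparison (illustrative arithmetic only, not LEGOStore's cost model, which also weighs network prices, placement, and latency SLOs):

```python
# Storage overhead of r-way replication vs a (k, m) erasure code,
# both configured to survive the loss of two entire data centers.

def replication_overhead(replicas: int) -> float:
    return float(replicas)        # r full copies -> r x the object size

def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k            # k data + m parity fragments

print(replication_overhead(3))   # 3.0x: tolerates 2 DC losses
print(ec_overhead(4, 2))         # 1.5x: also tolerates 2 fragment losses,
                                 # but reads must contact several DCs
```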

    RAID Organizations for Improved Reliability and Performance: A Not Entirely Unbiased Tutorial (1st revision)

    Full text link
The original RAID proposal advocated replacing large disks with arrays of PC disks, but as the capacity of small disks increased 100-fold in the 1990s, the production of large disks was discontinued. Storage dependability is increased via replication or erasure coding. Cloud storage providers store multiple copies of data, obviating the need for further redundancy. Variations of RAID based on local recovery codes and partial MDS codes reduce recovery cost. NAND flash Solid State Disks (SSDs) have low latency and high bandwidth, are more reliable, consume less power, and have a lower TCO than Hard Disk Drives, making them more viable for hyperscalers.
Comment: Submitted to ACM Computing Surveys. arXiv admin note: substantial text overlap with arXiv:2306.0876
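    As a minimal illustration of the parity-based redundancy RAID relies on (a toy XOR parity in the style of RAID-5, not the local recovery or partial MDS codes the tutorial covers):

```python
# One XOR parity block allows any single lost block to be rebuilt.

def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three disks
parity = xor_blocks(data)            # parity block on a fourth disk

# Disk 2 fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```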

Discrete Simulation of DDN IME® for architecture prototyping

    Get PDF