
    Replica Creation Algorithm for Data Grids

    A data grid system is a data management infrastructure that facilitates reliable access to and sharing of large amounts of data, storage resources, and data transfer services, and that can be scaled across distributed locations. This thesis presents a new replication algorithm that improves data access performance in data grids by distributing relevant data copies around the grid. The new Data Replica Creation Algorithm (DRCM) improves the performance of data grid systems by reducing job execution time and making the best use of data grid resources (network bandwidth and storage space). Current algorithms focus on the number of accesses when deciding which files to replicate and where to place them, ignoring the capabilities of the resources. DRCM differs by considering both user and resource perspectives, strategically placing replicas at locations that provide the lowest transfer cost. The proposed algorithm uses three strategies: a Replica Creation and Deletion Strategy (RCDS), a Replica Placement Strategy (RPS), and a Replica Replacement Strategy (RRS). DRCM was evaluated through network simulation (OptorSim) using selected performance metrics (mean job execution time, effective network usage, average storage usage, and computing element usage), scenarios, and topologies. Results revealed better job execution times with lower resource consumption than existing approaches. This research contributes replication strategies embodied in a single algorithm that enhances data grid performance and can decide to create or delete more than one file within the same decision. Furthermore, a dependency-level-between-files criterion was utilized and integrated with an exponential growth/decay model to give an accurate file evaluation.
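
    As a hypothetical sketch of the kind of file evaluation hinted at above: a file's replication value could combine its exponentially decayed access history with a dependency level between files, and a replica could be placed at the candidate site with the lowest transfer cost. The weights, decay rate, and function names below are assumptions for illustration, not the exact DRCM formulas.

        # Sketch only: decayed access score plus dependency weight, then cheapest feasible site.
        import math

        def file_value(access_times, dependency_level, now, decay_rate=0.1, dep_weight=0.5):
            # Recent accesses count more than old ones (exponential decay), and files
            # that other files depend on are worth more.
            decayed_accesses = sum(math.exp(-decay_rate * (now - t)) for t in access_times)
            return decayed_accesses + dep_weight * dependency_level

        def best_replica_site(file_size, candidate_sites):
            # candidate_sites: list of (site, bandwidth, free_storage); pick the site
            # with enough free storage and the lowest estimated transfer cost.
            feasible = [(file_size / bw, site) for site, bw, free in candidate_sites
                        if free >= file_size]
            return min(feasible)[1] if feasible else None

        now = 100.0
        score = file_value(access_times=[95.0, 98.0, 99.5], dependency_level=3, now=now)
        site = best_replica_site(file_size=2.0,               # GB
                                 candidate_sites=[("site-A", 0.5, 10.0),
                                                  ("site-B", 1.0, 1.0),
                                                  ("site-C", 2.0, 5.0)])
        print(round(score, 2), site)   # higher score -> stronger candidate for replication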

    Design, Implementation and Experiments for Moving Target Defense Framework

    The traditional defensive security strategy for distributed systems employs well-established techniques such as redundancy/replication, firewalls, and encryption to prevent attackers from taking control of the system. However, given sufficient time and resources, all of these methods can be defeated, especially when dealing with sophisticated attacks from advanced adversaries that leverage zero-day exploits.

    Big Data Analysis-Based Secure Cluster Management for Optimized Control Plane in Software-Defined Networks

    In software-defined networks (SDNs), the abstracted control plane is the defining characteristic, and its core component is the software-based controller. The control plane is logically centralized, but the controllers can be physically distributed and composed of multiple nodes. To meet the service management requirements of large-scale network scenarios, the control plane is usually implemented in the form of distributed controller clusters. Cluster management technology monitors all types of events and must maintain a consistent global network status, which usually leads to big data in SDNs. At the same time, cluster security is an open issue because of the programmable and dynamic features of SDNs. To address these challenges, this paper proposes a big data analysis-based secure cluster management architecture for an optimized control plane. A security authentication scheme is proposed for cluster management. Moreover, we propose an ant colony optimization-based big data analysis scheme, together with an implementation system, that optimizes the control plane. Simulations and comparisons show the feasibility and efficiency of the proposed scheme, which is significant in improving the security and efficiency of the SDN control plane.
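
    The abstract does not detail its ant colony optimization (ACO) formulation, so the following is only a generic ACO sketch applied to one plausible control-plane problem: assigning switches to distributed controllers so that total switch-to-controller latency stays low. The latency matrix, parameters, and the aco_assign name are assumptions, not the paper's scheme.

        # Generic ACO sketch under the stated assumptions.
        import random

        def aco_assign(latency, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.1):
            n_switches, n_ctrls = len(latency), len(latency[0])
            # pheromone[s][c]: learned desirability of assigning switch s to controller c
            pheromone = [[1.0] * n_ctrls for _ in range(n_switches)]
            best_assign, best_cost = None, float("inf")
            for _ in range(n_iters):
                for _ in range(n_ants):
                    assign = []
                    for s in range(n_switches):
                        # choice probability ~ pheromone^alpha * (1/latency)^beta
                        weights = [pheromone[s][c] ** alpha * (1.0 / latency[s][c]) ** beta
                                   for c in range(n_ctrls)]
                        r, acc, choice = random.random() * sum(weights), 0.0, n_ctrls - 1
                        for c, w in enumerate(weights):
                            acc += w
                            if r <= acc:
                                choice = c
                                break
                        assign.append(choice)
                    cost = sum(latency[s][assign[s]] for s in range(n_switches))
                    if cost < best_cost:
                        best_assign, best_cost = assign, cost
                # evaporate pheromone, then reinforce the best assignment found so far
                for s in range(n_switches):
                    for c in range(n_ctrls):
                        pheromone[s][c] *= (1.0 - rho)
                for s, c in enumerate(best_assign):
                    pheromone[s][c] += 1.0 / best_cost
            return best_assign, best_cost

        latency = [[3.0, 1.0], [2.0, 4.0], [1.5, 1.5]]   # ms, switches x controllers
        print(aco_assign(latency))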

    SEEDS: Secure Decentralized Storage for Authentication Material

    Applications that use passwords or cryptographic keys to authenticate users or perform cryptographic operations rely on centralized solutions. Trusted Platform Modules (TPMs) do not offer a way to replicate material, making it difficult to access this information in a heterogeneous environment. Meanwhile, remote services require a constant network connection and are a central point of failure. We present SEEDS, a secure decentralized multi-user data store that generates, stores, and operates on users' authentication material, such as passwords and cryptographic keys, on local machines. To ensure the confidentiality and integrity of user accounts and cryptographic keys, SEEDS leverages Intel SGX, a hardware-based trusted execution environment, to store and operate on this data while protecting it from a compromised host. We support user-defined policies that restrict users' operations, protecting against a malicious user attempting to access data without sufficient privileges. In addition, we replicate data across machines to improve accessibility and support offline participants for high availability. We implement the storage data structure using Conflict-Free Replicated Data Types (CRDTs) to replicate data, recover gracefully from network partitions, and offer a horizontally scalable system. We developed two applications that demonstrate the benefits of our system. First, we address centralized user authentication issues by implementing a database module that replaces and decentralizes LDAP user authentication. Second, we improve the management of users' cryptographic keys by developing a software U2F token that replicates this material across machines for high availability.
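
    To make the CRDT idea concrete, here is a minimal sketch of a state-based last-writer-wins map, illustrating how replicated metadata can be merged without coordination. This is a generic CRDT example, not SEEDS's actual data structure; the class name, methods, and the (timestamp, replica_id) tie-break are assumptions.

        # Minimal state-based LWW-map CRDT sketch.
        import time

        class LWWMap:
            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.entries = {}  # key -> (value, timestamp, replica_id)

            def put(self, key, value):
                self.entries[key] = (value, time.time(), self.replica_id)

            def get(self, key):
                entry = self.entries.get(key)
                return entry[0] if entry else None

            def merge(self, other):
                # Keep, for each key, the write with the highest (timestamp, replica_id).
                # Merging is commutative, associative, and idempotent, so replicas that
                # exchange state in any order converge to the same map.
                for key, entry in other.entries.items():
                    mine = self.entries.get(key)
                    if mine is None or (entry[1], entry[2]) > (mine[1], mine[2]):
                        self.entries[key] = entry

        # Two replicas diverge while offline, then converge after exchanging state.
        a, b = LWWMap("a"), LWWMap("b")
        a.put("alice/u2f-counter", 7)
        b.put("alice/u2f-counter", 9)
        a.merge(b); b.merge(a)
        assert a.get("alice/u2f-counter") == b.get("alice/u2f-counter")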

    ASYMMETRIC DISTRIBUTED LOCK MANAGEMENT IN CLOUD COMPUTING

    Cloud computing has become part of our daily lives. It offers a dynamic environment for customers to store and access their data at any time, in any location. The development of social networks has led to the need for a solution that is easily accessible and available when required. Cloud computing provides a solution that does not depend on location and can offer a wide range of services while remaining free from failures and errors. Although the usage of cloud storage services keeps increasing, a significant number of issues, such as sudden server failures, network partitioning, and natural disasters, still need to be carefully addressed. Another point that is vital for a sustainable cloud is the implementation of an algorithm that coordinates and maintains concurrent access and keeps shared files free from errors. One of the main approaches to overcoming these problems is to provide a set of servers that act as a gateway between clients and storage nodes. In this thesis we propose a new approach that provides an alternative solution to the main problems related to cloud storage. The approach is based on multiple strategies for eliminating the problems of node failure and network partitioning while providing a completely distributed environment. In our approach, every server acts as a master server for its own requests and can serve its clients without interacting with other master servers. Concurrent access is maintained in an asymmetric way through our lock manager algorithm, with the least communication among master servers. Depending on the state of a specific file, a master server can execute any received request without communicating with other master servers; only when additional information is required does further communication occur. In our approach, network partitioning or the failure of one or more master servers has no effect on the rest of the cloud. To improve availability, we associate every master server with a failover server that takes over the duties of a master when the master server fails or becomes obsolete. To measure the performance of our approach we have performed various tests, and the results are presented in detailed graphs.
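
    The following is a hedged sketch of the asymmetric idea described above: each master decides lock requests for files whose state it already holds, and contacts another master only when it lacks the needed information. The class, method names, and the ownership rule are illustrative assumptions, not the thesis's actual protocol.

        # Sketch: local decision when state is known, single forward otherwise.
        class MasterServer:
            def __init__(self, name, peers=None):
                self.name = name
                self.locks = {}          # file -> holder, for files this master tracks
                self.peers = peers or {} # file -> MasterServer that tracks that file

            def acquire(self, file, client):
                if file in self.locks:                  # state known locally: decide here
                    if self.locks[file] is None:
                        self.locks[file] = client
                        return True
                    return False                        # already held by another client
                if file in self.peers:                  # state held elsewhere: one forward
                    return self.peers[file].acquire(file, client)
                self.locks[file] = client               # unknown file: adopt it and grant
                return True

            def release(self, file, client):
                if self.locks.get(file) == client:
                    self.locks[file] = None
                elif file in self.peers:
                    self.peers[file].release(file, client)

        # A grants locally; B forwards the conflicting request to A and is denied.
        a = MasterServer("A")
        b = MasterServer("B", peers={"report.txt": a})
        assert a.acquire("report.txt", "client-1") is True
        assert b.acquire("report.txt", "client-2") is False
        a.release("report.txt", "client-1")
        assert b.acquire("report.txt", "client-2") is True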

    Auditable and performant Byzantine consensus for permissioned ledgers

    Permissioned ledgers allow users to execute transactions against a data store and retain proof of their execution in a replicated ledger. Each replica verifies the transactions' execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today's permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low, hampering real-world deployments, because they do not take advantage of multi-core CPUs and hardware accelerators. This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity, even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. This thesis makes the following contributions: 1. Always-auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally verifiable receipts. 2. Performant transaction execution with hardware accelerators. Next, we describe a cloud-based ML inference service that provides strong integrity guarantees while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute machine learning (ML) inference computation on GPUs to optimize the throughput and latency of ML inference. 3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions in parallel on multi-core CPUs. We separate the execution of transactions between the primary and secondary replicas. The primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions, which the backup replicas use to execute the transactions in parallel.
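
    As an illustrative sketch (not the thesis's actual protocol) of the dependency-graph idea in contribution 3: a primary could derive conflict information from each transaction's read/write sets so that backups replay non-conflicting transactions in parallel, wave by wave. The transaction format and conflict rule below are assumptions.

        # Sketch: wave scheduling from read/write-set conflicts.
        from concurrent.futures import ThreadPoolExecutor

        def conflicts(t1, t2):
            # Two transactions conflict if one writes a key the other reads or writes.
            return (t1["writes"] & (t2["reads"] | t2["writes"]) or
                    t2["writes"] & (t1["reads"] | t1["writes"]))

        def build_waves(txns):
            # Place each transaction in the earliest wave after all earlier conflicting
            # transactions, so ledger order is preserved where it matters.
            waves, level = [], {}
            for i, t in enumerate(txns):
                lvl = 0
                for j in range(i):
                    if conflicts(txns[j], t):
                        lvl = max(lvl, level[j] + 1)
                level[i] = lvl
                while len(waves) <= lvl:
                    waves.append([])
                waves[lvl].append(t)
            return waves

        def replay(txns, apply_fn):
            # Backups execute each wave in parallel; waves run one after another.
            for wave in build_waves(txns):
                with ThreadPoolExecutor() as pool:
                    list(pool.map(apply_fn, wave))

        txns = [
            {"id": 1, "reads": {"a"}, "writes": {"b"}},
            {"id": 2, "reads": {"c"}, "writes": {"d"}},   # independent of txn 1
            {"id": 3, "reads": {"b"}, "writes": {"a"}},   # depends on txn 1
        ]
        replay(txns, lambda t: print("applied", t["id"]))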

    Transferring big data across the globe

    Transmitting data via the Internet is a routine and common task for users today. The amount of data transmitted by the average user has increased dramatically over the past few years. Transferring a gigabyte of data over an entire day was once normal; now users transmit multiple gigabytes in a single hour. With the influx of big data and massive scientific data sets measured in tens of petabytes, users have the propensity to transfer even larger amounts of data. When data sets of this magnitude are transferred over public or shared networks, the performance of all workloads in the system is impacted. This dissertation addresses the issues and challenges inherent in transferring big data over shared networks. A survey of current transfer techniques is provided, and these techniques are evaluated in simulated, experimental, and live environments. The main contribution of this dissertation is the development of a new "nice" model for big data transfers, based on a store-and-forward methodology instead of an end-to-end approach. The nice model ensures that big data transfers occur only when there is idle bandwidth that can be repurposed for these large transfers. The model improves overall performance and significantly reduces transmission times for big data transfers, and it allows for efficient transfers regardless of time zone differences or variations in bandwidth between sender and receiver. Nice is the first model that addresses the challenges of transferring big data across the globe.
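
    A hedged sketch of the store-and-forward idea described above: an intermediate node buffers chunks of a big transfer and forwards them only while the outgoing link has idle capacity. The utilization probe, threshold, and chunk size below are hypothetical; this is not the dissertation's nice model itself.

        # Sketch: forward staged chunks only when measured utilization is low.
        import collections
        import random
        import time

        IDLE_THRESHOLD = 0.5             # forward only if link utilization < 50%

        def link_utilization():
            # Placeholder probe; in practice this would come from interface
            # counters or active measurements of the outgoing link.
            return random.random()

        def store_and_forward(chunks, send, poll_interval=1.0):
            queue = collections.deque(chunks)   # chunks staged on local storage
            while queue:
                if link_utilization() < IDLE_THRESHOLD:
                    send(queue.popleft())       # repurpose idle bandwidth
                else:
                    time.sleep(poll_interval)   # back off while foreground traffic runs

        chunks = [f"chunk-{i}" for i in range(5)]
        store_and_forward(chunks, send=lambda c: print("forwarded", c), poll_interval=0.1)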

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols which allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not been considered for dynamic RDC schemes in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates while requiring small, constant client storage. The main challenge that has to be overcome is to reduce the client-server communication during updates under an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC schemes are useful for detecting server misbehavior, but have no provisions to recover damaged data. Thus, in practice, they should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, can repair it by retrieving data from healthy servers, so that the reliability level is maintained. Previously, RDC has been investigated for replication-based and erasure-coding-based distributed storage systems. However, RDC has not been investigated for network-coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network-coding-based distributed storage systems that ensures data remains intact in the face of data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both clients and servers. The setting considered so far outsources the storage of the data, but the data owner remains heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who needs to expend a significant amount of computation and communication, making it very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable Server-side Repair and minimize the load on the client side. Version control systems (VCSs) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure that all data versions are retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest decrease in performance compared to a regular (non-secure) SVN system.
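
    To illustrate the basic RDC flow, here is a simplified challenge-response spot-check: the client keeps small per-block tags (HMACs) and later challenges the server to return randomly chosen blocks, verifying them against the stored tags. This generic sketch is an assumption for illustration only; the schemes in the dissertation (R-DPDP, RDC-NC, RDC-SR, RDC-AVCS) use more sophisticated constructions with constant client storage.

        # Sketch: per-block HMAC tags, random-index challenge, client-side verification.
        import hashlib
        import hmac
        import random

        def tag_blocks(key, blocks):
            # Client-side preprocessing: one small tag per block, kept by the client.
            return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
                    for i, b in enumerate(blocks)]

        def challenge(num_blocks, sample_size=3):
            # Client picks a random subset of block indices to audit.
            return random.sample(range(num_blocks), sample_size)

        def respond(blocks, indices):
            # Untrusted server returns the requested blocks.
            return [blocks[i] for i in indices]

        def verify(key, tags, indices, returned_blocks):
            # Client recomputes tags over the returned blocks and compares.
            return all(
                hmac.compare_digest(
                    tags[i],
                    hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest())
                for i, b in zip(indices, returned_blocks))

        key = b"client-secret-key"
        blocks = [f"block-{i}".encode() for i in range(10)]
        tags = tag_blocks(key, blocks)           # done before outsourcing
        idx = challenge(len(blocks))             # periodic audit
        assert verify(key, tags, idx, respond(blocks, idx))
        blocks[idx[0]] = b"corrupted"            # server misbehaves
        assert not verify(key, tags, idx, respond(blocks, idx))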

    NFV Platforms: Taxonomy, Design Choices and Future Challenges

    Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm. During the last few years, a large number of NFV platforms have been implemented in production environments, which typically face critical challenges including the development, deployment, and management of Virtual Network Functions (VNFs). Nonetheless, just like any complex system, such platforms commonly consist of many software and hardware components and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of convoluted alternatives makes it extremely arduous for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focused on NFV platforms or attempted to explore their design space. In this paper, we present a comprehensive survey of NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on the features they provide in terms of a typical network function life cycle. Then we thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We also envision future challenges for NFV platform design in the coming 5G era. We believe that our study gives a detailed guideline for network operators or service providers to choose the most appropriate NFV platform based on their respective requirements. Our work also provides guidelines for implementing new NFV platforms.

    It's about THYME: On the design and implementation of a time-aware reactive storage system for pervasive edge computing environments

    This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT-MCTES) through project DeDuCe (PTDC/CCI-COM/32166/2017), NOVA LINCS UIDB/04516/2020, and grant SFRH/BD/99486/2014, and by the European Union through project LightKone (grant agreement n. 732505). Nowadays, smart mobile devices generate huge amounts of data in all sorts of gatherings. Much of that data has localized and ephemeral interest, but can be of great use if shared among co-located devices. However, mobile devices often experience poor connectivity, leading to availability issues if application storage and logic are fully delegated to a remote cloud infrastructure. In turn, the edge computing paradigm pushes computation and storage beyond the data center, closer to end-user devices where data is generated and consumed, enabling the execution of certain components of edge-enabled systems directly and cooperatively on edge devices. In this article, we address the challenge of supporting reliable and efficient data storage and dissemination among co-located wireless mobile devices without resorting to centralized services or network infrastructures. We propose THYME, a novel time-aware reactive data storage system for pervasive edge computing environments that exploits synergies between the storage substrate and the publish/subscribe paradigm. We present the design of THYME and elaborate a three-fold evaluation, through an analytical study and both simulation and real-world experiments, characterizing the scenarios best suited for its use. The evaluation shows that THYME allows the notification and retrieval of relevant data with low overhead and latency, as well as low energy consumption, proving to be a practical solution in a variety of situations.
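
    A hedged sketch of a time-aware publish/subscribe store in the spirit of the system described above: items carry tags and a validity window, and a subscription is notified both of future matching publications and of past ones that are still valid. The class and method names are assumptions, not THYME's actual API.

        # Sketch: tag-matched notifications plus delivery of still-valid past items.
        import time

        class TimeAwareStore:
            def __init__(self):
                self.items = []          # (tags, payload, publish_time, expiry_time)
                self.subscriptions = []  # (tags, callback)

            def publish(self, tags, payload, ttl):
                now = time.time()
                item = (set(tags), payload, now, now + ttl)
                self.items.append(item)
                for sub_tags, callback in self.subscriptions:
                    if sub_tags & item[0]:
                        callback(payload)            # notify active subscribers

            def subscribe(self, tags, callback):
                now = time.time()
                tags = set(tags)
                self.subscriptions.append((tags, callback))
                for item_tags, payload, _, expiry in self.items:
                    if tags & item_tags and expiry > now:
                        callback(payload)            # deliver still-valid past items

        store = TimeAwareStore()
        store.publish({"concert", "photo"}, "IMG_001.jpg", ttl=60)
        store.subscribe({"concert"}, lambda p: print("got", p))   # receives IMG_001.jpg
        store.publish({"concert"}, "IMG_002.jpg", ttl=60)          # also delivered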