
    Revisiting Content Availability in Distributed Online Social Networks

    Online Social Networks (OSNs) are among the most popular applications in today's Internet. Decentralized online social networks (DOSNs), a special class of OSNs, promise better privacy and autonomy than traditional centralized OSNs. However, ensuring availability of content when the content owner is not online remains a major challenge. In this paper, we rely on the structure of the social graphs underlying DOSNs for replication. In particular, we propose that friends, who are interested in the content anyway, are used to replicate the user's content. We study the availability of such natural replication schemes via both theoretical analysis and simulations based on data from OSN users. We find that the availability of the content increases drastically compared to the online time of the user alone, e.g., by a factor of more than 2 for 90% of the users. Thus, with these simple schemes we provide a baseline for any more sophisticated content replication scheme. (Comment: 11 pages, 12 figures; technical report at TU Berlin, Department of Electrical Engineering and Computer Science, ISSN 1436-9915.)
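
    To make the availability gain concrete, here is a minimal sketch (not the paper's code; the sessions and numbers are invented) that models content availability as the union of the online sessions of the owner and the friends holding replicas:

```python
# Illustrative sketch (not the paper's code): content is available whenever
# the owner or any friend holding a replica is online. Online sessions are
# modeled as (start, end) intervals in hours over a 24h window.

def union_length(intervals):
    """Total length covered by the union of (start, end) intervals."""
    covered = 0.0
    last_end = None
    for start, end in sorted(intervals):
        if last_end is None or start > last_end:
            covered += end - start
            last_end = end
        elif end > last_end:
            covered += end - last_end
            last_end = end
    return covered

# Hypothetical sessions: the owner plus two replicating friends.
owner_sessions = [(9, 12), (20, 22)]             # owner: 5h online
friend_sessions = [(0, 4), (11, 15), (18, 24)]   # friends' combined sessions

owner_avail = union_length(owner_sessions) / 24
combined_avail = union_length(owner_sessions + friend_sessions) / 24
print(f"owner alone: {owner_avail:.0%}, with friend replicas: {combined_avail:.0%}")
print(f"availability gain factor: {combined_avail / owner_avail:.1f}x")
```

    With the invented sessions above, friend replicas lift availability from roughly 21% to 67% of the day, a gain factor above 3, which illustrates the kind of improvement the abstract reports.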

    Exploring heterogeneity of unreliable machines for p2p backup

    A P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, today's standard solution, making backups directly on the organization's workstations should be cheaper (as existing hardware is reused), more efficient (as there is no single bottleneck server), and more reliable (as the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the optimization goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent, and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet, our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
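
    As a rough illustration of how pairwise replication contracts can encode a placement strategy, the following sketch (our own construction; the Machine/Contract types and the greedy rule are assumptions, not the system's API) picks replicators by availability while enforcing geographic dispersion:

```python
# Illustrative sketch (assumptions, not the system's actual code): a data
# owner establishes pairwise replication contracts, choosing replicators
# by a pluggable optimization strategy -- here, highest availability first,
# with at most one replica per site for geographic dispersion.

from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    availability: float  # fraction of time online, e.g. 0.13
    site: str            # coarse location, used for dispersion

@dataclass
class Contract:
    owner: str
    replicator: str

def place_replicas(owner, candidates, k):
    """Greedy strategy: prefer available machines at distinct sites."""
    chosen, used_sites = [], set()
    for m in sorted(candidates, key=lambda m: m.availability, reverse=True):
        if m.site in used_sites:
            continue  # enforce geographic dispersion
        chosen.append(Contract(owner.name, m.name))
        used_sites.add(m.site)
        if len(chosen) == k:
            break
    return chosen

owner = Machine("ws-01", 0.13, "lab-A")
peers = [Machine("ws-02", 0.40, "lab-A"), Machine("ws-03", 0.25, "lab-B"),
         Machine("ws-04", 0.60, "lab-B"), Machine("ws-05", 0.10, "lab-C")]
for c in place_replicas(owner, peers, k=3):
    print(c)
```

    Swapping the sort key or the site constraint yields the other strategies the abstract mentions, such as minimizing network load or backup/restore time.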

    Emerge: Self-Emerging Data Release Using Cloud Data Storage

    In the age of Big Data, advances in distributed technologies and cloud storage services provide highly efficient and cost-effective solutions to large-scale data storage and management. Supporting self-emerging data using clouds, however, is a challenging problem. While straightforward centralized approaches provide a basic solution, they are unfortunately limited to a single point of trust. Supporting attack-resilient timed release of encrypted data stored in clouds requires new mechanisms for self-emergence of data encryption keys, enabling encrypted data to become accessible at a future point in time. Prior to the release time, the encryption key remains undiscovered and unavailable in a secure distributed system, making the private data inaccessible. In this paper, we propose Emerge, a self-emerging timed data release protocol for securely hiding the data encryption keys of private encrypted data in a large-scale Distributed Hash Table (DHT) network, making the data available and accessible only at the defined release time. We develop a suite of erasure-coding-based routing path construction schemes for securely storing and routing encryption keys in DHT networks that prevent an adversary from inferring the encryption key prior to the release time (release-ahead attack) or from destroying the key altogether (drop attack). Through extensive experimental evaluation, we demonstrate that the proposed schemes are resilient to both release-ahead and drop attacks, as well as to attacks that arise from ordinary churn in DHT networks.
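
    The following sketch illustrates the threshold idea behind scattering key material across DHT nodes; it uses Shamir-style secret sharing as a stand-in for the paper's erasure-coding-based schemes (a simplification of ours, not Emerge's actual construction):

```python
# Illustrative sketch (our own simplification, not Emerge's scheme): split a
# data-encryption key into n shares so that any k suffice to reconstruct it,
# then imagine scattering the shares across DHT nodes. A release-ahead
# attacker must capture k shares early; a drop attacker must destroy more
# than n - k shares to make the key unrecoverable.

import random

PRIME = 2**127 - 1  # prime field for Shamir-style secret sharing

def split(secret, n, k):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split(key, n=7, k=4)  # 7 DHT nodes, any 4 recover the key
assert reconstruct(random.sample(shares, 4)) == key
print("key recovered from 4 of 7 shares")
```

    The k-of-n trade-off captures the tension in the abstract: a larger k raises the bar for release-ahead attacks, while a larger n - k margin tolerates more drop attacks and churn.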

    Timed-Release of Self-Emerging Data Using Distributed Hash Tables

    Releasing private data to the future is a challenging problem. Making private data accessible at a future point in time requires mechanisms to keep the data secure and undiscovered, so that it is not available prior to the legitimate release time and appears automatically at the expected release time. In this paper, we develop new mechanisms to support self-emerging data storage that securely hide the keys of encrypted data in a Distributed Hash Table (DHT) network, making the encryption keys automatically appear at the predetermined release time so that the protected encrypted private data can be decrypted. We show that a straightforward approach of privately storing keys in a DHT is prone to a number of attacks that could either make the hidden data appear before the prescribed release time (release-ahead attack) or destroy the hidden data altogether (drop attack). We develop a suite of self-emerging key routing mechanisms for securely storing and routing encryption keys in the DHT, and show that the proposed scheme is resilient to both release-ahead and drop attacks, as well as to attacks that arise from ordinary churn in DHT networks. Our experimental evaluation demonstrates the performance of the proposed schemes in terms of attack resilience and churn resilience.
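
    To sketch the timed-routing idea, the toy code below (a hypothetical simplification; the hop derivation and hold windows are our assumptions, not the paper's mechanism) schedules a key to hop along pseudorandomly derived DHT node IDs so that it only surfaces at the release time:

```python
# Illustrative sketch (hypothetical simplification of the routing idea):
# the key travels along a chain of DHT node IDs, each hop holding it for a
# fixed window, so no single node sees it for long and it only surfaces
# at the final hop at the release time.

import hashlib
import time

def hop_id(seed, hop):
    """Derive a pseudorandom 160-bit DHT node ID for each hop."""
    return hashlib.sha1(f"{seed}:{hop}".encode()).hexdigest()

def build_route(seed, setup_time, release_time, hold_seconds):
    """Schedule: which node ID holds the key during which interval."""
    hops = int((release_time - setup_time) / hold_seconds)
    return [(hop_id(seed, h), setup_time + h * hold_seconds) for h in range(hops)]

now = time.time()
route = build_route("demo-seed", now, now + 3600, hold_seconds=600)
for node, takeover in route:
    print(f"node {node[:12]}... holds key from t+{int(takeover - now)}s")
```

    A single chain like this is fragile under churn and drop attacks, which is exactly why the paper develops a suite of more robust routing mechanisms rather than one path.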

    Efficient and adaptive congestion control for heterogeneous delay-tolerant networks

    Detecting and dealing with congestion in delay-tolerant networks (DTNs) is an important and challenging problem. Current DTN forwarding algorithms typically direct traffic towards more central nodes in order to maximise delivery ratios and minimise delays, but as traffic demands increase these nodes may become saturated and unusable. We propose CafRep, an adaptive congestion-aware protocol that detects and reacts to congested nodes and congested parts of the network by using implicit hybrid contact and resource congestion heuristics. CafRep exploits a localised, relative-utility-based approach to offload traffic from more to less congested parts of the network, and to replicate at an adaptively lower rate in parts of the network with higher congestion levels. We extensively evaluate our work against benchmark and competitive protocols across a range of metrics over three real connectivity and GPS traces: Sassy [44], San Francisco Cabs [45] and Infocom 2006 [33]. We show that CafRep performs well independently of network connectivity and mobility patterns, and consistently outperforms state-of-the-art DTN forwarding algorithms in the face of increasing congestion, maintaining higher availability and success ratios while keeping delays, packet loss rates and delivery cost low. We test CafRep in two application scenarios, with fixed-rate traffic and with real-world Facebook application traffic demands, showing that regardless of the type of traffic it delivers, CafRep reduces congestion and improves forwarding performance.
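
    As a rough sketch of the relative-utility idea (our own reading; the weights, scores and threshold are invented, not CafRep's actual heuristics), the code below discounts a contact's delivery utility by its congestion score before deciding whether to replicate to it:

```python
# Illustrative sketch (our reading of the idea, not CafRep itself): a
# relative-utility forwarding test that combines a contact's delivery
# utility with resource congestion signals (buffer occupancy, drop rate),
# so replication slows down toward congested parts of the network.

def congestion(buffer_used, buffer_size, recent_drop_rate):
    """Combine resource heuristics into a 0..1 congestion score."""
    return 0.5 * (buffer_used / buffer_size) + 0.5 * recent_drop_rate

def should_replicate(my_utility, peer_utility, peer_congestion, threshold=0.1):
    """Replicate only when the congestion-discounted utility gain is large enough."""
    relative_gain = (peer_utility - my_utility) * (1.0 - peer_congestion)
    return relative_gain > threshold

# A central but saturated contact is refused; a moderately useful,
# uncongested one is accepted.
print(should_replicate(0.4, peer_utility=0.9,
                       peer_congestion=congestion(1000, 1000, 0.8)))  # False
print(should_replicate(0.4, peer_utility=0.7,
                       peer_congestion=congestion(200, 1000, 0.0)))   # True
```

    The key behaviour this mimics is that a highly central contact stops attracting traffic once its resources saturate, which is how the protocol offloads traffic to less congested parts of the network.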

    Replica Placement for Availability in the Worst Case
