'Q-Feed' - An Effective Solution for the Free-riding Problem in Unstructured P2P Networks
This paper presents a solution for reducing the ill effects of free-riders in
decentralised unstructured P2P networks. An autonomous replication scheme is
proposed to improve availability and enhance system performance. Q-learning is
employed to improve the accuracy of each peer's decision making: based on the
observed performance of its neighbours, a peer awards each neighbour a rank
level, and a low-performing node is allowed to improve its rank in several
ways. Simulation results show that the Q-learning based free-riding control
mechanism effectively limits the services received by free-riders and also
encourages low-performing neighbours to improve their position. Popular files
are autonomously replicated to nodes possessing the required parameters;
because more copies of popular files become available, free-riders are given
the opportunity to lift their position through active participation in file
sharing. Q-Feed effectively manages queries from free-riders and reduces
network traffic significantly.

Comment: 14 pages, 10 figures
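The abstract does not give the paper's actual update rule, but the idea of scoring neighbours with Q-learning and mapping scores to rank levels can be sketched as follows. The learning rate, the rank thresholds, and the peer names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): a peer keeps a
# Q-value per neighbour, updates it after each interaction with a reward
# reflecting the neighbour's contribution, and maps Q-values to ranks.

ALPHA = 0.3  # learning rate (assumed value)

def update_q(q, neighbour, reward):
    """Bandit-style single-step Q-learning update."""
    old = q.get(neighbour, 0.0)
    q[neighbour] = old + ALPHA * (reward - old)
    return q[neighbour]

def rank(q_value):
    """Map a Q-value to a discrete rank level (thresholds are assumptions)."""
    if q_value >= 0.7:
        return "high"
    if q_value >= 0.3:
        return "medium"
    return "low"  # low-performing peers can still climb by contributing

q = {}
update_q(q, "peer-42", reward=1.0)  # neighbour served a file
update_q(q, "peer-42", reward=1.0)  # contributed again: Q rises to 0.51
update_q(q, "peer-99", reward=0.0)  # free-rider: no contribution, Q stays 0
```

A free-rider's rank stays "low" until it starts contributing, which is how services to it can be limited while still leaving a path upward.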
Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to
dedicated backup servers, nowadays the standard solution, making backups
directly on an organization's workstations should be cheaper (as existing
hardware is used), more efficient (as there is no single bottleneck server)
and more reliable (as the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to
standard p2p storage systems that use a DHT directly, the contracts allow our
system to optimize replica placement according to a specific optimization
strategy, and so to take advantage of the heterogeneity of the machines and the
network. Such optimization is particularly appealing in the context of backup:
replicas can be geographically dispersed, the load sent over the network can be
minimized, or the optimization goal can be to minimize the backup/restore time.
However, managing the contracts, keeping them consistent and adjusting them in
response to a dynamically changing environment is challenging.
We built a scientific prototype and ran the experiments on 150 workstations
in the university's computer laboratories and, separately, on 50 PlanetLab
nodes. We found that the main factor affecting the quality of the system is
the availability of the machines. Yet, our main conclusion is that it is
possible to build an efficient and reliable backup system on highly unreliable
machines (our computers had just 13% average availability).
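One way to read the pairwise-contract idea is as a small data structure plus a placement policy. The sketch below is a hypothetical illustration, assuming a greedy strategy that prefers the most available machines; the field names, availability figures, and the strategy itself are assumptions, not the paper's actual design.

```python
# Hypothetical sketch of pairwise replication contracts: each contract
# binds a data owner to one replicator, and a pluggable strategy decides
# placement. Here the strategy greedily picks the most available machines.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    owner: str
    replicator: str

def place_replicas(owner, candidates, availability, k):
    """Greedy placement: choose the k candidates with the highest
    measured availability (one possible optimization strategy)."""
    ranked = sorted(candidates, key=lambda m: availability[m], reverse=True)
    return [Contract(owner, m) for m in ranked[:k]]

# Illustrative availability measurements for four workstations
avail = {"ws1": 0.13, "ws2": 0.45, "ws3": 0.08, "ws4": 0.31}
contracts = place_replicas("owner-a", ["ws1", "ws2", "ws3", "ws4"], avail, k=2)
```

Because contracts are explicit objects rather than DHT key assignments, the placement function could equally optimize for geographic dispersion or network load instead of availability.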
Implications of query caching for JXTA peers
This dissertation studies the caching of queries and how to cache in an efficient way, so that retrieving previously accessed data does not need any intermediary nodes between the data-source peer and the querying peer in a super-peer P2P network. A precise algorithm was devised that demonstrated how queries can be deconstructed to provide greater flexibility for reusing their constituent elements. It showed how subsequent queries can make use of more than one previous query, and of any part of those queries, to re-establish direct data communication with one or more source peers that have supplied data previously. In effect, a new query can search and exploit the entire cached list of queries to construct the list of data locations it requires that might match any locations previously accessed. The new method increases the likelihood that repeat queries can reuse earlier queries and provides a viable way of bypassing shared data indexes in structured networks. It could also increase the efficiency of unstructured networks by reducing traffic and the propensity for network flooding. In addition, performance evaluation for predicting query routing performance using a UML sequence diagram is introduced. This new method of performance evaluation provides designers with information about when it is most beneficial to use caching and how peer connections can be optimized to exploit it.
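The core idea of deconstructing queries so that later queries can reuse any parts of earlier ones can be sketched roughly as follows. This is a minimal illustration assuming a simple term-based decomposition; the class, method names, and example peers are invented for the sketch, not taken from the dissertation.

```python
# Hedged sketch of query decomposition: cache each answered query as its
# constituent terms mapped to the source peers that supplied the data.
# A new query is then served from any combination of cached terms,
# contacting those peers directly; only uncached terms still need a
# search through intermediary/super-peer nodes.

class QueryCache:
    def __init__(self):
        self.term_sources = {}  # term -> set of source peers

    def record(self, query_terms, source_peer):
        """Store which peer answered each term of a completed query."""
        for term in query_terms:
            self.term_sources.setdefault(term, set()).add(source_peer)

    def lookup(self, query_terms):
        """Return peers known to hold data for some terms, plus the
        terms that still require a normal network search."""
        known, missing = set(), []
        for term in query_terms:
            peers = self.term_sources.get(term)
            if peers:
                known |= peers
            else:
                missing.append(term)
        return known, missing

cache = QueryCache()
cache.record(["jazz", "mp3"], "peerA")   # earlier query answered by peerA
cache.record(["mp3", "flac"], "peerB")   # earlier query answered by peerB
known, missing = cache.lookup(["jazz", "flac", "video"])
```

Here the new query reuses parts of two different earlier queries (one term from each), so only the genuinely new term triggers routed search traffic.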
Structured P2P Technologies for Distributed Command and Control
The utility of Peer-to-Peer (P2P) systems extends far beyond traditional file sharing. This paper provides an overview of how P2P systems are capable of providing robust command and control for Distributed Multi-Agent Systems (DMASs). Specifically, this article presents the evolution of P2P architectures to date by discussing the supporting technologies and applicability of each generation of P2P systems. It provides a detailed survey of fundamental design approaches found in modern large-scale P2P systems, highlighting design considerations for building and deploying scalable P2P applications. The survey includes unstructured P2P systems, content retrieval systems, communications structured P2P systems, flat structured P2P systems and, finally, Hierarchical Peer-to-Peer (HP2P) overlays. It concludes with a presentation of design tradeoffs and opportunities for future research into P2P overlay systems.
Providing Freshness for Cached Data in Unstructured Peer-to-Peer Systems
Replication is a popular technique for increasing data availability and improving performance in peer-to-peer systems. Maintaining freshness of replicated data is challenging due to the high cost of update management. While updates have been studied in structured networks, they have been neglected in unstructured networks. We therefore confront the problem of maintaining fresh replicas of data in unstructured peer-to-peer networks. We propose techniques that leverage path replication to support efficient lazy updates and provide freshness for cached data in these systems using only local knowledge. In addition, we show that locally available information may be used to provide additional guarantees of freshness at an acceptable cost to performance. Through performance simulations based on both synthetic and real-world workloads from big data environments, we demonstrate the effectiveness of our approach.
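A lazy update scheme using only local knowledge can be illustrated with version-stamped replicas: a peer refreshes its cached copy opportunistically when a query passing along the replication path reveals a newer version. The class and versioning details below are assumptions for illustration, not the paper's exact protocol.

```python
# Illustrative sketch of lazy freshness maintenance: each cached replica
# carries a version number, and a peer adopts a fresher copy only when it
# happens to observe one (e.g. on a query path) - no global coordination.

class Replica:
    def __init__(self, value, version):
        self.value = value
        self.version = version

    def maybe_refresh(self, seen_value, seen_version):
        """Lazily adopt a fresher copy observed locally; ignore stale ones."""
        if seen_version > self.version:
            self.value, self.version = seen_value, seen_version
            return True
        return False

r = Replica("v1-data", version=1)
updated = r.maybe_refresh("v3-data", seen_version=3)  # fresher copy: adopted
stale = r.maybe_refresh("v2-data", seen_version=2)    # older copy: ignored
```

The appeal of this style of update is that freshness improves as a side effect of normal query traffic, which matches the "only local knowledge" constraint in the abstract.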
Simulation of Dissemination Strategies on Temporal Networks
In distributed environments, such as distributed ledger technologies and other peer-to-peer architectures, communication is a crucial topic. The ability to efficiently disseminate contents is strongly influenced by the type of system architecture, the protocol used to spread such contents over the network and the actual dynamicity of the communication links (i.e. static vs. temporal nets). In particular, dissemination strategies focus either on achieving optimal coverage, on minimizing network traffic, or on providing anonymity assurances (a fundamental requirement of many cryptocurrencies). In this work, the behaviour of multiple dissemination protocols is discussed and studied through simulation. The performance evaluation has been carried out on temporal networks with the help of LUNES-temporal, a discrete event simulator that allows testing algorithms running in a distributed environment. The experiments show that some gossip protocols allow either saving a considerable number of messages or providing better anonymity guarantees, at the cost of slightly lower coverage and/or a slight increase in delivery time.
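The coverage-versus-traffic tradeoff of gossip dissemination can be sketched with a minimal push protocol: each informed node forwards the message to a fixed number of randomly chosen neighbours (the fanout), so a smaller fanout saves messages but risks lower coverage. This is a generic illustration on a static snapshot, not LUNES-temporal itself; the topology, fanout, and round count are assumptions.

```python
# Minimal push-gossip sketch: informed nodes forward to `fanout` random
# neighbours each round. Returns the set of informed nodes and the total
# number of messages sent, the two quantities traded off in the abstract.
import random

def gossip(adjacency, origin, fanout, rounds, seed=0):
    rng = random.Random(seed)        # fixed seed for a reproducible run
    informed = {origin}
    frontier = [origin]
    messages = 0
    for _ in range(rounds):
        next_frontier = []
        for node in frontier:
            neigh = adjacency[node]
            for target in rng.sample(neigh, min(fanout, len(neigh))):
                messages += 1
                if target not in informed:
                    informed.add(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return informed, messages

# Small ring-with-chords topology (illustrative), 8 nodes
adj = {i: [(i - 1) % 8, (i + 1) % 8, (i + 3) % 8] for i in range(8)}
covered, sent = gossip(adj, origin=0, fanout=2, rounds=4)
```

Raising `fanout` to the full neighbour count approximates flooding (maximal coverage, maximal traffic), while lowering it mimics the message-saving protocols the experiments compare.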
P2P architecture for scientific collaboration
P2P networks are often associated with file-exchange applications among private users. However, their features make them suitable for other uses. In this paper we present a P2P architecture for scientific collaboration networks, which takes advantage of the properties inherent in these social networks - small-world structure, clustering, community structure, assortative mixing, preferential attachment, and small and stable groups - in order to obtain better performance, efficient use of resources and system resilience.

Peer Reviewed