
    Enhanced Document Search and Sharing Tool in Organized P2p Frameworks

    Get PDF
    In Internet P2P file sharing systems, file querying generates a large share of the traffic, and its performance largely determines the performance of the system. To improve file query performance, peers with common interests can be clustered based on physical proximity. Existing methods are dedicated to unstructured P2P systems and lack a strict policy for topology construction, which decreases file location efficiency. This project proposes a proximity-aware, interest-clustered P2P file sharing system implemented on a structured P2P system. It forms clusters based on node proximity and further groups nodes with common interests into sub-clusters. A novel DHT-based lookup function and a file replication algorithm support efficient file lookup and access and reduce overhead and file searching delay. However, file querying may still become inefficient when a sub-interest supernode is overloaded or fails; thus, although sub-interest-based file querying improves querying efficiency, it is not sufficiently scalable when a sub-interest group contains a very large number of nodes. We therefore propose a distributed intra-sub-cluster file querying method to further improve file querying efficiency.
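
    A minimal sketch of the proximity-plus-interest clustering idea, assuming a hypothetical key layout in which the high-order bits of a 160-bit DHT key encode the interest category (so files of a common interest map to nearby IDs) and proximity is judged by binned RTTs to a fixed landmark set; the paper's actual lookup function and replication algorithm may differ:

        import hashlib

        def dht_key(interest: str, filename: str) -> int:
            # Hypothetical 160-bit key: top 64 bits from the interest hash,
            # low 96 bits from the file-name hash, so files that share an
            # interest land in adjacent regions of the DHT ID space.
            interest_bits = int(hashlib.sha1(interest.encode()).hexdigest(), 16) >> 96
            file_bits = int(hashlib.sha1(filename.encode()).hexdigest(), 16) & ((1 << 96) - 1)
            return (interest_bits << 96) | file_bits

        def proximity_signature(rtts_to_landmarks, bin_ms=50):
            # Peers bin their RTTs to a fixed landmark set; identical
            # signatures are treated as physically close (same cluster).
            return tuple(int(r // bin_ms) for r in rtts_to_landmarks)

    Peers sharing a proximity signature form a cluster, and within it the interest prefix of the key routes a query directly to the sub-cluster holding files of that interest.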

    A Novel File Search And Sharing Component In Controlled P2p Structures

    Get PDF
    In Internet P2P file sharing systems, file querying generates substantial traffic and indicates the performance of the P2P system. To improve file query performance, peers with common interests are clustered according to physical proximity. Existing strategies are restricted to unstructured P2P systems and have no strict policy for topology construction, which decreases file location efficiency. This project proposes a proximity-aware, interest-clustered P2P file sharing system realized on a structured P2P file system. It forms clusters based on node proximity and additionally groups nodes with common interests into sub-clusters. A novel DHT-based lookup function and file replication algorithm support efficient file lookup and access and reduce overhead and file searching delay. File querying may nevertheless become inefficient because of sub-interest supernode overload or failure: although sub-interest-based file querying improves querying efficiency, it is still not adequately scalable when there is a very large number of nodes in a sub-interest group. We then propose a distributed intra-sub-cluster file querying technique to further improve file querying efficiency.

    AN INNOVATIVE DATA QUERY SYSTEM FOR COMMON INTERESTS OF NEIGHBOURS

    Get PDF
    The popularity of the Internet is a strong motivation for peer-to-peer file sharing. For a peer-to-peer file sharing system, a key criterion is the efficiency of file location. In our work we present a peer-to-peer file sharing system that is proximity-aware and interest-clustered, built on a structured peer-to-peer system. It forms clusters of close nodes, then groups nodes of common interest into sub-clusters based on a hierarchical topology, and applies an intelligent file replication scheme to improve file query effectiveness. The proposed system retains all the advantages of distributed hash tables over unstructured peer-to-peer systems. Being proximity-aware and interest-clustered, it uses intelligent file replication to improve file search efficiency and places files with the same interests together, making them reachable through the routing function. The system further improves intra-sub-cluster file searching through several approaches. It builds an overlay for each group that connects lower-capacity nodes to higher-capacity nodes for distributed file querying while avoiding node overload. The proposed system also uses a collection of positive file information so that a file requester can tell whether the requested file is at its nearby nodes.

    A New Approach to Intra-Sub-Cluster File Searching implemented in p2p File System

    Get PDF
    Internet P2P file sharing systems generate heavy traffic. To improve file query performance, peers with common interests are clustered based on physical proximity. This project proposes a proximity-aware, interest-clustered P2P file sharing system implemented on a structured P2P file system. It forms clusters based on node proximity and groups nodes with common interests into sub-clusters. A novel DHT-based lookup function and file replication algorithm support efficient file lookup and access. To reduce overhead and file searching delay, the system maintains a collection of file information, and a Bloom filter technique is used to cut file sharing delay, as sketched below. Finally, the proposed approach shows its efficiency in file search, sharing, and overhead.
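
    A brief sketch of the Bloom filter technique mentioned above, assuming it summarizes which files nearby nodes hold so a requester can skip nodes that certainly lack a file; the filter size and hash count are illustrative, not the paper's parameters:

        import hashlib

        class BloomFilter:
            # Compact set summary: no false negatives, tunable false-positive rate.
            def __init__(self, size_bits=8192, num_hashes=4):
                self.size, self.k = size_bits, num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item):
                for i in range(self.k):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(digest[:8], "big") % self.size

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def might_contain(self, item):
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    A node would periodically share its filter with neighbours; a requester then queries a neighbour only when might_contain() returns True, trading a small false-positive rate for far fewer wasted lookups.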

    Sampling cluster endurance for peer-to-peer based content distribution networks

    Get PDF
    Several types of Content Distribution Networks are being deployed over the Internet today, based on different architectures to meet their requirements (e.g., scalability, efficiency and resiliency). Peer-to-peer (P2P) based Content Distribution Networks are promising approaches that have several advantages. Structured P2P networks, for instance, take a proactive approach and provide efficient routing mechanisms. Nevertheless, their maintenance cost can increase considerably in highly dynamic P2P environments. In order to address this issue, a two-tier architecture called Omicron that combines a structured overlay network with a clustering mechanism is suggested in a hybrid scheme. In this paper, we examine several sampling algorithms utilized in the aforementioned hybrid network that collect local information in order to apply a selective join procedure. Additionally, we apply the sampling algorithms on Chord in order to evaluate sampling as a general information gathering mechanism. The algorithms are based mostly on random walks inside the overlay networks. The aim of the selective join procedure is to provide a well-balanced and stable overlay infrastructure that can easily overcome the unreliable behavior of the autonomous peers that constitute the network. The sampling algorithms are evaluated using simulation experiments as well as probabilistic analysis where several properties related to the graph structure are revealed.
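
    A minimal sketch of random-walk sampling over an overlay, assuming the overlay is given as an adjacency map and that the selective join scores candidates with some caller-supplied metric (the score function is hypothetical); the walks evaluated in the paper add bias correction and statistics collection not shown here:

        import random

        def random_walk_sample(neighbors, start, steps):
            # Walk 'steps' hops over the overlay, recording each visited
            # peer; 'neighbors' maps a node id to its neighbor list.
            node, samples = start, []
            for _ in range(steps):
                node = random.choice(neighbors[node])
                samples.append(node)
            return samples

        def selective_join(neighbors, start, steps, score):
            # Selective join idea: among sampled peers, attach to the one
            # with the best score (e.g., measured stability or spare capacity).
            return max(random_walk_sample(neighbors, start, steps), key=score)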

    Impact of the Inaccuracy of Distance Prediction Algorithms on Internet Applications--an Analytical and Comparative Study

    Get PDF
    Distance prediction algorithms use O(N) Round Trip Time (RTT) measurements to predict the N² RTTs among N nodes. Distance prediction can be applied to improve the performance of a wide variety of Internet applications: for instance, to guide the selection of a download server from multiple replicas, or to guide the construction of overlay networks or multicast trees. Although the accuracy of existing prediction algorithms has been extensively compared using the relative prediction error metric, their impact on applications has not been systematically studied. In this paper, we consider distance prediction algorithms from an application's perspective to answer the following questions: (1) Are existing prediction algorithms adequate for the applications? (2) Is there a significant performance difference between the different prediction algorithms, and which is the best from the application perspective? (3) How does the prediction error propagate to affect the user-perceived application performance? (4) How can we address the fundamental limitation (i.e., inaccuracy) of distance prediction algorithms? We systematically experiment with three types of representative applications (overlay multicast, server selection, and overlay construction), three distance prediction algorithms (GNP, IDES, and the triangulated heuristic), and three real-world distance datasets (King, PlanetLab, and AMP). We find that, although using prediction can improve the performance of these applications, the achieved performance can be dramatically worse than the optimal case where the real distances are known. We formulate statistical models to explain this performance gap. In addition, we explore various techniques to improve the prediction accuracy and the performance of prediction-based applications. We find that selectively conducting a small number of measurements based on prediction-based screening is most effective.
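
    As a concrete illustration, a sketch of the triangulated heuristic (one of the three algorithms studied) together with the relative prediction error metric; the midpoint rule shown is one common formulation, not necessarily the exact variant used in the paper:

        def triangulated_estimate(rtt_a, rtt_b):
            # rtt_a[i], rtt_b[i]: measured RTTs from hosts a and b to common
            # landmark i. The triangle inequality bounds the unknown RTT(a, b)
            # between max|d(a,l) - d(b,l)| and min(d(a,l) + d(b,l));
            # return the midpoint of those bounds as the prediction.
            lower = max(abs(x - y) for x, y in zip(rtt_a, rtt_b))
            upper = min(x + y for x, y in zip(rtt_a, rtt_b))
            return (lower + upper) / 2

        def relative_error(predicted, actual):
            # Relative prediction error metric used to compare algorithms.
            return abs(predicted - actual) / min(predicted, actual)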

    A Decentralized Network Coordinate System for Robust Internet Distance

    Full text link
    Network distance, measured as round-trip latency between hosts, is important for the performance of many Internet applications. For example, nearest server selection and proximity routing in peer-to-peer networks rely on the ability to select nodes based on inter-host latencies. This paper presents PCoord, a decentralized network coordinate system for Internet distance prediction. In PCoord, the network is modeled as a D-dimensional geometric space; each host computes its coordinates in this geometric space to characterize its network location based on a small number of peer-to-peer network measurements. The goal is to embed hosts in the geometric space so that the Euclidean distance between two hosts' coordinates accurately predicts their actual inter-host network latency. PCoord constructs network coordinates in a fully decentralized fashion. We present several mechanisms in PCoord to stabilize the system convergence. Our simulation results using real Internet measurements suggest that, even under an extremely challenging flash-crowd scenario where 1740 hosts simultaneously join the system, PCoord with a 5-dimensional Euclidean model is able to converge to 11% median prediction error in 10 coordinate updates per host on average.
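
    To illustrate the embedding idea (not PCoord's actual update rule, which adds its own stabilization mechanisms), a Vivaldi-style gradient step that nudges a host's D-dimensional coordinates toward agreement with one measured RTT:

        import math

        def update_coords(mine, peer, measured_rtt, step=0.05):
            # Move our coordinates so the Euclidean distance to 'peer' better
            # matches the measured RTT; repeated over many peers, the
            # embedding error shrinks.
            dist = math.dist(mine, peer) or 1e-9
            error = dist - measured_rtt              # > 0: embedded too far apart
            unit = [(m - p) / dist for m, p in zip(mine, peer)]
            return [m - step * error * u for m, u in zip(mine, unit)]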

    A decentralized network coordinate system for robust internet distance

    Get PDF
    Network coordinate systems have recently been developed as a scalable mechanism to predict latencies among arbitrary Internet hosts. Our research addresses several design challenges of a large-scale decentralized network coordinate system that were not fully addressed in prior work. In particular, we examine the design issues of a decentralized network coordinate system operating in a peer-to-peer network with high churn, high fractions of faulty or misbehaving peers, and high degrees of network path anomalies. This paper presents a fully decentralized network coordinate system, PCoord, for robust and fault-tolerant Internet distance prediction. Through extensive simulations, we examine the convergence behavior and prediction accuracy of PCoord under a variety of scenarios, and compare its performance with an existing network coordinate system, Vivaldi. Our results indicate that PCoord is robust under high churn and degrades gracefully even under high fractions of faulty nodes and high degrees of triangle inequality violations in the underlying network distances. Finally, our results indicate that even under an extremely challenging flash-crowd scenario where 1740 hosts simultaneously join the system, PCoord is able to converge to 12% median relative prediction error within 10 seconds.

    DNSR: Domain Name Suffix-based Routing in Overlay Networks

    Get PDF
    Overlay Peer-to-Peer (P2P) networks are application-layer networks which allow users to perform distributed functions, such as keyword searches, over the data of other users. An important problem in such networks is that the connections among peers are arbitrary, leading to a topology that does not match the underlying physical topology. This mismatch causes excessive network resource consumption in Wide Area Networks as well as a degraded user experience because of the incurred network delays. Most state-of-the-art research concentrates on structuring overlay networks so that query messages can reach the appropriate nodes within some hop-count bound. These approaches do not take the underlying topology mismatch into account, making them inappropriate for wide-area routing. In this work we propose and evaluate DNSR (Domain Name Suffix-based Routing), a novel technique to route query messages in overlay networks based on the "domain closeness" of the node sending the message. We describe DNSR and present simulation experiments performed over PeerWare, our distributed infrastructure which runs on a network of 50 workstations. Our simulations are based on real data gathered from one of the largest open P2P networks, namely Gnutella.
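
    A small sketch of the "domain closeness" notion DNSR routes on, assuming closeness is the number of DNS labels two host names share counting from the suffix inward; the paper's exact scoring may differ:

        def domain_closeness(host_a, host_b):
            # Count matching labels from the suffix inward, e.g.
            # ("a.cs.ucy.ac.cy", "b.cs.ucy.ac.cy") -> 4; a router would
            # prefer forwarding a query to neighbors with higher scores.
            a = host_a.lower().split(".")[::-1]
            b = host_b.lower().split(".")[::-1]
            score = 0
            for x, y in zip(a, b):
                if x != y:
                    break
                score += 1
            return score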