
    Reorganization in network regions for optimality and fairness

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 92-95). This thesis proposes a reorganization algorithm, based on the region abstraction, to exploit the natural structure in overlays that stems from common interests. Nodes selfishly adapt their connectivity within the overlay in a distributed fashion such that the topology evolves into clusters of users with shared interests. Our architecture leverages the inherent heterogeneity of users and places within the system their incentives and ability to affect the network. As such, it is not dependent on the altruism of any other nodes in the system. Of particular interest is the optimality and fairness of our design. We rigorously define ideal and fair networks and develop a continuum of optimality measures by which to evaluate our algorithm. Further, to evaluate our algorithm within a realistic context, validate assumptions and make design decisions, we capture data from a portion of a live file-sharing network. More importantly, we discover, name, quantify and solve several previously unrecognized subtle problems in a content-based self-organizing network as a direct result of simulations using the trace data. We motivate our design by examining the dependence of existing systems on benevolent SuperPeers. Through simulation we find that the current architecture is highly dependent on the filtering capability and the willingness of the SuperPeer network to absorb the majority of the query burden. The remainder of the thesis is devoted to a world in which SuperPeers no longer exist or are untenable. In our evaluation, we introduce four reasons for utility-suboptimal self-reorganizing networks: anarchy (selfish behavior), indifference, myopia and ordering. We simulate the level of utility and happiness achieved in existing architectures. Then we systematically tear down implicit assumptions of altruism while showing the resulting negative impact on utility. From a selfish equilibrium, with much lower global utility, we show the ability of our algorithm to reorganize and restore the utility of individual nodes, and the system as a whole, to levels similar to those realized in the SuperPeer network. Simulation of our algorithm shows that it reaches the predicted optimal utility while providing fairness not realized in other systems. Further analysis includes an epsilon-equilibrium model in which we attempt to more accurately represent the actual reward function of nodes. We find that by employing such a model, over 60% of the nodes are connected. In addition, this model converges to a utility 34% greater than that achieved in the SuperPeer network while making no assumptions about the benevolence of nodes or centralized organization. by Robert E. Beverly, IV. S.M.
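    To make the selfish-adaptation idea above concrete, here is a minimal Python sketch of one local rewiring step, in which a node drops its least useful neighbor and links to the most promising peer advertised within its region. The utility function, the node and region data structures, and the degree bound are illustrative assumptions, not the algorithm from the thesis.

```python
# Hypothetical sketch of one selfish rewiring step in an interest-clustered
# overlay; the utility function, region membership, and node representation
# below are illustrative assumptions, not the thesis's actual algorithm.

def interest_utility(node, peer):
    """Utility of a link, assumed here to be the overlap of interest sets."""
    shared = node["interests"] & peer["interests"]
    return len(shared) / max(1, len(node["interests"]))

def rewire_step(node, region_candidates, max_degree=5):
    """Drop the least useful neighbor, then link to the best regional candidate."""
    neighbors = node["neighbors"]
    if len(neighbors) >= max_degree:
        worst = min(neighbors, key=lambda p: interest_utility(node, p))
        neighbors.remove(worst)
    # Consider peers advertised within the node's region abstraction.
    best = max(
        (p for p in region_candidates if p not in neighbors and p is not node),
        key=lambda p: interest_utility(node, p),
        default=None,
    )
    if best is not None:
        neighbors.append(best)
    return node
```

    Repeating this step at every node, each acting only in its own interest, is the kind of distributed adaptation the abstract describes; no node is assumed to behave altruistically.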

    Provision, discovery and development of ubiquitous services and applications

    Ph.D. (Doctor of Philosophy)

    Creating an adaptive network of hubs using Schelling's model


    Reaching Scalability in Unstructured P2P Networks Using a Divide and Conquer Strategy

    Unstructured peer-to-peer networks have a low maintenance cost, high resilience and tolerance to the continuous arrival and departure of nodes. In these networks, search is usually performed by flooding, which is highly inefficient. To improve scalability, unstructured overlays evolved to a two-tiered architecture in which regular nodes rely on superpeers to locate resources. While this approach takes advantage of node heterogeneity, it makes the overlay less resilient to accidental and malicious faults, and less attractive to users concerned with the consumption of their resources. In this paper we propose a search algorithm, called FASE, which combines a replication policy and a search space division technique to achieve scalability on unstructured overlays with flat topologies. We present simulation results which validate FASE's improved scalability and efficiency.
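    As a rough illustration of the divide-and-conquer idea summarized above, the Python sketch below splits the flat overlay into a fixed number of slices, replicates each resource advertisement into every slice, and then lets a query search only the querying node's own slice. The slice assignment, replication rule, and in-memory index are assumptions made for illustration; they are not FASE's actual data structures.

```python
# Illustrative divide-and-conquer lookup in a flat unstructured overlay,
# in the spirit of FASE as summarized above; not the paper's real protocol.
import hashlib

NUM_SLICES = 4  # the search space is divided into this many slices

def slice_of(node_id: str) -> int:
    """Assign a node to a slice by hashing its identifier."""
    return hashlib.sha1(node_id.encode()).digest()[0] % NUM_SLICES

def replicate(resource_key: str, owner_id: str, nodes_by_slice):
    """Place one replica of the advertisement in every slice, so any single
    slice is enough to answer a query for it."""
    for s in range(NUM_SLICES):
        nodes_by_slice[s][0]["index"][resource_key] = owner_id

def search(resource_key: str, my_slice: int, nodes_by_slice):
    """Query only the nodes of one slice instead of flooding the whole overlay."""
    for node in nodes_by_slice[my_slice]:
        owner = node["index"].get(resource_key)
        if owner is not None:
            return owner
    return None

# Usage: four single-node slices, one published resource, one sliced search.
nodes_by_slice = {s: [{"index": {}}] for s in range(NUM_SLICES)}
replicate("song.mp3", owner_id="peer-42", nodes_by_slice=nodes_by_slice)
print(search("song.mp3", slice_of("my-node-id"), nodes_by_slice))  # -> peer-42
```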

    Architecture and optimization for a peer-to-peer content management system

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004. Includes bibliographical references (leaves 87-88). This thesis will explore the design and optimization of a peer-to-peer network application as a solution to complex content management problems. Currently, most content management systems are expensive, cumbersome and inflexible custom solutions that require knowledge workers to change their work habits. Peer-to-peer offers a uniquely decentralized and, potentially, scalable solution for knowledge workers by providing a simple and visual tool for file management, meta-data description and collaboration. This thesis will reference a client beta designed and developed by the author. Additionally, this thesis will address the need for content management solutions, the state of current solutions and a requirements document for a solution. Subsequently, the thesis will explore the design aspects of a peer-to-peer content management solution. As well as designing and developing a P2P client as proof of concept, this thesis will mathematically explore the implications of scaling the client to many users and methods to optimize performance. The last few chapters will cover the implementation of the client, proposed next steps for development and analysis of alternative architectures. by Dion M. Edge. S.M.

    Peer-to-peer update dissemination in browser-based networked virtual environments.

    PhD Thesis. Networked Virtual Environments (NVEs) have always imposed strict requirements on architectures for update dissemination (UD). Clients must maintain views that are as synchronous and consistent as possible in order to achieve a tolerable user experience. In recent times, the web browser has become a viable platform on which to deploy these NVEs. Doing so, however, adds another layer of challenges. There is a distinct need for systems that adapt to these constraints and exploit the characteristics of this new context to achieve reliably high consistency between users for a range of use cases. A promising approach is to carry forward the rich body of past research in peer-to-peer (P2P) networks and apply it to the problem of UD in NVEs under the constraints of a web browser. Making NVEs scalable through P2P networks is not a new concept; however, previous work has always been either too specific to a certain kind of NVE, or has made performance trade-offs that cannot work in a browser context. Furthermore, in previous work on P2P NVEs, UD has always taken a back seat to object management and distributed neighbour selection. The evaluation of these UD systems has, as a result, been one-dimensional and oversimplified. In this work, we begin by surveying past UD solutions and evaluation methodologies. We then capture NVE, browser, and network constraints, aided by the analysis of a rich dataset of NVE network traces that we have collected, and draw out key observations and challenges to develop the requirements for a feasible UD system. From there, we illustrate the design and implementation of our P2P UD system for NVEs in great detail, augmenting our system with novel architectural insights from the Software-Defined Networking (SDN) space. Finally, we evaluate our system under a range of workloads, test environments, and performance metrics to demonstrate that we have overcome these challenges, and compare our method to other existing methods, which we have also implemented and tested. We hope that our contributions in research and resources (such as our taxonomies, NVE analysis, UD system, browser library, workload datasets, and a benchmarking framework) bring more structure as well as research and development opportunities to a relatively niche sub-field.
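    The abstract above centers on update dissemination between peers. The following Python sketch shows a generic sequence-numbered flooding scheme of the kind such systems build on, where stale or duplicate updates are discarded before being relayed; the Peer class and naive flooding rule are assumptions for illustration and do not reproduce the thesis's actual system.

```python
# Generic sketch of sequence-numbered update dissemination between peers;
# an illustrative pattern, not the UD system described in the thesis.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    neighbors: list = field(default_factory=list)
    last_seen: dict = field(default_factory=dict)   # entity -> highest seq applied
    state: dict = field(default_factory=dict)       # entity -> latest position

    def receive(self, entity, seq, position):
        # Discard stale or duplicate updates so replicas stay consistent.
        if seq <= self.last_seen.get(entity, -1):
            return
        self.last_seen[entity] = seq
        self.state[entity] = position
        # Relay the update to all neighbors (naive flooding for brevity).
        for peer in self.neighbors:
            peer.receive(entity, seq, position)

# Usage: three peers in a line still converge on the newest update.
a, b, c = Peer("a"), Peer("b"), Peer("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.receive("avatar-1", seq=1, position=(10, 4))
assert c.state["avatar-1"] == (10, 4)
```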

    Coordinated Self-Adaptation in Large-Scale Peer-to-Peer Overlays

    Self-adaptive systems typically rely on a closed control loop which detects when the current behavior deviates too much from the optimal one, determines new optimal values for system parameters, and applies changes to the system configuration. In decentralized systems, implementing each of these steps is challenging, especially when nodes need to coordinate their local configurations. In this paper, we propose a decentralized method to automatically tune global system parameters in a coordinated manner. We use gossip-based protocols to continuously monitor system properties and to disseminate parameter updates. We show that this method, applied to a decentralized resource selection service, allows the system to quickly adapt to changes in workload types and node properties, and only incurs a negligible communication overhead.
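    A minimal sketch of the gossip-based monitoring idea referenced above: in each round a node averages its local estimate with that of a random peer, so every node's estimate converges toward the global mean on which a coordinated parameter update could then be based. The round structure and node model are simplifying assumptions, not the paper's protocol.

```python
# Sketch of gossip averaging for decentralized monitoring; a simplified,
# synchronous stand-in for the paper's gossip-based protocols.
import random

def gossip_average(local_values, rounds=20):
    """Estimate the global mean of per-node values by repeated pairwise averaging."""
    estimates = list(local_values)
    n = len(estimates)
    for _ in range(rounds):
        i, j = random.randrange(n), random.randrange(n)
        avg = (estimates[i] + estimates[j]) / 2.0
        estimates[i] = estimates[j] = avg
    return estimates

# Example: per-node load samples; every node's estimate approaches the true mean.
loads = [0.9, 0.1, 0.4, 0.6, 0.2]
print(gossip_average(loads))
```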

    Efficient and Flexible Search in Large Scale Distributed Systems

    Peer-to-peer (P2P) technology has triggered a wide range of distributed systems beyond simple file-sharing: distributed XML databases, distributed computing, server-less web publishing and networked resource/service sharing, to name only a few. Despite the diversity in applications, these systems share a common problem regarding searching and discovery of information. This commonality stems from the transitory node population and the volatile information content of the participating nodes. In such a dynamic environment, users are not expected to have exact information about the available objects in the system. Rather, queries are based on partial information, which requires the search mechanism to be flexible. On the other hand, to scale with network size, the search mechanism is required to be bandwidth-efficient. Since the advent of P2P technology, experts from industry and academia have proposed a number of search techniques, none of which provides a satisfactory solution to the conflicting requirements of search efficiency and flexibility. Structured search techniques, mostly Distributed Hash Table (DHT)-based, are bandwidth-efficient, while semi(un)-structured techniques are flexible; neither achieves both ends. This thesis defines the Distributed Pattern Matching (DPM) problem. The DPM problem is to discover a pattern (i.e., a bit-vector) using any subset of its 1-bits, under the assumption that the patterns are distributed across a large population of networked nodes. The search problem in many distributed systems can be reduced to the DPM problem. This thesis also presents two distinct search mechanisms, named Distributed Pattern Matching System (DPMS) and Plexus, for solving the DPM problem. DPMS is a semi-structured, hierarchical architecture aiming to discover a predefined number of matches by visiting a small number of nodes. Plexus, on the other hand, is a structured search mechanism based on the theory of Error Correcting Codes (ECC). The design goal behind Plexus is to discover all the matches by visiting a reasonable number of nodes.
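    The core of the DPM problem described above is the matching predicate itself: a query bit-vector matches a stored pattern when every 1-bit of the query is also set in the pattern. The Python sketch below shows that predicate against a plain local index; the local list of patterns is an illustrative stand-in for the distributed structures (DPMS, Plexus) that actually hold them.

```python
# Sketch of the Distributed Pattern Matching predicate: a query matches a
# pattern when the query's 1-bits are a subset of the pattern's 1-bits.
# The in-memory pattern list is illustrative, not DPMS's or Plexus's index.

def matches(query: int, pattern: int) -> bool:
    """True if every 1-bit of the query is also set in the pattern."""
    return query & pattern == query

def local_search(query: int, patterns):
    """Return all locally indexed patterns that the query matches."""
    return [p for p in patterns if matches(query, p)]

# Example: querying with any subset of a pattern's 1-bits finds that pattern.
stored = [0b101101, 0b011010, 0b101110]
print([bin(p) for p in local_search(0b001100, stored)])  # -> ['0b101101', '0b101110']
```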