    A model of self-avoiding random walks for searching complex networks

    Random walks have proven useful in several network applications. Some variants of the basic random walk have been devised to pursue a suitable trade-off between better performance and limited cost. A self-avoiding random walk (SAW) is one that tries not to revisit nodes, and therefore covers the network faster than a random walk. Although the SAW has been suggested as a network search mechanism, its performance has been analyzed mainly through empirical studies. A strict analytical approach is hard since, unlike the random walk, the SAW is not a Markovian stochastic process. We propose an analytical model to estimate the average search length of a SAW when used to locate a resource in a network. The model considers single or multiple instances of the sought resource and the possible availability of one-hop replication in the network (nodes know about resources held by their neighbors). The model characterizes networks by their size and degree distribution, without assuming a particular topology. It is, therefore, a mean-field model, whose applicability to real networks is validated by simulation. Experiments with sets of randomly built regular networks, Erdős–Rényi networks, and scale-free networks of several sizes and average degrees, with and without one-hop replication, show that model predictions are very close to simulation results and allow us to draw conclusions about the applicability of SAWs to network search.
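
    The SAW is easy to state procedurally even though it is not Markovian: the next step depends on the whole set of visited nodes, not just on the current node. The minimal Python sketch below illustrates the search process the model estimates; it is not the authors' implementation, and the adjacency mapping `adj`, the `has_resource` predicate, and the fall-back rule (stepping to a random neighbor once all neighbors have been visited) are assumptions.

```python
import random

def saw_search(adj, start, has_resource, max_steps=10_000):
    """Self-avoiding random walk search: prefer unvisited neighbors,
    falling back to a uniformly random neighbor when all have been
    visited. Returns the search length in hops, or None on failure."""
    visited = {start}
    node, steps = start, 0
    while steps <= max_steps:
        if has_resource(node):   # with one-hop replication this check
            return steps         # would also cover the node's neighbors
        unvisited = [n for n in adj[node] if n not in visited]
        node = random.choice(unvisited if unvisited else adj[node])
        visited.add(node)
        steps += 1
    return None

# Toy example: a 100-node ring with a single resource instance at node 42.
adj = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}
print(saw_search(adj, start=0, has_resource=lambda n: n == 42))
```

    Averaging such runs over many trials and start nodes yields the empirical average search length that the mean-field model predicts analytically.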

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, which are susceptible to random failures, security threats, and malicious behaviours that compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques for both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity in routing and in the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rates in areas remote from the source and destination, of a variable rate of advancement towards the destination over the route, and of defending against malicious packets within a certain distance of the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet's travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
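
    The jump-and-diffusion treatment itself is analytical, but its qualitative behaviour can be mimicked with a crude Monte Carlo caricature: a packet advances one hop per time unit, occasionally steps away from the destination (routing inaccuracy), and is occasionally lost and retransmitted from the source. The sketch below is only that caricature, under assumed parameters; it is not the thesis model.

```python
import random

def mean_travel_time(distance, p_loss, p_back, trials=10_000):
    """Monte Carlo estimate of the mean delivery time of a packet that
    starts `distance` hops from the destination. At each hop it is lost
    with probability p_loss (a fresh copy restarts from the source),
    steps away with probability p_back, and otherwise advances."""
    total = 0
    for _ in range(trials):
        pos, t = distance, 0
        while pos > 0:
            t += 1
            r = random.random()
            if r < p_loss:
                pos = distance                # loss: resend from the source
            elif r < p_loss + p_back:
                pos = min(pos + 1, distance)  # routing inaccuracy: step away
            else:
                pos -= 1                      # advance towards the destination
        total += t
    return total / trials

print(mean_travel_time(distance=10, p_loss=0.01, p_back=0.2))
```

    Energy could be estimated analogously by charging a cost per transmission, and making p_loss depend on the packet's position would reproduce the kind of non-homogeneous loss along the route that the thesis models.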

    Search strategies in unstructured overlays

    Master's project report in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2008.
    Unstructured peer-to-peer networks have a low maintenance cost, high resilience, and tolerance to the continuous arrival and departure of nodes. In these networks search is usually performed by flooding, which generates a high number of duplicate messages. To improve scalability, unstructured overlays evolved into a two-tiered architecture where regular nodes rely on special nodes, called supernodes or superpeers, to locate resources, thus reducing the scope of flooding-based searches. While this approach takes advantage of node heterogeneity, it makes the overlay less resilient to accidental and malicious faults, and less attractive to users who are concerned about the consumption of their resources and may not wish to commit the additional resources required of nodes selected as superpeers. Another point of concern is churn, defined as the constant entry and departure of nodes. Churn affects both structured and unstructured overlay networks and must be taken into account in order to build resilient search protocols. This dissertation proposes a novel search algorithm, called FASE, which combines a replication policy and a search space division technique to achieve low hop counts using a small number of messages on unstructured overlays with non-hierarchical topologies. The problem of churn is mitigated by a distributed monitoring algorithm designed with FASE in mind. Simulation results validate the efficiency of FASE when compared to other search algorithms for peer-to-peer networks. The evaluation of the distributed monitoring algorithm shows that it maintains FASE's performance when subjected to churn.
    Peer-to-peer systems, such as content sharing and distribution applications or voice-over-IP, are built on overlay networks. Overlay networks are virtual networks that exist on top of an underlying network, and the topology of the overlay need not correspond to the topology of the underlying network. Unlike their structured counterparts, unstructured overlay networks do not restrict the placement of their participants, that is, they do not limit a given node's choice of neighbours, which makes their maintenance simpler. This low maintenance cost makes unstructured overlays especially suitable for building peer-to-peer systems that tolerate the dynamic behaviour of their participants, since these networks are permanently affected by nodes entering and leaving, a phenomenon known as churn. The most common search algorithm in unstructured overlays is to flood the network, which generates a large number of duplicate messages per query. The scalability of such algorithms is limited because they consume too many network resources in systems with many participants. To reduce the number of messages, unstructured overlays can be organized into hierarchical topologies, in which some nodes, called supernodes, take on a more important role and become responsible for locating objects. The use of supernodes creates new problems, such as their selection and the network's dependence on a small percentage of its nodes. This dissertation presents a new search algorithm, called FASE, created to operate on unstructured overlays with non-hierarchical topologies. The algorithm combines a replication policy with a search space division technique to resolve queries within a small number of hops at the lowest possible cost. Additionally, the algorithm seeks to level the contribution of participants, since all contribute in a similar way to search performance. The strategy followed by the algorithm is to divide both the network nodes and the keys of their contents among different "frequencies" and to replicate keys within the corresponding frequencies, without restricting where a node may be located, imposing a structure on the network, or applying a rigid definition of key. To mitigate the churn problem, a distributed monitoring algorithm is presented for the replicas created by FASE. The proposed algorithms are evaluated through simulations, which validate the efficiency of FASE when compared with other search algorithms for unstructured overlay networks. It is also shown that FASE maintains its performance in networks under churn when combined with the monitoring algorithm.
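
    The abstract only outlines FASE's strategy: nodes and content keys are divided among "frequencies", and keys are replicated on nodes of their own frequency. The toy Python sketch below is a speculative rendering of that idea for illustration only; it is not FASE, and every name and parameter in it (`F`, `freq`, `publish`, `search`, the replica count) is hypothetical.

```python
import hashlib
import random

F = 4  # number of "frequencies" (hypothetical parameter)

def freq(value):
    """Map a node id or content key onto one of F frequencies."""
    return hashlib.sha1(str(value).encode()).digest()[0] % F

def publish(adj, owner, key, replicas=3, max_hops=1000):
    """Walk from the owner, replicating `key` on nodes whose frequency
    matches the key's frequency, until enough replicas are placed."""
    target, stored, node, seen = freq(key), set(), owner, {owner}
    for _ in range(max_hops):
        if len(stored) >= replicas:
            break
        if freq(node) == target:
            stored.add(node)
        candidates = [n for n in adj[node] if n not in seen] or adj[node]
        node = random.choice(candidates)
        seen.add(node)
    return stored

def search(adj, stores, start, key, max_hops=100):
    """Walk preferring neighbors on the key's frequency, checking each
    visited node's local store (`stores` maps node -> set of keys)."""
    target, node = freq(key), start
    for hops in range(max_hops + 1):
        if key in stores.get(node, ()):
            return hops
        same = [n for n in adj[node] if freq(n) == target]
        node = random.choice(same or adj[node])
    return None
```

    The intuition behind such a division is that publishing and searching are each confined to roughly 1/F of the nodes, shrinking the effective search space without imposing a structured topology or restricting node placement.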

    Resource location based on precomputed partial random walks in dynamic networks

    The problem of finding a resource residing in a network node (the resource location problem) is a challenge in complex networks due to aspects such as network size, unknown network topology, and network dynamics. The problem is especially difficult if no requirements are to be imposed on the resource placement strategy or the network structure, assuming of course that keeping centralized resource information is not feasible or appropriate. Under these conditions, random algorithms are useful for searching the network. A possible strategy for static networks, proposed in previous work, uses short random walks precomputed at each network node as partial walks to construct longer random walks with associated resource information. In this work, we adapt the previous mechanisms to dynamic networks, where resource instances may appear in, and disappear from, network nodes, and the nodes themselves may leave and join the network, resembling realistic scenarios. We analyze the resulting resource location mechanisms, providing expressions that accurately predict average search lengths, which are validated using simulation experiments. Reductions of average search lengths compared to simple random walk searches are found to be very large, even in the face of high network volatility. We also study the cost of the mechanisms, focusing on the overhead implied by the periodic recomputation of partial walks to refresh the information on resources, concluding that the proposed mechanisms behave efficiently and robustly in dynamic networks.
    Comment: 39 pages, 25 figures
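
    The mechanism inherited from the static-network work composes long random walks out of short partial walks precomputed at each node, with resource information attached to each partial walk. The sketch below illustrates only that composition step, under simplifying assumptions (no one-hop replication, no periodic refreshing, and plain dictionaries `partial` and `resources` in place of the per-node state the paper describes); it is not the paper's protocol.

```python
import random

def precompute_walks(adj, s):
    """At every node, precompute a partial random walk of length s."""
    partial = {}
    for v in adj:
        walk, node = [], v
        for _ in range(s):
            node = random.choice(adj[node])
            walk.append(node)
        partial[v] = walk
    return partial

def search(partial, resources, start, key, max_walks=1000):
    """Concatenate partial walks until one covers the sought resource.
    In the paper's mechanism the resource information seen along each
    partial walk is stored with it, so these checks are local lookups
    rather than visits to every hop."""
    if key in resources.get(start, ()):
        return 0
    node, length = start, 0
    for _ in range(max_walks):
        walk = partial[node]
        for i, w in enumerate(walk, 1):
            if key in resources.get(w, ()):
                return length + i  # total search length in hops
        length += len(walk)
        node = walk[-1]            # continue from the walk's endpoint
    return None
```

    In the dynamic setting this paper targets, the partial walks and their associated resource information would be periodically recomputed as resources and nodes come and go, which is precisely the overhead the cost analysis quantifies.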