
    Weighted Betweenness for Multipath Networks

    Typical betweenness centrality metrics neglect the potential contribution of nodes that are near, but not exactly on, shortest paths. The idea of this paper is to give more value to these nodes. We propose weighted betweenness centrality, a novel metric that assigns weights to nodes based on the stretch of the paths they intermediate relative to the shortest paths. We compare the proposed metric with the traditional and distance-scaled betweenness metrics using four different network datasets. Results show that weighted betweenness centrality pinpoints and promotes nodes that are underestimated by typical metrics, which can help avoid network disconnections and better exploit multipath protocols.
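The abstract does not give the exact weighting function, but the idea can be sketched: a node lying on a path only slightly longer than the shortest path between a pair still earns credit, discounted by that path's stretch. The inverse-stretch weight and the `max_stretch` cutoff below are illustrative assumptions, not the paper's formula.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def weighted_betweenness(adj, max_stretch=1.5):
    """Credit node v for pair (s, t) when the best s-v-t detour has
    stretch (d(s,v) + d(v,t)) / d(s,t) within max_stretch, weighting
    the credit by the inverse of that stretch."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    score = {v: 0.0 for v in adj}
    nodes = list(adj)
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            d_st = dist[s].get(t)
            if not d_st:        # unreachable pair
                continue
            for v in nodes:
                if v in (s, t) or v not in dist[s] or t not in dist[v]:
                    continue
                stretch = (dist[s][v] + dist[v][t]) / d_st
                if stretch <= max_stretch:
                    score[v] += 1.0 / stretch
    return score
```

On a five-node path graph, the middle node scores highest while the endpoints score zero, since no pair can detour through them within the stretch bound.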

    Exact Distributed Load Centrality Computation: Algorithms, Convergence, and Applications to Distance Vector Routing

    Many optimization techniques for networking protocols take advantage of topological information to improve performance. Often, the topological information at the core of these techniques is a centrality metric such as the Betweenness Centrality (BC) index. BC is, in fact, a centrality metric with many well-known successful applications documented in the literature, from resource allocation to routing. To compute BC, however, each node must run a centralized algorithm and needs global topological knowledge; such requirements limit the feasibility of optimization procedures based on BC. To overcome restrictions of this kind, we present a novel distributed algorithm that requires only local information to compute an alternative, similar metric called Load Centrality (LC). We present the new algorithm together with a proof of its convergence and an analysis of its time complexity. The proposed algorithm is general enough to be integrated with any distance vector (DV) routing protocol. In support of this claim, we provide an implementation on top of Babel, a real-world DV protocol. We use this implementation in an emulation framework to show how LC can be exploited to reduce Babel's convergence time upon node failure, without increasing control overhead. As a key step towards the adoption of centrality-based optimization for routing, we study how the algorithm can be incrementally introduced in a network running a DV routing protocol. We show that even when only a small fraction of nodes participate in the protocol, the algorithm accurately ranks nodes according to their centrality.
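For reference, the LC metric itself can be computed centrally: one unit of flow is routed between each ordered node pair, splitting equally among shortest-path successors at every hop, and a node's load is the total flow transiting it. This is a sketch of the metric only, not the paper's distributed algorithm.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def load_centrality(adj):
    """One unit of flow per ordered (s, t) pair; at each hop the flow
    splits equally among neighbors one hop closer to t. A node's load
    is the total flow passing through it (endpoints excluded)."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    load = {v: 0.0 for v in adj}
    for s in adj:
        for t in adj:
            if s == t or t not in dist[s]:
                continue
            frontier = {s: 1.0}
            while frontier:
                nxt = {}
                for u, amount in frontier.items():
                    if u == t:
                        continue
                    succ = [w for w in adj[u] if dist[t][w] == dist[t][u] - 1]
                    for w in succ:
                        nxt[w] = nxt.get(w, 0.0) + amount / len(succ)
                for w, a in nxt.items():
                    if w not in (s, t):
                        load[w] += a
                frontier = nxt
    return load
```

On a diamond graph (two equal-length routes between a pair), each intermediate node carries half of each unit of flow, which is exactly where LC and BC coincide; they differ only when flow splits recombine.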

    Design, Analysis, and Optimization of Traffic Engineering for Software Defined Networks

    Network traffic has been growing exponentially due to the rapid development of applications and communications technologies. Conventional routing protocols, such as Open Shortest Path First (OSPF), do not provide optimal routing and lead to poor utilization of network resources. Optimal traffic engineering (TE) is not applicable in practice due to operational constraints such as limited memory on the forwarding devices and route oscillation. Recently, a new model of centralized network management enabled by Software-Defined Networking (SDN) has made it practical to apply most traffic engineering ideas.

    Toward creating an applicable traffic engineering system, we created a TE simulator for experimenting with TE and evaluating TE systems efficiently; the tool employs parallel processing to achieve high efficiency. The purpose of the simulator is twofold: (1) we use it to understand traffic engineering, and (2) we use it to formulate a new traffic engineering algorithm that is near-optimal and applicable in practice. We study the design of some important aspects of any TE system, in particular the consequences of achieving optimal TE by solving the multi-commodity flow problem (MCF) and the consequences of choosing single-path routing over multi-path routing. With the help of the TE simulator, we compare many TE systems constructed by combining different path selection techniques with two objective functions for rate adaptation: load balancing (LB) and average delay (AD). The results confirm that paths selected based on the theoretical approach known as Oblivious Routing, combined with the AD objective function, can significantly increase performance in terms of throughput, congestion, and delay.

    However, the newly proposed system comes at a cost: the AD function has higher complexity than the LB function. We show that this problem can be tackled by training deep learning models. We trained two models with two different neural network architectures, Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM), to obtain a responsive traffic engineering system. The training input is synthetic data obtained from the simulator. The output of the two models is the split ratios that the SDN controller uses to instruct the switching devices on how to forward traffic in the network. The results confirm that both models are effective and can be used to forward traffic in an optimal or near-optimal way. The LSTM model shows slightly better results than the MLP due to its ability to predict a longer output sequence.
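The LB objective above amounts to minimizing the maximum link utilization given split ratios over candidate paths. As a toy illustration (the path names, capacities, and demand values are made up, and real TE systems solve this as a linear program rather than by brute force):

```python
def max_utilization(paths, capacities, demand, splits):
    """LB objective: the maximum link utilization produced by sending
    demand * splits[i] along each candidate path (paths are edge lists)."""
    load = {e: 0.0 for e in capacities}
    for path, frac in zip(paths, splits):
        for e in path:
            load[e] += demand * frac
    return max(load[e] / capacities[e] for e in capacities)

def best_two_path_split(paths, capacities, demand, steps=100):
    """Brute-force the split ratio between two candidate paths."""
    best = None
    for i in range(steps + 1):
        s = i / steps
        u = max_utilization(paths, capacities, demand, [s, 1 - s])
        if best is None or u < best[1]:
            best = (s, u)
    return best
```

With two disjoint one-link paths of capacity 10 and 30 and a demand of 20, the search balances utilization by sending a quarter of the demand on the thin path and three quarters on the fat one.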

    Network Resilience Improvement and Evaluation Using Link Additions

    Computer networks are increasingly involved in providing services for most of our daily activities related to education, business, health care, social life, and government. Publicly available computer networks are prone to targeted attacks and natural disasters that could disrupt normal operation and services. Building highly resilient networks is therefore an important aspect of their design and implementation. For existing networks, resilience against such challenges can be improved by adding more links. In fact, adding links to form a full mesh yields the most resilient network, but it incurs an unfeasibly high cost. In this research, we investigate the resilience improvement of real-world networks by adding a cost-efficient set of links. Finding an optimal set of links to add via exhaustive search is impracticable for large networks. Using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity by increasing a graph robustness metric such as algebraic connectivity or total graph diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the networks' resilience. The flow robustness results of the attacks show that the improved networks are more resilient than the non-improved ones.
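A minimal sketch of this evaluation loop, assuming flow robustness is the fraction of node pairs that remain connected and using a single highest-degree-node attack (the research uses three centrality-based attacks and greedy selection over richer robustness metrics):

```python
from collections import deque
from itertools import combinations

def flow_robustness(nodes, edges):
    """Fraction of ordered node pairs that are connected."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    connected = 0
    for s in nodes:
        seen = {s}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        connected += len(seen) - 1
    return connected / (len(nodes) * (len(nodes) - 1))

def attacked_robustness(nodes, edges):
    """Flow robustness after removing the highest-degree node
    (a simple centrality-based attack)."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    target = max(nodes, key=lambda v: deg[v])
    rest = [v for v in nodes if v != target]
    kept = [(u, v) for u, v in edges if target not in (u, v)]
    return flow_robustness(rest, kept)

def greedy_link_addition(nodes, edges, k):
    """Greedily add k links, each maximizing post-attack robustness."""
    edges = list(edges)
    added = []
    for _ in range(k):
        existing = set(edges)
        cands = [e for e in combinations(sorted(nodes), 2) if e not in existing]
        best = max(cands, key=lambda e: attacked_robustness(nodes, edges + [e]))
        edges.append(best)
        added.append(best)
    return added
```

On a star graph, removing the hub disconnects everything; adding any single leaf-to-leaf link raises post-attack robustness from zero, which is the kind of underestimated reinforcement the greedy search finds.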

    An Invulnerability Algorithm for Wireless Sensor Network's Topology Based on Distance and Energy

    To improve the topological stability of wireless sensor networks, an invulnerability algorithm based on energy-aware weighting is proposed. The algorithm takes the Weighted Dynamic Topology Control (WDTC) algorithm as a reference and calculates node weights using the distance between nodes and their residual energy. It then chooses optimal weights and constructs a stable, balanced topology with multiple connectivity paths using the idea of K-connectivity. Simulation results show that the proposed algorithm improves the average connectivity of the topology, enhances the robustness of the network, ensures stable transmission of network information, and optimizes the betweenness centrality of the network nodes, giving the network good invulnerability.
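The abstract does not give the weight formula; a plausible WDTC-style link weight combining the two factors might look like the following, where `alpha` is an assumed balancing parameter and the energy penalty is a hypothetical choice:

```python
import math

def link_weight(pos_u, pos_v, energy_u, energy_v, alpha=0.5):
    """Illustrative WDTC-style link weight: shorter links between nodes
    with more residual energy get lower (better) weights. alpha balances
    the distance term against the energy term."""
    d = math.dist(pos_u, pos_v)
    # lower residual energy -> larger penalty, so the link is avoided
    energy_penalty = 1.0 / energy_u + 1.0 / energy_v
    return alpha * d + (1 - alpha) * energy_penalty
```

A topology-control step would then keep, for each node, the K lowest-weight links, so short links between well-charged nodes survive while long links anchored on depleted nodes are pruned.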

    The power of quasi-shortest paths and the impact of node mobility on dynamic networks

    The objective of this thesis is to investigate three important aspects of dynamic networks: the impact of node mobility on multihop data transmission, the effect of using longer paths on the relative importance of nodes, and the performance of the network in the presence of failures on central nodes. To analyze the first aspect, this work proposes the (κ, λ)-vicinity, which extends the traditional vicinity to consider as neighbors nodes at multihop distance and restricts link establishment according to the relative speed between nodes. This proposal is later used in the development of three forwarding strategies. The relative speed restriction imposed on these strategies results in a significant reduction of resource consumption, without significant impact on the average packet delivery ratio. To analyze the second aspect, we propose the ρ-geodesic betweenness centrality, which uses shortest and quasi-shortest paths to quantify the relative importance of a node. The quasi-shortest paths are limited by a spreadness factor, ρ. The use of non-optimal paths causes the reranking of several nodes, and its main effect is a reduced occupation of the most central positions by articulation points. Lastly, the network performance in the presence of failures is investigated through simulations in which failures happen on the nodes defined as most central according to distinct centrality metrics. The result is a severe reduction of the average network throughput, independent of the metric used to determine which nodes are most central.
The major strength of the proposed metric, then, is that despite the severe reduction in throughput, there is a high probability of keeping the network connected after a failure, because it is unlikely that a failing node in the most central positions is also an articulation point.
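The claim above hinges on articulation points, the nodes whose removal disconnects the network. Whether a highly ranked node is also an articulation point can be checked with a standard DFS (Tarjan's method), sketched here:

```python
def articulation_points(adj):
    """Articulation points (cut vertices) via Tarjan-style DFS: a
    non-root u is a cut vertex if some DFS child v has low[v] >= disc[u];
    the root is one only if it has more than one DFS child."""
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
        if parent is None and children > 1:
            aps.add(u)

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return aps
```

On a path graph every interior node is an articulation point, while on a cycle none is; the thesis's observation is that quasi-shortest paths tend to push nodes of the second kind into the top centrality ranks.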

    Multi-Core Parallel Routing

    The recent increase in the amount of data (i.e., big data) has led to higher data volumes being transferred and processed over the network. Over the last years, the deployment of multi-core routers has also grown rapidly. However, big data transfers are not leveraging the powerful multi-core routers to the extent possible, particularly in the key function of routing. Our main goal is to use these cores more effectively and efficiently in routing big data transfers. In this dissertation, we propose a novel approach to parallelize data transfers by leveraging the multi-core CPUs in routers. Legacy routing protocols, e.g. OSPF for intra-domain routing, send data from source to destination on a single shortest path. We describe an end-to-end method to distribute data optimally across flows by using multiple paths. We generate new virtual topology substrates from the underlying router topology and perform shortest-path routing on each substrate. With this framework, even though calculating shortest paths can be done with well-known techniques such as OSPF's Dijkstra implementation, finding optimal substrates so as to maximize the aggregate throughput over multiple end-to-end paths is still an NP-hard problem. We focus our efforts on this problem and design heuristics for substrate generation from a given router topology. Our heuristics' interim goal is to generate substrates such that the shortest paths between a source-destination pair on the different substrates minimally overlap with each other. Once these substrates are determined, we assign each substrate to a core in the routers and employ a multi-path transport protocol, such as MPTCP, to perform end-to-end parallel transfers.
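One common heuristic for obtaining minimally overlapping paths (an assumption here, not necessarily the dissertation's exact substrate-generation method) is to penalize the edges of each path already found and re-run the shortest-path search:

```python
import heapq

def dijkstra(adj, src, dst, weight):
    """Shortest path by edge weight; weight maps frozenset({u, v}) -> cost."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == dst:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                e = frozenset((u, v))
                heapq.heappush(pq, (d + weight[e], v, path + [v]))
    return None

def low_overlap_paths(adj, src, dst, k, penalty=10.0):
    """After each path is found, multiply its edge weights by `penalty`
    so the next search prefers edges not used yet."""
    weight = {frozenset((u, v)): 1.0 for u in adj for v in adj[u]}
    paths = []
    for _ in range(k):
        p = dijkstra(adj, src, dst, weight)
        if p is None:
            break
        paths.append(p)
        for u, v in zip(p, p[1:]):
            weight[frozenset((u, v))] *= penalty
    return paths
```

Each resulting path would then be pinned to its own substrate (one per core) so the parallel transfers contend for disjoint links as much as the topology allows.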