10 research outputs found

    Shifting network tomography toward a practical goal

    Get PDF
    Boolean Inference makes it possible to observe the congestion status of end-to-end paths and infer, from that, the congestion status of individual network links. In principle, this can be a powerful monitoring tool in scenarios where we want to monitor a network without having direct access to its links. We consider one such real scenario: a Tier-1 ISP operator wants to monitor the congestion status of its peers. We show that, in this scenario, Boolean Inference cannot be solved with enough accuracy to be useful; we attribute this not to the limitations of particular algorithms, but to the fundamental difficulty of the Inference problem. Instead, we argue that the "right" problem to solve, in this context, is to compute the probability that each set of links is congested (as opposed to trying to infer which particular links were congested when). Even though solving this problem yields less information than Boolean Inference, we show that this information is more useful in practice, because it can be obtained accurately under weaker assumptions than typically required by Inference algorithms and under more challenging network conditions (link correlations, non-stationary network dynamics, sparse topologies). © 2011 ACM
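The ambiguity that makes Boolean Inference fundamentally hard can be seen on a toy sketch (the topology and observations below are made up for illustration, not taken from the paper): a congested path only reveals that at least one of its links is congested, so several different link sets can explain the same end-to-end observations.

```python
from itertools import combinations

# Hypothetical toy instance: 4 links; each path is the set of link IDs it crosses.
paths = {"p1": {0, 1}, "p2": {1, 2}, "p3": {2, 3}}
observed = {"p1": True, "p2": True, "p3": False}  # True = path seen congested

def consistent(congested):
    # A candidate link set explains the data iff every congested path contains
    # at least one congested link and every good path contains none.
    return all(bool(paths[p] & congested) == state for p, state in observed.items())

links = {0, 1, 2, 3}
explanations = [set(c) for r in range(len(links) + 1)
                for c in combinations(sorted(links), r)
                if consistent(set(c))]
print(explanations)  # both {1} and {0, 1} explain the same observations
```

Here the measurements cannot distinguish "only link 1 is congested" from "links 0 and 1 are congested", which is exactly the kind of ambiguity that motivates shifting to set-level congestion probabilities.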

    The Fault-Finding Capacity of the Cable Network When Measured Along Complete Paths

    Get PDF
    We investigate whether it is possible to localize a failed node in a communication network from the binary states (normal or failed) of end-to-end measurement paths. To localize failures within a set of nodes of interest, different failure events at those nodes must produce different combinations of path states. Due to the large number of possible node failure events that would need to be enumerated, this condition is hard to verify directly on large networks. Our first contribution is a set of sufficient and necessary conditions for testing, in polynomial time, whether a bounded number of failures within a given node set can be uniquely localized. Our conditions take into account not only the network topology but also the placement of the monitors. We consider three families of probing mechanisms, which differ in the nature of the measurement paths: arbitrarily controllable, controllable but cycle-free, or uncontrollable (determined by the default routing protocol). Our second contribution is an analysis of the maximum number of failures (anywhere in the network) under which failures within a given node set can be uniquely localized, and of the largest node set within which failures can be uniquely localized under a given constraint on the total number of failures in the network. When translated into functions of a per-node property, the preceding sufficient and necessary conditions allow both measures to be computed efficiently.
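The unique-localizability condition described above can be checked by brute force on small instances; the sketch below (a hypothetical three-node example) illustrates why the enumeration is exponential and the polynomial-time conditions are needed for real networks.

```python
from itertools import combinations

# Hypothetical toy instance: measurement paths given as the sets of nodes they traverse.
paths = [{1, 2}, {2, 3}, {1, 3}]
nodes = {1, 2, 3}

def signature(failed):
    # A path fails iff it traverses at least one failed node.
    return tuple(bool(p & failed) for p in paths)

def uniquely_localizable(k):
    # Failures of size <= k are uniquely localizable iff every such failure
    # set (including "no failure") yields a distinct combination of path states.
    fail_sets = [set(c) for r in range(k + 1)
                 for c in combinations(sorted(nodes), r)]
    sigs = [signature(f) for f in fail_sets]
    return len(sigs) == len(set(sigs))

print(uniquely_localizable(1), uniquely_localizable(2))  # True False
```

In this example any single failure is uniquely localizable, but two simultaneous failures are not: {1, 2} and {1, 3} both fail all three paths.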

    SRLG: Finding the Packet Loss in Peer-to-Peer Networks

    Get PDF
    We introduce the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure causes multiple links to break simultaneously due to the failure of a common resource. MCs (MPs) begin and end at identical (distinct) monitoring location(s). They are constructed such that any SRLG failure leads to the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs required for localizing any single SRLG failure in an arbitrary graph. We also determine the minimum number of optical splitters that are required to monitor all SRLG failures in the network. Extensive simulations are used to demonstrate the effectiveness of the proposed monitoring technique.
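The requirement that each SRLG failure trigger a unique combination of monitor alarms can be illustrated with a small sketch (the link names, SRLGs, and monitor routes below are hypothetical, chosen only to show the uniqueness check):

```python
# Hypothetical instance: SRLGs and monitoring paths/cycles as sets of links.
# A monitor alarms iff it traverses at least one link of the failed SRLG.
srlgs = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"d"}}
monitors = [{"a", "d"}, {"b"}, {"c", "d"}]

def alarm_code(group):
    return tuple(bool(m & srlgs[group]) for m in monitors)

codes = {g: alarm_code(g) for g in srlgs}
# Single SRLG failures are uniquely localizable iff all alarm codes are
# pairwise distinct and nonzero (every failure trips at least one monitor).
localizable = (len(set(codes.values())) == len(codes)
               and all(any(c) for c in codes.values()))
print(codes, localizable)
```

Each SRLG here produces a different alarm pattern, so observing which monitors fail identifies the failed group.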

    A New Enhanced Technique for Identifying Node Failures with Optimal Paths in Networks

    Get PDF
    We examine the ability to localize node failures in communication networks from the binary states of end-to-end paths. Given a set of nodes of interest, uniquely localizing failures within this set requires that different observable path states be associated with different node failure events. However, this condition is hard to test on large networks due to the need to enumerate all possible node failure events. Our first contribution is a set of sufficient/necessary conditions, verifiable in polynomial time, for detecting a bounded number of failures within an arbitrary node set. In addition to the network topology and the locations of monitors, our conditions also incorporate constraints imposed by the probing mechanism used. Both measures can be converted into functions of a per-node property, which can be computed efficiently based on the above sufficient/necessary conditions.

    Finding the Right Tree: Topology Inference Despite Spatial Dependences

    Full text link
    © 1963-2012 IEEE. Network tomographic techniques have almost exclusively been built on a strong assumption of mutual independence of link processes. We introduce model classes for link loss processes with non-trivial spatial dependencies, for which the tree topology is nonetheless identifiable from leaf measurements using multicast probing. We show that these classes are large in a well-defined sense, and we provide an algorithm, SLTD, capable of returning the correct topology with certainty in the limit of infinite data.

    Node Failure Localization via Network Tomography

    Full text link
    We investigate the problem of localizing node failures in a communication network from end-to-end path measurements, under the assumption that a path behaves normally if and only if it does not contain any failed nodes. To uniquely localize node failures, the measurement paths must show different symptoms under different failure events, i.e., for any two distinct sets of failed nodes, there must be a measurement path traversing one and only one of them. This condition is, however, impractical to test for large networks. Our first contribution is a characterization of this condition in terms of easily verifiable conditions on the network topology with given monitor placements, under three families of probing mechanisms, which differ in whether measurement paths are (i) arbitrarily controllable, (ii) controllable but cycle-free, or (iii) uncontrollable (i.e., determined by the default routing protocol). Our second contribution is a characterization of the maximum identifiability of node failures, measured by the maximum number of simultaneous failures that can always be uniquely localized. Specifically, we bound the maximum identifiability from above and below, with bounds that differ by at most one, and show that these bounds can be evaluated in polynomial time. Finally, we quantify the impact of the probing mechanism on the capability of node failure localization on both random and real network topologies. We observe that despite a higher implementation cost, probing along controllable paths can significantly improve a network's capability to localize simultaneous node failures.

    Practical Network Tomography

    Get PDF
    In this thesis, we investigate methods for the practical and accurate localization of Internet performance problems. The methods we propose belong to the field of network loss tomography, that is, they infer the loss characteristics of links from end-to-end measurements. The existing versions of the problem of network loss tomography are ill-posed, hence, tomographic algorithms that attempt to solve them resort to making various assumptions, and as these assumptions do not usually hold in practice, the information provided by the algorithms might be inaccurate. We argue, therefore, for tomographic algorithms that work under weak, realistic assumptions. We first propose an algorithm that infers the loss rates of network links from end-to-end measurements. Inspired by previous work, we design an algorithm that gains initial information about the network by computing the variances of links' loss rates and by using these variances as an indication of the congestion level of links, i.e., the more congested the link, the higher the variance of its loss rate. Its novelty lies in the way it uses this information – to identify and characterize the maximum set of links whose loss rates can be accurately inferred from end-to-end measurements. We show that our algorithm performs significantly better than the existing alternatives, and that this advantage increases with the number of congested links in the network. Furthermore, we validate its performance by using an "Internet tomographer" that runs on a real testbed. Second, we show that it is feasible to perform network loss tomography in the presence of "link correlations," i.e., when the losses that occur on one link might depend on the losses that occur on other links in the network. More precisely, we formally derive the necessary and sufficient condition under which the probability that each set of links is congested is statistically identifiable from end-to-end measurements even in the presence of link correlations. 
In doing so, we challenge one of the popular assumptions in network loss tomography, specifically, the assumption that all links are independent. The model we propose assumes we know which links are most likely to be correlated, but it does not assume any knowledge about the nature or the degree of their correlations. In practice, we consider that all links in the same local area network or the same administrative domain are potentially correlated, because they could be sharing physical links, network equipment, or even management processes. Finally, we design a practical algorithm that solves "Congestion Probability Inference" even in the presence of link correlations, i.e., it infers the probability that each set of links is congested even when the losses that occur on one link might depend on the losses that occur on other links in the network. We model Congestion Probability Inference as a system of linear equations where each equation corresponds to a set of paths. Because it is infeasible to consider an equation for each set of paths in the network, our algorithm finds the maximum number of linearly independent equations by selecting particular sets of paths based on our theoretical results. On the one hand, the information provided by our algorithm is less than that provided by the existing alternatives that infer either the loss rates or the congestion statuses of links, i.e., we only learn how often each set of links is congested, as opposed to how many packets were lost at each link, or which particular links were congested when. On the other hand, this information is more useful in practice because our algorithm works under assumptions weaker than those required by the existing alternatives, and we experimentally show that it is accurate under challenging network conditions such as non-stationary network dynamics and sparse topologies.
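As a baseline illustration of the linear-system view (this sketch assumes fully independent links, the very assumption the thesis relaxes, and uses a made-up 3-link, 3-path topology): under independence, the log-probability that a path is uncongested is the sum of the log-probabilities of its links, so end-to-end measurements yield linear equations in the unknown per-link quantities.

```python
import numpy as np

# Hypothetical routing matrix: R[i][j] = 1 if path i traverses link j.
R = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
p_link_true = np.array([0.99, 0.90, 0.95])  # P(link uncongested); unknown in practice

# Under link independence, P(path good) is the product of its links'
# probabilities, so log P(path good) = R @ log(p_link): a linear system.
path_good = np.exp(R @ np.log(p_link_true))   # what end-to-end measurement estimates
p_link = np.exp(np.linalg.solve(R, np.log(path_good)))
print(p_link)  # recovers p_link_true from ideal measurements
```

With correlated links the per-link factorization no longer holds, which is why the thesis instead writes equations over sets of paths and sets of links.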