28 research outputs found

    Shortest Path versus Multi-Hub Routing in Networks with Uncertain Demand

    Full text link
    We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. The other extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices, which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and those for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities to the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose a routing indicator that accounts for the relative strengths of the marginals and peak demands, and show how this information can be used to choose an appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
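    As an illustration of the capped hose model described above, the sketch below tests whether a candidate traffic matrix satisfies both the hose marginal bounds and the point-to-point peak caps. The names and the combined ingress/egress marginal are assumptions for illustration; the paper's exact formulation (for instance, separate ingress and egress bounds) may differ.

```python
# Sketch: membership test for a capped hose polytope (illustrative only).
# A traffic matrix D is taken to be feasible if, for assumed marginal
# bounds b[v] and point-to-point peak caps c[(u, v)]:
#   sum_u D[v][u] + sum_u D[u][v] <= b[v]   (hose marginal at each node v)
#   D[u][v] <= c[(u, v)]                    (peak cap on each pair)
# b, c, D are hypothetical names; the paper may bound ingress and egress
# marginals separately rather than their sum.

def in_capped_hose(D, b, c):
    nodes = list(b)
    for v in nodes:
        out_v = sum(D.get((v, u), 0.0) for u in nodes if u != v)
        in_v = sum(D.get((u, v), 0.0) for u in nodes if u != v)
        if out_v + in_v > b[v]:
            return False  # violates hose marginal at node v
    for (u, v), cap in c.items():
        if D.get((u, v), 0.0) > cap:
            return False  # violates peak cap on pair (u, v)
    return True

# Example: a 3-node matrix checked against marginals and pairwise caps.
b = {"a": 10.0, "b": 10.0, "c": 10.0}
c = {(u, v): 4.0 for u in b for v in b if u != v}
D = {("a", "b"): 3.0, ("b", "c"): 2.0, ("a", "c"): 4.0}
print(in_capped_hose(D, b, c))  # True for this matrix
```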

    Statistical Problem Detection

    No full text
    The detection of network fault scenarios was achieved using an appropriate subset of Management Information Base (MIB) variables. Anomalous changes in the behavior of the MIB variables were detected using a sequential Generalized Likelihood Ratio (GLR) test. This information was then temporally correlated using a duration filter to provide node-level alarms that correlated with observed network faults and performance problems. The algorithm was implemented on data obtained from two different network nodes. The algorithm was optimized using five of the nine fault data sets and proved general enough to detect three of the remaining four faults. Consistent results were obtained from the second node as well. Detection of most faults occurred in advance (at least 5 minutes) of the fault, suggesting the possibility of prediction and recovery in the future. (Supported by DARPA under contract number F30602-97-C-0274.)
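    The following sketch shows one way a sequential GLR-style change detector with a duration filter could be applied to a single MIB variable, along the lines described above. The window lengths, threshold, and Gaussian mean-shift statistic are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a sequential GLR-style change detector with a duration filter.
# For each time t, a recent test window is compared against a reference
# window; under a Gaussian assumption, the GLR statistic for a mean shift
# reduces to n * (mean_test - mean_ref)^2 / (2 * var_ref). All parameter
# values below are hypothetical.

def glr_alarms(series, ref_len=60, test_len=10, threshold=10.0, min_duration=3):
    alarms = []
    run = 0
    for t in range(ref_len + test_len, len(series) + 1):
        ref = series[t - ref_len - test_len : t - test_len]
        test = series[t - test_len : t]
        mu = sum(ref) / len(ref)
        var = sum((x - mu) ** 2 for x in ref) / max(len(ref) - 1, 1) or 1e-9
        mu_t = sum(test) / len(test)
        glr = len(test) * (mu_t - mu) ** 2 / (2.0 * var)
        run = run + 1 if glr > threshold else 0
        if run >= min_duration:       # duration filter: persistent change only
            alarms.append(t - 1)      # index of the sample that triggered
    return alarms
```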

    End-to-end service quality measurement using source-routed probes

    No full text
    The need to monitor real-time network services has prompted service providers to use new measurement technologies, such as service-specific probes. Service-specific probes are active probes that closely mimic the service traffic so that they receive the same treatment from the network as the actual service traffic. These probes are end-to-end, and their deployment depends on solutions that address questions such as minimizing probe traffic while still obtaining maximum coverage of all the links in the network. In this paper, we provide a polynomial-time probe-path computation algorithm, as well as an approximate solution for merging probe paths when the number of probes exceeds a required bound. Our algorithms are evaluated using ISP topologies generated via Rocketfuel. We find that for most topologies, it is possible to cover a large fraction of the edges using only a small fraction of the nodes as terminals. Our work also suggests that the deployment strategy for active probes depends on cost issues such as probe installation, probe set-up, and maintenance costs.
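    To make the link-coverage objective concrete, the sketch below greedily selects shortest probe paths between terminal pairs until no further links can be covered. It is only an illustration under assumed inputs (an undirected adjacency dict and a terminal set); it is not the paper's polynomial-time algorithm or its path-merging approximation.

```python
from collections import deque
from itertools import combinations

def bfs_path(graph, src, dst):
    """Fewest-hop path from src to dst in an adjacency-dict graph, or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_probe_paths(graph, terminals):
    """Greedily pick terminal-to-terminal shortest paths to cover edges."""
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}
    covered, chosen = set(), []
    candidates = [bfs_path(graph, s, t) for s, t in combinations(terminals, 2)]
    candidates = [p for p in candidates if p]
    while covered != edges and candidates:
        best = max(candidates,
                   key=lambda p: len({frozenset(e) for e in zip(p, p[1:])} - covered))
        gain = {frozenset(e) for e in zip(best, best[1:])} - covered
        if not gain:
            break  # no candidate path adds new edge coverage
        covered |= gain
        chosen.append(best)
    return chosen, covered == edges
```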

    Distributed Network Monitoring with Bounded Link Utilization in IP Networks

    No full text
    Designing optimal measurement infrastructure is a key step for network management. In this work we address the problem of optimizing a scalable distributed polling system. The goal of the optimization is to reduce the cost of deployment of the measurement infrastructure by identifying a minimum poller set subject to bandwidth constraints on the individual links. We show that this problem is NP-hard and propose three different heuristics to obtain a solution. We evaluate our heuristics on both hierarchical and flat topologies with different network sizes under different polling bandwidth constraints. We find that the heuristic of choosing the poller that can poll the maximum number of unpolled nodes is the best approach. Our simulation studies show that the results obtained by our best heuristic are close to the lower bound obtained using LP relaxation.
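    The best-performing heuristic described above, repeatedly choosing the poller that can poll the most still-unpolled nodes, is essentially a greedy set-cover-style selection. In the sketch below, the link-bandwidth feasibility is assumed to have been folded into a precomputed reachability map; that preprocessing, and the names used here, are hypothetical.

```python
# Sketch of the "maximum unpolled nodes" greedy heuristic. reachable[p] is
# assumed to hold the set of nodes that candidate poller p can poll without
# violating per-link bandwidth budgets (computing that set is not shown).

def greedy_pollers(reachable, nodes):
    unpolled = set(nodes)
    pollers = []
    while unpolled:
        # Pick the candidate that covers the most still-unpolled nodes.
        best = max(reachable, key=lambda p: len(reachable[p] & unpolled))
        gain = reachable[best] & unpolled
        if not gain:
            break  # remaining nodes cannot be polled under the constraints
        pollers.append(best)
        unpolled -= gain
    return pollers, unpolled  # chosen pollers and any uncoverable nodes

# Hypothetical example with three candidate pollers.
reachable = {"p1": {"a", "b", "c"}, "p2": {"c", "d"}, "p3": {"d", "e"}}
print(greedy_pollers(reachable, {"a", "b", "c", "d", "e"}))  # (['p1', 'p3'], set())
```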

    Router Buffer Sizing Revisited: The Role of the Output/Input Capacity Ratio

    No full text
    The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or only analyzes persistent TCP flows. This paper differs in two ways. First, it considers the more realistic case of non-persistent TCP flows with heavy-tailed size distribution. Second, instead of only looking at link metrics, we focus on the impact of buffer sizing on TCP performance. Specifically, our goal is to find the buffer size that maximizes the average per-flow TCP throughput. Through a combination of testbed experiments, simulation, and analysis, we reach the following conclusions. The output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero. Otherwise, if the output/input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with flows that are mostly in congestion avoidance. Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on “small versus large” buffers from a new perspective.
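    The two regimes summarized above can be sketched as simple loss-rate models: an exponential drop in loss rate with buffer size when the output/input capacity ratio exceeds one, and a power-law drop when it is below one. The constants in the sketch are hypothetical placeholders, not values derived in the paper.

```python
import math

def loss_rate(buffer_pkts, ratio, c_exp=0.05, a_pow=1.5, p0=0.1):
    """Qualitative loss-rate model as a function of buffer size B (packets).

    ratio > 1: p(B) ~ p0 * exp(-c_exp * B)    (exponential drop; tiny buffers suffice)
    ratio < 1: p(B) ~ p0 * (1 + B) ** -a_pow  (power-law drop; sizeable buffers needed)
    All constants are illustrative, not fitted values from the paper.
    """
    if ratio > 1:
        return p0 * math.exp(-c_exp * buffer_pkts)
    return p0 * (1 + buffer_pkts) ** (-a_pow)

for B in (10, 100, 1000):
    print(B, loss_rate(B, ratio=2.0), loss_rate(B, ratio=0.5))
```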