
    The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms

    Evaluation of Internet protocols usually relies on random scenarios or scenarios based on the designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst- or best-case) scenarios; a more systematic approach is needed to synthesize them. In this paper, we present a method for the automatic synthesis of worst- and best-case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm to search the protocol and system state space and synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria, and we introduce the notion of a virtual LAN to represent the delays of the underlying multicast distribution tree. The algorithms in our method perform an implicit backward search from given target events using branch-and-bound techniques, which aims to reduce the search complexity drastically. As a case study, we use our method to evaluate variants of the timer-suppression mechanism, used in various multicast protocols, with respect to two performance criteria: the overhead of response messages and the response time. Simulation results for reliable multicast protocols show that our method provides a scalable way to synthesize worst-case scenarios automatically, and that results obtained under stress scenarios differ dramatically from those obtained through average-case analyses. We hope our method serves as a model for applying systematic scenario generation to other multicast protocols.
    Comment: 24 pages, 10 figures, IEEE/ACM Transactions on Networking (ToN) [To appear]
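    The abstract names the timer-suppression mechanism but does not spell it out. Below is a minimal illustrative sketch, not taken from the paper, of the generic mechanism as commonly used for response/NAK suppression in reliable multicast: each receiver delays its reply by a random time and cancels it if it overhears another receiver's reply first. The function, parameters, and the assumption of a single uniform one-way delay are all hypothetical simplifications.

    ```python
    # Sketch (not from the paper) of a generic multicast timer-suppression round:
    # each receiver schedules a response after a uniform random delay in [0, T]
    # and suppresses it if another receiver's response reaches it first.
    import random

    def simulate_suppression(num_receivers, max_timer, one_way_delay, seed=0):
        """Return (responses_sent, response_time) for one loss-recovery round."""
        rng = random.Random(seed)
        # Each receiver independently picks a random timer in [0, max_timer].
        timers = sorted(rng.uniform(0.0, max_timer) for _ in range(num_receivers))
        responses = 0
        first_response = None
        for t in timers:
            # A receiver is suppressed if the first response (one multicast hop
            # away, assumed uniform delay) arrives before its own timer fires.
            if first_response is not None and first_response + one_way_delay <= t:
                continue
            responses += 1
            if first_response is None:
                first_response = t
        # Overhead = number of (possibly duplicate) responses actually sent;
        # response time = when the earliest response fires.
        return responses, first_response

    if __name__ == "__main__":
        sent, rtime = simulate_suppression(num_receivers=50, max_timer=1.0,
                                           one_way_delay=0.1)
        print(f"responses sent: {sent}, response time: {rtime:.3f}s")
    ```

    The two quantities returned correspond to the two performance criteria the paper evaluates (response-message overhead and response time); the paper's contribution is synthesizing the delay/topology scenarios that drive these metrics to their boundary values, not this mechanism itself.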

    Fairness Evaluation Experiments for Multicast Congestion Control Protocols

    Fairness toward existing Internet traffic, particularly TCP, is an important requirement for new protocols if they are to be safely deployed in the Internet. This applies especially to multicast protocols, which must be deployed with great care. In this paper we provide a set of experiments that can be used as a benchmark to evaluate the fairness of multicast congestion control mechanisms when they run alongside competing TCP flows. We carefully select the experiments to target specific congestion control mechanisms and to reveal the differences between TCP and the proposed multicast protocol. This gives us a better understanding of the proposed protocol's behavior and lets us evaluate its fairness and determine when violations can occur. To illustrate the experiments, we apply them to a single-rate case-study protocol, pgmcc, using NS-2 simulations. Our analysis shows the strengths and potential problems of the protocol and points to possible improvements. The experiments target several congestion control mechanisms, such as timeouts, the response to ACKs and losses, and the effect of independent versus congestion losses. In addition, we evaluate multicast-specific mechanisms such as the effect of multiple receivers, group representative selection, and feedback suppression when there is network support.
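    The abstract does not say how fairness is quantified in the benchmark. As a hedged illustration only, the sketch below shows two common ways such an experiment's per-flow throughput measurements (e.g., extracted from an NS-2 trace of a multicast session competing with TCP flows) might be summarized; the function names, metric choices, and sample numbers are hypothetical and not taken from the paper.

    ```python
    # Illustrative sketch (not from the paper): summarizing one fairness
    # experiment from per-flow throughput measurements.

    def tcp_fairness_ratio(multicast_tput, tcp_tputs):
        """Multicast throughput relative to the mean competing-TCP throughput.
        Values near 1.0 suggest TCP-friendly behavior; values much greater than
        1 suggest the multicast flow starves TCP, much less than 1 the reverse."""
        mean_tcp = sum(tcp_tputs) / len(tcp_tputs)
        return multicast_tput / mean_tcp

    def jain_index(tputs):
        """Jain's fairness index over all competing flows (1.0 = perfectly fair)."""
        return sum(tputs) ** 2 / (len(tputs) * sum(x * x for x in tputs))

    if __name__ == "__main__":
        pgmcc_tput = 0.9                      # Mbps, hypothetical measurement
        tcp_tputs = [1.1, 1.0, 0.95, 1.05]    # Mbps, hypothetical measurements
        print("fairness ratio:", tcp_fairness_ratio(pgmcc_tput, tcp_tputs))
        print("Jain index:", jain_index([pgmcc_tput] + tcp_tputs))
    ```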