19 research outputs found
On improving bandwidth assurance in AF-based DiffServ networks using a control theoretic approach
The assured forwarding (AF) based service in a differentiated services (DiffServ) network fails to provide bandwidth assurance among competing aggregates under certain conditions, for example, when there is a large disparity in the round-trip times, packet sizes, or target rates of the aggregates, or when non-adaptive aggregates are present. Several mechanisms have been proposed to provide bandwidth assurance for aggregates using only the knowledge gathered at ingress routers. In this paper, we present a control theoretic approach to analyze these mechanisms and explain why they fail to achieve bandwidth assurance under some circumstances. We then propose a simple but robust controller for this problem, namely, the variable-structure adaptive CIR threshold (VS-ACT) mechanism. Through extensive simulations, we validate the analysis and demonstrate that VS-ACT outperforms several other mechanisms proposed in the literature over a wide range of network dynamics. (c) 2005 Elsevier B.V. All rights reserved.
A Petri net approach for logical inference of clauses
Logical inference of clauses is an important technique in automated reasoning. The inference process determines whether a given clause is implied by a collection of clauses. Petri nets are a popular formalism for modelling the behavior of complex systems, and a large body of techniques has been developed to analyze Petri net models and draw conclusions about the logical behavior of systems. The computation of the T-invariants of Petri net models enables us to study their logical properties. Mappings between Horn clauses and Petri nets have been proposed in the literature. In this paper, we survey these mapping techniques and show how the mapping can be extended to non-Horn clauses.
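Since Horn-clause inference is monotonic, the token game on such a net can be sketched with a boolean marking: each proposition becomes a place, and each clause a transition that fires once all of its body places are marked. A minimal illustration (the clause and fact names are invented, not from the paper):

```python
def infer(facts, clauses, goal):
    """Forward-chain over Horn clauses viewed as a Petri net:
    propositions are places, each clause (body -> head) is a transition
    that fires when every body place is marked, marking the head place.
    The marking is treated as boolean since inference is monotonic."""
    marked = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in marked and set(body) <= marked:
                marked.add(head)  # transition fires
                changed = True
    return goal in marked

# (a AND b) -> c, c -> d; illustrative clauses only
clauses = [(("a", "b"), "c"), (("c",), "d")]
print(infer({"a", "b"}, clauses, "d"))  # True
```

The paper's T-invariant analysis works on the net's incidence matrix instead; this sketch only shows the reachability view of the same clause-to-net mapping.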
Effect of different marking strategies on explicit congestion notification (ECN) performance
The congestion control mechanisms built into the Transmission Control Protocol (TCP) use packet drops as a means to detect congestion in the network. Unnecessary packet drops lead to poor performance for low-bandwidth delay-sensitive applications. Explicit Congestion Notification (ECN) has been proposed as a mechanism to provide feedback to the sources about impending congestion in the routers, without the need to drop packets. This requires the ECN bit of the IP packet to be marked at the router based on mechanisms like Random Early Detection (RED) to identify congestion. In this paper, we examine three different marking strategies, viz., mark-tail, mark-front, and mark-random. The throughput performance of ECN flows and the unfairness among the ECN flows are examined. We also study the interaction between ECN and non-ECN flows.
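The three strategies differ only in which queued packet receives the congestion mark. A minimal sketch, where the queue representation (a list of packet dicts) is an assumption for illustration:

```python
import random

def mark_one(queue, strategy, rng=random):
    """Set the ECN flag on one packet in a router queue.
    mark-tail marks the newest arrival, mark-front the packet about to
    depart (so the feedback reaches the source soonest), and mark-random
    a uniformly chosen queued packet."""
    if not queue:
        return
    if strategy == "tail":
        queue[-1]["ecn"] = True
    elif strategy == "front":
        queue[0]["ecn"] = True
    elif strategy == "random":
        rng.choice(queue)["ecn"] = True

queue = [{"seq": i, "ecn": False} for i in range(5)]
mark_one(queue, "front")
print(queue[0])  # {'seq': 0, 'ecn': True}
```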
Adaptive marking threshold for assured forwarding services
Recent research has shown that the Assured Forwarding (AF) service in the current Differentiated Services (DiffServ) framework does not provide bandwidth assurance in some circumstances. This paper proposes a mechanism, called Adaptive CIR+PIR Threshold (ACPT), which improves bandwidth assurance and domain throughput simultaneously. Extensive simulation results demonstrate that, compared to other mechanisms proposed in the literature, ACPT significantly improves bandwidth assurance and domain throughput under various conditions: different round-trip times (RTTs), different numbers of micro-flows in an aggregate, different target rates, different packet sizes, and the presence of non-adaptive flows.
Modeling correlation in software recovery blocks
This paper considers the problem of accurately modeling the software fault-tolerance technique based on recovery blocks. Models of such systems have been criticized for their assumptions of independence. Analyses of some systems have considered the correlation between software modules. This correlation may be due to a portion of the functional specification that is common to all software modules, or to the inherent hardness of some problems. We consider three types of dependence that can be captured using measurements: correlation between software modules for a single input, correlation between successive acceptance tests on correct and incorrect module outputs, and correlation between subsequent inputs. The technique we use is quite general and can be applied to other types of correlation. In accounting for dependence, we use the intensity distribution introduced by Eckhardt and Lee. We consider a new method of generating the intensity distribution based on the pairwise correlation between modules; this method provides us with a pessimistic result and a probability-based approximation. We contrast it with the assumption of independent modules, as well as with the use of the beta-binomial density introduced by Nicola and Goyal. To obtain numerical results, we use stochastic reward nets (SRNs) that incorporate all of the above dependencies and solve them using a modeling tool called the Stochastic Petri Net Package (SPNP).
VQ-RED: An efficient virtual queue management approach to improve fairness in infrastructure WLAN
In this paper, we consider two fairness problems (downlink/uplink fairness and fairness among flows in the same direction) that arise in infrastructure WLANs. We propose a virtual queue management approach, named VQ-RED, to address these fairness problems, and demonstrate its effectiveness through a series of simulations. The results show that, compared with the standard DCF, VQ-RED not only greatly improves fairness but also reduces packet delays.
Accelerating mean time to failure computations
In this paper we consider the problem of numerical computation of the mean time to failure (MTTF) in Markovian dependability and/or performance models. The problem can be cast as a system of linear equations, which is solved using an iterative method preserving sparsity of the Markov chain matrix. For highly dependable systems, system failure is a rare event, and the above system solution can take an extremely large number of iterations. We propose to solve the problem by dividing the computation into two parts. First, by making some of the high-probability states absorbing, we compute the MTTF of the modified Markov chain. In a subsequent step, by solving another system of linear equations, we are able to compute the MTTF of the original model. We prove that for a class of highly dependable systems, the resulting method can speed up computation of the MTTF by orders of magnitude. Experimental results supporting this claim are presented. We also obtain bounds on the convergence rate for computing the mean entrance time of a rare set of states in a class of queueing models.
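For reference, the baseline computation being accelerated is the solution of a linear system over the transient states: with Q_T the generator restricted to transient states, the vector t of expected absorption times satisfies Q_T t = -1. A direct-solve sketch on a toy two-state chain (the rates are invented; the paper's point is that iterative solution of this system converges slowly when failure is rare):

```python
def mttf(Q, start=0):
    """Mean time to failure of a CTMC: solve Q_T t = -1 by Gaussian
    elimination (no pivoting; adequate for this tiny example) and
    return the expected absorption time from the start state."""
    n = len(Q)
    A = [row[:] + [-1.0] for row in Q]  # augmented system [Q_T | -1]
    for i in range(n):
        piv = A[i][i]
        A[i] = [x / piv for x in A[i]]
        for j in range(n):
            if j != i:
                f = A[j][i]
                A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    return A[start][n]

# Transient states: 0 = up, 1 = degraded; failure is absorbing.
# up -> degraded at rate 2, degraded -> up (repair) at rate 1,
# degraded -> failed at rate 1.  Illustrative numbers only.
Q = [[-2.0, 2.0],
     [ 1.0, -2.0]]
print(mttf(Q))  # 2.0
```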
Analysis of nonblocking ATM switches with multiple input queues
An analytical model for the performance analysis of a multiple-input-queued asynchronous transfer mode (ATM) switch is presented in this paper. The interconnection network of the ATM switch is internally nonblocking, and each input port maintains a separate queue of cells for each output port. The switch uses parallel iterative matching (PIM) [7] to find a maximal matching between the input and output ports of the switch. A closed-form solution for the maximum throughput of the switch under saturated conditions is derived. It is found that the maximum throughput of the switch exceeds 99% with just four iterations of the PIM algorithm. Using the tagged input queue approach, an analytical model is developed for evaluating the switch performance under independent identically distributed Bernoulli traffic with the cell destinations uniformly distributed over all output ports. The switch throughput, mean cell delay, and cell loss probability are computed from the analytical model. The accuracy of the analytical model is verified using simulation.
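PIM proceeds in request-grant-accept rounds: every unmatched input requests all outputs it has cells for, each output grants one requester at random, and each input accepts one grant at random. A runnable sketch (the data layout is an assumption for illustration):

```python
import random

def pim(requests, iterations=4, seed=0):
    """Parallel iterative matching: requests[i] is the set of output
    ports input i has queued cells for.  Returns input -> output."""
    rng = random.Random(seed)
    match, taken = {}, set()
    for _ in range(iterations):
        # Request phase: unmatched inputs request all unmatched outputs.
        grants = {}
        for i, outs in requests.items():
            if i in match:
                continue
            for o in outs:
                if o not in taken:
                    grants.setdefault(o, []).append(i)
        # Grant phase: each output grants one requesting input at random.
        accepts = {}
        for o, reqs in grants.items():
            accepts.setdefault(rng.choice(reqs), []).append(o)
        if not accepts:
            break
        # Accept phase: each input keeps one of the outputs that granted it.
        for i, outs in accepts.items():
            o = rng.choice(outs)
            match[i] = o
            taken.add(o)
    return match

print(pim({0: {0, 1}, 1: {0}, 2: {1}}))
```

Each round matches at least one remaining input-output pair while any are available, which is why a handful of iterations suffices; the abstract's observation of over 99% throughput at four iterations reflects this fast convergence.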
Exploiting proximity in cooperative download of large files in peer-to-peer networks
Peer-to-peer networks have long been used for file sharing among the connected peers. Recently, the popular BitTorrent tool has enabled the exchange of large files by dividing each file into fragments and letting the peers exchange the fragments among themselves through an overlay network; each file requires the establishment of a separate overlay among the peers. In this paper, we explore the use of proximity both in the construction of the overlay network and in the efficient exchange of the file fragments, mainly aiming to reduce the download time for the peers and the resource usage (esp. link bandwidth) in the underlying network. We give some analytical and simulation results to show the improvement that can be achieved using proximity. © 2007 IEEE
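One simple way proximity can enter the fragment exchange is to prefer the nearest holder of each needed fragment. A deliberately simplified sketch (function and parameter names are invented, and the paper's proximity-aware overlay construction is not modeled here):

```python
def pick_sources(needed, peer_fragments, peer_rtt):
    """For each needed fragment, download from the holder with the
    lowest round-trip time, reducing download time and the load on
    long paths in the underlying network."""
    plan = {}
    for frag in sorted(needed):
        holders = [p for p, frags in peer_fragments.items() if frag in frags]
        if holders:
            plan[frag] = min(holders, key=lambda p: peer_rtt[p])
    return plan

plan = pick_sources({1, 2}, {"A": {1}, "B": {1, 2}}, {"A": 10, "B": 50})
print(plan)
```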