545 research outputs found

    Weighted Round Robin Configuration for Worst-Case Delay Optimization in Network-on-Chip

    We propose an approach for computing the end-to-end delay bound of individual variable bit-rate flows in a FIFO multiplexer with aggregate scheduling under the Weighted Round Robin (WRR) policy. To this end, we use network calculus to derive per-flow end-to-end equivalent service curves, which are used to compute the Least Upper Delay Bounds (LUDBs) of individual flows. Since real-time applications require guaranteed service with low delay bounds, we optimize the WRR weights to minimize the LUDBs while satisfying performance constraints. We formulate two constrained delay optimization problems, namely Minimize-Delay and Multi-objective optimization; the latter has both the total delay bounds and their variance as minimization objectives. Both optimizations are solved using a genetic algorithm. A Video Object Plane Decoder (VOPD) case study exhibits a 15.4% reduction in total worst-case delays and a 40.3% reduction in the variance of delays compared with the round-robin policy. The optimization algorithm has low run-time complexity, enabling quick exploration of large design spaces. We conclude that appropriate weight allocation can be a valuable instrument for delay optimization in on-chip network designs.
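    A minimal sketch of the weight-tuning idea, assuming a crude per-flow delay-bound proxy (latency plus burst over guaranteed rate) in place of the paper's network-calculus LUDB derivation; the flow set, link rate, class count, and genetic-algorithm settings are invented for illustration.

```python
import random

# Hypothetical flows: each has a burst, a rate, and the WRR class it uses.
FLOWS = [
    {"burst": 4.0, "rate": 1.0, "cls": 0},
    {"burst": 2.0, "rate": 0.5, "cls": 1},
    {"burst": 6.0, "rate": 1.5, "cls": 2},
]
LINK_RATE = 8.0          # total service rate shared by the WRR classes
N_CLASSES = 3

def ludb_proxy(weights):
    """Crude per-flow delay-bound proxy: each class gets a rate-latency service
    curve with rate proportional to its WRR weight; a token-bucket flow with
    burst b then has delay bound T + b / R.  This stands in for the paper's
    network-calculus LUDB computation."""
    total = sum(weights)
    bounds = []
    for f in FLOWS:
        r_cls = LINK_RATE * weights[f["cls"]] / total   # guaranteed class rate
        if r_cls <= f["rate"]:
            return None                                  # infeasible: class overloaded
        latency = 1.0 / r_cls                            # toy latency term
        bounds.append(latency + f["burst"] / r_cls)
    return bounds

def fitness(weights):
    b = ludb_proxy(weights)
    if b is None:
        return float("inf")
    mean = sum(b) / len(b)
    var = sum((x - mean) ** 2 for x in b) / len(b)
    return sum(b) + var          # multi-objective proxy: total delay plus variance

def genetic_search(pop_size=30, generations=200):
    pop = [[random.randint(1, 16) for _ in range(N_CLASSES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_CLASSES)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(N_CLASSES)] = random.randint(1, 16)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = genetic_search()
    print("best weights:", best, "delay-bound proxies:", ludb_proxy(best))
```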

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, which are susceptible to random failures, security threats and malicious behaviours that compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, of a variable rate of advancement towards the destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet's travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
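    The sketch below is a toy Monte Carlo version of a diffusion-based travel model, assuming an invented position-dependent drift and loss rate and a rule that a lost packet is retransmitted from the source; it is not the thesis's actual model, only an illustration of how a non-homogeneous travel process can be simulated and its mean travel time estimated.

```python
import math
import random

# Illustrative parameters (not from the thesis): distance D from source (x = 0)
# to destination (x = D), position-dependent drift and loss rate, diffusion sigma.
D = 10.0
DT = 0.01

def drift(x):
    return 1.0 + 0.5 * x / D          # packet advances faster near the destination

def loss_rate(x):
    return 0.02 * (1.0 - x / D)       # losses more likely far from the destination

def one_trip(sigma=0.5, timeout=1e4):
    """Simulate one packet: drift plus Brownian noise toward the destination,
    with losses that send the packet back to the source (retransmission)."""
    x, t = 0.0, 0.0
    while x < D and t < timeout:
        if random.random() < loss_rate(x) * DT:
            x = 0.0                                   # packet lost, retransmitted
        else:
            x += drift(x) * DT + sigma * math.sqrt(DT) * random.gauss(0.0, 1.0)
            x = max(x, 0.0)
        t += DT
    return t

def mean_travel_time(n=2000):
    return sum(one_trip() for _ in range(n)) / n

if __name__ == "__main__":
    print("estimated mean travel time:", round(mean_travel_time(), 2))
```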

    Analysis of Multiple Flows using Different High Speed TCP protocols on a General Network

    We develop analytical tools for performance analysis of multiple TCP flows (which could be using TCP CUBIC, TCP Compound, or TCP New Reno) passing through a multi-hop network. We first compute the average window size of a single TCP connection (using CUBIC or Compound TCP) under random losses. We then consider two techniques to compute the steady-state throughput of different TCP flows in a multi-hop network. In the first technique, we approximate the queues as M/G/1 queues. In the second technique, we use an optimization program whose solution approximates the steady-state throughput of the different flows. Our results match well with ns2 simulations.
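    A hedged sketch of the first technique, assuming each hop behaves as an M/G/1 queue whose mean waiting time follows the Pollaczek-Khinchine formula, and using the classic New Reno square-root throughput approximation as a stand-in for the paper's window-size analysis of CUBIC and Compound; all constants and names are illustrative.

```python
import math

MSS = 1500 * 8            # packet size in bits
LINK_RATE = 10e6          # bits/s at every hop (assumed identical hops)
BASE_RTT = 0.05           # propagation RTT in seconds
LOSS_P = 0.01             # random loss probability
N_HOPS = 3
ES = MSS / LINK_RATE      # mean service time of a packet
ES2 = ES ** 2             # second moment (deterministic service here)

def mg1_wait(lam):
    """Mean waiting time of an M/G/1 queue (Pollaczek-Khinchine formula)."""
    rho = lam * ES
    if rho >= 1.0:
        return float("inf")
    return lam * ES2 / (2.0 * (1.0 - rho))

def throughput_fixed_point(iters=100):
    """Iterate between queueing delay (from the M/G/1 approximation) and the
    TCP sending rate MSS / (RTT * sqrt(2p/3)) until they agree."""
    rate = 1e5                                     # initial guess, bits/s
    for _ in range(iters):
        lam = rate / MSS                           # packets/s offered to each hop
        rtt = BASE_RTT + N_HOPS * (mg1_wait(lam) + ES)
        rate = MSS / (rtt * math.sqrt(2.0 * LOSS_P / 3.0))
        rate = min(rate, 0.95 * LINK_RATE)         # keep the queues stable
    return rate

if __name__ == "__main__":
    print("approximate steady-state throughput: %.2f Mbit/s"
          % (throughput_fixed_point() / 1e6))
```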

    Throughput Optimal Routing in Overlay Networks

    Maximum throughput requires path diversity, enabled by bifurcating traffic at different network nodes. In this work, we consider a network where traffic bifurcation is allowed only at a subset of nodes called routers, while the remaining nodes (called forwarders) cannot bifurcate traffic and hence only forward packets on specified paths. This implements an overlay network of routers in which each overlay link corresponds to a path in the physical network. We study dynamic routing implemented at the overlay. We develop a queue-based policy, which is shown to be maximally stable (throughput optimal) for a restricted class of network scenarios where overlay links do not correspond to overlapping physical paths. Simulation results show that our policy yields lower delay than dynamic policies that allow bifurcation at all nodes, such as the backpressure policy. Additionally, we provide a heuristic extension of our proposed overlay routing scheme for the unrestricted class of networks.
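    The following sketch only illustrates the flavour of a queue-based overlay policy, assuming a backpressure-style rule that picks the overlay link (tunnel) with the largest positive queue differential and is applied only at router nodes; the topology, queue values, and the decision rule itself are invented and are not the paper's exact policy.

```python
from collections import defaultdict

# Overlay topology: router -> list of next overlay routers (each edge is a tunnel
# over a physical path).  Values are illustrative.
overlay_links = {
    "r1": ["r2", "r3"],
    "r2": ["r4"],
    "r3": ["r4"],
    "r4": [],
}
queues = defaultdict(float, {"r1": 12.0, "r2": 5.0, "r3": 2.0, "r4": 0.0})

def overlay_route(node):
    """Pick the overlay link with the largest positive queue differential;
    return None to hold the packet this slot if no differential is positive."""
    best, best_diff = None, 0.0
    for nxt in overlay_links[node]:
        diff = queues[node] - queues[nxt]
        if diff > best_diff:
            best, best_diff = nxt, diff
    return best

if __name__ == "__main__":
    print("r1 forwards its next packet toward:", overlay_route("r1"))
```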

    Concave Switching in Single and Multihop Networks

    Switched queueing networks model wireless networks, input-queued switches and numerous other networked communications systems. For single-hop networks, we consider an (α, g)-switch policy, which combines the MaxWeight policies with bandwidth-sharing networks, a further well-studied model of Internet congestion. We prove the maximum stability property for this class of randomized policies; thus these policies have the same first-order behavior as the MaxWeight policies. However, for multihop networks, some of these generalized policies address a number of critical weaknesses of the MaxWeight/BackPressure policies. For multihop networks with fixed routing, we consider the Proportional Scheduler (or (1, log)-policy). In this setting, the BackPressure policy is maximally stable, but must maintain a queue for every route-destination pair, the number of which typically grows rapidly with a network's size. The proportionally fair policy, in contrast, only needs to maintain a queue for each outgoing link, which is typically bounded in number. As is common with Internet routing, by maintaining per-link queueing each node only needs to know the next hop for each packet and not its entire route. Further, in contrast to BackPressure, the Proportional Scheduler does not compare downstream queue lengths to determine weights; only local link information is required. This leads to greater potential for decomposed implementations of the policy. Through a reduction argument and an entropy argument, we demonstrate that, whilst maintaining substantially less queueing overhead, the Proportional Scheduler achieves maximum throughput stability.
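    A small illustrative contrast between MaxWeight and a concave-weighted variant on a toy single-hop switch, using g(x) = log(1 + x) as a stand-in concave function; the queue values and feasible schedules are invented, and this is only a sketch of the general idea rather than the paper's precise (α, g)-policy definition.

```python
import math

queues = [9.0, 4.0, 4.0]                      # one queue per link
schedules = [(1, 0, 0), (0, 1, 1)]            # feasible activation sets (interference)

def pick(schedules, weight):
    """Return the schedule maximising the weighted sum of activated links."""
    return max(schedules, key=lambda s: sum(w * x for w, x in zip(weight, s)))

maxweight = pick(schedules, queues)                               # argmax sum q_l * s_l
concave   = pick(schedules, [math.log(1.0 + q) for q in queues])  # argmax sum g(q_l) * s_l

print("MaxWeight schedule:      ", maxweight)   # favours the single long queue
print("Concave-weight schedule: ", concave)     # spreads service across more links
```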

    Comparison of Strategies for Computing Bounds on a NoC

    The Kalray MPPA2-256 processor integrates 256 processing cores and 32 management cores on a chip. These cores are grouped into clusters, and the clusters are connected by a high-performance network on chip (NoC). This NoC provides hardware mechanisms (egress traffic limiters) that can be configured to offer bounded latencies. This paper presents how network calculus can be used to bound these latencies while computing the routes of data flows, using linear programming. It then shows how other approaches can also be used and adapted to analyze this NoC. Their performances are compared on three case studies: two small ones taken from previous studies, and one realistic case with 128 or 256 flows. On these case studies, the results show that modeling the shaping introduced by links is of major importance for obtaining accurate bounds. When packets are of constant size, Total Flow Analysis gives, on average, bounds 20-25% smaller than all other methods.
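    A toy network-calculus computation showing why modeling link shaping matters, assuming a single token-bucket flow crossing one rate-latency server; the parameters are invented and the calculation is far simpler than the MPPA NoC models compared in the paper.

```python
# Token-bucket flow (burst b, rate r) crossing a rate-latency server (rate R,
# latency T).  The delay bound is the horizontal deviation between the arrival
# and service curves; accounting for the shaping of the ingress link (capacity C,
# maximum packet size L) caps the burst actually seen and tightens the bound.

def delay_bound(arrival, R, T, horizon=100.0, step=0.001):
    """Numerical horizontal deviation between a concave arrival curve and the
    rate-latency service curve beta(t) = R * max(t - T, 0)."""
    best = 0.0
    t = 0.0
    while t <= horizon:
        best = max(best, (arrival(t) - R * t) / R)
        t += step
    return T + best

b, r = 64.0, 2.0        # burst (flits) and sustained rate (flits per cycle)
R, T = 4.0, 10.0        # residual service rate and latency at the server
C, L = 8.0, 16.0        # ingress link rate and maximum packet size

plain  = delay_bound(lambda t: b + r * t, R, T)
shaped = delay_bound(lambda t: min(b + r * t, L + C * t), R, T)

print("bound without link shaping: %.1f cycles" % plain)
print("bound with link shaping:    %.1f cycles" % shaped)
```

    With these illustrative numbers the shaped arrival curve tightens the bound from 26 to 22 cycles, which is the qualitative effect the paper attributes to modeling link shaping.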

    Proportional switching in FIFO networks

    We consider a family of discrete time multihop switched queueing networks where each packet moves along a fixed route. In this setting, BackPressure is the canonical choice of scheduling policy; this policy has the virtues of possessing a maximal stability region and not requiring explicit knowledge of traffic arrival rates. BackPressure has certain structural weaknesses because implementation requires information about each route, and queueing delays can grow super-linearly with route length. For large networks, where packets over many routes are processed by a queue, or where packets over a route are processed by many queues, these limitations can be prohibitive. In this article, we introduce a scheduling policy for FIFO networks, the Proportional Scheduler, which is based on the proportional fairness criterion. We show that, like BackPressure, the Proportional Scheduler has a maximal stability region and does not require explicit knowledge of traffic arrival rates. The Proportional Scheduler has the advantage that information about the network's route structure is not required for scheduling, which substantially improves the policy's performance for large networks. For instance, packets can be routed with only next-hop information and new nodes can be added to the network with only knowledge of the scheduling constraints. The research of the first author was partially supported by NSF grants DMS-1105668 and DMS-1203201. The research of the second author was partially supported by the Spanish Ministry of Economy and Competitiveness Grants MTM2013-42104-P via FEDER funds; he thanks the ICMAT (Madrid, Spain) Research Institute that kindly hosted him while developing this project.

    Proportional Switching in First-in, First-out Networks

    We consider a family of discrete time multihop switched queueing networks where each packet moves along a fixed route. In this setting, BackPressure is the canonical choice of scheduling policy; this policy has the virtues of possessing a maximal stability region and not requiring explicit knowledge of traffic arrival rates. BackPressure has certain structural weaknesses because implementation requires information about each route, and queueing delays can grow super-linearly with route length. For large networks, where packets over many routes are processed by a queue, or where packets over a route are processed by many queues, these limitations can be prohibitive. In this article, we introduce a scheduling policy for first-in, first-out networks, the Proportional Scheduler, which is based on the proportional fairness criterion. We show that, like BackPressure, the Proportional Scheduler has a maximal stability region and does not require explicit knowledge of traffic arrival rates. The Proportional Scheduler has the advantage that information about the network's route structure is not required for scheduling, which substantially improves the policy's performance for large networks. For instance, packets can be routed with only next-hop information and new nodes can be added to the network with only knowledge of the scheduling constraints.
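    A minimal sketch of the proportional fairness criterion behind the Proportional Scheduler, assuming a single shared-resource constraint (the per-link service shares sum to at most one), for which the maximiser of sum_l q_l * log(sigma_l) has a simple closed form; real switch constraint sets are richer and require a convex solver.

```python
def proportional_allocation(queues):
    """Maximise sum_l q_l * log(sigma_l) subject to sum_l sigma_l <= 1.
    Under this single constraint the optimum is sigma_l = q_l / sum_j q_j."""
    total = sum(queues.values())
    if total == 0:
        return {link: 0.0 for link in queues}
    return {link: q / total for link, q in queues.items()}

if __name__ == "__main__":
    q = {"link-a": 12.0, "link-b": 6.0, "link-c": 2.0}   # illustrative per-link queues
    for link, share in proportional_allocation(q).items():
        print(link, "gets service share", round(share, 3))
```

    Note that the allocation depends only on the local per-link queue lengths, which is the property the abstract highlights in contrast to BackPressure's per-route queueing.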