    Routing Games with Progressive Filling

    Max-min fairness (MMF) is a widely known approach to the fair allocation of bandwidth among the users of a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations in which bandwidth is raised with arbitrary, not necessarily uniform, time-dependent velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs). We present a variety of results on equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFGs), including the case when all users have the same source-destination pair. We characterize the prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.
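
    As a concrete reference point, the max-min fair allocation that the progressive-filling extension starts from can be computed by raising all users' rates uniformly and freezing a user whenever one of its links saturates. The sketch below is a minimal implementation of that uniform procedure only (not arbitrary allocation rates); the `routes` and `capacity` inputs are hypothetical examples.

```python
# A minimal sketch of uniform progressive filling (the MMF case only,
# not arbitrary allocation rates), assuming fixed routes. The `routes`
# and `capacity` inputs below are hypothetical examples.

def max_min_fair(routes, capacity):
    alloc = {u: 0.0 for u in routes}      # current bandwidth per user
    active = set(routes)                  # users whose rate still grows
    residual = dict(capacity)             # remaining capacity per link

    while active:
        # Largest uniform increase before some link saturates.
        step = min(
            residual[l] / sum(1 for u in active if l in routes[u])
            for l in residual
            if any(l in routes[u] for u in active)
        )
        for u in active:
            alloc[u] += step
        for l in residual:
            residual[l] -= step * sum(1 for u in active if l in routes[u])
        # Freeze every user crossing a saturated link.
        saturated = {l for l, c in residual.items() if c <= 1e-9}
        active = {u for u in active if not (routes[u] & saturated)}
    return alloc

# Two users share link 'a'; user 2 also crosses the tighter link 'b'.
print(max_min_fair({1: {'a'}, 2: {'a', 'b'}}, {'a': 10.0, 'b': 4.0}))
# -> {1: 6.0, 2: 4.0}
```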

    Selfish traffic allocation for server farms

    We study the price of selfish routing in noncooperative networks like the Internet. In particular, we investigate the price of selfish routing using the price of anarchy (a.k.a. the coordination ratio) and other (e.g., bicriteria) measures in the recently introduced game-theoretic parallel-links network model of Koutsoupias and Papadimitriou. We generalize this model toward general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows:
    1. We give an exact characterization of all cost functions having a bounded/unbounded price of anarchy. For example, the price of anarchy for cost functions describing the expected delay in queueing systems is unbounded.
    2. We show that an unbounded price of anarchy implies an extremely high performance degradation under bicriteria measures. In fact, the price of selfish routing can be as high as a bandwidth degradation by a factor that is linear in the network size.
    3. We separate the game-theoretic (integral) allocation model from the (fractional) flow model by demonstrating that even a very small or negligible amount of integrality can lead to a dramatic performance degradation.
    4. We unify recent results on selfish routing under different objectives by showing that an unbounded price of anarchy under the min-max objective implies an unbounded price of anarchy under the average-cost objective, and vice versa.
    Our special focus lies on cost functions describing the behavior of Web servers that can open only a limited number of Transmission Control Protocol (TCP) connections. In particular, we compare the performance of queueing systems that serve all incoming requests with servers that reject requests in case of overload. Our analysis indicates that queueing systems without rejection cannot give any reasonable guarantee on the expected delay of requests under selfish routing, even when the injected load is far from the capacity of the system. In contrast, Web server farms that are allowed to reject requests can guarantee a high quality of service for every individual request stream even under relatively high injection rates.
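
    To make the equilibrium-versus-optimum comparison concrete, here is a small numeric sketch (not the paper's model verbatim): two parallel links with M/M/1-style expected-delay cost 1/(c - x), where a Wardrop-style equilibrium equalizes delays on the links and the social optimum minimizes total latency. The capacities and demand below are illustrative assumptions.

```python
# Two parallel links with queueing-style delay f_i(x) = 1/(c_i - x) and
# total demand r split as (x, r - x). Equilibrium equalizes delays; the
# optimum minimizes total latency. All numbers are illustrative.

c1, c2, r = 2.0, 1.0, 1.0

def total_cost(x):                      # x = load on link 1
    y = r - x
    return x / (c1 - x) + (y / (c2 - y) if y > 0 else 0.0)

# Equilibrium: 1/(c1 - x) = 1/(c2 - (r - x))  =>  x = (c1 - c2 + r) / 2
x_eq = (c1 - c2 + r) / 2
cost_eq = total_cost(x_eq)

# Optimum: brute-force grid search over feasible splits.
xs = [i / 10000 for i in range(1, 10000) if i / 10000 < c1 and r - i / 10000 < c2]
cost_opt = min(total_cost(x) for x in xs)

print(f"equilibrium cost {cost_eq:.3f}, optimal cost {cost_opt:.3f}, "
      f"price of anarchy ~ {cost_eq / cost_opt:.3f}")
```

    As the capacities approach the injected load, the equilibrium cost blows up much faster than the optimal cost, which is the intuition behind the unbounded price of anarchy for queueing delay functions.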

    NoCo: ILP-based worst-case contention estimation for mesh real-time manycores

    Manycores are capable of providing the computational demands required by functionally-advanced critical applications in domains such as automotive and avionics. In manycores, a network-on-chip (NoC) provides access to shared caches and memories and hence concentrates most of the contention that tasks suffer, with effects on the worst-case contention delay (WCD) of packets and on tasks' worst-case execution time (WCET). While several proposals minimize the impact of individual NoC parameters on WCD, e.g. mapping and routing, there are strong dependences among these NoC parameters. Hence, finding the optimal NoC configuration requires optimizing all parameters simultaneously, which represents a multidimensional optimization problem. In this paper we propose NoCo, a novel approach that combines ILP and stochastic optimization to find NoC configurations in terms of packet routing, application mapping, and arbitration weight allocation. Our results show that NoCo improves on techniques that optimize only a subset of NoC parameters.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P and the HiPEAC Network of Excellence. It also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (agreement No. 772773). Carles Hernández is jointly supported by the MINECO and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella has been partially supported by the Spanish Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship number IJCI-2016-27396.
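
    To illustrate the flavor of the ILP side (this is not the NoCo formulation itself), the sketch below uses PuLP to choose XY or YX dimension-ordered routing per flow on a small mesh so as to minimize the maximum number of flows sharing any link, a crude stand-in for worst-case contention; the mesh coordinates and flows are assumptions.

```python
# Illustrative ILP: one binary variable per flow selects XY vs YX
# routing; the objective minimizes the most-loaded link. Requires PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def route(src, dst, xy_first):
    """Links (as ordered node pairs) of a dimension-ordered route."""
    (sx, sy), (dx, dy) = src, dst
    path, x, y = [], sx, sy
    for dim in (("x", "y") if xy_first else ("y", "x")):
        while (x, y) != ((dx, y) if dim == "x" else (x, dy)):
            step = ((x + (1 if dx > x else -1), y) if dim == "x"
                    else (x, y + (1 if dy > y else -1)))
            path.append(((x, y), step))
            x, y = step
    return path

flows = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((0, 1), (2, 1))]

prob = LpProblem("min_max_link_load", LpMinimize)
xy = [LpVariable(f"xy_{i}", cat="Binary") for i in range(len(flows))]
max_load = LpVariable("max_load", lowBound=0)
prob += max_load                                  # objective: min max load

links = {l for s, d in flows for l in route(s, d, True) + route(s, d, False)}
for l in links:
    prob += lpSum(
        xy[i] * int(l in route(s, d, True))
        + (1 - xy[i]) * int(l in route(s, d, False))
        for i, (s, d) in enumerate(flows)
    ) <= max_load

prob.solve()
print([("XY" if v.value() else "YX") for v in xy], max_load.value())
```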

    Improving performance guarantees in wormhole mesh NoC designs

    Wormhole-based mesh Networks-on-Chip (wNoCs) are deployed in high-performance many-core processors due to their physical scalability and low cost. However, their distributed nature challenges the delivery of tight and time-composable Worst-Case Execution Time (WCET) estimates for applications, as needed in safety-critical real-time embedded systems. We propose a bandwidth control mechanism for wNoCs that enables the computation of tight, time-composable WCET estimates with low average performance degradation and high scalability. Our evaluation with the EEMBC automotive suite and an industrial real-time parallel avionics application confirms these properties.

    The research leading to these results is funded by the European Union Seventh Framework Programme under grant agreement no. 287519 (parMERASA) and by the Ministry of Science and Technology of Spain under contract TIN2012-34557. Milos Panic is funded by the Spanish Ministry of Education under the FPU grant FPU12/05966. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella is partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
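
    As a rough illustration of why bandwidth control makes WCET estimates time-composable, the back-of-the-envelope sketch below bounds a packet's worst-case traversal delay from a per-link slot guarantee alone, with no knowledge of co-runners. The model and its parameters are assumptions for illustration, not the paper's exact analysis.

```python
# A back-of-the-envelope bound (assumed model, not the paper's exact
# analysis): a flow granted w_i of every W link arbitration slots waits
# at most W - w_i slots per flit at each hop, regardless of what other
# tasks do -- which is what makes the estimate time-composable.

def wcd_bound_ns(p_flits, hops, w_i, W, slot_ns):
    """Worst-case traversal delay for one packet under the assumed model."""
    per_flit_wait = W - w_i          # slots that may go to other flows
    flit_slots = hops + p_flits - 1  # pipelined wormhole transmission
    return flit_slots * (1 + per_flit_wait) * slot_ns

# Example: 8-flit packet, 4 hops, weight 2 out of 8, 1 ns slots.
print(wcd_bound_ns(p_flits=8, hops=4, w_i=2, W=8, slot_ns=1.0), "ns")
```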

    Fundamentals of Inter-cell Overhead Signaling in Heterogeneous Cellular Networks

    Heterogeneous base stations (e.g. picocells, microcells, femtocells and distributed antennas) will become increasingly essential for cellular network capacity and coverage. Up until now, little basic research has been done on the fundamentals of managing so much infrastructure -- much of it unplanned -- together with the carefully planned macro-cellular network. Inter-cell coordination is in principle an effective way of ensuring that different infrastructure components behave in a way that increases, rather than decreases, the key quality of service (QoS) metrics. The success of such coordination depends heavily on how the overhead is shared, and on the rate and delay of the overhead sharing. We develop a novel framework to quantify overhead signaling for inter-cell coordination, which is usually ignored in traditional 1-tier networks and assumes even more importance in multi-tier heterogeneous cellular networks (HCNs). We derive the overhead quality contour for general K-tier HCNs -- the achievable set of overhead packet rate, size, delay and outage probability -- in closed-form expressions or computable integrals under general assumptions on overhead arrivals and different overhead signaling methods (backhaul and/or wireless). The overhead quality contour is further simplified for two widely used models of overhead arrivals: Poisson and deterministic arrival processes. This framework can be used in the design and evaluation of any inter-cell coordination scheme. It also provides design insights on backhaul and wireless overhead channels to handle specific overhead signaling requirements.

    Comment: 21 pages, 9 figures
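
    For the Poisson arrival case, one point of such an overhead quality contour can be checked in closed form under a standard M/M/1 abstraction of the overhead channel (an illustrative assumption, not the paper's general derivation): the sojourn time of an overhead packet is exponential with rate mu - lam, which gives the delay-outage probability directly.

```python
# For Poisson overhead packets of rate lam (packets/s) on a channel
# serving mu packets/s, the M/M/1 sojourn time T is exponential with
# rate (mu - lam), so the delay-outage is P(T > d) = exp(-(mu - lam)*d).
import math

def overhead_outage(lam, mu, d):
    """P(overhead delay > d) for an M/M/1 overhead channel."""
    if lam >= mu:
        return 1.0                      # unstable queue: certain outage
    return math.exp(-(mu - lam) * d)

# Example: 1 Mb/s backhaul, 1 kb overhead packets => mu = 1000 pkt/s.
lam, mu, d_max, eps = 200.0, 1000.0, 5e-3, 0.05
print(f"outage {overhead_outage(lam, mu, d_max):.4f}, "
      f"meets ({d_max}s, {eps}) requirement: "
      f"{overhead_outage(lam, mu, d_max) <= eps}")
```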

    pTNoC: Probabilistically time-analyzable tree-based NoC for mixed-criticality systems

    The use of networks-on-chip (NoC) in real-time safety-critical multicore systems challenges the derivation of tight worst-case execution time (WCET) estimates, due to the complexity of tightly upper-bounding the contention among running tasks in the access to the NoC. Probabilistic Timing Analysis (PTA) is a powerful approach to derive WCET estimates on relatively complex processors. However, so far it has only been tested on small multicores built around an on-chip bus, a communication means that intrinsically does not scale to high core counts. In this paper we propose pTNoC, a new tree-based NoC design compatible with PTA requirements and delivering scalability towards medium/large core counts. pTNoC provides tight WCET estimates by means of asymmetric bandwidth guarantees for mixed-criticality systems, with negligible impact on average performance. Finally, our implementation results show the reduced area and power costs of pTNoC.

    The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant agreement no 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Mladen Slijepcevic is funded by the Obra Social Fundación la Caixa under grant Doctorado "la Caixa" - Severo Ochoa. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness (MINECO) and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
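
    The sketch below illustrates how asymmetric bandwidth guarantees shape a contention bound in a tree topology (a simplified model inspired by the paper, not its analysis): a packet climbing from a leaf to the root crosses one weighted 2-to-1 arbiter per level, and evenly spread arbitration slots bound the wait per flit at each level. All weights and sizes are assumptions.

```python
# Simplified per-level contention bound for a tree NoC with weighted
# 2-to-1 arbiters (illustrative model, not the paper's exact analysis).
import math

def tree_wcd_slots(p_flits, weights_per_level):
    """Upper bound (in slots) for a p_flits packet from leaf to root."""
    total = 0
    for w_mine, w_other in weights_per_level:
        # Own slots evenly spread over a (w_mine + w_other)-slot frame:
        # consecutive own slots are at most 1 + ceil(w_other/w_mine) apart.
        gap = 1 + math.ceil(w_other / w_mine)
        total += p_flits * gap
    return total

# Example: 8 cores, 3 arbiter levels; the critical flow gets a larger
# (asymmetric) share near the root, where more flows converge.
print(tree_wcd_slots(4, [(1, 3), (1, 1), (3, 1)]), "slots")
```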

    Throughput Optimal On-Line Algorithms for Advanced Resource Reservation in Ultra High-Speed Networks

    Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for networks whose bandwidth is augmented by a factor of 1+epsilon. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms is also compared to a greedy benchmark, both through analysis and simulation.

    Comment: 9 pages, 8 figures
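
    The path-limiting result builds on standard flow decomposition: any s-t flow splits into at most |E| paths. The sketch below computes a max flow with networkx and decomposes it greedily; the graph and capacities are illustrative assumptions, and this is not the BatchAll/BatchLim code itself.

```python
# Illustrative flow decomposition: trace s->t paths over edges that
# still carry flow, peel off the bottleneck each time. Requires networkx.
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)
G.add_edge("a", "b", capacity=1)

value, flow = nx.maximum_flow(G, "s", "t")

paths = []
while True:
    # Greedily trace an s->t path over edges still carrying flow.
    path, node = ["s"], "s"
    while node != "t":
        node = next((v for v, f in flow[node].items() if f > 0), None)
        if node is None:
            break
        path.append(node)
    if node != "t":
        break
    bottleneck = min(flow[u][v] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        flow[u][v] -= bottleneck
    paths.append((path, bottleneck))

print(f"max flow {value} decomposed into {len(paths)} paths:")
for p, f in paths:
    print("  " + " -> ".join(p), "carries", f)
```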

    On the flow-level stability of data networks without congestion control: the case of linear networks and upstream trees

    In this paper, flow models of networks without congestion control are considered. Users generate data transfers according to Poisson processes and transmit the corresponding packets at a fixed rate equal to their access rate until the entire document is received at the destination; erasure codes are used to make the transmission robust to packet losses. We study the stability of the stochastic process representing the number of active flows in two particular cases: linear networks and upstream trees. For linear networks, we notably use fluid limits, and an interesting phenomenon of "time scale separation" occurs; bounds on the stability region of linear networks are given. For upstream trees, underlying monotonicity properties are used. Finally, the asymptotic stability of these processes is analyzed as the access rate of the users decreases to 0. An appropriate scaling is introduced and used to prove that the stability region of these networks is asymptotically maximized.
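
    A toy simulation can illustrate the object under study, the process counting active flows. The sketch below assumes a two-link linear network with unit capacities, Poisson arrivals, and exponential document sizes, and scales each flow's rate by its most-loaded link to mimic losses repaired by erasure coding; it is an illustration, not the paper's exact dynamics.

```python
# Toy flow-level simulation (assumed dynamics): flows arrive as a
# Poisson process, pick a route, and drain at their access rate scaled
# down by the most overloaded link they cross.
import random

random.seed(1)
C, a = 1.0, 0.5                 # link capacity, per-user access rate
lam, mean_size = 0.8, 1.0       # flow arrival rate, mean document size
T, dt = 500.0, 0.01

# Route 0 crosses both links; routes 1 and 2 use a single link each.
routes = {0: (0, 1), 1: (0,), 2: (1,)}
flows = []                      # [route, remaining volume] per active flow

t, peak = 0.0, 0
while t < T:
    t += dt
    if random.random() < lam * dt:                    # new flow arrives
        flows.append([random.choice(list(routes)),
                      random.expovariate(1 / mean_size)])
    load = {l: sum(a for r, _ in flows if l in routes[r]) for l in (0, 1)}
    for fl in flows:
        # Throughput limited by the flow's most overloaded link.
        rate = min(a * min(1.0, C / load[l]) for l in routes[fl[0]])
        fl[1] -= rate * dt
    flows = [fl for fl in flows if fl[1] > 0]
    peak = max(peak, len(flows))

print(f"active flows at t={T}: {len(flows)} (peak {peak})")
```

    Tracking the flow count for shrinking access rates a is one way to see the asymptotic stabilization that the scaling argument formalizes.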