
    CapEst: A Measurement-based Approach to Estimating Link Capacity in Wireless Networks

    Estimating link capacity in a wireless network is a complex task because the available capacity at a link depends not only on the current arrival rate at that link, but also on the arrival rates at links which interfere with it and on the nature of the interference between these links. Models which accurately characterize this dependence are either too computationally complex to be useful or lack accuracy. Further, they have a high implementation overhead and make restrictive assumptions, which makes them inapplicable to real networks. In this paper, we propose CapEst, a general, simple yet accurate, measurement-based approach to estimating link capacity in a wireless network. To remain computationally light, CapEst tolerates inaccuracy in estimation; using measurements, however, it corrects this inaccuracy iteratively and converges to the correct estimate. Our evaluation shows that CapEst always converged to within 5% of the correct value in fewer than 18 iterations. CapEst is model-independent and hence applicable to any MAC/PHY layer, and it works with auto-rate adaptation. Moreover, it has a low implementation overhead, can be used with any application that requires an estimate of the residual capacity on a wireless link, and can be implemented entirely at the network layer without any support from the underlying chipset.
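    The following minimal sketch illustrates the kind of iterative, measurement-driven correction loop the abstract describes; the update rule, the function names, and the convergence test are illustrative assumptions, not CapEst's actual estimator.

        # Illustrative sketch of an iterative, measurement-based link-capacity
        # estimator in the spirit of CapEst; the correction rule below is an
        # assumption for illustration, not the paper's algorithm.

        def estimate_link_capacity(measure_busy_fraction, arrival_rate,
                                   initial_estimate, tolerance=0.05,
                                   max_iterations=18):
            """Iteratively correct a capacity estimate using measurements.

            measure_busy_fraction(rate) is a caller-supplied probe returning the
            measured fraction of time the medium around this link is busy when
            the link offers `rate`; in a real deployment this would come from
            passive measurements at the network layer.
            """
            estimate = initial_estimate
            for _ in range(max_iterations):
                busy = measure_busy_fraction(arrival_rate)   # measured, not modeled
                # If carrying `arrival_rate` keeps the medium busy a fraction
                # `busy` of the time, rescale the estimate of what the link
                # could carry at full utilization accordingly.
                corrected = arrival_rate / busy if busy > 0 else estimate
                if abs(corrected - estimate) <= tolerance * estimate:
                    return corrected
                estimate = corrected
            return estimate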

    Routing Games with Progressive Filling

    Max-min fairness (MMF) is a widely known approach to allocating bandwidth fairly among the users of a network. The MMF allocation can be computed by uniformly raising the bandwidths of all users until capacity constraints bind. We consider an extension of these allocations in which bandwidths are raised with arbitrary, not necessarily uniform, time-dependent velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize as progressive filling games (PFGs). We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation is NP-hard in general, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFGs), including the case when all users have the same source-destination pair. We characterize the prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust the allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.
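    As a concrete illustration of the uniform-velocity special case, the sketch below computes the classic max-min fair allocation by water-filling: all unfrozen users' rates rise together, and a user freezes when one of its links saturates. The data structures are illustrative; per-user velocities and the game-theoretic layer of PFGs are not modeled.

        # Water-filling sketch for max-min fair allocation on fixed routes.
        # Non-uniform filling velocities, as allowed in PFGs, are not modeled.

        def max_min_fair(user_links, capacity):
            """user_links: {user: set of links it uses}; capacity: {link: float}."""
            alloc = {u: 0.0 for u in user_links}
            residual = dict(capacity)
            active = set(user_links)
            while active:
                # Every active user on a link grows by the same increment, so the
                # tightest link limits how far all active users can still rise.
                increment = min(
                    residual[l] / sum(1 for u in active if l in user_links[u])
                    for l in residual
                    if any(l in user_links[u] for u in active)
                )
                for u in active:
                    alloc[u] += increment
                for l in residual:
                    residual[l] -= increment * sum(1 for u in active if l in user_links[u])
                # Freeze users that now sit on a saturated link.
                saturated = {l for l, c in residual.items() if c <= 1e-9}
                active = {u for u in active if not (user_links[u] & saturated)}
            return alloc

        # For example, max_min_fair({"a": {"l1"}, "b": {"l1", "l2"}},
        #                           {"l1": 10.0, "l2": 4.0})
        # yields {"a": 6.0, "b": 4.0}: user b is capped by link l2, and user a
        # takes the remaining capacity of l1.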

    Receiver-Based Flow Control for Networks in Overload

    We consider utility maximization in networks where the sources do not employ flow control and may consequently overload the network. In the absence of flow control at the sources, some packets will inevitably have to be dropped when the network is in overload. We therefore first develop a distributed, threshold-based packet-dropping policy that maximizes the weighted sum throughput. Next, we consider utility maximization and develop a receiver-based flow control scheme that, when combined with threshold-based packet dropping, achieves the optimal utility. The flow control scheme creates virtual queues at the receivers as a push-back mechanism to optimize the amount of data delivered to the destinations via back-pressure routing. A novel feature of our scheme is that a utility function can be assigned to a collection of flows, generalizing the traditional approach of optimizing per-flow utilities. Our control policies use finite-buffer queues and are independent of arrival statistics. Their near-optimal performance is proved and further supported by simulation results.
    Comment: 14 pages, 4 figures, 5 tables, preprint submitted to IEEE INFOCOM 201
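    The sketch below illustrates only the threshold idea at a single finite-buffer queue: admit arrivals while the backlog is below a per-queue threshold, drop them otherwise. The threshold value, and how the paper ties thresholds to flow weights and to back-pressure routing, are not modeled here.

        # Toy threshold-based dropping rule at one finite-buffer queue. How the
        # thresholds are chosen to maximize weighted sum throughput network-wide
        # is the paper's contribution and is not reproduced here.

        from collections import deque

        class ThresholdQueue:
            def __init__(self, threshold, buffer_size):
                assert threshold <= buffer_size
                self.threshold = threshold
                self.buffer = deque(maxlen=buffer_size)

            def on_arrival(self, packet):
                # Admit only while the backlog is below the threshold; else drop.
                if len(self.buffer) < self.threshold:
                    self.buffer.append(packet)
                    return True      # accepted
                return False         # dropped

            def on_service(self):
                # Serve one packet per service opportunity, if any is waiting.
                return self.buffer.popleft() if self.buffer else None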

    Optimal Joint Routing and Scheduling in Millimeter-Wave Cellular Networks

    Millimeter-wave (mmWave) communication is a promising technology for coping with the expected exponential increase in data traffic in 5G networks. mmWave networks typically require a very dense deployment of mmWave base stations (mmBSs). To reduce cost and increase flexibility, wireless backhauling is needed to connect the mmBSs. The characteristics of mmWave communication, and specifically its high directionality, imply new requirements for efficient routing and scheduling paradigms. We propose an efficient scheduling method, called schedule-oriented optimization, based on matching theory, that optimizes QoS metrics jointly with routing. It is capable of solving any scheduling problem that can be formulated as a linear program whose variables are link times and QoS metrics. As an example of schedule-oriented optimization, we show the optimal solution of maximum throughput fair scheduling (MTFS). In practice, the optimal schedule can be obtained even for networks with over 200 mmBSs. To further improve runtime performance, we propose an efficient edge-coloring-based approximation algorithm with a provable performance bound. It achieves over 80% of the optimal max-min throughput and runs 5 to 100 times faster than the optimal algorithm in practice. Finally, we extend the optimal and approximation algorithms to the cases of multi-RF-chain mmBSs and integrated backhaul and access networks.
    Comment: To appear in Proceedings of INFOCOM '1
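    The connection to edge coloring can be pictured with a small sketch: if each mmBS can serve only one directional backhaul link at a time, links sharing an endpoint must be assigned different time slots, which is exactly a proper edge coloring of the backhaul graph. The greedy coloring below is an illustrative stand-in under that assumption, not the paper's approximation algorithm.

        # Greedy edge coloring of a backhaul graph: links (edges) sharing an
        # endpoint get different colors (time slots). Illustrative only; the
        # paper's approximation algorithm and its performance bound differ.

        def greedy_edge_coloring(links):
            """links: iterable of (node_a, node_b) pairs; returns {link: slot}."""
            used_at_node = {}               # node -> slots already used at that node
            coloring = {}
            for a, b in links:
                taken = used_at_node.setdefault(a, set()) | used_at_node.setdefault(b, set())
                slot = 0
                while slot in taken:        # smallest slot free at both endpoints
                    slot += 1
                coloring[(a, b)] = slot
                used_at_node[a].add(slot)
                used_at_node[b].add(slot)
            return coloring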

    Application-Oriented Flow Control: Fundamentals, Algorithms and Fairness

    This paper is concerned with flow control and resource allocation problems in computer networks in which real-time applications may have hard quality of service (QoS) requirements. Recent optimal flow control approaches are unable to deal with these problems, since QoS utility functions in real-time applications generally do not satisfy the strict concavity condition. For elastic traffic, we show that bandwidth allocations under the existing optimal flow control strategy can be quite unfair. If we consider the different QoS requirements of network users, it may be undesirable to allocate bandwidth simply according to traditional max-min fairness or proportional fairness. Instead, a network should be able to allocate bandwidth to its users according to their real utility requirements. For these reasons, this paper proposes a new distributed flow control algorithm for multiservice networks, in which an application's utility is only assumed to be continuously increasing in the available bandwidth. We show that the algorithm converges and that, at convergence, the utility achieved by each application is balanced in a proportionally (or max-min) fair manner.
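    The single-link sketch below gives one reading of utility-balanced allocation under the paper's weak assumption that utilities are merely continuously increasing: find the common utility level whose required bandwidths exactly fill the link. The bisection search and the inverse-utility interface are illustrative assumptions standing in for the paper's distributed algorithm.

        # Single-link sketch of utility max-min fairness with utilities that are
        # only assumed increasing: bisect on a common utility level u so that the
        # bandwidths needed to give every user utility u fill the link capacity.
        # A centralized, illustrative stand-in for the paper's distributed scheme.

        def utility_balanced_allocation(inverse_utilities, capacity, iterations=60):
            """inverse_utilities: functions mapping a utility level to the bandwidth
            a user needs to reach it (each increasing in the level). Assumes total
            demand exceeds `capacity` at a sufficiently high level."""
            lo, hi = 0.0, 1.0
            while sum(inv(hi) for inv in inverse_utilities) < capacity:
                hi *= 2.0                       # grow the bracket until infeasible
            for _ in range(iterations):
                mid = (lo + hi) / 2.0
                if sum(inv(mid) for inv in inverse_utilities) <= capacity:
                    lo = mid                    # this utility level is still feasible
                else:
                    hi = mid
            return [inv(lo) for inv in inverse_utilities]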

    On-Line End-to-End Congestion Control

    Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. Here we analyze a particular end-to-end MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (value of all packets received) is near the maximum possible. Our analysis accounts for round-trip times and, in contrast to most previous analyses, gives explicit convergence rates, allows connections to start and stop, and allows capacities to change.
    Comment: Proceedings IEEE Symp. Foundations of Computer Science, 200
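    A per-connection MIMD update takes a particularly simple form: scale the sending rate up by one multiplicative factor when the round's packets get through, and down by another when they do not. The sketch below uses illustrative factors; the parameter choices and weighting analyzed in the paper are not reproduced.

        # Toy per-round MIMD rate update: multiplicative increase on success,
        # multiplicative decrease on a congestion signal. Factors are illustrative.

        def mimd_update(rate, round_succeeded, increase=1.1, decrease=0.5, min_rate=1.0):
            """Return the next round's sending rate for one connection."""
            if round_succeeded:
                return rate * increase               # packets got through: scale up
            return max(rate * decrease, min_rate)    # loss detected: scale down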

    Improving performance guarantees in wormhole mesh NoC designs

    Wormhole-based mesh Networks-on-Chip (wNoCs) are deployed in high-performance many-core processors due to their physical scalability and low cost. Delivering tight, time-composable Worst-Case Execution Time (WCET) estimates for applications, as needed in safety-critical real-time embedded systems, is challenging in wNoCs because of their distributed nature. We propose a bandwidth control mechanism for wNoCs that enables the computation of tight, time-composable WCET estimates with low average performance degradation and high scalability. Our evaluation with the EEMBC automotive suite and an industrial real-time parallel avionics application confirms these benefits.
    The research leading to these results is funded by the European Union Seventh Framework Programme under grant agreement no. 287519 (parMERASA) and by the Ministry of Science and Technology of Spain under contract TIN2012-34557. Milos Panic is funded by the Spanish Ministry of Education under FPU grant FPU12/05966. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella is partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.

    Strong Nash Equilibria in Games with the Lexicographical Improvement Property

    We introduce a class of finite strategic games with the property that every deviation of a coalition of players that is profitable to each of its members strictly decreases the lexicographical order of a certain function defined on the set of strategy profiles. We call this property the Lexicographical Improvement Property (LIP) and show that it implies the existence of a generalized strong ordinal potential function. We use this characterization to derive existence, efficiency, and fairness properties of strong Nash equilibria (SNE). We then study a class of games that generalizes congestion games with bottleneck objectives, which we call bottleneck congestion games. We show that these games possess the LIP and hence the properties above. For bottleneck congestion games in networks, we identify cases in which the potential function associated with the LIP leads to polynomial-time algorithms for computing a strong Nash equilibrium. Finally, we investigate the LIP for infinite games. We show that in this setting the LIP does not imply the existence of a generalized strong ordinal potential, and thus the existence of SNE does not follow. Assuming that the function associated with the LIP is continuous, however, we prove the existence of SNE. As a consequence, bottleneck congestion games with infinite strategy spaces and continuous cost functions possess a strong Nash equilibrium.
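    In the bottleneck-congestion setting, the lexicographical order in question can be pictured as comparing the multisets of resource loads, sorted in non-increasing order; a coalition deviation improves the profile if it strictly decreases this sorted vector. The sketch below uses unit-weight loads as an illustrative stand-in for the games' cost functions and is not the paper's potential function.

        # Sketch of the lexicographic comparison behind the LIP for bottleneck-style
        # congestion games: compare sorted (non-increasing) vectors of resource loads.
        # Unit-weight loads are an illustrative stand-in for general cost functions.

        def sorted_load_vector(profile, resources):
            """profile: {player: set of resources used}; returns loads, descending."""
            loads = [sum(1 for chosen in profile.values() if r in chosen)
                     for r in resources]
            return sorted(loads, reverse=True)

        def lexicographically_improves(old_profile, new_profile, resources):
            # Python compares lists lexicographically, matching the LIP's order.
            return (sorted_load_vector(new_profile, resources)
                    < sorted_load_vector(old_profile, resources))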