
    Graph Orientation and Flows Over Time

    Flows over time are used to model many real-world logistics and routing problems. The networks underlying such problems -- streets, tracks, etc. -- are inherently undirected, and directions are imposed on them only to reduce the danger of colliding vehicles and similar problems. Thus the question arises of how the orientation of the network influences the network flow over time problem being solved on the oriented network. In the literature, this is also referred to as the contraflow or lane reversal problem. We introduce and analyze the price of orientation: how much flow is lost in any orientation of the network if the time horizon remains fixed? We prove that there is always an orientation in which we can still send 1/3 of the flow, and this bound is tight. For the special case of networks with a single source or sink, this fraction is 1/2, which is again tight. We present more results of a similar flavor and also show non-approximability results for finding the best orientation for single- and multicommodity maximum flows over time.

    Throughput Optimal On-Line Algorithms for Advanced Resource Reservation in Ultra High-Speed Networks

    Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for (1+epsilon)-bandwidth-augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms is also compared to a greedy benchmark, both through analysis and simulation.
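
    The path-limiting idea can be illustrated with a small, made-up example. The following is a sketch using networkx, not the BatchAll or BatchLim algorithms themselves: decompose a maximum flow into paths, keep only the k widest ones, and compare the retained throughput with the optimum.

```python
# A sketch, not the paper's algorithms: limit the number of parallel paths by
# decomposing a max flow into paths and keeping only the k widest ones.
# Topology, capacities, and k below are made-up illustration values.
import networkx as nx

G = nx.DiGraph()
for u, v, cap in [("s", "a", 3), ("a", "t", 3), ("s", "b", 2),
                  ("b", "t", 2), ("s", "c", 1), ("c", "t", 1)]:
    G.add_edge(u, v, capacity=cap)

flow_value, flow = nx.maximum_flow(G, "s", "t")

def decompose(flow, s, t):
    """Greedily peel s-t paths off a feasible flow (no flow cycles assumed)."""
    paths = []
    while True:
        path, node = [s], s
        while node != t:
            node = next((v for v, f in flow[node].items() if f > 1e-9), None)
            if node is None:                 # no positive flow leaves s any more: done
                return paths
            path.append(node)
        bottleneck = min(flow[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            flow[u][v] -= bottleneck
        paths.append((bottleneck, path))

paths = sorted(decompose(flow, "s", "t"), key=lambda p: p[0], reverse=True)
k = 2
kept = sum(b for b, _ in paths[:k])
print(f"max flow {flow_value}, throughput kept with {k} paths: {kept}")
```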

    Quickest Flows Over Time

    Flows over time (also called dynamic flows) generalize standard network flows by introducing an element of time. They naturally model problems where travel and transmission are not instantaneous. Traditionally, flows over time are solved in time-expanded networks that contain one copy of the original network for each discrete time step. While this method makes available the whole algorithmic toolbox developed for static flows, its main and often fatal drawback is the enormous size of the time-expanded network. We present several approaches for coping with this difficulty. First, inspired by the work of Ford and Fulkerson on maximal s-t-flows over time (or "maximal dynamic s-t-flows"), we show that static length-bounded flows lead to provably good multicommodity flows over time. Second, we investigate "condensed" time-expanded networks which rely on a rougher discretization of time. We prove that a solution of arbitrary precision can be computed in polynomial time through an appropriate discretization leading to a condensed time-expanded network of polynomial size. In particular, our approach yields fully polynomial-time approximation schemes for the NP-hard quickest min-cost and multicommodity flow problems. For single commodity problems, we show that storage of flow at intermediate nodes is unnecessary, and our approximation schemes do not use any.
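
    As a rough illustration of the time-expanded construction mentioned above (a sketch under simplifying assumptions: integer transit times, a made-up instance, and networkx for the static computation; the condensed expansions and approximation schemes of the paper are not reproduced), a maximum flow over time with horizon T can be obtained as a static maximum flow in the expansion:

```python
# A sketch under simplifying assumptions: build a discrete time-expanded network
# (one copy of every node per time step, plus holdover arcs for waiting) and compute
# a maximum s-t flow over time with horizon T as a static max flow in the expansion.
# Arcs, capacities, transit times, and T are made-up illustration values.
import networkx as nx

arcs = {("s", "a"): (2, 1), ("a", "t"): (2, 1), ("s", "t"): (1, 3)}  # (capacity, transit time)
nodes = {x for uv in arcs for x in uv}
T = 4  # time horizon

E = nx.DiGraph()
for t in range(T + 1):
    for (u, v), (cap, tau) in arcs.items():
        if t + tau <= T:                       # traversing the arc must finish within T
            E.add_edge((u, t), (v, t + tau), capacity=cap)
    if t < T:
        for node in nodes:                     # holdover arcs: flow may wait at a node
            E.add_edge((node, t), (node, t + 1))

# A super-source and super-sink tie all time copies of s and t together.
for t in range(T + 1):
    E.add_edge("SRC", ("s", t))
    E.add_edge(("t", t), "SNK")

print("max flow over time:", nx.maximum_flow_value(E, "SRC", "SNK"))  # 8 for this instance
```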

    Exact performance analysis of a single-wavelength optical buffer with correlated inter-arrival times

    Providing a photonic alternative to the current electronic switching in the backbone, optical packet switching (OPS) and optical burst switching (OBS) require optical buffering. Optical buffering exploits delays in long optical fibers; an optical buffer is implemented by routing packets through a set of fiber delay lines (FDLs). Previous studies pointed out that, in comparison with electronic buffers, optical buffering suffers from an additional performance degradation. This contribution builds on this observation by studying optical buffer performance under more general traffic assumptions. Features of the optical buffer model under consideration include a Markovian arrival process, general burst sizes, and a finite set of fiber delay lines of arbitrary length. Our algorithmic approach yields instant analytic results for important performance measures such as the burst loss ratio and the mean delay.
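
    A small Monte Carlo sketch of such an FDL buffer can convey where the extra degradation comes from. This is a simulation with made-up parameters, not the paper's exact analysis; Poisson arrivals and exponential burst sizes are simplifications of its Markovian arrival process and general burst sizes.

```python
# A simulation sketch, not the paper's exact analysis: a single-wavelength buffer with
# N fiber delay lines of granularity D. An arriving burst can only be delayed by a
# multiple of D; if even the longest line is too short, the burst is lost.
import math
import random

random.seed(1)
D, N = 1.0, 10                       # FDL granularity and number of delay lines (made up)
lam, mean_burst = 0.8, 1.0           # arrival rate and mean burst size (made up)

horizon = 0.0                        # time until the output wavelength becomes free
lost, served, total_delay = 0, 0, 0.0
for _ in range(200_000):
    horizon = max(0.0, horizon - random.expovariate(lam))   # horizon drains between arrivals
    if horizon > N * D:                                      # no delay line long enough
        lost += 1
        continue
    delay = math.ceil(horizon / D) * D                       # shortest line covering the horizon
    total_delay += delay
    served += 1
    horizon = delay + random.expovariate(1.0 / mean_burst)   # the void (delay - horizon) is wasted

print(f"burst loss ratio ~ {lost / (lost + served):.4f}, mean delay ~ {total_delay / served:.3f}")
```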

    Everything Matters in Programmable Packet Scheduling

    Programmable packet scheduling allows the deployment of scheduling algorithms into existing switches without the need for hardware redesign. Scheduling algorithms are programmed by tagging packets with ranks, indicating their desired priority. Programmable schedulers then execute these algorithms by serving packets in the order prescribed by their ranks. The ideal programmable scheduler is a Push-In First-Out (PIFO) queue, which achieves perfect packet sorting by pushing packets into arbitrary positions in the queue while only draining packets from the head. Unfortunately, implementing PIFO queues in hardware is challenging due to the need to arbitrarily sort packets at line rate based on their ranks. In recent years, various techniques have been proposed to approximate PIFO behavior using the available resources of existing data planes. While promising, approaches to date only approximate one of the characteristic behaviors of PIFO queues (i.e., their scheduling behavior or their admission control). We propose PACKS, the first programmable scheduler that fully approximates PIFO queues in all their behaviors. PACKS does so by smartly using a set of strict-priority queues. It uses packet-rank information and queue-occupancy levels at enqueue to decide whether to admit packets to the scheduler and how to map admitted packets to the different queues. We fully implement PACKS in P4 and evaluate it on real workloads. We show that PACKS approximates PIFO better than state-of-the-art approaches and scales. We also show that PACKS runs at line rate on existing hardware (Intel Tofino).
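
    For reference, the ideal PIFO abstraction that PACKS approximates can be sketched in a few lines. This is the abstract queueing model, not PACKS's P4 data-plane implementation: packets are pushed in by rank and always drained from the head, with FIFO tie-breaking among equal ranks.

```python
# A sketch of the ideal PIFO abstraction (not a hardware implementation): push packets
# in at the position given by their rank, always drain the lowest-rank packet first.
import heapq
import itertools

class PIFO:
    def __init__(self):
        self._heap, self._seq = [], itertools.count()

    def push(self, rank, packet):
        # The sequence number breaks ties among equal ranks in FIFO order.
        heapq.heappush(self._heap, (rank, next(self._seq), packet))

    def pop(self):
        return heapq.heappop(self._heap)[2]   # always drain from the head (lowest rank)

q = PIFO()
for rank, pkt in [(5, "p1"), (2, "p2"), (7, "p3"), (2, "p4")]:
    q.push(rank, pkt)
print([q.pop() for _ in range(4)])   # -> ['p2', 'p4', 'p1', 'p3']
```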

    Packet Skipping and Network Coding for Delay-Sensitive Network Communication

    We provide an analytical study of the impact of packet skipping and opportunistic network coding on the timely communication of messages through a single network element. As a first step, we consider a single-server queueing system with Poisson arrivals, exponential service times, and a single buffer position. Packets arriving at a network node have a fixed deadline before which they should reach the destination. To preserve server capacity, we introduce a thresholding policy, based on the remaining time until deadline expiration, to decide whether to serve a packet or skip its service. The resulting goodput improvement of the system is derived, as well as the operating conditions under which thresholding can enhance performance. Subsequently, we focus our analysis on a system that supports network coding instead of thresholding. We characterize the impact of network coding at a router node on the delivery of packets associated with deadlines. We model the router node as a queueing system where packets arrive from two independent Poisson flows and undergo opportunistic coding operations. We obtain an exact expression for the goodput of the system and study the achievable gain. Finally, we provide an analytical model that considers both network coding and packet skipping, capturing their joint performance. A comparative analysis between the aforementioned approaches is provided.
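
    The thresholding policy can be illustrated with a small simulation sketch (made-up parameters, an unbounded FIFO queue instead of the paper's single buffer position, and no analytic derivation): a packet is served only if its remaining lifetime, at the moment service could start, exceeds the threshold.

```python
# A simulation sketch, not the paper's analytic model: Poisson arrivals, exponential
# service, per-packet deadlines, and a skipping policy that only serves a packet if its
# remaining lifetime exceeds a threshold. All parameter values are made up.
import random

random.seed(0)
lam, mu, deadline = 1.0, 1.2, 2.0      # arrival rate, service rate, packet lifetime

def goodput(threshold, n=200_000):
    """Fraction of packets delivered before their deadline."""
    t, server_free, delivered = 0.0, 0.0, 0
    for _ in range(n):
        t += random.expovariate(lam)               # packet arrival time
        start = max(t, server_free)                # earliest moment service could begin
        remaining = (t + deadline) - start         # lifetime left at that moment
        if remaining <= threshold:                 # skip: don't spend the server on it
            continue
        service = random.expovariate(mu)
        server_free = start + service
        if service <= remaining:                   # reached the destination in time
            delivered += 1
    return delivered / n

print("goodput, serve everything   :", round(goodput(float("-inf")), 3))
print("goodput, skip threshold 0.5 :", round(goodput(0.5), 3))
```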

    Providing guaranteed QoS in the hose-modeled VPN

    With the development of the Internet, Internet service providers (ISPs) are required to offer revenue-generating and value-added services instead of only providing bandwidth and access services. The Virtual Private Network (VPN) is one of the most important value-added services for ISPs. The classical VPN service is provided by implementing layer 2 technologies, either Frame Relay (FR) or Asynchronous Transfer Mode (ATM). With FR or ATM, virtual circuits are created before data delivery. Since the bandwidth and buffers are reserved, the QoS requirements can be naturally guaranteed. In the past few years, layer 3 VPN technologies have been widely deployed due to their desirable performance in terms of flexibility, scalability, and simplicity. Layer 3 VPNs are built upon IP tunnels, e.g., by using PPTP, L2TP, or IPSec. Since IP is best-effort in nature, QoS requirements cannot be guaranteed in layer 3 VPNs. In fact, layer 3 VPN service can only provide secure connectivity, i.e., protecting and authenticating IP packets between gateways or hosts in a VPN. Without doubt, with more voice, audio, and video applications being used in the Internet, the provision of QoS is one of the most important parts of the emerging services provided by ISPs. An intriguing question is: is it possible to obtain the best of both layer 2 and layer 3 VPNs? Is it possible to provide guaranteed or predictable QoS, as in layer 2 VPNs, while maintaining the flexibility and simplicity of layer 3 VPNs? This question is the starting point of this study. The recently proposed hose model for VPN possesses desirable properties in terms of flexibility, scalability, and multiplexing gain. However, the classic fair bandwidth allocation schemes and weighted fair queuing schemes raise the issue of low overall utilization in this model. A new fluid model for provider-provisioned virtual private networks (PPVPN) is proposed in this dissertation. Based on the proposed model, an idealized fluid bandwidth allocation scheme is developed. This scheme is proven, analytically, to have the following properties: 1) it maximizes the overall throughput of the VPN without compromising fairness; 2) it provides a mechanism that enables VPN customers to allocate bandwidth according to their requirements by assigning different weights to different hose flows, and thus obtain predictable QoS performance; and 3) it improves the overall throughput of the ISPs' network. To approximate the idealized fluid scheme in the real world, the 2-dimensional deficit round robin (2-D DRR and 2-D DRR+) schemes are proposed. The integration of the proposed schemes with best-effort traffic within the framework of virtual-router-based VPNs is also investigated. The 2-D DRR and 2-D DRR+ schemes can be extended to multi-dimensional schemes for applications that require a hierarchical scheduling architecture. To enhance scalability, a more scalable non-per-flow-based scheme for output-queued switches is developed as well, and the integration of this scheme within the framework of MPLS VPNs and applications for multicast traffic is discussed. The performance and properties of these schemes are analyzed.
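
    For context, classic one-dimensional deficit round robin, the building block behind the 2-D DRR schemes above, can be sketched as follows (illustrative values only; the dissertation's two-dimensional, per-hose variant is not reproduced here):

```python
# A sketch of classic deficit round robin (DRR): each backlogged flow earns one quantum
# of credit per round and sends packets as long as its accumulated deficit covers them.
from collections import deque

def drr(flows, quantum, rounds):
    """flows: dict name -> deque of packet sizes; returns the service order."""
    deficit = {name: 0 for name in flows}
    served = []
    for _ in range(rounds):
        for name, q in flows.items():
            if not q:
                continue
            deficit[name] += quantum              # credit earned this round
            while q and q[0] <= deficit[name]:    # send while credit covers the head packet
                size = q.popleft()
                deficit[name] -= size
                served.append((name, size))
            if not q:
                deficit[name] = 0                 # idle flows do not accumulate credit
    return served

flows = {"A": deque([300, 300, 300]), "B": deque([1200]), "C": deque([600, 600])}
print(drr(flows, quantum=500, rounds=5))
```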

    General scores for accessibility and inequality measures in urban areas

    In the last decades, the acceleration of urban growth has led to an unprecedented level of urban interactions and interdependence. This situation calls for a significant effort within the scientific community to come up with engaging and meaningful visualizations and accessible scenario simulation engines. The present paper contributes in this direction by providing general methods to evaluate accessibility in cities based on public transportation data. Through the notion of isochrones, the proposed accessibility quantities measure the performance of transport systems at connecting places and people in urban systems. We then introduce scores that rank cities according to their overall accessibility. We highlight significant inequalities in the distribution of these measures across the population, which are found to be strikingly similar across various urban environments. Our results are released through the interactive platform www.citychrone.org, aimed at providing the community at large with a useful tool for awareness and decision-making.
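
    A minimal sketch of an isochrone-style accessibility score and an inequality summary follows (made-up data and simplified definitions, not the exact scores used by the paper or by www.citychrone.org):

```python
# A sketch with made-up data: accessibility of a location as the population reachable
# within a travel-time budget, and a Gini coefficient over the resulting distribution.
def accessibility(travel_time, population, origin, budget):
    """Population reachable from `origin` within `budget` minutes."""
    return sum(pop for dest, pop in population.items()
               if travel_time[origin].get(dest, float("inf")) <= budget)

def gini(values):
    """Gini coefficient of a list of non-negative values."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

population = {"A": 1000, "B": 5000, "C": 2000, "D": 500}
travel_time = {
    "A": {"A": 0, "B": 10, "C": 25, "D": 40},
    "B": {"A": 10, "B": 0, "C": 15, "D": 35},
    "C": {"A": 25, "B": 15, "C": 0, "D": 20},
    "D": {"A": 40, "B": 35, "C": 20, "D": 0},
}
scores = {o: accessibility(travel_time, population, o, budget=30) for o in population}
print(scores, "Gini:", round(gini(list(scores.values())), 3))
```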