
    Optical Networks for Future Internet Design


    Congestion Avoidance Testbed Experiments

    DARTnet provides an excellent environment for executing networking experiments. Since the network is private and spans the continental United States, it gives researchers a great opportunity to test network behavior under controlled conditions. However, such an opportunity arises only rarely, and a support environment for this kind of testing is therefore lacking. To help remedy this situation, part of SRI's effort in this project was devoted to advancing the state of the art in techniques for benchmarking network performance. The second objective of SRI's effort was to advance networking technology in the area of traffic control, and to test our ideas on DARTnet using the benchmarking tools we developed. Networks are becoming more common and are being used by more and more people. Applications such as multimedia conferencing and distributed simulations are also placing greater demands on the resources that networks provide. Hence, new traffic-control mechanisms must be created to enable networks to serve the needs of their users. SRI's objective, therefore, was to investigate a new queueing and scheduling approach that will help to meet the needs of a large, diverse user population in a "fair" way.
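
    As an illustration of the kind of queueing and scheduling approach described here, the sketch below implements a generic weighted fair-queueing discipline driven by per-flow virtual finish times. The flow names, weights, and packet sizes are hypothetical and are not taken from the SRI/DARTnet work.

        # Minimal sketch of weighted fair queueing: each packet is stamped
        # with a virtual finish time and the scheduler serves the packet
        # with the smallest stamp. Flow names and weights are illustrative.
        import heapq

        class FairScheduler:
            def __init__(self):
                self.virtual_time = 0.0
                self.last_finish = {}   # per-flow virtual finish time
                self.queue = []         # (finish, seq, flow, pkt_len)
                self.seq = 0

            def enqueue(self, flow, pkt_len, weight=1.0):
                # A packet's finish time advances from the later of the
                # scheduler's virtual time and the flow's previous finish.
                start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
                finish = start + pkt_len / weight
                self.last_finish[flow] = finish
                heapq.heappush(self.queue, (finish, self.seq, flow, pkt_len))
                self.seq += 1

            def dequeue(self):
                # Serve the packet with the smallest virtual finish time.
                if not self.queue:
                    return None
                finish, _, flow, pkt_len = heapq.heappop(self.queue)
                self.virtual_time = finish
                return flow, pkt_len

        sched = FairScheduler()
        sched.enqueue("audio", 200, weight=2.0)
        sched.enqueue("bulk", 1500, weight=1.0)
        print(sched.dequeue())   # the short, higher-weight packet goes first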

    On packet switch design


    Design and stability analysis of high performance packet switches

    With the rapid development of optical interconnection technology, high-performance packet switches are required to resolve contentions quickly to satisfy the demand for high throughput and high data rates. Combined input-crosspoint buffered (CICB) switches are an alternative to input-buffered (IB) packet switches to provide high-performance switching and to relax arbitration timing for packet switches with high-speed ports. A maximum weight matching (MWM) scheme can provide 100% throughput under admissible traffic for IB switches. However, the high complexity of MWM prohibits its implementation in high-speed switches. In this dissertation, a feedback-based arbitration scheme for CICB switches is studied, where cell selection is based on the service provided to virtual output queues (VOQs). The feedback-based scheme is named round-robin with adaptable frame size (RR-AF) arbitration. The frame size in RR-AF is adapted according to the serviced and unserviced traffic. If a switch is stable, the switch provides 100% throughput. Here, it is proved that RR-AF can achieve 100% throughput under uniform admissible traffic. Switches with crosspoint buffers need to consider the transmission delays, or round-trip times, to define the crosspoint buffer size. As the buffered crossbar can be physically located far from the input ports, actual round-trip times can be non-negligible. To support non-negligible round-trip times in a buffered crossbar switch, the crosspoint buffer size needs to be increased. To satisfy this demand, this dissertation investigates how to select the crosspoint buffer size under non-negligible round-trip times and uniform traffic. Through an analysis of the stability margin, the relationship between the crosspoint buffer size and the round-trip time is derived. Considering that CICB switches deliver higher performance than IB switches and require no speedup, this dissertation investigates the maximum throughput these switches can achieve. It is shown through a fluid model that CICB switches without speedup achieve 100% throughput under any admissible traffic. In addition, a new hybrid scheme, based on longest queue first (as input arbitration) and longest column occupancy first (as output arbitration), is proposed, which achieves 100% throughput under uniform and non-uniform traffic patterns. To give better insight into the feedback nature of arbitration schemes for CICB switches, a frame-based round-robin arbitration scheme with explicit feedback control (FRE) is introduced. FRE dynamically sets the frame size according to the input load and to the accumulation of cells in a VOQ. FRE is used as the input arbitration scheme and is combined with RR, PRR, and FRE as output arbitration schemes. These combined schemes deliver high performance under uniform and non-uniform traffic models using a buffered crossbar with one-cell crosspoint buffers. The novelty of FRE lies in that each VOQ sets its frame size through an adjustable parameter, Δ(i,j), which indicates the degree of service needed by VOQ(i,j). This value is adjusted according to the input loading and the accumulation of cells experienced in previous service cycles. This dissertation also explores an analysis technique based on feedback control theory. This methodology is proposed to study the stability of arbitration and matching schemes for packet switches. A continuous-time control model is used to emulate a queuing system, and the technique is applied to a matching scheme. In addition, the study shows that the dwell time, defined as the time a queue receives service in a service opportunity, is a factor that affects the stability of a queuing system. This feedback control model is an alternative approach to evaluating the stability of arbitration and matching schemes.
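
    The sketch below is a simplified, illustrative reading of round-robin arbitration over virtual output queues with a per-queue adaptable frame size, in the spirit of RR-AF. The growth/shrink rule and parameter values are assumptions made for illustration; they are not the dissertation's exact RR-AF or FRE definitions.

        # Round-robin arbitration over VOQs with an adaptable frame size.
        # The adaptation rule (grow while backlogged, shrink when drained)
        # is an assumption for illustration only.
        from collections import deque

        class RRAdaptiveFrame:
            def __init__(self, num_queues, max_frame=8):
                self.voqs = [deque() for _ in range(num_queues)]
                self.frame = [1] * num_queues   # cells a VOQ may send per visit
                self.max_frame = max_frame
                self.pointer = 0

            def enqueue(self, q, cell):
                self.voqs[q].append(cell)

            def arbitrate(self):
                # Serve up to frame[q] cells from the next non-empty VOQ.
                n = len(self.voqs)
                for i in range(n):
                    q = (self.pointer + i) % n
                    if self.voqs[q]:
                        count = min(self.frame[q], len(self.voqs[q]))
                        served = [self.voqs[q].popleft() for _ in range(count)]
                        # Grow the frame if cells remain unserviced,
                        # otherwise shrink it back toward one cell.
                        if self.voqs[q]:
                            self.frame[q] = min(self.frame[q] + 1, self.max_frame)
                        else:
                            self.frame[q] = max(self.frame[q] - 1, 1)
                        self.pointer = (q + 1) % n
                        return q, served
                return None, []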

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application's needs and the network's abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks (cellular networks, datacenters) and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple FaceTime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy's computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to "learn" a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it will ultimately be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
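
    The sketch below illustrates the core idea of forecasting deliveries and capping the data in flight so that queued packets can drain within a delay target. Sprout itself builds a probabilistic model of the cellular link; the exponentially weighted rate estimator, the 100 ms target, and the names used here are simplifying assumptions rather than Sprout's actual algorithm.

        # Toy forecaster: estimate the link's delivery rate from recent ACKs,
        # then bound the packets in flight so the queue drains within the
        # delay target. Constants and names are illustrative assumptions.
        DELAY_TARGET_S = 0.1   # allow roughly 100 ms of queued data
        ALPHA = 0.2            # smoothing factor for the rate estimate

        class DeliveryForecaster:
            def __init__(self):
                self.rate_pps = 0.0          # estimated deliveries per second

            def on_ack(self, pkts_delivered, interval_s):
                sample = pkts_delivered / interval_s
                self.rate_pps = (1 - ALPHA) * self.rate_pps + ALPHA * sample

            def send_budget(self, in_flight):
                # Packets the link should drain within the delay target,
                # minus what is already outstanding.
                drainable = self.rate_pps * DELAY_TARGET_S
                return max(0, int(drainable) - in_flight)

        f = DeliveryForecaster()
        f.on_ack(pkts_delivered=50, interval_s=0.02)   # 2500 pkt/s sample
        print(f.send_budget(in_flight=10))             # packets we may send now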

    Fairness in a data center

    Existing data centers utilize several networking technologies in order to handle the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost-effective. This has led to the current trend of converging all traffic onto a single networking fabric. Ethernet is both cost-effective and ubiquitous, and as such it has been chosen as the technology of choice for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, largely because of its lossy nature, and therefore has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of standards defined by the IEEE to make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other as well as how each protocol interacts with existing technologies in other layers of the protocol stack. This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that restricts the flow of aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented, and their performance is evaluated through simulation.
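
    As background for DRR-AWC, the sketch below shows plain Deficit Round Robin, the scheduler it builds on: every backlogged flow earns a byte quantum per round and sends packets against that credit. The adaptive weight control that DRR-AWC adds (tuning weights from the observed packet-length mix) is not reproduced here, and the flow names and quantum value are illustrative.

        # Minimal Deficit Round Robin: per-flow byte credit ("deficit") is
        # topped up by a quantum each round and spent on queued packets.
        from collections import deque

        class DRRScheduler:
            def __init__(self, flows, quantum=500):
                self.queues = {f: deque() for f in flows}
                self.deficit = {f: 0 for f in flows}
                self.quantum = quantum          # bytes of credit added per round

            def enqueue(self, flow, pkt_len):
                self.queues[flow].append(pkt_len)

            def round(self):
                # One DRR round: each backlogged flow sends while it has credit.
                sent = []
                for flow, q in self.queues.items():
                    if not q:
                        continue
                    self.deficit[flow] += self.quantum
                    while q and q[0] <= self.deficit[flow]:
                        pkt_len = q.popleft()
                        self.deficit[flow] -= pkt_len
                        sent.append((flow, pkt_len))
                    if not q:
                        self.deficit[flow] = 0   # idle flows keep no credit
                return sent

        drr = DRRScheduler(["small_pkts", "large_pkts"])
        for _ in range(4):
            drr.enqueue("small_pkts", 100)
        drr.enqueue("large_pkts", 1500)
        print(drr.round())   # the 1500-byte packet waits until its credit reaches 1500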

    Towards Tactile Internet in Beyond 5G Era: Recent Advances, Current Issues and Future Directions

    Tactile Internet (TI) is envisioned to create a paradigm shift from content-oriented communications to steering/control-based communications by enabling real-time transmission of haptic information (i.e., touch, actuation, motion, vibration, surface texture) over the Internet, in addition to conventional audiovisual and data traffic. This emerging TI technology, also considered the next evolution phase of the Internet of Things (IoT), is expected to create numerous opportunities for technology markets in a wide variety of applications ranging from teleoperation systems and Augmented/Virtual Reality (AR/VR) to automotive safety and eHealthcare, towards addressing the complex problems of human society. However, the realization of TI over wireless media in the upcoming Fifth Generation (5G) and beyond networks creates various non-conventional communication challenges and stringent requirements in terms of ultra-low latency, ultra-high reliability, high-data-rate connectivity, resource allocation, multiple access, and the quality-latency-rate tradeoff. To this end, this paper aims to provide a holistic view of wireless TI along with a thorough review of the existing state of the art, to identify and analyze the technical issues involved, to highlight potential solutions, and to propose future research directions. First, starting with the vision of TI, recent advances, and a review of related survey/overview articles, we present a generalized framework for wireless TI in the Beyond-5G era, including a TI architecture, the main technical requirements, the key application areas, and potential enabling technologies. Subsequently, we provide a comprehensive review of existing TI work by broadly categorizing it into three main paradigms: haptic communications; wireless AR/VR; and autonomous, intelligent, and cooperative mobility systems. Next, potential enabling technologies across the physical/Medium Access Control (MAC) and network layers are identified and discussed in detail. Security and privacy issues of TI applications are also discussed, along with some promising enablers. Finally, we present some open research challenges and recommend promising future research directions.

    Quality-of-service management in IP networks

    Quality of Service (QoS) in Internet Protocol (IP) networks has been the subject of active research over the past two decades. The Integrated Services (IntServ) and Differentiated Services (DiffServ) QoS architectures have emerged as proposed standards for resource allocation in IP networks. These two QoS architectures support the need for multiple traffic queuing systems to allow resource partitioning for the heterogeneous applications making use of the networks. There have been a number of specifications or proposals for the number of traffic queuing classes (Class of Service (CoS)) that would support integrated services in IP networks, but none has provided verification in the form of analytical or empirical investigation to show that its specification or proposal is optimal. Despite the existence of the two standard QoS architectures and the large volume of research that has been carried out on IP QoS, its deployment in the Internet still remains elusive. This is largely due to the complexities associated with some aspects of the standard QoS architectures. [Continues.]
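
    As a concrete illustration of class-of-service queueing of the kind discussed above, the sketch below maps DSCP codepoints to a handful of classes and serves them in strict priority order. The mapping and the number of classes follow common DiffServ conventions and are chosen purely for illustration; they are not the class structure investigated in this thesis.

        # Illustrative DiffServ-style classifier: DSCP -> class queue, served
        # by strict priority. A deployed scheduler would typically rate-limit
        # EF and use weighted sharing among the AF classes.
        from collections import deque

        CLASS_OF = {46: "EF", 26: "AF31", 10: "AF11", 0: "BE"}   # DSCP -> class
        PRIORITY = ["EF", "AF31", "AF11", "BE"]                  # service order

        queues = {c: deque() for c in PRIORITY}

        def classify(dscp, packet):
            queues[CLASS_OF.get(dscp, "BE")].append(packet)

        def dequeue():
            for c in PRIORITY:
                if queues[c]:
                    return c, queues[c].popleft()
            return None

        classify(46, "voice")
        classify(0, "web")
        print(dequeue())   # ('EF', 'voice') is served before best-effort traffic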