
    Toward End-to-End, Full-Stack 6G Terahertz Networks

    Recent evolutions in semiconductors have brought the terahertz band into the spotlight as an enabler of terabit-per-second communications in 6G networks. Most of the research so far, however, has focused on understanding the physics of terahertz devices, circuits, and propagation, and on physical-layer solutions. Integrating this technology into complex mobile networks requires a proper design of the full communication stack, to address link- and system-level challenges related to network setup, management, coordination, energy efficiency, and end-to-end connectivity. This paper provides an overview of the issues that need to be overcome to introduce the terahertz spectrum into mobile networks from a MAC-, network-, and transport-layer perspective, with considerations on the performance of end-to-end data flows over terahertz connections.
    Comment: Published in IEEE Communications Magazine, THz Communications: A Catalyst for the Wireless Future, 7 pages, 6 figures

    Model based analysis of some high speed network issues

    The study of complex problems in science and engineering today typically involves large-scale data, and a growing number of scientific breakthroughs critically depends on large multi-disciplinary, geographically dispersed research teams, for which high-speed networks have become an integral part. To serve the growing bandwidth requirements and scalability of these networks, TCP variants for high-speed networks have evolved continuously. Testing these protocols on a real network would be expensive, time consuming, and moreover not easily accessible to researchers worldwide. Network simulation is a well-accepted and widely used method for performance evaluation, yet it is well known that packet-based simulators like NS2 and Opnet are inadequate for high-speed and large-scale networks because of their inherent bottlenecks in terms of message overhead and execution time. In that case, a model-based approach built on a set of coupled differential equations is preferred for simulation. This dissertation focuses on the key challenges in the research and development of TCP for high-speed networks. To address these challenges, this thesis has three objectives: design an analytical simulation methodology; model the behavior of high-speed networks and their components, including TCP flows and queues, using that methodology; and analyze the impacts and interrelationships among them. To decrease simulation time and speed up the testing and development of high-speed TCP, we present a scalable simulation methodology for high-speed networks. We present fluid model equations for various high-speed TCP variants and, with their help, study the behavior of these variants under various scenarios and their effect on queue size variations. High-speed networking is not feasible unless we understand the effect of the bottleneck buffer size on the performance of these high-speed TCP variants.
    A fluid model is introduced to accommodate new observations of synchronization and de-synchronization of packet losses at the bottleneck link, and a microscopic analysis of different buffer sizes under the drop-tail queuing scheme is presented. The proposed model-based methods promote a principled understanding of future heterogeneous networks and accelerate protocol development.
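    To make the fluid-model idea concrete, the sketch below numerically integrates a pair of coupled differential equations of the classic TCP-Reno fluid form (additive window increase of 1/RTT, multiplicative decrease on loss, and a queue fed by all flows) over a drop-tail bottleneck. All parameter values are illustrative assumptions, not the dissertation's actual model.

```python
# Minimal sketch of a fluid-model simulation of N synchronized TCP Reno flows
# sharing a drop-tail bottleneck. Parameters (capacity, RTT, buffer) are
# assumed for illustration only.
def simulate(n_flows=10, capacity_pkts=12500.0, rtt_base=0.1,
             buf_pkts=500.0, dt=0.001, t_end=30.0):
    w = 1.0          # per-flow congestion window (packets)
    q = 0.0          # bottleneck queue occupancy (packets)
    trace = []
    t = 0.0
    while t < t_end:
        rtt = rtt_base + q / capacity_pkts        # propagation + queuing delay
        loss = 1.0 if q >= buf_pkts else 0.0      # drop-tail: loss when full
        # Fluid equations: additive increase 1/RTT, halve the window on loss
        dw = 1.0 / rtt - loss * w / 2.0
        dq = n_flows * w / rtt - capacity_pkts    # arrival rate minus service rate
        w = max(w + dw * dt, 1.0)
        q = min(max(q + dq * dt, 0.0), buf_pkts)
        trace.append((t, w, q))
        t += dt
    return trace

trace = simulate()
```

Because the model advances by a time step rather than by individual packets, its cost is independent of link speed, which is exactly why such models scale to high-speed networks where packet-level simulators do not.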

    An evaluation of synchronization loss rate calculations on high speed networks

    This paper is divided into two parts: (1) a discussion of current formulas used to calculate synchronized loss rates among concurrent TCP flows, together with the results those equations yield for flows running through a bottleneck on a high-speed emulated network, and (2) steps to create revised forms of these equations that are more accurate and give a more reasonable estimate without the shortcomings of the current equations. This paper examines three previously proposed equations used in published research projects, along with their strengths and shortcomings. Building on this study, a series of revised forms of these formulas is introduced that accounts for their predecessors' pitfalls and gives a more effective and useful result under any set of networking conditions, including large numbers of concurrent flows, multiple TCP variants, and varying receiver queue sizes, among others. Through a logical analysis of the equations' structure, definitions, and the results they yield in network emulations on the CRON testbed, we show that these modifications to the existing formulas are necessary to provide a metric that gives an accurate view of the synchronization loss rate of a given network.
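    As background for what such formulas measure, one common way to define a synchronization loss rate is, for each congestion event at the bottleneck, the fraction of concurrent flows that lost at least one packet, averaged over all events. The sketch below implements that definition; the event data is invented for illustration and is not from the paper.

```python
# Hedged sketch of a simple synchronization loss rate metric: per congestion
# event, the fraction of concurrent flows that saw a loss, averaged over events.
def sync_loss_rate(events, n_flows):
    """events: list of sets, each holding the ids of the flows that lost
    packets during one congestion event at the bottleneck."""
    if not events:
        return 0.0
    return sum(len(e) / n_flows for e in events) / len(events)

# Three congestion events among 4 concurrent flows (illustrative data):
events = [{0, 1, 2, 3}, {0, 2}, {1}]
rate = sync_loss_rate(events, n_flows=4)   # (1.0 + 0.5 + 0.25) / 3
```

A rate near 1.0 means flows halve their windows in lockstep (global synchronization), which leaves the bottleneck underutilized; values near 1/n indicate well-desynchronized losses.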

    Study on the Performance of TCP over 10Gbps High Speed Networks

    Internet traffic is expected to grow phenomenally over the next five to ten years. To cope with such large traffic volumes, high-speed networks are expected to scale to capacities of terabits-per-second and beyond. Increasing the role of optics for packet forwarding and transmission inside high-speed networks seems the most promising way to accomplish this capacity scaling. Unfortunately, unlike electronic memory, building integrated all-optical buffers that hold even a few dozen packets remains a formidable challenge. On the other hand, many high-speed networks depend on the TCP/IP protocol suite for reliability, which is typically implemented in software and is sensitive to buffer size. For example, TCP requires a buffer of one bandwidth-delay product in switches/routers to maintain nearly 100% link utilization; otherwise, performance degrades sharply. But such large buffers complicate hardware design, increase power consumption, and introduce queuing delay and jitter, which cause further problems. Improving TCP performance over tiny-buffered high-speed networks is therefore a top priority. This dissertation studies TCP performance in 10Gbps high-speed networks. First, a 10Gbps reconfigurable optical networking testbed is developed as a research environment. Second, a 10Gbps traffic sniffing tool is developed for measuring and analyzing TCP performance. New expressions for evaluating TCP loss synchronization are presented by carefully examining TCP congestion events. Based on these observations, two basic causes of performance problems are studied. We find that minimizing TCP loss synchronization and reducing the impact of flow burstiness are the critical keys to improving TCP performance in tiny-buffered networks. Finally, we present a new TCP protocol called Multi-Channel TCP and a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP).
    Our implementation takes advantage of the potential parallelism of Multi-Path TCP in Linux. Over an emulated 10Gbps network whose routers hold only a few dozen packets of buffer, our experimental results confirm that DMCTCP improves bottleneck link utilization far more than many other TCP variants. Our study is a new step towards the deployment of optical packet switching/routing networks.
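    The buffer-sizing tension described above is easy to quantify. The sketch below computes the classic bandwidth-delay-product rule alongside the later small-buffer rule B = BDP / sqrt(N) for many desynchronized flows; the link figures are illustrative, not measurements from the dissertation.

```python
# Back-of-the-envelope buffer sizing: the classic rule-of-thumb (one full
# bandwidth-delay product) versus the small-buffer rule BDP / sqrt(N).
import math

def bdp_packets(link_bps, rtt_s, pkt_bytes=1500):
    # Bandwidth-delay product expressed in packets
    return link_bps * rtt_s / (8 * pkt_bytes)

def small_buffer_packets(link_bps, rtt_s, n_flows, pkt_bytes=1500):
    # With N desynchronized flows, the aggregate smooths out and the
    # required buffer shrinks by a factor of sqrt(N)
    return bdp_packets(link_bps, rtt_s, pkt_bytes) / math.sqrt(n_flows)

# 10 Gbps link, 100 ms RTT, 1500-byte packets (illustrative figures):
full = bdp_packets(10e9, 0.1)                       # ~83,333 packets
shared = small_buffer_packets(10e9, 0.1, 10_000)    # ~833 packets with 10k flows
```

Even the sqrt(N) rule leaves hundreds of packets of buffering at 10Gbps, far beyond what all-optical buffers can hold, which is why the dissertation targets desynchronization at the end hosts instead.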

    A study on fairness and latency issues over high speed networks and data center networks

    Newly emerging computer networks, such as high speed networks and data center networks, have characteristics of high bandwidth and high burstiness which make it difficult to address issues such as fairness, queuing latency, and link utilization. In this study, we first conduct an extensive experimental evaluation of the performance of 10Gbps high speed networks. We found that inter-protocol unfairness and large queuing latency are two outstanding issues in high speed networks and data center networks. There have been several proposals to address fairness and latency issues at the switch level via queuing schemes. These queuing schemes have been fairly successful in addressing either the fairness issue or large latency, but not both at the same time. We propose a new queuing scheme called Approximated-Fair and Controlled-Delay (AFCD) that meets the following goals for high speed networks: approximated fairness, controlled low queuing delay, high link utilization, and simple implementation. The design of AFCD takes a novel synergistic approach, forming an alliance between approximated fair queuing and controlled delay queuing. AFCD maintains a very small amount of state information for estimating the sending rates of flows and makes drop decisions based on a per-flow target delay. We then present FaLL, a Fair and Low Latency queuing scheme that meets the stringent performance requirements of data center networks: fair sharing of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module, and a target-delay-based dropping scheme to meet these goals. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions under a variety of network conditions over data center networks.
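    To illustrate the general shape of such a scheme (not the paper's actual AFCD code, whose details are not given here), the sketch below keeps a tiny per-flow rate estimate and drops an arriving packet only when its flow both exceeds its fair share and would push queuing delay past a target. All class names, thresholds, and the fixed 1 ms estimation slot are assumptions.

```python
# Hedged sketch of a fair-queuing-plus-target-delay drop decision: drop only
# when a flow is over its fair share AND the packet's expected queuing delay
# exceeds the target. Names and constants are illustrative assumptions.
class AfcdSketch:
    def __init__(self, link_bps, target_delay_s=0.005, alpha=0.9):
        self.link_bps = link_bps
        self.target = target_delay_s
        self.alpha = alpha              # EWMA weight for rate estimation
        self.rates = {}                 # flow id -> estimated sending rate (bps)
        self.queue_bits = 0             # current bottleneck backlog

    def on_arrival(self, flow, pkt_bits):
        # EWMA rate estimate over an assumed fixed 1 ms slot
        est = self.rates.get(flow, 0.0)
        self.rates[flow] = self.alpha * est + (1 - self.alpha) * pkt_bits / 0.001
        fair_share = self.link_bps / max(len(self.rates), 1)
        expected_delay = (self.queue_bits + pkt_bits) / self.link_bps
        if self.rates[flow] > fair_share and expected_delay > self.target:
            return "drop"
        self.queue_bits += pkt_bits
        return "enqueue"

afcd = AfcdSketch(link_bps=1e6)
verdicts = [afcd.on_arrival(0, 4000) for _ in range(5)]
```

Coupling the two conditions is the point: a flow under its fair share is never penalized for a transient queue, and a greedy flow is only dropped once it actually threatens the delay target.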

    Video Conferencing - Session and Transmission Control

    Video conferencing is a very bandwidth-sensitive application: if the available bandwidth is too low to handle the send rate of the media, packets will be lost in the network and the conference will be disrupted. Axis Communications would like to know which techniques are used today to ensure optimized bandwidth usage and the best possible quality during a video conference even as the network bandwidth changes. In this thesis, such a service has been implemented using three different TCP congestion avoidance algorithms. They monitor and evaluate network quality to adapt the video stream rate accordingly. One algorithm is based only on packet loss and is used as a baseline. The other two use packet loss in conjunction with round-trip time (RTT) to evaluate the network. The user experience was deemed better when using the algorithms, and the algorithms based on both packet loss and RTT were deemed superior. A few aspects of the camera software and hardware still need adapting, however, before a complete system can be developed.
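    The loss-only versus loss-plus-RTT distinction can be sketched as a single rate-update rule. The controller below is an illustrative assumption, not the thesis's implementation: it halves the rate on loss, and the delay-aware variant additionally backs off gently when the measured RTT rises well above its observed minimum, treating queue build-up as early congestion.

```python
# Hedged sketch of a send-rate controller for a video stream. All thresholds
# (step size, back-off factors, RTT inflation factor) are assumed values.
def adapt_rate(rate_bps, loss, rtt_s, rtt_min_s,
               use_rtt=True, step_bps=100_000, beta=0.85, rtt_factor=1.5):
    if loss:
        return rate_bps * 0.5                     # loss: halve the send rate
    if use_rtt and rtt_s > rtt_factor * rtt_min_s:
        return rate_bps * beta                    # rising RTT: gentle back-off
    return rate_bps + step_bps                    # otherwise probe upward

r = 2_000_000
r = adapt_rate(r, loss=False, rtt_s=0.030, rtt_min_s=0.025)   # probe upward
r = adapt_rate(r, loss=False, rtt_s=0.060, rtt_min_s=0.025)   # RTT spike: back off
r = adapt_rate(r, loss=True,  rtt_s=0.060, rtt_min_s=0.025)   # loss: halve
```

Reacting to RTT lets the sender reduce quality before the queue overflows, which matches the thesis's finding that the loss-plus-RTT algorithms gave the better user experience.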

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.
    Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018)

    Performance evaluation of a tree-based routing and address autoconfiguration for vehicle-to-Internet communications

    Proceedings of: 2011 11th International Conference on ITS Telecommunications, 23-25 August 2011, St. Petersburg, Russia.
    Vehicular ad hoc networks have proven quite useful for broadcast-like communications between nearby cars, but they can also be used to provide Internet connectivity from vehicles. To do so, vehicle-to-Internet routing and IP address autoconfiguration are two critical pieces. TREBOL is a tree-based, configurable protocol that benefits from the inherent tree-shaped nature of vehicle-to-Internet traffic to reduce the signaling overhead while dealing efficiently with vehicular dynamics. This paper experimentally evaluates the performance of TREBOL using a Linux implementation under lab-controlled realistic scenarios, including real vehicular traces obtained in the region of Madrid.
    The research of Marco Gramaglia and Carlos J. Bernardos leading to these results has been supported by the Ministry of Science and Innovation of Spain under the QUARTET project (TIN2009-13992-C02-01). The work of Marco Gramaglia, Carlos J. Bernardos and Antonio de la Oliva has also been supported by the European Community's Seventh Framework Programme (FP7-ICT-2009-5) under grant agreement n. 258053 (MEDIEVAL project).