
    Goodbye, ALOHA!

    The vision of the Internet of Things (IoT) to interconnect and Internet-connect everyday people, objects, and machines poses new challenges for the design of wireless communication networks. The design of medium access control (MAC) protocols has traditionally been an intense area of research due to its high impact on the overall performance of wireless communications. The majority of research activities in this field deal with variations of protocols based on ALOHA, either with or without listen-before-talk, i.e., carrier-sense multiple access. These protocols operate well under low traffic loads and a low number of simultaneous devices, but they suffer from congestion as the traffic load and the number of devices increase. For this reason, unless revisited, the MAC layer can become a bottleneck for the success of the IoT. In this paper, we provide an overview of existing MAC solutions for the IoT, describing current limitations and envisioned challenges for the near future. Motivated by these, we identify a family of simple algorithms based on distributed queueing (DQ), which can operate for an infinite number of devices generating any traffic load and pattern. We describe the DQ mechanism and the most relevant existing studies of DQ applied in different scenarios. In addition, we provide a novel performance evaluation of DQ when applied to the IoT. Finally, we describe the first demo of DQ for use in the IoT.
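
    The abstract does not spell out the DQ algorithm itself, so the toy Python sketch below is only a rough illustration of distributed queueing as it is usually described: colliding access requests are re-queued as a group on a contention resolution queue (CRQ) and split again later, while successful requests join a data transmission queue (DTQ) and transmit collision-free. The three-minislot frame, the function name, and all parameters are assumptions made for illustration, not code from the paper.

        import random
        from collections import deque

        M = 3  # access-request minislots per frame (a common choice in DQ descriptions)

        def dq_toy_simulation(n_devices, seed=0):
            """Toy contention-resolution simulation in the spirit of distributed queueing."""
            random.seed(seed)
            crq = deque([list(range(n_devices))])  # groups of colliding devices
            dtq = deque()                          # devices granted a data slot
            frames = served = 0
            while crq or dtq:
                frames += 1
                if dtq:                  # one collision-free data slot per frame
                    dtq.popleft()
                    served += 1
                if crq:                  # head group of the CRQ contends in this frame
                    group = crq.popleft()
                    slots = {}
                    for dev in group:
                        slots.setdefault(random.randrange(M), []).append(dev)
                    for s in range(M):
                        contenders = slots.get(s, [])
                        if len(contenders) == 1:
                            dtq.append(contenders[0])   # success: join the DTQ
                        elif len(contenders) > 1:
                            crq.append(contenders)      # collision: re-queue the subgroup
            return frames, served

        print(dq_toy_simulation(50))  # (frames needed, devices served)

    Because a collision only re-queues the colliding subgroup rather than forcing a blind retry, access delay grows gracefully with the number of contending devices, which is the property the paper exploits for dense IoT deployments.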

    Re-architecting datacenter networks and stacks for low latency and high performance

    Modern datacenter networks provide very high capacity via redundant Clos topologies and low switch latency, but transport protocols rarely deliver matching performance. We present NDP, a novel datacenter transport architecture that achieves near-optimal completion times for short transfers and high flow throughput in a wide range of scenarios, including incast. NDP switch buffers are very shallow; when they fill, switches trim packets to their headers and forward the headers with priority. This gives receivers a full view of instantaneous demand from all senders, and is the basis for our novel, high-performance, multipath-aware transport protocol that can deal gracefully with massive incast events and prioritize traffic from different senders on RTT timescales. We implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4. We evaluate NDP's performance in our implementations and in large-scale simulations, demonstrating simultaneous support for very low latency and high throughput. This work was partly funded by the SSICLOPS H2020 project (644866).
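
    The packet-trimming behaviour described above can be sketched in a few lines. The toy Python model below is an assumption-laden illustration, not NDP's actual switch code (the class name, buffer depth, and packet format are invented for the example): a very shallow data queue plus a priority queue of trimmed headers that is always served first.

        from collections import deque

        class TrimmingPort:
            """Toy model of an NDP-style switch output port: a very shallow data
            queue plus a priority queue of trimmed headers. When the data queue
            is full, the payload is dropped and only the header is kept, so the
            receiver still learns that the sender has demand."""

            def __init__(self, data_slots=8):
                self.data_q = deque()
                self.header_q = deque()
                self.data_slots = data_slots

            def enqueue(self, packet):
                if len(self.data_q) < self.data_slots:
                    self.data_q.append(packet)
                else:
                    # Buffer full: trim to header and queue the header with priority.
                    self.header_q.append({"hdr": packet["hdr"], "trimmed": True})

            def dequeue(self):
                # Trimmed headers are forwarded ahead of full packets.
                if self.header_q:
                    return self.header_q.popleft()
                if self.data_q:
                    return self.data_q.popleft()
                return None

        port = TrimmingPort(data_slots=2)
        for i in range(4):
            port.enqueue({"hdr": f"flow-{i}", "payload": b"x" * 1500})
        print([p["hdr"] for p in iter(port.dequeue, None)])

    Serving trimmed headers ahead of full packets is what lets a receiver learn about every sender's demand quickly even during an incast burst, instead of silently losing that information to tail drops.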

    Models and Protocols for Resource Optimization in Wireless Mesh Networks

    Wireless mesh networks are built on a mix of fixed and mobile nodes interconnected via wireless links to form a multihop ad hoc network. An emerging application area for wireless mesh networks is their evolution into a converged infrastructure used to share and extend, to mobile users, the wireless Internet connectivity of sparsely deployed fixed lines with heterogeneous capacity, ranging from ISP-owned broadband links to subscriber-owned low-speed connections. In this thesis we address several key research issues for this networking scenario. First, we propose an analytical predictive tool: a queueing network model capable of predicting the network capacity, which we use in a load-aware routing protocol to provide end users with throughput-based quality of service. We then extend it into a multi-class queueing network model to predict analytically the average end-to-end packet delay of the traffic flows between the mobile end users and the Internet. The analytical models are validated against simulation. Second, we propose an address auto-configuration solution to extend the coverage of a wireless mesh network by interconnecting it to a mobile ad hoc network in a way that is transparent to the infrastructure network (i.e., the legacy Internet interconnected to the wireless mesh network). Third, we implement two real testbed prototypes of the proposed solutions as a proof of concept, one for the load-aware routing protocol and one for the auto-configuration protocol. Finally, we discuss the issues related to adopting ad hoc networking technologies to address the fragility of our communication infrastructure and to build the next generation of dependable, secure, and rapidly deployable communication infrastructures.
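
    As a flavour of what such a queueing-network prediction looks like, the sketch below estimates end-to-end delay on a multihop path by treating every hop as an independent M/M/1 queue (the Kleinrock independence approximation). It is a deliberately simplified stand-in, assuming Poisson arrivals and exponential service, and is not the multi-class model developed in the thesis; the function name and parameters are illustrative.

        def end_to_end_delay(arrival_rate, service_rates):
            """Rough end-to-end delay estimate for a multihop path, treating each
            hop as an independent M/M/1 queue.

            arrival_rate  -- flow rate offered to the path, in packets/s
            service_rates -- per-hop service rates, in packets/s
            """
            delay = 0.0
            for mu in service_rates:
                if arrival_rate >= mu:
                    raise ValueError("hop saturated: arrival rate >= service rate")
                delay += 1.0 / (mu - arrival_rate)  # M/M/1 sojourn time at this hop
            return delay

        # A 3-hop path toward the Internet gateway, 200 pkt/s of offered load.
        print(end_to_end_delay(200.0, [1000.0, 600.0, 400.0]))

    A load-aware routing protocol can then compare such per-path estimates and steer flows away from paths whose predicted delay or residual capacity would violate the requested quality of service.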

    Scheduling with Rate Adaptation under Incomplete Knowledge of Channel/Estimator Statistics

    In time-varying wireless networks, the states of the communication channels are subject to random variations and hence need to be estimated for efficient rate adaptation and scheduling. The estimation mechanism is inherently inaccurate, and these inaccuracies need to be handled in a probabilistic framework. In this work, we study scheduling with rate adaptation in single-hop queueing networks under two levels of channel uncertainty: when the channel estimates are inaccurate but complete knowledge of the channel/estimator joint statistics is available at the scheduler, and when the knowledge of the joint statistics is incomplete. In the former case, we characterize the network stability region and show that a maximum-weight type scheduling policy is throughput-optimal. In the latter case, we propose a joint channel-statistics learning and scheduling policy. At the cost of a trade-off in average packet delay and convergence time, the proposed policy achieves a stability region arbitrarily close to the stability region of the network under full knowledge of the channel/estimator joint statistics. (48th Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sept. 2010.)
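
    The maximum-weight rule mentioned above has a compact form: serve the link whose backlog times expected service rate, given the channel estimate, is largest. The Python fragment below is a minimal sketch under that reading; the paper's actual policy also optimizes the transmission rate using the channel/estimator joint statistics, which is omitted here, and all names are illustrative.

        def max_weight_schedule(queues, expected_rates):
            """Pick the link maximizing backlog * E[rate | channel estimate].

            queues         -- dict: link -> backlog in packets
            expected_rates -- dict: link -> expected rate given the estimate
            """
            return max(queues, key=lambda link: queues[link] * expected_rates[link])

        queues = {"link-a": 40, "link-b": 10, "link-c": 25}
        est_rates = {"link-a": 1.0, "link-b": 6.0, "link-c": 2.5}
        print(max_weight_schedule(queues, est_rates))  # -> link-c (25 * 2.5 = 62.5)

    When the joint statistics are unknown, the expected rates themselves must be learned online, which is where the delay and convergence-time trade-off described in the abstract enters.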

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources required by today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide high-quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
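
    Of the traffic-control mechanisms listed above, multipath load balancing is perhaps the easiest to illustrate. The snippet below sketches flow-level ECMP-style path selection by hashing the five-tuple, so packets of one flow stay on one path while different flows spread over the redundant paths of a Clos fabric. It is an illustrative assumption, not code from the survey; real switches use hardware hash functions rather than hashlib.

        import hashlib

        def ecmp_path(five_tuple, n_paths):
            """Flow-level ECMP-style path choice: hash the five-tuple so a flow
            sticks to one path while flows spread across all available paths."""
            key = "|".join(str(field) for field in five_tuple).encode()
            return int(hashlib.sha256(key).hexdigest(), 16) % n_paths

        flow = ("10.0.0.1", "10.0.1.7", 6, 43512, 443)  # src, dst, proto, sport, dport
        print(ecmp_path(flow, n_paths=4))

    Per-flow hashing avoids packet reordering but can collide large flows onto the same path, which is one reason the survey also covers finer-grained scheduling and load-balancing schemes.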

    Low-latency Networking: Where Latency Lurks and How to Tame It

    While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency on the order of milliseconds or below. These new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers' design and fundamental performance limitations. We will be able to develop low-latency networks only if we address these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent trade-offs between low latency and traditional performance metrics. We also review ongoing standardization activities in prominent standards associations and discuss open problems for future research.

    Performance Modelling and Network Monitoring for Internet of Things (IoT) Connectivity
