    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics partially due to human interaction. These call for foundational innovations in network design and management. Ideally, the network should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic design and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on.
    Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Network
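    The abstract's mention of online, bandit-style approaches that adapt to nonstationary IoT dynamics can be illustrated with a toy example. The sketch below is not the paper's algorithm; all names, parameters, and the reward model are hypothetical. It runs a discounted epsilon-greedy bandit in which old observations are exponentially forgotten, so the policy keeps tracking a reward distribution that drifts over time.

```python
import random

# Minimal sketch (hypothetical): a discounted epsilon-greedy bandit for a
# nonstationary environment, e.g., an IoT device picking among candidate
# fog nodes. Exponential forgetting lets the policy adapt when the reward
# distribution drifts. This is illustrative, not the paper's method.

def run_discounted_epsilon_greedy(rewards, n_arms=3, epsilon=0.1, discount=0.95):
    """rewards[t][arm] -> observed reward; returns total reward collected."""
    value = [0.0] * n_arms   # discounted reward sums
    weight = [0.0] * n_arms  # discounted pull counts
    total = 0.0
    for t in range(len(rewards)):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < epsilon or all(w == 0 for w in weight):
            arm = random.randrange(n_arms)
        else:
            arm = max(range(n_arms),
                      key=lambda a: value[a] / weight[a] if weight[a] else 0.0)
        r = rewards[t][arm]
        # Exponential forgetting: decay all estimates, then add the new sample.
        for a in range(n_arms):
            value[a] *= discount
            weight[a] *= discount
        value[arm] += r
        weight[arm] += 1.0
        total += r
    return total

if __name__ == "__main__":
    random.seed(0)
    T, n_arms = 2000, 3
    # Simulated nonstationary rewards: the best arm switches halfway through.
    rewards = []
    for t in range(T):
        means = [0.3, 0.7, 0.5] if t < T // 2 else [0.8, 0.2, 0.5]
        rewards.append([1.0 if random.random() < m else 0.0 for m in means])
    print("collected reward:", run_discounted_epsilon_greedy(rewards, n_arms))
```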

    Goodbye, ALOHA!

    The vision of the Internet of Things (IoT) to interconnect and Internet-connect everyday people, objects, and machines poses new challenges in the design of wireless communication networks. The design of medium access control (MAC) protocols has traditionally been an intense area of research because of their high impact on the overall performance of wireless communications. The majority of research activities in this field deal with variations of protocols based in one way or another on ALOHA, either with or without listen-before-talk, i.e., carrier-sense multiple access. These protocols operate well under low traffic loads and with a small number of simultaneous devices, but they suffer from congestion as the traffic load and the number of devices increase. For this reason, unless revisited, the MAC layer can become a bottleneck for the success of the IoT. In this paper, we provide an overview of the existing MAC solutions for the IoT, describing current limitations and envisioned challenges for the near future. Motivated by those, we identify a family of simple algorithms based on distributed queueing (DQ), which can operate for an infinite number of devices generating any traffic load and pattern. A description of the DQ mechanism is provided, and the most relevant existing studies of DQ applied in different scenarios are described. In addition, we provide a novel performance evaluation of DQ when applied to the IoT. Finally, a description of the very first demo of DQ for use in the IoT is also included.
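    To make the congestion argument concrete, the following toy simulation (an illustrative sketch, not code from the paper) estimates slotted-ALOHA throughput as the offered load grows: a slot succeeds only when exactly one device transmits, so throughput peaks near 1/e and then collapses, which is the bottleneck behaviour the authors argue DQ-based access avoids.

```python
import random

# Illustrative sketch (hypothetical parameters): tiny slotted-ALOHA simulation.
# Each of N devices transmits in a slot with probability p; a slot succeeds
# only when exactly one device transmits. Throughput peaks around 1/e and
# degrades as the offered load G = N * p increases.

def slotted_aloha_throughput(n_devices, p_transmit, n_slots=100_000, seed=0):
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_devices) if rng.random() < p_transmit)
        if transmitters == 1:
            successes += 1
    return successes / n_slots

if __name__ == "__main__":
    n = 50
    for load in (0.2, 0.5, 1.0, 2.0, 4.0):   # offered load G = n * p
        p = load / n
        print(f"G = {load:.1f}  throughput ~ {slotted_aloha_throughput(n, p):.3f}")
```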

    Exploiting the Capture Effect to Enhance RACH Performance in Cellular-Based M2M Communications

    Cellular-based machine-to-machine (M2M) communication is expected to facilitate services for the Internet of Things (IoT). However, because cellular networks are designed for human users, they have some limitations. Random access channel (RACH) congestion caused by massive access from M2M devices is one of the biggest factors hindering cellular-based M2M services, because it degrades random access (RA) throughput and causes connection failures for the devices. In this paper, we show the possibility of exploiting the capture effect, which is known to have a positive impact on wireless network performance, in the RA procedure to improve the RA performance of M2M devices. For this purpose, we analyze the RA procedure using a capture model. Through this analysis, we examine the effects of capture on RA performance and propose an Msg3 power-ramping (Msg3 PR) scheme that increases the capture probability (and thereby the RA success probability) even when severe RACH congestion occurs. The proposed analysis models are validated using simulations. The results show that the proposed scheme, with proper parameters, further improves the RA throughput and reduces the connection failure probability at the cost of a slight increase in energy consumption. Finally, we demonstrate the effects of coexistence with other RA-related schemes through simulation results.
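    The capture effect and the benefit of power ramping can be illustrated with a small Monte Carlo sketch. The model below is hypothetical (log-normal shadowing, a fixed capture threshold in dB, and an assumed ramping offset for the retransmitting device) and is not the paper's analytical model; it only shows that boosting one colliding device's power raises the chance that its transmission is captured.

```python
import math
import random

# Illustrative sketch with hypothetical parameters (not the paper's model):
# Monte Carlo estimate of the probability that one tagged M2M device is
# "captured" when several devices collide on the same RA resource. The tagged
# transmission is decoded if its received power exceeds the sum of the
# interferers' powers by a capture threshold. Ramping the tagged device's
# transmit power (in the spirit of Msg3 power ramping) raises this probability.

def capture_probability(n_colliding, ramp_db=0.0, threshold_db=6.0,
                        shadowing_sigma_db=8.0, trials=100_000, seed=0):
    rng = random.Random(seed)
    captured = 0
    for _ in range(trials):
        # Received powers in dB: log-normal shadowing around a common mean;
        # device 0 (the retransmitting one) gets an extra ramping offset.
        powers_db = [rng.gauss(0.0, shadowing_sigma_db) for _ in range(n_colliding)]
        powers_db[0] += ramp_db
        powers_mw = [10 ** (p / 10.0) for p in powers_db]
        sir_db = 10 * math.log10(powers_mw[0] / sum(powers_mw[1:]))
        if sir_db >= threshold_db:
            captured += 1
    return captured / trials

if __name__ == "__main__":
    for ramp in (0.0, 3.0, 6.0):
        p = capture_probability(n_colliding=3, ramp_db=ramp)
        print(f"ramp = {ramp:.0f} dB  ->  capture probability ~ {p:.3f}")
```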