
    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays have the potential to reduce base station overload. The energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces several challenges. IoT end devices are designed primarily for uplink communication of small-sized observations toward the network; hence, opportunistically using end devices as relays requires a redesign of the medium access control (MAC) layer protocol of such end devices and possibly the addition of new communication interfaces. Additionally, the wake-up times of IoT end devices need to be synchronized with those of the relays. For cellular-based IoT, infrastructure relays are an option, while noncellular IoT networks can leverage the presence of mobile devices for relaying, for example, in remote healthcare. The latter, however, raises the problems of incentivizing relay participation and managing relay mobility. Furthermore, although relays can increase the lifetime of IoT networks, deploying them implies additional batteries to power them, which can erode the energy-efficiency gain that relays offer. Designing relay-assisted IoT networks that provide acceptable trade-offs is therefore key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as the available literature demonstrates. This paper surveys works that consider these issues to characterize the state of the art, offer design guidance for network designers, and motivate future research directions.

    A Survey on Long-Range Wide-Area Network Technology Optimizations

    Long-Range Wide-Area Network (LoRaWAN) technology enables flexible long-range communication with low power consumption, making it suitable for many IoT applications. The densification of LoRaWAN, which is needed to meet a wide range of IoT networking requirements, poses further challenges. For instance, gateways and IoT devices are densely deployed in urban areas, which leads to interference caused by concurrent transmissions on the same channel. In this context, it is crucial to understand aspects that directly affect LoRaWAN's performance, such as the coexistence of IoT devices and applications, resource allocation, the Medium Access Control (MAC) layer, network planning, and mobility support. We present a systematic review of state-of-the-art LoRaWAN optimization solutions for IoT networking operations, focusing on five aspects that directly affect LoRaWAN's performance and that are directly associated with the challenges of its densification. Based on the literature analysis, we present a taxonomy covering these five aspects of LoRaWAN optimization for efficient IoT networks. Finally, we identify key research challenges and open issues in LoRaWAN optimization for IoT networking operations that must be studied further.

    Reinforcement Learning-based Optimization of Multiple Access in Wireless Networks

    In this thesis, we study the problem of Multiple Access (MA) in wireless networks and design adaptive solutions based on Reinforcement Learning (RL). We analyze the importance of MA in the current communications landscape, where bandwidth-hungry applications emerge from the co-evolution of technological progress and societal needs, and explain that the improvements brought by new standards cannot overcome the problem of resource scarcity. We focus on resource-constrained networks, where devices have restricted hardware capabilities, there is no centralized point of control, and coordination is prohibited or limited. The protocols that we optimize follow a Random Access (RA) approach, where sensing the common medium prior to transmission is not possible. We begin with the study of time access and provide two reinforcement learning algorithms for optimizing Irregular Repetition Slotted ALOHA (IRSA), a state-of-the-art RA protocol. First, we focus on ensuring low complexity and propose a Q-learning variant in which learners act independently and converge quickly. We then design an algorithm in the area of coordinated learning, focusing on deriving convergence guarantees while minimizing the complexity of coordination. We provide simulations that showcase how coordination can strike a fine balance, in terms of complexity and performance, between fully decentralized and centralized solutions. In addition to time access, we study channel access, a problem that has recently attracted significant attention in cognitive radio. We design learning algorithms in the framework of Multi-player Multi-armed Bandits (MMABs), both for static and dynamic settings where devices arrive at different time steps. Our focus is on deriving theoretical guarantees and ensuring that performance scales well with the size of the network. Our works constitute an important step towards addressing the challenges that decentralization and partial observability, properties inherent in resource-constrained networks, pose for RL algorithms.
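
    The abstract itself contains no code, but the independent-learner idea it describes can be illustrated with a toy sketch. In the snippet below (all names, parameters, and the simplified one-transmission-per-frame model are assumptions, not the thesis's actual algorithm), each device independently runs stateless epsilon-greedy Q-learning over which slot of a frame to transmit in; real IRSA additionally uses variable repetition degrees and successive interference cancellation, which this toy omits.

```python
import numpy as np

# Toy model (assumed): N_DEVICES independent learners each pick one of
# N_SLOTS slots per frame; a transmission succeeds only if it is alone
# in its slot. No coordination or medium sensing is available.
N_DEVICES, N_SLOTS = 10, 15
EPSILON, ALPHA = 0.1, 0.1            # exploration and learning rates (assumed)
rng = np.random.default_rng(0)

Q = np.zeros((N_DEVICES, N_SLOTS))   # one Q-table per independent learner

for frame in range(5000):
    # Epsilon-greedy slot choice for every device in parallel.
    explore = rng.random(N_DEVICES) < EPSILON
    slots = np.where(explore,
                     rng.integers(0, N_SLOTS, size=N_DEVICES),
                     Q.argmax(axis=1))
    # A slot succeeds only when exactly one device picked it.
    counts = np.bincount(slots, minlength=N_SLOTS)
    rewards = (counts[slots] == 1).astype(float)
    # Stateless update: Q(a) <- Q(a) + alpha * (r - Q(a)).
    idx = np.arange(N_DEVICES)
    Q[idx, slots] += ALPHA * (rewards - Q[idx, slots])

final = np.bincount(Q.argmax(axis=1), minlength=N_SLOTS)
print("collision-free devices:", int((final == 1).sum()), "of", N_DEVICES)
```

    The stateless update suffices here because the reward depends only on the joint slot choice within the current frame, which is also why the learners can converge quickly without exchanging any messages.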

    Corrupted Contextual Bandits with Action Order Constraints

    We consider a variant of the recently proposed contextual bandit problem with corrupted context, which we call the contextual bandit problem with corrupted context and action correlation, where actions exhibit a relationship structure that can be exploited to guide the exploration of viable next decisions. Our setting is primarily motivated by adaptive mobile health interventions and related applications, where users might transition through different stages requiring more targeted action selection. In such settings, maintaining user engagement is paramount to the success of interventions, so it is vital to provide relevant recommendations in a timely manner. The context provided by users might not be informative at every decision point, and standard contextual approaches to action selection will then incur high regret. We propose a meta-algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit, similar to previous work, as well as a simple correlation mechanism that captures action-to-action transition probabilities, allowing for more efficient exploration of time-correlated actions. We empirically evaluate the performance of this algorithm on a simulation in which the sequence of best actions is determined by a hidden state that evolves in a Markovian manner. We show that the proposed meta-algorithm improves regret in situations where the performance of the two policies varies such that one is strictly superior to the other for a given time period. To demonstrate the practical applicability of our setting, we evaluate our method on several real-world data sets, clearly showing better empirical performance compared to a set of simple baseline algorithms.
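
    As a rough illustration of the referee idea (not the paper's actual meta-algorithm; every class, parameter, and the toy drifting environment below are assumptions), the following sketch keeps exponential weights over two interchangeable base policies and, with small probability, redirects exploration through an action-to-action transition table:

```python
import numpy as np

rng = np.random.default_rng(1)
N_ACTIONS, ETA, HORIZON = 4, 0.2, 2000

class MeanTracker:
    """Epsilon-greedy value tracker; a stand-in for both base policies.
    A real implementation would use a genuine contextual bandit here."""
    def __init__(self):
        self.n = np.zeros(N_ACTIONS)
        self.mu = np.zeros(N_ACTIONS)
    def act(self, context=None):
        if rng.random() < 0.1:
            return int(rng.integers(N_ACTIONS))
        return int(self.mu.argmax())
    def update(self, a, r):
        self.n[a] += 1
        self.mu[a] += (r - self.mu[a]) / self.n[a]

policies = [MeanTracker(), MeanTracker()]   # "contextual" and MAB stand-ins
w = np.ones(2) / 2                          # referee's weights over policies
T = np.ones((N_ACTIONS, N_ACTIONS))         # action->action transition counts

def reward(a, t):   # toy environment: the hidden best action drifts over time
    return float(a == (t // 500) % N_ACTIONS) + 0.1 * rng.standard_normal()

prev_a = 0
for t in range(HORIZON):
    k = int(rng.random() < w[1])            # referee samples a base policy
    a = policies[k].act()
    if rng.random() < 0.1:                  # occasionally follow likely transitions
        a = int(rng.choice(N_ACTIONS, p=T[prev_a] / T[prev_a].sum()))
    r = reward(a, t)
    policies[k].update(a, r)
    w[k] *= np.exp(ETA * r)                 # simplified exponential-weights update
    w /= w.sum()
    T[prev_a, a] += max(r, 0.0)             # reinforce transitions that paid off
    prev_a = a

print("final policy weights:", w.round(3))
```

    In this toy, the referee's weights drift toward whichever base policy has been collecting more reward recently, which mirrors the paper's setting where one policy is strictly superior to the other for a given time period.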

    Calibrated Learning for Online Distributed Power Allocation in Small-Cell Networks

    This paper introduces a combined calibrated learning and bandit approach to online distributed power control in small-cell networks operated in the same frequency band. Each small base station (SBS) is modelled as an intelligent agent that autonomously decides on its instantaneous transmit power level by predicting, in real time, the transmission policies of the other SBSs in the network, namely the opponent SBSs. The decision-making process is based jointly on past observations and on calibrated forecasts of the upcoming power-allocation decisions of the opponent SBSs that inflict the dominant interference on the agent. Furthermore, we integrate the proposed calibrated forecasting process with a bandit policy to account for wireless channel conditions that are unknown a priori, and develop an autonomous power-allocation algorithm, executable at individual SBSs, that enhances the accuracy of autonomous decision making. We evaluate the performance of the proposed algorithm for maximizing the long-term sum rate, the overall energy efficiency, and the average minimum achievable data rate. Numerical simulation results demonstrate that the proposed design outperforms the benchmark scheme with a limited amount of information exchange and rapidly approaches the optimal centralized solution in all case studies.
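
    A minimal sketch of the general idea follows, under heavy assumed simplifications: a crude empirical-frequency forecast stands in for a true calibrated forecaster, channel gains are fixed constants, and the utility is energy efficiency (rate per unit power), one of the paper's three objectives. Each SBS forecasts a typical opponent's power distribution from observed play, computes the expected utility of each of its own power levels under that forecast, and adds a bandit-style exploration bonus.

```python
import numpy as np

POWER = np.array([0.1, 0.5, 1.0])     # hypothetical discrete power levels (W)
N_SBS, NOISE, GAIN, CROSS = 3, 1e-3, 1.0, 0.3   # assumed channel constants

counts = np.ones((N_SBS, len(POWER)))  # each SBS's forecast of opponents' play
plays = np.zeros((N_SBS, len(POWER)))  # each SBS's own play counts

for t in range(3000):
    choices = np.zeros(N_SBS, dtype=int)
    for i in range(N_SBS):
        q = counts[i] / counts[i].sum()                   # forecast distribution
        interf = (N_SBS - 1) * CROSS * (q * POWER).sum()  # expected interference
        rate = np.log2(1 + GAIN * POWER / (NOISE + interf))
        utility = rate / POWER                            # energy efficiency
        bonus = np.sqrt(np.log(t + 2) / (plays[i] + 1))   # exploration bonus
        choices[i] = int((utility + bonus).argmax())
    for i in range(N_SBS):
        plays[i, choices[i]] += 1
        for j in range(N_SBS):
            if j != i:                  # observe the opponents' decisions
                counts[i, choices[j]] += 1

print("most-played power per SBS:", POWER[plays.argmax(axis=1)])
```

    Even this crude version shows the division of labour the abstract describes: the forecast handles the opponents' behaviour, while the bandit bonus handles the uncertainty each SBS has about its own payoffs.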

    A Channel Selection Model based on Trust Metrics for Wireless Communications

    Dynamic allocation of frequency resources to nodes in a wireless communication network is a well-known method for mitigating potential interference, both unintentional and malicious. Various selection approaches have been adopted in the literature to limit the impact of interference and maintain high-quality wireless links. In this paper, we propose a different channel selection method based on trust policies. The proposed trust management approach relies on a node's own experience and on trust recommendations provided by its neighbourhood. By means of simulation results in the NS-3 network simulator, we demonstrate the effectiveness of the proposed trust method against a baseline approach while the system is under jamming attacks. We also evaluate the resilience of our approach against malicious nodes that provide false information about channel quality in order to induce poor channel selections. Results show that the system is resilient to such malicious nodes, maintaining around 10% more throughput than an approach based only on a node's own experience when 40% of the nodes are malicious, under both individual and collusive attacks.
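
    The paper's exact trust formula is not given in the abstract; the sketch below shows one plausible way (the blending weight, Laplace smoothing, and credibility mechanism are all assumptions) to combine direct experience with credibility-weighted neighbour recommendations into a per-channel trust score:

```python
import numpy as np

# Hypothetical trust combination for channel selection: a node scores each
# channel by blending its own observed success ratio with neighbour
# recommendations, weighting each recommender by its credibility.
ALPHA = 0.7   # weight on direct experience vs. recommendations (assumed)

def channel_trust(own_success, own_total, recs, credibility):
    """own_success/own_total: per-channel tx outcomes observed locally.
    recs: (n_neighbours, n_channels) recommended trust values in [0, 1].
    credibility: per-neighbour weight in [0, 1]."""
    direct = (own_success + 1) / (own_total + 2)   # Laplace-smoothed ratio
    cred = credibility / (credibility.sum() + 1e-9)
    indirect = cred @ recs                         # credibility-weighted average
    return ALPHA * direct + (1 - ALPHA) * indirect

own_s = np.array([40, 5, 30])
own_n = np.array([50, 50, 40])
recs = np.array([[0.8, 0.2, 0.7],      # neighbour agreeing with our experience
                 [0.1, 0.9, 0.1]])     # neighbour reporting the opposite
cred = np.array([0.9, 0.2])            # the second one is already distrusted
trust = channel_trust(own_s, own_n, recs, cred)
print("select channel", int(trust.argmax()), trust.round(3))
```

    In practice a node would lower a neighbour's credibility whenever its reports diverge from the node's own observations, which is what limits the damage that false recommenders, individual or collusive, can do.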