    Energy-Optimal Scheduling in Low Duty Cycle Sensor Networks

    The energy consumption of a wireless sensor node depends mainly on the amount of time the node spends in each of the high-power active modes (e.g., transmit, receive) and the low-power sleep mode. It is well established that a node's duty cycle should be kept low in order to prolong its lifetime. However, while sleep modes draw little current, switching back to the higher-current active mode incurs a significant energy cost. In this work, we investigate a MaxWeight-like opportunistic sleep-active scheduling algorithm, ESS, that takes time-varying channel and traffic conditions into account. We show that ESS is energy optimal in the sense that it achieves an energy consumption arbitrarily close to the global minimum. Simulation studies confirm the theoretical results.
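    A minimal sketch of the kind of backlog-versus-energy decision rule the abstract describes, in a drift-plus-penalty style. The constants V, P_ACTIVE, P_SLEEP, E_SWITCH and the threshold form are illustrative assumptions, not the paper's actual ESS algorithm.

        # Hedged sketch: a MaxWeight-like sleep/active rule in the spirit of
        # the abstract.  All constants and the threshold form are assumptions.
        V = 50.0          # energy/backlog trade-off knob (larger favors energy savings)
        P_ACTIVE = 10.0   # per-slot energy cost of the active mode (assumed units)
        P_SLEEP = 0.1     # per-slot energy cost of the sleep mode
        E_SWITCH = 5.0    # one-off energy cost of waking up from sleep

        def schedule_slot(queue_len, channel_rate, asleep):
            """Decide sleep vs. active for one slot.

            Go active only when the backlog-weighted service gain outweighs
            the V-scaled extra energy spent, including the wake-up cost if
            the node is currently asleep.
            """
            extra_energy = P_ACTIVE - P_SLEEP + (E_SWITCH if asleep else 0.0)
            weight = queue_len * channel_rate    # MaxWeight-style weight
            if weight > V * extra_energy:
                served = min(queue_len, channel_rate)
                return "active", served
            return "sleep", 0.0

    Raising V pushes the rule toward sleeping more (lower energy, larger backlog); this is the usual knob in drift-plus-penalty arguments of the kind the abstract's optimality claim suggests.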

    Resource Allocation in Wireless Networks with RF Energy Harvesting and Transfer

    Radio frequency (RF) energy harvesting and transfer techniques have recently emerged as alternative methods to power the next generation of wireless networks. Because this emerging technology enables proactive replenishment of wireless devices, it is advantageous for supporting applications with quality-of-service (QoS) requirements. This article focuses on resource allocation issues in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of RF-EHNs, followed by a review of a variety of resource allocation issues. Then, we present a case study on designing the receiver operation policy, which is of paramount importance in RF-EHNs; we focus on QoS support and service differentiation, which have not been addressed in the previous literature. Finally, we outline some open research directions. Comment: To appear in IEEE Network.
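    One simple way to picture a receiver operation policy with service differentiation: a threshold rule that chooses between harvesting RF energy and decoding information. The thresholds and the two-class scheme below are illustrative assumptions only; the article's actual case-study policy is not given in the abstract.

        def receiver_mode(battery, traffic_class,
                          default_threshold=0.5, premium_threshold=0.2):
            """Pick 'decode' or 'harvest' for the incoming RF signal.

            QoS-sensitive ('premium') traffic is decoded down to a lower
            battery level than best-effort traffic, which is one simple
            way to realize service differentiation.
            """
            threshold = premium_threshold if traffic_class == "premium" else default_threshold
            return "decode" if battery >= threshold else "harvest"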

    Dynamic Server Allocation over Time Varying Channels with Switchover Delay

    We consider a dynamic server allocation problem over parallel queues with randomly varying connectivity and server switchover delay between the queues. At each time slot, the server decides either to stay with the current queue or to switch to another queue, based on the current connectivity and queue-length information. Switchover delay occurs in many telecommunications applications and is a new modeling component of this problem that has not been previously addressed. We show that the simultaneous presence of randomly varying connectivity and switchover delay changes the system stability region and the structure of optimal policies. In the first part of the paper, we consider a system of two parallel queues and develop a novel approach to explicitly characterize the stability region of the system using state-action frequencies, which are stationary solutions to a Markov Decision Process (MDP) formulation. We then develop a frame-based dynamic control (FBDC) policy, based on the state-action frequencies, and show that it is throughput-optimal asymptotically in the frame length. The FBDC policy is applicable to a broad class of network control systems and provides a new framework for developing throughput-optimal network control policies using state-action frequencies. Furthermore, we develop simple myopic policies that provably achieve more than 90% of the stability region. In the second part of the paper, we extend our results to systems with an arbitrary but finite number of queues. Comment: 38 pages, 18 figures. arXiv admin note: substantial text overlap with arXiv:1008.234
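    An illustrative one-lookahead ("myopic") rule for the two-queue case, capturing the core idea that switching should be biased against because the switchover slot is wasted. The paper's exact myopic policies are not given in the abstract; the reward form and switch_bias parameter are assumptions.

        def myopic_action(current, queues, connected, switch_bias=1.0):
            """Return the queue index to serve next (may equal `current`).

            current     -- index of the queue the server is at (0 or 1)
            queues      -- queue lengths [q0, q1]
            connected   -- booleans: is each queue's channel up this slot?
            switch_bias -- extra weight demanded before paying the switchover slot
            """
            other = 1 - current
            stay_reward = queues[current] if connected[current] else 0.0
            # Switching forfeits the current slot, so discount the other queue.
            switch_reward = queues[other] - switch_bias if connected[other] else 0.0
            return current if stay_reward >= switch_reward else other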

    Lingering Issues in Distributed Scheduling

    Recent advances have resulted in queue-based algorithms for medium access control that operate in a distributed fashion yet achieve the optimal throughput performance of centralized scheduling algorithms. However, fundamental performance bounds reveal that the "cautious" activation rules involved in establishing throughput optimality tend to produce extremely large delays, typically growing exponentially in 1/(1-r), with r the load of the system, in contrast to the usual linear growth. Motivated by this issue, we explore to what extent more "aggressive" schemes can improve delay performance. Our main finding is that aggressive activation rules induce a lingering effect, where individual nodes retain possession of a shared resource for excessive lengths of time even while a majority of other nodes idle. Using central-limit-theorem type arguments, we prove that the idleness induced by the lingering effect may cause the delays to grow quadratically in 1/(1-r). To the best of our knowledge, these are the first mathematical results illuminating the lingering effect and quantifying its performance impact. In addition, extensive simulation experiments are conducted to illustrate and validate the analytical results.
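    A sketch of a queue-based activation rule in a CSMA-style distributed scheduler, with the "cautious" vs. "aggressive" distinction modeled, as an assumption, by how fast the activation aggressiveness f(q) grows in the queue length q; the paper's actual rules are not specified in the abstract.

        import math
        import random

        def activation_prob(q, rule="cautious"):
            """Probability of grabbing the channel when it is sensed idle."""
            if rule == "cautious":
                f = math.log(1.0 + q)   # slow growth: throughput-optimal, large delays
            else:
                f = float(q)            # aggressive: faster access, lingering risk
            return f / (1.0 + f)

        def try_activate(q, rule="cautious"):
            """Coin flip: does a node with backlog q seize the idle channel?"""
            return random.random() < activation_prob(q, rule)

    Under the aggressive rule, a node with a long queue keeps its activation probability near one and can hold the channel long after most other nodes have drained, which is the lingering effect the abstract analyzes.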