
    Energy Harvesting Networks with General Utility Functions: Near Optimal Online Policies

    We consider online scheduling policies for single-user energy harvesting communication systems, where the goal is to characterize online policies that maximize the long-term average utility for a general concave and monotonically increasing utility function. In our setting, the transmitter relies on energy harvested from nature to send its messages to the receiver, and is equipped with a finite-sized battery to store its energy. Energy packets are independent and identically distributed (i.i.d.) over time slots and are revealed causally to the transmitter; only the average arrival rate is known a priori. We first characterize the optimal solution for the case of Bernoulli arrivals. Then, for general i.i.d. arrivals, we show that fixed fraction policies [Shaviv-Ozgur] are within a constant multiplicative gap from the optimal solution for all energy arrivals and battery sizes. We then derive a set of sufficient conditions on the utility function that guarantee that fixed fraction policies are also within a constant additive gap from the optimal solution.
    Comment: To appear in the 2017 IEEE International Symposium on Information Theory. arXiv admin note: text overlap with arXiv:1705.1030
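
    As a rough illustration of the fixed fraction idea referenced above, the sketch below simulates a single-user link in which the transmitter spends, in every slot, a fixed fraction of its stored energy equal to the ratio of the mean arrival rate to the battery size. The exponential arrival model, the log(1 + power) utility, and all parameter values are assumptions made for this example, not quantities taken from the paper.

    import math
    import random

    def simulate_fixed_fraction(T=100_000, battery_cap=10.0, mean_arrival=2.0, seed=0):
        """Simulate a single-user energy harvesting link under a fixed fraction policy.

        Each slot the transmitter spends the fraction q = mean_arrival / battery_cap of
        the energy currently stored; utility is the concave, increasing log(1 + power).
        Arrival model and parameters are illustrative, not taken from the paper.
        """
        rng = random.Random(seed)
        q = min(mean_arrival / battery_cap, 1.0)            # fixed spending fraction
        battery, total_utility = 0.0, 0.0
        for _ in range(T):
            arrival = rng.expovariate(1.0 / mean_arrival)    # i.i.d. energy packet
            battery = min(battery + arrival, battery_cap)    # finite battery; overflow is lost
            power = q * battery                              # spend a fixed fraction of the stock
            total_utility += math.log(1.0 + power)
            battery -= power
        return total_utility / T                             # long-term average utility

    if __name__ == "__main__":
        print(f"average utility per slot: {simulate_fixed_fraction():.3f}")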

    Fast-Convergent Learning-aided Control in Energy Harvesting Networks

    In this paper, we present a novel learning-aided energy management scheme ($\mathtt{LEM}$) for multihop energy harvesting networks. Different from prior works on this problem, our algorithm explicitly incorporates information learning into system control via a step called \emph{perturbed dual learning}. $\mathtt{LEM}$ does not require any statistical information of the system dynamics for implementation, and efficiently resolves the challenging energy outage problem. We show that $\mathtt{LEM}$ achieves the near-optimal $[O(\epsilon), O(\log(1/\epsilon)^2)]$ utility-delay tradeoff with energy buffers of size $O(1/\epsilon^{1-c/2})$, $c\in(0,1)$. More interestingly, $\mathtt{LEM}$ possesses a \emph{convergence time} of $O(1/\epsilon^{1-c/2} + 1/\epsilon^c)$, which is much faster than the $\Theta(1/\epsilon)$ time of pure queue-based techniques or the $\Theta(1/\epsilon^2)$ time of approaches that rely purely on learning the system statistics. This fast convergence property makes $\mathtt{LEM}$ more adaptive and efficient in resource allocation in dynamic environments. The design and analysis of $\mathtt{LEM}$ demonstrate how system control algorithms can be augmented by learning and what the benefits are. The methodology and algorithm can also be applied to similar problems, e.g., processing networks, where nodes require a nonzero amount of content to support their actions.
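
    To make the learning-aided mechanism concrete, the sketch below uses a deliberately simplified single-queue model: an initial batch of samples is used to grid-search an empirical dual problem for the Lagrange multiplier, and standard drift-plus-penalty control then runs with that learned multiplier standing in for the backlog a purely queue-based scheme would first have to build up. The single-queue model, the action set, and the grid-search estimate are illustrative assumptions; this is not the multihop model or the exact perturbed dual learning step of LEM.

    import random

    ACTIONS = [0.0, 1.0, 2.0]                     # feasible service rates (illustrative)
    POWER = {0.0: 0.0, 1.0: 1.0, 2.0: 2.5}        # power cost of each rate (illustrative)
    V = 50.0                                      # utility-delay tradeoff parameter
    LAMBDA = 0.8                                  # mean arrival rate

    def empirical_dual_estimate(samples):
        """Grid-search the empirical dual function for its maximizer; a crude
        stand-in for the perturbed dual learning step, for illustration only."""
        mean_arrival = sum(samples) / len(samples)
        best_q, best_val = 0.0, float("-inf")
        for k in range(1000):
            q = 0.1 * k
            # dual value: min over actions of (V * power - q * rate), plus q * mean arrival
            val = min(V * POWER[a] - q * a for a in ACTIONS) + q * mean_arrival
            if val > best_val:
                best_q, best_val = q, val
        return best_q

    def run_control(T=20_000, learn_slots=500, seed=1):
        rng = random.Random(seed)
        arrivals = [rng.expovariate(1.0 / LAMBDA) for _ in range(T)]
        theta = empirical_dual_estimate(arrivals[:learn_slots])   # learning phase
        backlog, total_power = 0.0, 0.0
        for a_t in arrivals:
            # drift-plus-penalty with the learned multiplier added to the backlog,
            # so the queue does not need to grow to O(V) before the control is sensible
            weight = backlog + theta
            mu = min(ACTIONS, key=lambda a: V * POWER[a] - weight * a)
            total_power += POWER[mu]
            backlog = max(backlog + a_t - mu, 0.0)
        return total_power / T, backlog

    if __name__ == "__main__":
        avg_power, final_backlog = run_control()
        print(f"average power {avg_power:.3f}, final backlog {final_backlog:.2f}")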

    Distributed Optimization in Energy Harvesting Sensor Networks with Dynamic In-network Data Processing

    Energy Harvesting Wireless Sensor Networks (EH-WSNs) have been attracting increasing interest in recent years. Most current EH-WSN approaches focus on sensing and networking algorithm design, and therefore only consider the energy consumed by sensors and wireless transceivers for sensing and data transmissions, respectively. In this paper, we incorporate CPU-intensive edge operations that constitute in-network data processing (e.g. data aggregation/fusion/compression) together with sensing and networking, jointly optimizing their performance while ensuring sustainable network operation (i.e. no sensor node runs out of energy). Based on realistic energy and network models, we formulate a stochastic optimization problem and propose a lightweight online algorithm, namely Recycling Wasted Energy (RWE), to solve it. Through rigorous theoretical analysis, we prove that RWE achieves asymptotic optimality, bounded data queue size, and sustainable network operation. We implement RWE on a popular IoT operating system, Contiki OS, and evaluate its performance using both real-world experiments based on the FIT IoT-LAB testbed and extensive trace-driven simulations using Cooja. The evaluation results verify our theoretical analysis, and demonstrate that RWE can recycle more than 90% of the wasted energy caused by battery overflow and achieve around 300% network utility gain in practical EH-WSNs.
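
    As a rough, hypothetical illustration of the energy-recycling idea, the single-node sketch below spends battery headroom on queued CPU-intensive processing tasks whenever newly harvested energy would otherwise be lost to battery overflow. The task model, thresholds, and energy figures are assumptions made for this example and do not reproduce the RWE algorithm or its network-wide optimization.

    import random
    from collections import deque

    BATTERY_CAP = 100.0                            # mJ, illustrative
    E_SENSE, E_TX, E_CPU = 1.0, 2.0, 3.0           # per-slot costs: sensing, radio, edge processing

    def step(battery, harvest, deferred_tasks):
        """One control slot for a single node: run the baseline sensing/forwarding load,
        then opportunistically run deferred CPU-heavy tasks when energy would overflow."""
        battery = min(battery + harvest, BATTERY_CAP)     # overflow beyond the cap is wasted
        spent = 0.0
        if battery >= E_SENSE + E_TX:                     # baseline sensing + forwarding
            spent += E_SENSE + E_TX
        # recycling: if another harvest of similar size would overflow the battery,
        # burn the headroom on queued in-network processing instead of losing it
        while deferred_tasks and battery - spent + harvest > BATTERY_CAP and battery - spent >= E_CPU:
            deferred_tasks.popleft()
            spent += E_CPU
        return battery - spent

    def simulate(T=5_000, seed=2):
        rng = random.Random(seed)
        battery = 50.0
        tasks = deque(range(10_000))                      # backlog of aggregation/compression jobs
        initial = len(tasks)
        for _ in range(T):
            harvest = rng.uniform(0.0, 8.0)               # i.i.d. harvested energy, illustrative
            battery = step(battery, harvest, tasks)
        return initial - len(tasks), battery

    if __name__ == "__main__":
        processed, battery = simulate()
        print(f"processed {processed} deferred tasks, battery at {battery:.1f} mJ")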

    Decentralized Delay Optimal Control for Interference Networks with Limited Renewable Energy Storage

    In this paper, we consider delay minimization for interference networks with renewable energy sources, where the transmission power of a node comes from both the conventional utility power (AC power) and the renewable energy source. We assume the transmission power of each node is a function of the local channel state, local data queue state, and local energy queue state only. We consider two delay optimization formulations, namely the decentralized partially observable Markov decision process (DEC-POMDP) and the non-cooperative partially observable stochastic game (POSG). In the DEC-POMDP formulation, we derive a decentralized online learning algorithm to determine the control actions and Lagrange multipliers (LMs) simultaneously, based on the policy gradient approach. Under some mild technical conditions, the proposed decentralized policy gradient algorithm converges almost surely to a local optimal solution. In the non-cooperative POSG formulation, on the other hand, the transmitter nodes are non-cooperative. We extend the decentralized policy gradient solution and establish the technical proof of almost-sure convergence of the learning algorithms. In both cases, the solutions are very robust to model variations. Finally, the delay performance of the proposed solutions is compared with conventional baseline schemes for interference networks, and it is illustrated that substantial delay performance gains and energy savings can be achieved.
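
    The sketch below gives a heavily simplified, single-node version of the joint update described above: a stochastic (logistic) transmission policy driven by the local queue state is adjusted with a per-slot score-function (policy gradient) step on the Lagrangian, while the Lagrange multiplier pricing the average power budget is updated by dual ascent. The logistic policy, the single-queue model, and every constant are illustrative assumptions, not the paper's DEC-POMDP/POSG formulation or its interference model.

    import math
    import random

    P_MAX = 0.5                    # average power budget (illustrative)
    ALPHA, BETA = 0.01, 0.001      # policy and multiplier step sizes

    def transmit_prob(theta, queue):
        """Logistic policy: probability of transmitting given the local queue backlog."""
        z = theta[0] + theta[1] * queue
        z = max(min(z, 30.0), -30.0)               # clamp the logit for numerical safety
        return 1.0 / (1.0 + math.exp(-z))

    def train(T=200_000, seed=3):
        rng = random.Random(seed)
        theta, lam, queue = [0.0, 0.0], 0.0, 0.0
        for _ in range(T):
            p = transmit_prob(theta, queue)
            transmit = rng.random() < p
            power = 1.0 if transmit else 0.0
            served = 1.0 if transmit and rng.random() < 0.8 else 0.0   # channel success, illustrative
            arrival = 1.0 if rng.random() < 0.4 else 0.0
            next_queue = max(queue + arrival - served, 0.0)
            cost = next_queue + lam * (power - P_MAX)    # per-slot Lagrangian: backlog + priced power
            # score-function (REINFORCE-style) step, using the instantaneous cost
            # as a crude gradient signal for the logistic policy
            grad_logit = (1.0 - p) if transmit else -p
            theta[0] -= ALPHA * cost * grad_logit
            theta[1] -= ALPHA * cost * grad_logit * queue
            lam = max(lam + BETA * (power - P_MAX), 0.0) # dual ascent on the power multiplier
            queue = next_queue
        return theta, lam

    if __name__ == "__main__":
        theta, lam = train()
        print(f"policy parameters {theta}, multiplier {lam:.2f}")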