
    Weighted Age of Information based Scheduling for Large Population Games on Networks

    In this paper, we consider a discrete-time multi-agent system involving N cost-coupled networked rational agents solving a consensus problem and a central Base Station (BS) scheduling agent communications over a network. Due to a hard bandwidth constraint on the number of transmissions through the network, at most R_d < N agents can concurrently access their state information through the network. Under standard assumptions on the information structure of the agents and the BS, we first show that the control actions of the agents are free of any dual effect, allowing for separation between estimation and control problems at each agent. Next, we propose a weighted age of information (WAoI) metric for the scheduling problem of the BS, where the weights depend on the estimation error of the agents. The BS aims to find the optimal scheduling policy that minimizes the WAoI, subject to the hard bandwidth constraint. Since this problem is NP-hard, we first relax the hard constraint to a soft update-rate constraint, and then compute an optimal policy for the relaxed problem by reformulating it as a Markov Decision Process (MDP). This then inspires a sub-optimal policy for the bandwidth-constrained problem, which is shown to approach the optimal policy as N → ∞. Next, we solve the consensus problem using the mean-field game framework, wherein we first design decentralized control policies for a limiting case of the N-agent system (as N → ∞). By explicitly constructing the mean-field system, we prove the existence and uniqueness of the mean-field equilibrium. Consequently, we show that the obtained equilibrium policies constitute an ε-Nash equilibrium for the finite-agent system. Finally, we validate the performance of both the scheduling and the control policies through numerical simulations. (Comment: This work has been submitted to the IEEE for possible publication.)
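A minimal sketch of the kind of weighted-age scheduling described above: at each slot the base station serves the R_d agents with the largest weight-times-age product. The greedy index rule and the random stand-in weights are illustrative assumptions, not the paper's MDP-derived policy.

```python
import numpy as np

def waoi_schedule(ages, weights, R_d):
    """Serve the R_d agents with the largest weighted age (a greedy
    illustration, not the paper's MDP-derived scheduling policy)."""
    return np.argsort(weights * ages)[-R_d:]

rng = np.random.default_rng(0)
N, R_d = 20, 4
ages = np.ones(N)
for _ in range(50):
    weights = rng.uniform(0.5, 1.5, N)   # stand-in for estimation-error weights
    served = waoi_schedule(ages, weights, R_d)
    ages += 1.0                          # information at every agent ages by one slot
    ages[served] = 1.0                   # served agents receive fresh state information
```

Under the hard bandwidth constraint, exactly R_d agents are served per slot; the weights would in practice be driven by each agent's estimation error rather than drawn at random.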

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, the network should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network-architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. (Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.)
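The limited-feedback ("bandit") approaches the paper motivates can be illustrated with a minimal ε-greedy sketch; the arm-selection rule and running-mean update below are textbook components, not the paper's specific algorithms.

```python
import random

def epsilon_greedy(estimates, eps=0.1):
    """Pick a random arm with probability eps (exploration),
    otherwise the arm with the best current estimate (exploitation)."""
    if random.random() < eps:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

def update(estimates, counts, arm, reward):
    """Incremental running-mean update for the chosen arm."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```

In an IoT setting, an "arm" might be a candidate edge server or transmission configuration, with the reward observed only for the option actually tried, which is the limited-feedback regime the paper refers to.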

    Sampling for Remote Estimation of the Wiener Process over an Unreliable Channel

    In this paper, we study a sampling problem where a source takes samples from a Wiener process and transmits them through a wireless channel to a remote estimator. Due to channel fading, interference, and potential collisions, the packet transmissions are unreliable and can take random time durations. Our objective is to devise an optimal causal sampling policy that minimizes the long-term average mean-square estimation error. This optimal sampling problem is a recursive optimal stopping problem, which is generally quite difficult to solve. However, we prove that the optimal sampling strategy is, in fact, a simple threshold policy, where a new sample is taken whenever the instantaneous estimation error exceeds a threshold. This threshold remains constant and does not vary over time. By exploiting the structural properties of the recursive optimal stopping problem, we develop a low-complexity iterative algorithm to compute the optimal threshold. This work generalizes previous research by incorporating both transmission errors and random transmission times into remote estimation. Numerical simulations are provided to compare our optimal policy with the zero-wait and age-optimal policies. (Comment: Accepted by ACM SIGMETRICS; will appear in the ACM POMACS journal.)
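The threshold policy can be illustrated with a small Monte Carlo sketch over an idealized, instantaneous and reliable channel (the paper's setting with random delays and unreliable transmissions is more involved, and the threshold below is user-chosen rather than computed by the paper's iterative algorithm):

```python
import numpy as np

def simulate_threshold_sampling(beta, T=50_000, dt=0.01, seed=0):
    """Monte Carlo sketch: sample the Wiener process whenever the
    instantaneous estimation error exceeds the threshold beta.
    Assumes an instantaneous, reliable channel, unlike the paper."""
    rng = np.random.default_rng(seed)
    w = w_hat = 0.0
    sq_err = 0.0
    n_samples = 0
    for _ in range(T):
        w += rng.normal(0.0, np.sqrt(dt))   # Wiener increment over dt
        if abs(w - w_hat) >= beta:          # constant-threshold sampling rule
            w_hat = w                       # estimator receives a fresh sample
            n_samples += 1
        sq_err += (w - w_hat) ** 2
    return sq_err / T, n_samples
```

Raising the threshold trades estimation accuracy for fewer transmissions, which is exactly the tension the optimal threshold resolves.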

    Distributed Operation of Uncertain Dynamical Cyberphysical Systems

    In this thesis we address challenging issues faced in the operation of important cyber-physical systems of great current interest. The two particular systems that we address are communication networks and the smart grid. Both systems feature distributed agents making decisions in dynamic, uncertain environments. In communication networks, nodes need to decide which packets to transmit, while in the power grid individual generators and loads need to decide how much to produce or consume in a dynamic, uncertain environment. The goal in both systems, which also holds for other cyber-physical systems, is to develop distributed policies that perform efficiently in uncertain, dynamically changing environments. This thesis proposes an approach of employing duality theory on dynamic stochastic systems in such a way as to develop such distributed operating policies for cyber-physical systems.
In the first half of the thesis we examine communication networks. Many cyber-physical systems, e.g., sensor networks, mobile ad-hoc networks, or networked control systems, involve transmitting data over multiple hops of a communication network. These networks can be unreliable, for example due to the unreliability of the wireless medium. However, real-time applications in cyber-physical systems often require that requisite amounts of data be delivered in a timely manner so that they can be utilized for safely controlling physical processes. Data packets may need to be delivered within their deadlines, or at regular intervals without large gaps in packet deliveries when carrying sensor readings. How such packets with deadlines can be scheduled over networks is a major challenge for cyber-physical systems. We develop a framework for routing and scheduling such data packets in a multi-hop network. This framework employs duality theory in such a way that the actions of nodes get decoupled, resulting in efficient decentralized policies for routing and scheduling in such multi-hop communication networks. A key feature of the scheduling policy derived in this work is that scheduling decisions can be made in a fully distributed fashion: a decision regarding the scheduling of an individual packet depends only on the age and location of the packet, and does not require sharing of the queue lengths at the various nodes. We examine in more detail a network in which multiple clients stream video packets over shared wireless networks, and we are able to derive simple policies of threshold type which maximize the combined QoE of the users.
We then turn to another important cyber-physical system of great current interest: the emerging smarter grid for electrical power. We address some fundamental problems that arise when attempting to increase the utilization of renewable energy sources. A major challenge is that renewable energy sources are unpredictable in their availability; utilizing them requires adapting demand to their uncertain availability. We address the problem faced by the system operator of coordinating sources of power and loads to balance stochastically time-varying supply and demand while maximizing the total utility of all agents in the system. We develop policies for the system operator, which is charged with coordinating such distributed entities through a notion of price. We analyze some models for such systems and employ a combination of duality theory and analysis of stochastic dynamic systems to develop policies that maximize the total utility of all the agents. We also address how the size of energy storage facilities should scale with respect to the stochastic behavior of renewables in order to mitigate the unreliability of renewable energy sources.
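The fully distributed scheduling property, where a packet's priority depends only on its own age and location, can be sketched with a hypothetical slack-based index; the actual dual-decomposition index from the thesis is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    age: int        # slots elapsed since the packet was generated
    hops_left: int  # remaining hops to the destination (location information)
    deadline: int   # relative deadline, in slots

def priority(p: Packet) -> int:
    """Hypothetical urgency index: less slack means higher priority.
    Uses only the packet's own age and location, as in the thesis,
    but the index itself is an illustrative stand-in."""
    return -(p.deadline - p.age - p.hops_left)

def schedule(local_queue):
    """Each node picks its most urgent packet from purely local information,
    with no sharing of queue lengths between nodes."""
    return max(local_queue, key=priority, default=None)
```

The key point is that `schedule` needs no global state: every node can evaluate the index for the packets it currently holds and transmit the most urgent one.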

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigating the efficiency of near-optimal deployment of large-scale, CPU-intensive Bag-of-Tasks (BoT) applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions apply whether the running time of the given application is known or unknown, and optimize the user's utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a schema to provide a near-optimal deployment of a BoT application with respect to the user's preferences. Our approach is to provide the user with a set of Pareto-optimal solutions, from which she may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in the tasks' execution times, using two methods. First, we present an estimation method based on Monte Carlo sampling, called the AA algorithm, which uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code-analysis, code-profiling, or estimation tools, we present a hybrid method that evaluates the accuracy of each estimation tool over certain time intervals to improve resource-allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided by the user and environmental parameters. Then, we exploit dynamic methods to assign tasks to resources so as to approach the optimal schema as closely as possible, using two methods: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset-sum problem. Extensive experimental results on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
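The Pareto-optimal solution set presented to the user can be sketched as a simple dominance filter over (makespan, cost) pairs; the deployment points below are made-up numbers, not results from the paper.

```python
def pareto_front(points):
    """Keep the (makespan, cost) pairs not dominated by any other point,
    where a dominating point is no worse in both objectives and distinct
    (both objectives are minimized)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Hypothetical deployment options: (makespan in hours, cost in dollars).
deployments = [(10, 5.0), (8, 7.0), (12, 4.0), (9, 9.0), (8, 6.0)]
front = pareto_front(deployments)
```

The user then picks a single point from `front` according to her internal utility function, e.g. the cheapest deployment whose makespan is acceptable.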

    Decision uncertainty minimization and autonomous information gathering

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 272-283).
Over the past several decades, technologies for remote sensing and exploration have become increasingly powerful but continue to face limitations in the areas of information gathering and analysis. These limitations affect technologies that use autonomous agents, which are devices that can make routine decisions independent of operator instructions. Bandwidth and other communication limitations require that autonomous agents differentiate between relevant and irrelevant information in a computationally efficient manner. This thesis presents a novel approach to this problem by framing it as an adaptive sensing problem. Adaptive sensing allows agents to modify their information-collection strategies in response to the information gathered in real time. We developed and tested optimization algorithms that apply information guides to Monte Carlo planners. Information guides provide a mechanism by which the algorithms may blend online (real-time) and offline (previously simulated) planning in order to incorporate uncertainty into the decision-making process. This greatly reduces computational operations as well as decisional and communications overhead. We begin by introducing a 3-level hierarchy that visualizes adaptive sensing at the synoptic (global), mesoscale (intermediate), and microscale (close-up) levels (a spatial hierarchy). We then introduce new algorithms for decision uncertainty minimization (DUM) and representational uncertainty minimization (RUM). Finally, we demonstrate the utility of this approach to real-world sensing problems, including bathymetric mapping and disaster relief. We also examine its potential in space-exploration tasks by describing its use in a hypothetical aerial exploration of Mars. Our ultimate goal is to facilitate future large-scale missions to extraterrestrial objects for the purposes of scientific advancement and human exploration.
by Lawrence A. M. Bush. Ph.D.
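The idea of blending offline (previously simulated) and online (real-time) planning via an information guide can be sketched as a weighted combination of value estimates; the blending weight and function names below are assumptions for illustration, not the thesis's DUM/RUM algorithms.

```python
def blended_value(state, offline_value, simulate, n_rollouts=100, w=0.5):
    """Blend a precomputed (offline) value estimate with the mean return of
    online Monte Carlo rollouts. `w` is an assumed blending weight and
    `offline_value`/`simulate` are hypothetical callables supplied by the
    planner, not interfaces taken from the thesis."""
    online = sum(simulate(state) for _ in range(n_rollouts)) / n_rollouts
    return w * offline_value(state) + (1.0 - w) * online
```

The offline term acts as the "guide": it steers the planner when few rollouts are affordable, while the online term corrects it with information gathered in real time.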