
    A Distributed ADMM Approach to Non-Myopic Path Planning for Multi-Target Tracking

    Full text link
This paper investigates non-myopic path planning of mobile sensors for multi-target tracking. The problem poses high computational complexity and/or requires high-level decision making. Existing works tackle these issues by heuristically assigning targets to each sensing agent and solving the resulting split problem per agent. However, such heuristic methods degrade target estimation performance because they ignore how the target state estimates evolve over time. In this work, we sidestep the task-assignment problem by reformulating the general non-myopic planning problem as a distributed optimization problem with respect to targets. By combining the alternating direction method of multipliers (ADMM) with a local trajectory optimization method, we solve the problem and automatically induce consensus (i.e., high-level decisions) among the targets. In addition, we propose a modified receding-horizon control (RHC) scheme and an edge-cutting method for efficient real-time operation. The proposed algorithm is validated through simulations in various scenarios.
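To make the consensus mechanism concrete, here is a minimal consensus-ADMM sketch in Python. It is not the paper's implementation: quadratic costs stand in for the per-target tracking objectives, the "trajectory" is a plain vector rather than a sensor path, and the penalty parameter rho and problem sizes are chosen arbitrarily. It only illustrates how per-target local updates, an averaging (consensus) step, and dual updates interact.

```python
# Minimal consensus-ADMM sketch (not the paper's implementation): each "target" i
# holds a local copy x_i of the shared sensor-trajectory variable, and ADMM drives
# the copies toward a consensus z. Quadratic costs f_i(x) = 0.5*||A_i x - b_i||^2
# stand in for the per-target tracking objectives; rho is the ADMM penalty parameter.
import numpy as np

rng = np.random.default_rng(0)
n_targets, dim, rho, iters = 4, 6, 1.0, 200

A = [rng.standard_normal((8, dim)) for _ in range(n_targets)]
b = [rng.standard_normal(8) for _ in range(n_targets)]

x = [np.zeros(dim) for _ in range(n_targets)]   # local trajectory copies
u = [np.zeros(dim) for _ in range(n_targets)]   # scaled dual variables
z = np.zeros(dim)                               # consensus trajectory

for _ in range(iters):
    # Local step: each target solves its own regularized subproblem
    # (closed form here because the stand-in cost is quadratic).
    for i in range(n_targets):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(dim),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # Consensus step: average the local copies (the "high-level decision").
    z = np.mean([x[i] + u[i] for i in range(n_targets)], axis=0)
    # Dual step: penalize disagreement with the consensus.
    for i in range(n_targets):
        u[i] += x[i] - z

print("consensus trajectory:", np.round(z, 3))
```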

    Communication Efficiency in Information Gathering through Dynamic Information Flow

    Get PDF
This thesis addresses the problem of how to improve the performance of multi-robot information gathering tasks by actively controlling the rate of communication between robots. Examples of such tasks include cooperative tracking and cooperative environmental monitoring. Communication is essential in such systems for both decentralised data fusion and decision making, but wireless networks impose capacity constraints that are frequently overlooked. While existing research has focussed on improving available communication throughput, the aim in this thesis is to develop algorithms that make more efficient use of the available communication capacity. Since information may be shared at various levels of abstraction, a further challenge is deciding where information should be processed, given the limits of the available computational resources. Therefore, the flow of information needs to be controlled based on the trade-off between communication limits, computation limits and information value. In this thesis, we approach the trade-off by introducing the dynamic information flow (DIF) problem. We suggest variants of DIF that consider either data fusion communication alone or both data fusion and decision making communication simultaneously. For the data fusion case, we propose efficient decentralised solutions that dynamically adjust the flow of information. For the decision making case, we present an algorithm for communication efficiency based on local LQ approximations of information gathering problems. The algorithm is then integrated with our solution for the data fusion case to produce a complete communication efficiency solution for information gathering. We analyse our suggested algorithms and present important performance guarantees. The algorithms are validated in a custom-designed decentralised simulation framework and through field-robotic experimental demonstrations.

    Communication-aware information gathering with dynamic information flow

    Full text link
© The Author(s) 2014. We are interested in the problem of how to improve estimation in multi-robot information gathering systems by actively controlling the rate of communication between robots. Communication is essential in such systems for decentralized data fusion and decision-making, but wireless networks impose capacity constraints that are frequently overlooked. In order to make efficient use of available capacity, it is necessary to consider a fundamental trade-off between communication cost, computation cost and information value. We introduce a new problem, dynamic information flow, that formalizes this trade-off in terms of decentralized constrained optimization. We propose algorithms that dynamically adjust the data rate of each communication link to maximize an information gain metric subject to constraints on communication and computation resources. The metric is balanced against the communication resources required to transmit data and the computation cost of processing sensor data to form observations. The optimization process selectively routes raw sensor data or processed observation data to zero, one or many robots. Our algorithms therefore allow large systems with many different types of sensors and computational resources to maximize information gain performance while satisfying realistic communication constraints. We also present experimental results with multiple ground robots and multiple sensor types that demonstrate the benefit of dynamic information flow in comparison to simpler bandwidth-limiting methods.
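As a rough illustration of the trade-off (not the authors' algorithm), the sketch below greedily assigns each link one of three modes, send nothing, send processed observations, or send raw sensor data, to maximize a notional information-gain score under shared communication and computation budgets; all link names, gains, and costs are invented for the example.

```python
# Illustrative sketch only: choose, per link, between sending nothing, processed
# observations, or raw sensor data so as to maximize total information gain under
# shared communication and computation budgets. A greedy gain-per-bandwidth
# heuristic stands in for the paper's decentralized constrained optimization.
from dataclasses import dataclass

@dataclass
class Option:
    link: str
    mode: str          # "raw" or "processed"
    info_gain: float   # estimated value of the data to the receiver
    bandwidth: float   # communication cost of this link mode
    compute: float     # processing cost at the receiving robot

options = [
    Option("A->B", "raw", 5.0, 4.0, 3.0),
    Option("A->B", "processed", 3.5, 1.0, 0.5),
    Option("C->B", "raw", 4.0, 5.0, 2.5),
    Option("C->B", "processed", 3.0, 1.5, 0.5),
]

bandwidth_budget, compute_budget = 5.0, 3.0
chosen, used_links = [], set()

# Greedy: highest info gain per unit of bandwidth first, at most one mode per link.
for opt in sorted(options, key=lambda o: o.info_gain / o.bandwidth, reverse=True):
    if opt.link in used_links:
        continue
    if opt.bandwidth <= bandwidth_budget and opt.compute <= compute_budget:
        chosen.append(opt)
        used_links.add(opt.link)
        bandwidth_budget -= opt.bandwidth
        compute_budget -= opt.compute

for opt in chosen:
    print(f"{opt.link}: send {opt.mode} data (gain {opt.info_gain})")
```

The decentralized formulation in the paper replaces this centralized greedy pass, but the quantities it balances, information value against communication and computation cost, are of the same kind.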

    Computation Offloading and Task Scheduling on Network Edge

    Get PDF
The Fifth-Generation (5G) networks facilitate the evolution of communication systems and accelerate a revolution in the Information Technology (IT) field. In the 5G era, wireless networks are anticipated to provide connectivity for billions of Mobile User Devices (MUDs) around the world and to support a variety of innovative use cases, such as autonomous driving, ubiquitous Internet of Things (IoT), and Internet of Vehicles (IoV). The novel use cases, however, usually incorporate compute-intensive applications, which generate enormous computing service demands with diverse and stringent service requirements. In particular, autonomous driving calls for prompt data processing for safety-related applications, IoT nodes deployed in remote areas need energy-efficient computing given limited on-board energy, and vehicles require low-latency computing for IoV applications in a highly dynamic network. To support these emerging computing service demands, Mobile Edge Computing (MEC), as a cutting-edge technology in 5G, utilizes computing resources on the network edge to provide computing services for MUDs within a Radio Access Network (RAN). The primary benefits of MEC can be described from two perspectives. From the perspective of MUDs, MEC enables low-latency and energy-efficient computing by allowing MUDs to offload their computation tasks to proximal edge servers, which are installed in access points such as cellular base stations, Road-Side Units (RSUs), and Unmanned Aerial Vehicles (UAVs). From the perspective of network operators, MEC allows a large amount of computing data to be processed on the network edge, thereby alleviating backhaul congestion.

MEC is thus a promising technology to support the computing demands of the novel 5G applications within the RAN. The key issue is to maximize the computation capability of the network edge to meet the diverse service requirements arising from these applications in dynamic network environments. The main technical challenges are: 1) how an edge server schedules its limited computing resources to optimize the Quality-of-Experience (QoE) in autonomous driving; 2) how computation loads are balanced between the edge server and IoT nodes to enable energy-efficient computing service provisioning; and 3) how multiple edge servers coordinate their computing resources to enable seamless and reliable computing services for high-mobility vehicles in IoV. In this thesis, we develop efficient computing resource management strategies for MEC, including computation offloading and task scheduling, to address these three technical challenges.

First, we study computation task scheduling to support real-time applications, such as localization and obstacle avoidance, for autonomous driving. In our considered scenario, autonomous vehicles periodically sense the environment, offload sensor data to an edge server for processing, and receive computing results from the edge server. Due to mobility and computing latency, a vehicle travels a certain distance between the instant of offloading its sensor data and the instant of receiving the computing result. Our objective is to design a scheduling scheme for the edge server that minimizes this traveled distance. The idea is to determine the processing order according to individual vehicle mobility and the computation capability of the edge server.
We formulate a Restless Multi-Armed Bandit (RMAB) problem, design a Whittle index-based stochastic scheduling scheme, and determine the index using a Deep Reinforcement Learning (DRL) method. The proposed scheduling scheme avoids the time-consuming policy exploration common in DRL scheduling approaches and makes effective decisions with low complexity. Extensive simulation results demonstrate that, with the proposed index-based scheme, the edge server can deliver computing results to the vehicles promptly while adapting to time-variant vehicle mobility.

Second, we study energy-efficient computation offloading and task scheduling for an edge server provisioning computing services for IoT nodes in remote areas. In the considered scenario, a UAV is equipped with computing resources and plays the role of an aerial edge server that collects and processes the computation tasks offloaded by ground MUDs. Given the service requirements of MUDs, we aim to maximize UAV energy efficiency by jointly optimizing the UAV trajectory, the user transmit power, and computation task scheduling. The resulting optimization problem is a nonconvex fractional program, and the Dinkelbach algorithm and the Successive Convex Approximation (SCA) technique are adopted to solve it. Furthermore, we decompose the problem into multiple subproblems for distributed and parallel solving. To cope with the case when knowledge of user mobility is limited, we apply a spatial distribution estimation technique to predict the locations of ground users so that the proposed approach remains valid. Simulation results demonstrate the effectiveness of the proposed approach in maximizing the energy efficiency of the UAV.

Third, we study collaboration among multiple edge servers in computation offloading and task scheduling to support computing services in IoV. In the considered scenario, vehicles traverse the coverage of edge servers and offload their tasks to their proximal edge servers. We develop a collaborative edge computing framework to reduce computing service latency and alleviate computing service interruption due to the high mobility of vehicles: 1) a Task Partition and Scheduling Algorithm (TPSA) is proposed to schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy; and 2) an artificial intelligence-based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process, and a DRL technique, namely deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. With the developed framework, the service cost, which includes computing service latency and a service failure penalty, can be minimized via optimal computation task scheduling and edge server selection. Simulation results show that the proposed AI-based collaborative computing approach adapts to a highly dynamic environment with outstanding performance.

In summary, we investigate computing resource management to optimize the QoE of MUDs in the coverage of an edge server, to improve energy efficiency for an aerial edge server while provisioning computing services, and to coordinate computing resources among edge servers to support MUDs with high mobility. The proposed approaches and theoretical results contribute to computing resource management for MEC in 5G and beyond.
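For the second study, the fractional structure is the key point. The following is a hedged, single-variable sketch of the Dinkelbach iteration on a toy "bits per joule" ratio; the throughput and energy models, the bounds, and the use of SciPy are illustrative stand-ins for the thesis's joint trajectory, power, and scheduling problem.

```python
# Hedged sketch of the Dinkelbach iteration for a fractional objective f(x)/g(x):
# toy throughput f(x) = log2(1 + x) over toy energy g(x) = x + 0.5 for a transmit
# power x in [0, 10]. Not the thesis's optimization problem.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: np.log2(1.0 + x)      # throughput-like numerator
g = lambda x: x + 0.5               # energy-like denominator (always positive)

lam, tol = 0.0, 1e-9
for _ in range(100):
    # Inner problem: maximize f(x) - lam*g(x)  <=>  minimize its negative.
    res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                          bounds=(0.0, 10.0), method="bounded")
    x_star = res.x
    gap = f(x_star) - lam * g(x_star)
    lam = f(x_star) / g(x_star)     # updated energy-efficiency estimate
    if abs(gap) < tol:              # Dinkelbach stopping criterion
        break

print(f"optimal power ~ {x_star:.3f}, energy efficiency ~ {lam:.3f} bits/J")
```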

    Spatial and temporal hierarchical decomposition methods for the optimal power flow problem

    Get PDF
The subject of this thesis is the development of spatial and temporal decomposition methods for the optimal power flow (OPF) problem, such as arises in combined transmission-distribution network topologies. In this context, we propose novel decomposition interfaces and an effective methodology for both the spatial and temporal dimensions, applicable to linear and non-linear representations of the OPF problem. These two decomposition strategies are combined with a Benders-based algorithm and have advantages in model building time, memory management and solving time. For example, in the 2880-period linear problems, the decomposition finds optimal solutions up to 50 times faster and allows even larger instances to be solved; and in multi-period non-linear problems with 48 periods, close-to-optimal feasible solutions are found 7 times faster. With these decompositions, detailed networks can be optimized in coordination, effectively exploiting the value of the time-linked elements at both the transmission and distribution levels while speeding up the solution process, preserving privacy, and adding flexibility when dealing with different models at each level. In the non-linear methodology, significant challenges such as active-set determination, instability and non-convex overestimations may hinder its effectiveness; these are addressed, making the proposed methodology more robust and stable. A test network was constructed by combining standard publicly available networks, resulting in nearly 1000 buses and lines with up to 8760 connected periods; several interfaces were presented depending on the problem type and its topology, using a modified Benders algorithm. Insight is given into why a Benders-based decomposition was used for this type of problem instead of a common alternative, ADMM. The methodology is useful mainly in two sets of applications: when highly detailed long-term linear operational problems need to be solved, such as in planning frameworks where the operational problems solved assume no prior knowledge; and in full AC-OPF problems where prior information from historic solutions can be used to speed up convergence.
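To show the Benders mechanics referred to above, here is a minimal sketch on a toy two-stage LP (not the thesis's transmission-distribution model): the master keeps the first-stage variable and a value-function proxy theta, the subproblem is priced through its dual, and optimality cuts are accumulated until the bounds meet. The toy data and the use of scipy.optimize.linprog are assumptions for illustration.

```python
# Minimal Benders-decomposition sketch on a toy two-stage LP (illustration only):
#   min  x + 2*y   s.t.  x + y >= 3,  0 <= x <= 2,  y >= 0.
# Master: min x + theta subject to accumulated cuts theta >= u*(3 - x).
# Subproblem dual: max u*(3 - x) s.t. 0 <= u <= 2 (dual of min 2y, y >= 3 - x).
import numpy as np
from scipy.optimize import linprog

cuts_A, cuts_b = [], []          # cuts stored in A_ub @ [x, theta] <= b_ub form
tol = 1e-6

for it in range(20):
    # Master problem over (x, theta), with theta >= 0 as an initial lower bound.
    res_m = linprog(c=[1.0, 1.0],
                    A_ub=np.array(cuts_A) if cuts_A else None,
                    b_ub=np.array(cuts_b) if cuts_b else None,
                    bounds=[(0.0, 2.0), (0.0, None)], method="highs")
    x_k, theta_k = res_m.x
    lower = x_k + theta_k

    # Subproblem dual priced at x_k.
    res_s = linprog(c=[-(3.0 - x_k)], bounds=[(0.0, 2.0)], method="highs")
    u_k = res_s.x[0]
    q_k = u_k * (3.0 - x_k)      # second-stage cost at x_k (by strong duality)
    upper = x_k + max(q_k, 0.0)

    if upper - lower <= tol:     # cuts certify optimality
        break
    # Optimality cut theta >= u_k*(3 - x), rewritten as -u_k*x - theta <= -3*u_k.
    cuts_A.append([-u_k, -1.0])
    cuts_b.append(-3.0 * u_k)

print(f"x* = {x_k:.3f}, cost = {upper:.3f} after {it + 1} Benders iterations")
```

The spatial and temporal decompositions in the thesis follow the same loop structure, with the master and subproblems split along network and time interfaces instead of a single coupling variable.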

    Demand Side Management in the Smart Grid

    Get PDF

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    Get PDF
This paper casts the coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed in which multi-model adaptive filters are used to estimate the other players' strategies. The proposed algorithm can be used as a coordination mechanism between players when they must take decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players and also the uncertainty, which can arise either as noisy observations or as different types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions aggregate the results across all parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
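As a hedged illustration of the idea (not the paper's algorithm), the sketch below runs fictitious play on a 2x2 coordination game in which each player's belief about its opponent is a likelihood-weighted mixture of several exponential-forgetting estimators, a simple stand-in for the multi-model adaptive filter bank; the game, the forgetting factors and the weighting rule are all invented for the example.

```python
# Hedged illustration: fictitious play on a 2x2 coordination game, with each
# player's opponent-strategy estimate formed as a likelihood-weighted mixture of
# several exponential-forgetting models (a stand-in for a multi-model filter bank).
import numpy as np

payoff = np.array([[1.0, 0.0],
                   [0.0, 2.0]])          # symmetric coordination game
lambdas = [0.80, 0.95, 0.99]             # forgetting factors of the model bank

def make_player():
    return {"est": [np.full(2, 0.5) for _ in lambdas],          # per-model estimates
            "w": np.full(len(lambdas), 1.0 / len(lambdas))}     # model weights

players = [make_player(), make_player()]

def best_response(p):
    belief = sum(w * e for w, e in zip(p["w"], p["est"]))  # fused opponent estimate
    return int(np.argmax(payoff @ belief))

for t in range(200):
    actions = [best_response(players[0]), best_response(players[1])]
    for i, p in enumerate(players):
        obs = np.eye(2)[actions[1 - i]]                    # opponent's observed action
        # Reweight models by how well each predicted the observation ...
        likes = np.array([e @ obs for e in p["est"]])
        p["w"] = p["w"] * likes / (p["w"] @ likes)
        # ... then update each model's estimate with its own forgetting factor.
        p["est"] = [lam * e + (1 - lam) * obs for lam, e in zip(lambdas, p["est"])]

print("player 0 belief about player 1:",
      np.round(sum(w * e for w, e in zip(players[0]["w"], players[0]["est"])), 3))
```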