Learning for Cross-layer Resource Allocation in the Framework of Cognitive Wireless Networks
The framework of cognitive wireless networks is expected to endow wireless devices with the cognitive intelligence to learn efficiently and respond to the dynamic wireless environment. In this dissertation, we focus on the problem of developing cognitive network control mechanisms without knowing in advance an accurate network model. We study a series of cross-layer resource allocation problems in cognitive wireless networks. Based on model-free learning, optimization and game theory, we propose a framework of self-organized, adaptive strategy learning for wireless devices to (implicitly) build an understanding of the network dynamics through trial and error.
The work of this dissertation is divided into three parts. In the first part, we investigate a distributed, single-agent decision-making problem for real-time video streaming over a time-varying wireless channel between a single pair of transmitter and receiver. By modeling the joint source-channel resource allocation process for video streaming as a constrained Markov decision process, we propose a reinforcement learning scheme to search for the optimal transmission policy without the need to know in advance the details of network dynamics.
In the second part of this work, we extend our study from the single-agent to a multi-agent decision-making scenario, and study the energy-efficient power allocation problems in a two-tier, underlay heterogeneous network and in a self-sustainable green network. For the heterogeneous network, we propose a stochastic learning algorithm based on repeated games that allows individual macro- or femto-users to find a Stackelberg equilibrium without flooding the network with local action information. For the self-sustainable green network, we propose a combinatorial auction mechanism that allows mobile stations to adaptively choose the optimal base station and sub-carrier group for transmission using only local payoff and transmission-strategy information.
In the third part of this work, we study a cross-layer routing problem in an interweaved Cognitive Radio Network (CRN), where an accurate network model is not available and the secondary users distributed within the CRN only have access to local action/utility information. In order to develop a spectrum-aware routing mechanism that is robust against potential insider attackers, we model the uncoordinated interaction between CRN nodes in the dynamic wireless environment as a stochastic game. Through decomposition of the stochastic routing game, we propose two stochastic learning algorithms based on a group of repeated stage games that allow the secondary users to learn best-response strategies without the need for information flooding.
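The model-free idea running through this dissertation, learning a transmission policy by trial and error without an explicit model of the network dynamics, can be sketched with generic tabular Q-learning. This is a simplification, not the dissertation's constrained-MDP scheme; the `step` environment interface and the toy channel states, actions, and rewards used below are illustrative assumptions:

```python
import random

def q_learning(states, actions, step, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: learn a transmission policy by trial and error,
    with no explicit model of the channel dynamics (a generic sketch, not
    the dissertation's constrained-MDP algorithm)."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    s = random.choice(states)
    for _ in range(episodes):
        # epsilon-greedy exploration over transmission actions
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)  # the environment reveals reward and next state
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
    # extract the greedy policy from the learned action values
    return {s: max(actions, key=lambda act: Q[(s, act)]) for s in states}
```

On a toy two-state channel where high-power transmission pays off only on a good channel, the learned policy recovers the obvious rule without ever being given the transition model.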
Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions
The focus of this paper is on solving multi-robot planning problems in
continuous spaces with partial observability. Decentralized partially
observable Markov decision processes (Dec-POMDPs) are general models for
multi-robot coordination problems, but representing and solving Dec-POMDPs is
often intractable for large problems. To allow for a high-level representation
that is natural for multi-robot problems and scalable to large discrete and
continuous problems, this paper extends the Dec-POMDP model to the
decentralized partially observable semi-Markov decision process (Dec-POSMDP).
The Dec-POSMDP formulation allows asynchronous decision-making by the robots,
which is crucial in multi-robot domains. We also present an algorithm for
solving this Dec-POSMDP which is much more scalable than previous methods since
it can incorporate closed-loop belief space macro-actions in planning. These
macro-actions are automatically constructed to produce robust solutions. The
proposed method's performance is evaluated on a complex multi-robot package
delivery problem under uncertainty, showing that our approach can naturally
represent multi-robot problems and provide high-quality solutions for
large-scale problems.
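The asynchronous decision-making that the Dec-POSMDP formulation enables can be illustrated with a minimal event-driven loop: each robot commits to a macro-action of some duration and only makes a new high-level decision when that macro-action terminates, so decision epochs differ across robots. This is a sketch of the execution model only, not the paper's planner; the `MacroAction` class and `choose` callback are illustrative assumptions:

```python
from dataclasses import dataclass
import heapq

@dataclass
class MacroAction:
    name: str
    duration: int  # variable completion time is what makes the process semi-Markov

def run_async(robots, choose, horizon):
    """Event-driven execution sketch: each robot re-decides only when its own
    macro-action terminates, so decisions are asynchronous across robots."""
    events = [(0, r) for r in robots]  # (completion time, robot)
    heapq.heapify(events)
    trace = []
    while events:
        t, r = heapq.heappop(events)
        if t >= horizon:
            continue
        m = choose(r, t)  # high-level decision at this robot's own epoch
        trace.append((t, r, m.name))
        heapq.heappush(events, (t + m.duration, r))
    return trace
```

With macro-actions of different durations, the two robots' decision times interleave rather than advancing in lock-step, which is the asynchrony a flat Dec-POMDP with fixed time steps cannot express directly.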
Decentralized Cooperative Planning for Automated Vehicles with Hierarchical Monte Carlo Tree Search
Today's automated vehicles lack the ability to cooperate implicitly with
others. This work presents a Monte Carlo Tree Search (MCTS) based approach for
decentralized cooperative planning using macro-actions for automated vehicles
in heterogeneous environments. Based on cooperative modeling of other agents
and Decoupled-UCT (a variant of MCTS), the algorithm evaluates the
state-action-values of each agent in a cooperative and decentralized manner,
explicitly modeling the interdependence of actions between traffic
participants. Macro-actions allow for temporal extension over multiple time
steps and increase the effective search depth requiring fewer iterations to
plan over longer horizons. Without predefined policies for macro-actions, the
algorithm simultaneously learns policies over and within macro-actions. The
proposed method is evaluated under several conflict scenarios, showing that the
algorithm can achieve effective cooperative planning with learned macro-actions
in heterogeneous environments.
Cost Adaptation for Robust Decentralized Swarm Behaviour
Decentralized receding horizon control (D-RHC) provides a mechanism for
coordination in multi-agent settings without a centralized command center.
However, combining a set of different goals, costs, and constraints to form an
efficient optimization objective for D-RHC can be difficult. To allay this
problem, we use a meta-learning process -- cost adaptation -- which generates
the optimization objective for D-RHC to solve based on a set of human-generated
priors (cost and constraint functions) and an auxiliary heuristic. We use this
adaptive D-RHC method for control of mesh-networked swarm agents. This
formulation allows a wide range of tasks to be encoded and can account for
network delays, heterogeneous capabilities, and increasingly large swarms
through the adaptation mechanism. We leverage the Unity3D game engine to build
a simulator capable of introducing artificial networking failures and delays in
the swarm. Using the simulator we validate our method on an example coordinated
exploration task. We demonstrate that cost adaptation allows for more efficient
and safer task completion under varying environment conditions and increasingly
large swarm sizes. We release our simulator and code to the community for
future work.
Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
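The cost-adaptation step, generating the D-RHC optimization objective from human-given cost priors and an auxiliary heuristic, might be sketched as a simple re-weighting of the priors. The additive weight update and the dictionary-based interface below are illustrative assumptions, not the paper's formulation:

```python
def adapt_weights(weights, priors, state, heuristic, lr=0.5):
    """Cost-adaptation sketch: re-weight human-given cost priors using an
    auxiliary heuristic, then return the combined objective that each agent's
    local receding-horizon optimizer would minimize."""
    signals = heuristic(state)  # e.g. raise the collision weight near neighbors
    new_w = {k: w + lr * signals.get(k, 0.0) for k, w in weights.items()}
    total = sum(new_w.values())
    new_w = {k: w / total for k, w in new_w.items()}  # keep weights normalized
    objective = lambda u: sum(new_w[k] * priors[k](state, u) for k in priors)
    return new_w, objective
```

The normalization keeps the relative trade-off between priors explicit, so a heuristic signal (say, imminent collision risk) shifts weight toward safety costs without any prior being discarded outright.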
Hybrid behavioural-based multi-objective space trajectory optimization
In this chapter we present a hybridization of a stochastic search approach for multi-objective optimization with a deterministic domain decomposition of the solution space. Prior to the presentation of the algorithm we introduce a general formulation of the optimization problem that is suitable to describe both single- and multi-objective problems. The stochastic approach, based on behaviorism, combined with the decomposition of the solution space, was tested on a set of standard multi-objective optimization problems and on a simple but representative case of space trajectory design.
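The hybrid scheme, a deterministic decomposition of the solution space with a stochastic search inside each subdomain, can be illustrated on a one-dimensional bi-objective problem. The uniform sampling and the simple non-dominated archive below stand in for the chapter's behaviorism-based search and are illustrative only:

```python
import random

def dominates(f, g):
    """Pareto dominance for minimization: f is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def search_decomposed(objectives, bounds, n_sub=4, samples=200, seed=0):
    """Hybrid sketch: split the solution space into subdomains deterministically,
    sample stochastically inside each, and merge the non-dominated solutions."""
    rng = random.Random(seed)
    lo, hi = bounds
    width = (hi - lo) / n_sub
    archive = []
    for k in range(n_sub):            # deterministic decomposition
        for _ in range(samples):      # stochastic search inside the subdomain
            x = lo + k * width + rng.random() * width
            fx = tuple(f(x) for f in objectives)
            if not any(dominates(g, fx) for _, g in archive):
                archive = [(y, g) for y, g in archive if not dominates(fx, g)]
                archive.append((x, fx))
    return archive
```

On the classic pair f1(x) = x^2, f2(x) = (x - 2)^2, the merged archive concentrates on the known Pareto set, roughly the interval [0, 2].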
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and
resource constraints is computationally challenging, yet human domain experts
can solve these difficult scheduling problems using paradigms learned through
years of apprenticeship. A process for manually codifying this domain knowledge
within a computational framework is necessary to scale beyond the
"single-expert, single-trainee" apprenticeship model. However, human domain
experts often have difficulty describing their decision-making processes,
causing the codification of this knowledge to become laborious. We propose a
new approach for capturing domain-expert heuristics through a pairwise ranking
formulation. Our approach is model-free and does not require enumerating or
iterating through a large state space. We empirically demonstrate that this
approach accurately learns multifaceted heuristics on a synthetic data set
incorporating job-shop scheduling and vehicle routing problems, as well as on
two real-world data sets consisting of demonstrations of experts solving a
weapon-to-target assignment problem and a hospital resource allocation problem.
We also demonstrate that policies learned from human scheduling demonstration
via apprenticeship learning can substantially improve the efficiency of a
branch-and-bound search for an optimal schedule. We employ this human-machine
collaborative optimization technique on a variant of the weapon-to-target
assignment problem. We demonstrate that this technique generates solutions
substantially superior to those produced by human domain experts at a rate up
to 9.5 times faster than an optimization approach and can be applied to
optimally solve problems twice as complex as those solved by a human
demonstrator.
Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
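The pairwise-ranking formulation for capturing expert scheduling heuristics can be sketched as follows: each expert decision point yields pairs of (chosen task, alternative task), and a linear scorer is trained so the chosen task outranks every alternative. The perceptron-style update and the deadline/duration features in the example are illustrative assumptions, not the paper's learner:

```python
def learn_ranker(demonstrations, n_features, epochs=50, lr=0.1):
    """Pairwise-ranking sketch: train a linear scorer on feature differences so
    the expert's chosen task scores above each alternative. Model-free: no
    state enumeration, only observed decisions are used."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for chosen, alternatives in demonstrations:
            for alt in alternatives:
                diff = [c - a for c, a in zip(chosen, alt)]
                score = sum(wi * di for wi, di in zip(w, diff))
                if score <= 0:  # expert's pick not ranked first: update
                    w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def pick(w, candidates):
    # schedule the highest-scoring candidate, mimicking the expert's heuristic
    return max(candidates, key=lambda f: sum(wi * fi for wi, fi in zip(w, f)))
```

Once learned, the scorer can also order the children of a branch-and-bound node, which is the sense in which such a policy can guide an exact search toward good schedules early.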