Isomorph-Free Branch and Bound Search for Finite State Controllers
The recent proliferation of smart-phones and other wearable devices has led
to a surge of new mobile applications. Partially observable Markov decision
processes provide a natural framework to design applications that
continuously make decisions based on noisy sensor measurements. However,
given the limited battery life, there is a need to minimize the amount of
online computation. This can be achieved by compiling a policy into a
finite state controller since there is no need for belief monitoring or
online search. In this paper, we propose a new branch and bound technique
to search for a good controller. In contrast to many existing algorithms
for controllers, our search technique is not subject to local optima. We
also show how to reduce the amount of search by avoiding the enumeration of
isomorphic controllers and by taking advantage of suitable upper and lower
bounds. The approach is demonstrated on several benchmark problems as well
as a smart-phone application that assists persons with Alzheimer's in wayfinding.
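To make the appeal of controllers for battery-limited devices concrete, the following is a minimal sketch (not the paper's code) of executing a policy that has been compiled into a finite state controller: each controller node is labelled with an action, and observations only drive transitions between nodes, so no belief monitoring or online search is needed. The node names, actions, and observations are illustrative placeholders.

```python
class FiniteStateController:
    def __init__(self, action_of_node, next_node, start_node):
        self.action_of_node = action_of_node  # node -> action to execute
        self.next_node = next_node            # (node, observation) -> next node
        self.node = start_node                # current controller node

    def act(self):
        # Online computation is a single table lookup; no belief update needed.
        return self.action_of_node[self.node]

    def observe(self, observation):
        # Transition the controller on the received (possibly noisy) observation.
        self.node = self.next_node[(self.node, observation)]


# Hypothetical two-node controller for illustration.
fsc = FiniteStateController(
    action_of_node={0: "prompt", 1: "wait"},
    next_node={(0, "ok"): 1, (0, "lost"): 0, (1, "ok"): 1, (1, "lost"): 0},
    start_node=0,
)
action = fsc.act()
fsc.observe("lost")
```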
Probabilistic Inference Techniques for Scalable Multiagent Decision Making
Decentralized POMDPs provide an expressive framework for multiagent sequential decision making. However, the complexity of these models (NEXP-complete even for two agents) has limited their scalability. We present a promising new class of approximation algorithms by developing novel connections between multiagent planning and machine learning. We show how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs). This planning-as-inference approach paves the way for the application of efficient inference techniques in DBNs to multiagent decision making. To further improve scalability, we identify certain conditions that are sufficient to extend the approach to multiagent systems with dozens of agents. Specifically, we show that the necessary inference within the expectation-maximization framework can be decomposed into processes that often involve a small subset of agents, thereby facilitating scalability. We further show that a number of existing multiagent planning models satisfy these conditions. Experiments on large planning benchmarks confirm the benefits of our approach in terms of runtime and scalability with respect to existing techniques.
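The core planning-as-inference idea can be illustrated in a drastically simplified single-agent form: treat (bounded, normalized) reward as the likelihood of a fictitious "success" observation and run EM, re-estimating the policy from reward-weighted action counts. The sketch below is only an illustration of that idea under assumed toy dynamics; the paper's actual construction (a mixture of DBNs for Dec-POMDPs with many agents and decomposed E-steps) is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, num_actions, horizon = 3, 2, 4

# Hypothetical dynamics P[s, a, s'] and bounded rewards R[s, a] in [0, 1].
P = rng.dirichlet(np.ones(num_states), size=(num_states, num_actions))
R = rng.random((num_states, num_actions))

policy = np.full((num_states, num_actions), 1.0 / num_actions)

for _ in range(50):  # EM iterations
    weights = np.zeros_like(policy)
    for _ in range(200):  # Monte Carlo E-step: sample trajectories
        s, ret, visits = 0, 0.0, []
        for _ in range(horizon):
            a = rng.choice(num_actions, p=policy[s])
            visits.append((s, a))
            ret += R[s, a]
            s = rng.choice(num_states, p=P[s, a])
        for (vs, va) in visits:      # weight visits by trajectory return,
            weights[vs, va] += ret   # i.e. the pseudo-likelihood of "success"
    # M-step: renormalize reward-weighted action counts into a new policy.
    policy = (weights + 1e-9) / (weights + 1e-9).sum(axis=1, keepdims=True)
```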
Parametric POMDPs for planning in continuous state spaces
This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.
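A minimal sketch of the flavour of planning over parameterised beliefs, not the thesis code: the belief over a 1-D robot pose is approximated by a Gaussian (mean, variance), propagated with a Kalman-style predict/update, and candidate actions are scored by Monte Carlo estimates of the uncertainty of the resulting future beliefs. The noise levels, goal position, and cost function are illustrative assumptions.

```python
import numpy as np

MOTION_VAR, OBS_VAR, GOAL = 0.05, 0.1, 5.0
rng = np.random.default_rng(1)

def predict(mean, var, action):
    # Motion model: move by `action`, accumulating process noise.
    return mean + action, var + MOTION_VAR

def update(mean, var, obs):
    # Observation model: noisy direct measurement of the pose.
    gain = var / (var + OBS_VAR)
    return mean + gain * (obs - mean), (1.0 - gain) * var

def score_action(mean, var, action, samples=100):
    # Expected cost over sampled future beliefs: distance to goal plus uncertainty.
    total = 0.0
    for _ in range(samples):
        m, v = predict(mean, var, action)
        true_pose = rng.normal(m, np.sqrt(v))          # hypothetical outcome
        obs = rng.normal(true_pose, np.sqrt(OBS_VAR))  # simulated measurement
        m, v = update(m, v, obs)
        total += abs(m - GOAL) + v
    return total / samples

belief = (0.0, 1.0)
best_action = min([-1.0, 0.0, 1.0], key=lambda a: score_action(*belief, a))
```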
Reinforcement Learning-based Optimization of Multiple Access in Wireless Networks
In this thesis, we study the problem of Multiple Access (MA) in wireless networks and design adaptive solutions based on Reinforcement Learning (RL). We analyze the importance of MA in the current communications landscape, where bandwidth-hungry applications emerge due to the co-evolution of technological progress and societal needs, and explain that improvements brought by new standards cannot overcome the problem of resource scarcity. We focus on resource-constrained networks, where devices have restricted hardware capabilities, there is no centralized point of control, and coordination is prohibited or limited. The protocols that we optimize follow a Random Access (RA) approach, where sensing the common medium prior to transmission is not possible. We begin with the study of time access and provide two reinforcement learning algorithms for optimizing Irregular Repetition Slotted ALOHA (IRSA), a state-of-the-art RA protocol. First, we focus on ensuring low complexity and propose a Q-learning variant where learners act independently and converge quickly. We then design an algorithm in the area of coordinated learning and focus on deriving convergence guarantees for learning while minimizing the complexity of coordination. We provide simulations that showcase how coordination can help achieve a fine balance, in terms of complexity and performance, between fully decentralized and centralized solutions. In addition to time access, we study channel access, a problem that has recently attracted significant attention in cognitive radio. We design learning algorithms in the framework of Multi-player Multi-armed Bandits (MMABs), both for static and dynamic settings, where devices arrive at different time steps. Our focus is on deriving theoretical guarantees and ensuring that performance scales well with the size of the network. Our work constitutes an important step towards addressing the challenges that the properties of decentralization and partial observability, inherent in resource-constrained networks, pose for RL algorithms.
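As an illustrative sketch only: channel access can be cast as a bandit problem in which a single device treats each channel as an arm whose reward is 1 for a successful (collision-free) transmission and 0 otherwise, and selects channels with UCB1. The thesis's algorithms for IRSA and multi-player bandits additionally handle decentralization and collisions between learners, which this toy example ignores; the success probabilities below are assumptions.

```python
import math
import random

def ucb1_channel_access(success_prob, rounds=10_000):
    num_channels = len(success_prob)
    counts = [0] * num_channels     # times each channel was used
    rewards = [0.0] * num_channels  # total successes per channel

    for t in range(1, rounds + 1):
        if t <= num_channels:
            channel = t - 1  # play every channel once first
        else:
            # Pick the channel with the highest upper confidence bound.
            channel = max(
                range(num_channels),
                key=lambda c: rewards[c] / counts[c]
                + math.sqrt(2.0 * math.log(t) / counts[c]),
            )
        reward = 1.0 if random.random() < success_prob[channel] else 0.0
        counts[channel] += 1
        rewards[channel] += reward
    return counts

# Hypothetical per-channel success probabilities.
print(ucb1_channel_access([0.2, 0.5, 0.8]))
```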
Recommended from our members
Interactive Prediction and Planning for Autonomous Driving: from Algorithms to Fundamental Aspects
Inevitably, autonomous vehicles need to interact with other road participants in a variety of highly complex or critical driving scenarios. Even for the forefront companies and institutes, it remains an extremely challenging task to enable autonomous vehicles to interactively predict the behavior of others and to plan safe, high-quality motions accordingly. The major obstacles do not stem only from prediction and planning algorithms with insufficient performance. Several fundamental problems in the fields of interactive prediction and planning remain open, such as the formulation, representation, and evaluation of interactive prediction methods, motion datasets with densely interactive driving behavior, and the interface between interactive prediction and planning algorithms. These fundamental aspects of interactive prediction and planning are addressed in this dissertation along with various kinds of algorithms. First, a generic environmental representation for various scenarios with topological decomposition is constructed, and a corresponding planning algorithm is designed by combining graph search and optimization. Hard constraints in optimization-based planners are also incorporated into the training loss of imitation learning so that the policy network can generate safe and feasible motions in highly constrained scenarios. A unified problem formulation and motion representation are designed for different paradigms of interactive predictors, such as planning-based prediction (inverse reinforcement learning), probabilistic graphical models (hidden Markov models), and deep neural networks (mixture density networks), which are utilized for the prediction/planning interface design and the prediction benchmark. A framework combining a decision network with graph-search-, optimization-, and sampling-based planners is proposed to achieve a driving strategy that is defensive to potential violations by others, yet not overly conservative toward low-probability threats. Such a driving strategy is achieved via experiments based on the aforementioned interactive prediction and planning algorithms with properly designed interfaces. These predictors are also evaluated from a closed-loop perspective, considering planning fatality when using the prediction results instead of pure data-approximation metrics. Finally, the INTERACTION (INTERnational, Adversarial and Cooperative moTION) dataset, containing highly interactive driving scenarios and behavior from international locations, is constructed, with an interaction-density metric defined to compare different datasets. The dataset has been utilized for various behavior-related research areas such as prediction, planning, imitation learning, and behavior modeling, and is inspiring new research fields such as representation learning, interaction extraction, and scenario generation.
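To make the mixture-density-network paradigm mentioned above concrete, here is an illustrative sketch (not the dissertation's model) of an MDN output head for trajectory prediction: raw network outputs are turned into a Gaussian mixture over a future position, and the negative log-likelihood of an observed target is evaluated as the training loss. The component count, dimensionality, and numeric values are assumptions.

```python
import numpy as np

def mdn_negative_log_likelihood(raw_output, target):
    # raw_output packs, per mixture component: a logit, a mean, and a log std-dev.
    logits, means, log_stds = np.split(raw_output, 3)
    log_weights = logits - np.logaddexp.reduce(logits)  # log softmax over components
    stds = np.exp(log_stds)
    # Log density of the target under each Gaussian component.
    log_comp = (
        -0.5 * ((target - means) / stds) ** 2
        - np.log(stds)
        - 0.5 * np.log(2.0 * np.pi)
    )
    # Log-sum-exp over components gives the mixture log-likelihood.
    return -np.logaddexp.reduce(log_weights + log_comp)

# Hypothetical raw outputs for a 3-component mixture and an observed position.
raw = np.array([0.1, 0.4, -0.2,   # component logits
                5.0, 7.5, 10.0,   # component means (metres ahead)
                0.0, 0.2, 0.5])   # component log std-devs
print(mdn_negative_log_likelihood(raw, target=8.0))
```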