    Structural Return Maximization for Reinforcement Learning

    Batch Reinforcement Learning (RL) algorithms attempt to choose a policy from a designer-provided class of policies given a fixed set of training data. Choosing the policy that maximizes an estimate of return often leads to over-fitting when only limited data is available, because the policy class is large relative to the amount of data. In this work, we focus on learning policy classes that are appropriately sized for the available data. We accomplish this by using the principle of Structural Risk Minimization, from Statistical Learning Theory, which uses Rademacher complexity to identify a policy class that maximizes a bound on the return of the best policy in the chosen class, given the available data. Unlike similar batch RL approaches, our bound on return requires only extremely weak assumptions about the true system.
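    The abstract does not give the exact bound, but a standard Rademacher-complexity generalization bound of the kind Structural Risk Minimization relies on has the following form (illustrative only, assuming returns normalized to [0, 1] and a nested sequence of policy classes Π_1 ⊆ Π_2 ⊆ ...): the selected class is the one whose empirically best policy maximizes a complexity-penalized return estimate,

        \[
        k^{*} \;=\; \arg\max_{k}\ \Big[\, \max_{\pi \in \Pi_k} \hat{R}_n(\pi)
        \;-\; 2\,\widehat{\mathfrak{R}}_n(\Pi_k)
        \;-\; \sqrt{\tfrac{\ln(1/\delta_k)}{2n}} \,\Big],
        \]

    where \hat{R}_n(\pi) is the return of policy π estimated from the n training samples, \widehat{\mathfrak{R}}_n(\Pi_k) is the empirical Rademacher complexity of Π_k, and δ_k is the confidence budget assigned to class k; with probability at least 1 − δ_k, the bracketed quantity lower-bounds the true return of the empirically best policy in Π_k.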

    Machine learning on a budget

    Thesis (Ph.D.)--Boston University.

    In a typical discriminative learning setting, a set of labeled training examples is given, and the goal is to learn a decision rule that accurately classifies (or labels) unseen test examples. Much of machine learning research has focused on improving accuracy, but more recently the costs of learning and decision making have become important. Such costs arise both during training and testing: labeling data for training is often expensive, and acquiring or processing measurements for every decision at test time is also costly. This work deals with two problems: how to reduce the amount of labeled data needed during training, and how to minimize measurement cost when making decisions during testing, while maintaining system accuracy.

    The first part falls into an area known as active learning: selecting a small subset of examples to label, from a pool of unlabeled data, for training a good classifier. This problem is relevant in many applications where a large collection of unlabeled data is readily available but labeling an instance requires an expensive expert (e.g., a radiologist annotating a medical image). We study active learning in the boosting framework and develop a practical algorithm that labels the examples that maximally reduce the space of feasible classifiers. We show that, under certain assumptions, our strategy achieves the generalization error performance of a system trained on the entire data set while selecting only logarithmically many samples to label.

    In the second part, we study sequential classifiers under budget constraints. In many systems, such as medical diagnosis and homeland security, sensors have varying acquisition costs, and these costs account for delay, throughput, or monetary value. While some decisions require all measurements, it is often unnecessary to use every modality to classify every example. The problem is therefore to learn a system that, for every decision, sequentially selects sensors to meet a measurement budget while minimizing classification error. Initially, we study the case where the order in which sensor measurements are acquired is given. For every instance, the system must decide whether to seek more measurements from the next sensor or to terminate by classifying based on the available information. We use Bayesian analysis of this problem to construct a novel multi-stage empirical risk objective and directly learn sequential decision functions from training data. We provide practical algorithms for binary and multi-class settings, derive generalization error guarantees, and compare our approach to alternative strategies on real-world data.

    In the last section, we explore a decision system in which the order of sensors is no longer fixed, and investigate how to combine ideas from reinforcement and imitation learning with empirical risk minimization to learn a dynamic sensor selection policy.
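    A minimal sketch of the stop-or-continue structure described in the second part, assuming a fixed sensor order and per-sensor acquisition costs (the function and variable names below are illustrative, not taken from the thesis):

        # Sequential classification with a fixed sensor order and a measurement budget.
        # At each stage a learned rule decides whether to acquire the next sensor or to
        # terminate and classify with the measurements gathered so far.

        def classify_sequentially(measurements, stage_classifiers, continue_rules, costs, budget):
            """
            measurements     : list of sensor readings, in the fixed acquisition order
            stage_classifiers: stage_classifiers[k](acquired) -> label using sensors 0..k
            continue_rules   : continue_rules[k](acquired) -> True to acquire sensor k+1
            costs            : acquisition cost of each sensor
            budget           : total measurement budget for this example
            """
            acquired, spent = [], 0.0
            for k, reading in enumerate(measurements):
                spent += costs[k]
                acquired.append(reading)
                last_sensor = (k == len(costs) - 1)
                next_exceeds_budget = (not last_sensor) and (spent + costs[k + 1] > budget)
                # Terminate if no sensors remain, the next acquisition would break the
                # budget, or the learned rule judges the current evidence sufficient.
                if last_sensor or next_exceeds_budget or not continue_rules[k](acquired):
                    return stage_classifiers[k](acquired)

    In the thesis the sequential decision functions are learned directly from training data via a multi-stage empirical risk objective; the sketch above only fixes the decision structure such learned rules would plug into.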

    Méthodes d'apprentissage de la coordination multiagent : application au transport intelligent [Learning methods for multiagent coordination: application to intelligent transportation]

    Multiagent sequential decision-making problems are difficult to solve, especially when the agents do not perfectly observe the state of the environment. Existing approaches to these problems often rely on value-function approximation or exploit problem structure to simplify the solution. In this thesis, we propose to approximate a multiagent sequential decision problem with limited observation, modeled as a decentralized Markov decision process (DEC-MDP), using two assumptions about the problem's structure.

    The first assumption concerns the structure of optimal behavior and supposes that an agent's optimal policy can be approximated from knowledge of the optimal actions in only a small number of situations the agent may face in its environment. The second assumption concerns the organizational structure of the agents and supposes that the farther apart agents are from one another, the less they need to coordinate. These two assumptions lead us to propose two approximation approaches. The first, named Supervised Policy Reinforcement Learning, combines reinforcement learning and supervised learning to generalize an agent's optimal policy. The second relies on the agents' organizational structure to learn a multiagent policy in problems with limited observation. To this end, we present a model, the DOF-DEC-MDP (Distance-Observable Factored Decentralized Markov Decision Process), which defines an observation distance for the agents, and from this model we derive bounds on the reward gain obtained by increasing the observation distance.

    Empirical results on classical single-agent and multiagent reinforcement learning problems show that our approximation approaches can learn near-optimal policies. Finally, we test our approaches on a vehicle coordination problem, proposing a method for synchronizing agents via communication in a limited-observation setting.
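    A minimal sketch of the first idea, generalizing an agent's policy from a small set of solved situations (the helper names and the choice of classifier below are illustrative assumptions, not the thesis's actual algorithm):

        # Supervised generalization of a policy from a few (situation, optimal action)
        # examples: obtain optimal actions for a small set of representative states
        # (e.g., by solving or running tabular RL restricted to those states), fit a
        # classifier on (state features, optimal action), and use the classifier as
        # the policy everywhere else.

        from sklearn.tree import DecisionTreeClassifier

        def generalize_policy(solved_situations):
            """solved_situations: list of (state_features, optimal_action) pairs."""
            features = [f for f, _ in solved_situations]
            actions = [a for _, a in solved_situations]
            clf = DecisionTreeClassifier().fit(features, actions)
            # The generalized policy maps any state's features to an action.
            return lambda state_features: clf.predict([state_features])[0]

        # Example: four solved grid positions generalize to unseen positions.
        policy = generalize_policy([((0, 0), "east"), ((0, 3), "south"),
                                    ((3, 0), "north"), ((3, 3), "west")])
        print(policy((1, 2)))  # action for a previously unseen position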