
    Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees

    Actor-critic (AC) methods are widely used in reinforcement learning (RL) and benefit from the flexibility of using any policy-gradient method as the actor and any value-based method as the critic. The critic is usually trained by minimizing the TD error, an objective that is potentially misaligned with the actor's true goal of achieving a high reward. We address this mismatch by designing a joint objective for training the actor and critic in a decision-aware fashion. We use the proposed objective to design a generic AC algorithm that can easily handle any function approximation. We explicitly characterize the conditions under which the resulting algorithm guarantees monotonic policy improvement, regardless of the choice of policy and critic parameterization. Instantiating the generic algorithm results in an actor that involves maximizing a sequence of surrogate functions (similar to TRPO and PPO) and a critic that involves minimizing a closely connected objective. Using simple bandit examples, we provably establish the benefit of the proposed critic objective over the standard squared error. Finally, we empirically demonstrate the benefit of our decision-aware actor-critic framework on simple RL problems.
    Comment: 44 pages
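    To ground the mismatch this abstract describes, the following is a minimal tabular actor-critic sketch in which the critic is trained purely on the squared TD error, i.e., the standard setup the paper argues against. The 2-state MDP, step sizes, and softmax parameterization are illustrative assumptions, not the paper's decision-aware algorithm.

```python
# Minimal tabular actor-critic: softmax policy (actor) updated by the policy
# gradient, value table (critic) trained by one-step TD-error minimization.
# The 2-state MDP below is a made-up toy, not taken from the paper.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)

# Toy deterministic dynamics: P[s, a] -> next state, R[s, a] -> reward.
P = np.array([[0, 1], [1, 0]])
R = np.array([[0.0, 1.0], [0.5, 0.0]])

theta = np.zeros((n_states, n_actions))  # actor parameters (softmax logits)
V = np.zeros(n_states)                   # critic: state-value table

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

s = 0
for step in range(5000):
    probs = policy(s)
    a = rng.choice(n_actions, p=probs)
    s_next, r = P[s, a], R[s, a]

    # Critic: one semi-gradient step on the squared TD error.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += 0.05 * td_error

    # Actor: policy-gradient step, with the TD error as the advantage signal.
    grad_log_pi = -probs          # gradient of log softmax: onehot(a) - probs
    grad_log_pi[a] += 1.0
    theta[s] += 0.01 * td_error * grad_log_pi

    s = s_next

print("V:", V, "policy(0):", policy(0))
```

    Note that the critic update above optimizes the TD error in isolation; the paper's point is precisely that this objective need not serve the actor's return, which is what a joint decision-aware objective is meant to fix.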

    Approaches for Future Internet architecture design and Quality of Experience (QoE) Control

    Research into a Future Internet capable of overcoming the limitations of the current Internet is a strategic investment. This paper presents concepts that can help provide guidelines for overcoming those limitations. In the authors' vision, a key Future Internet target is to allow applications to transparently, efficiently, and flexibly exploit the available network resources so as to match users' expectations. Such expectations can be expressed in terms of a properly defined Quality of Experience (QoE). To this end, the paper presents several approaches for addressing the QoE provisioning problem.

    An Inverse Method for Policy-Iteration Based Algorithms

    We present an extension of two policy-iteration-based algorithms on weighted graphs (viz., Markov Decision Problems and Max-Plus Algebras). This extension allows us to solve the following inverse problem: treating the weights of the graph as unknown constants or parameters, we suppose that a reference instantiation of those weights is given, and we aim to compute a constraint on the parameters under which an optimal policy for the reference instantiation remains optimal. The original algorithm is thus guaranteed to behave well in a neighborhood of the reference instantiation, which provides a criterion of robustness. We present an application of both methods to simple examples, and a prototype implementation has been developed.
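    To make the inverse problem concrete, the sketch below runs textbook policy iteration on a hypothetical 2-state MDP with parametric rewards, then checks numerically, via the Bellman optimality condition, whether the reference policy stays optimal under perturbed weights. The paper derives a symbolic constraint on the parameters; this sketch replaces that derivation with a point-wise numerical check.

```python
# Policy iteration on a made-up 2-state MDP, plus a check of whether the
# reference-optimal policy survives a perturbation of the reward weights.
import numpy as np

gamma = 0.9
# P[a, s, s'] transition probabilities and R[s, a] rewards (hypothetical).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.1, 0.9], [0.6, 0.4]]])
R_ref = np.array([[1.0, 0.0], [0.0, 1.0]])  # reference instantiation

def policy_iteration(R):
    n_s, n_a = R.shape
    pi = np.zeros(n_s, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = np.array([P[pi[s], s] for s in range(n_s)])
        R_pi = R[np.arange(n_s), pi]
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)
        # Policy improvement: greedy step on the action-value function.
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        pi_new = Q.argmax(axis=1)
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new

def still_optimal(pi, R):
    # pi stays optimal iff it is greedy w.r.t. its own value function under
    # the new weights (Bellman optimality; ties broken by argmax).
    n_s = R.shape[0]
    P_pi = np.array([P[pi[s], s] for s in range(n_s)])
    R_pi = R[np.arange(n_s), pi]
    V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    return np.array_equal(Q.argmax(axis=1), pi)

pi_ref, _ = policy_iteration(R_ref)
print(still_optimal(pi_ref, R_ref))   # True at the reference, by construction
R_pert = R_ref.copy()
R_pert[0, 1] += 0.5                   # perturb one weight
print(still_optimal(pi_ref, R_pert))  # re-check under the perturbation
```

    The paper's contribution is to characterize, symbolically, the full region of weight space where the first check stays true; the numerical check here only probes one instantiation at a time.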

    Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach

    Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuing settings, with fixed discount factor γ < 1, or in episodic settings, with γ = 1. While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general-purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent "discount" factor that is not constrained to be less than 1, so long as there is eventual long-run discounting. Although this broadens the range of possible preference structures in continuing settings, we show that there exists a unique "optimizing MDP" with fixed γ < 1 whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White's RL task formalism (2017) and other recent departures from traditional RL, and is relevant to task specification in RL, inverse RL, and preference-based RL.
    Comment: 8 pages + 1 page supplement. In proceedings of AAAI 2019. Slides, poster and bibtex available at https://silviupitis.com/#rethinking-the-discount-factor-in-reinforcement-learning-a-decision-theoretic-approach
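    The sketch below illustrates the generalized Bellman backup this abstract describes, with the scalar discount replaced by a state-action dependent factor Gamma[s, a], one entry of which exceeds 1. The toy MDP and the particular Gamma values are assumptions, chosen so that products of discounts along trajectories still shrink in the long run, which is what keeps the backup convergent.

```python
# Value iteration with a state-action dependent discount factor:
#   V(s) = max_a [ R(s,a) + Gamma(s,a) * E_{s'}[V(s')] ].
# All numbers below are made up for illustration.
import numpy as np

n_s, n_a = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[a, s, s'] (hypothetical)
R = np.array([[1.0, 0.0], [0.0, 2.0]])     # R[s, a]
Gamma = np.array([[0.95, 1.05],            # note the entry above 1
                  [0.80, 0.70]])           # Gamma[s, a]

V = np.zeros(n_s)
for _ in range(1000):
    EV = np.einsum("ast,t->sa", P, V)      # expected next-state value, per (s, a)
    V = (R + Gamma * EV).max(axis=1)       # generalized Bellman backup

print("V under state-action dependent discounting:", V)
```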