3 research outputs found

    A brief guide to multi-objective reinforcement learning and planning (JAAMAS track)

    Real-world sequential decision-making tasks are usually complex, and require trade-offs between multiple, often conflicting, objectives. However, the majority of research in reinforcement learning (RL) and decision-theoretic planning assumes a single objective, or that multiple objectives can be handled via a predefined weighted sum over the objectives. Such approaches may oversimplify the underlying problem and produce suboptimal results. This extended abstract outlines the limitations of using a semi-blind iterative process to solve multi-objective decision-making problems. Our extended paper [4] serves as a guide for the application of explicitly multi-objective methods to difficult problems. © 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

    A practical guide to multi-objective reinforcement learning and planning

    Real-world sequential decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems. It is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods and who wish to adopt a multi-objective perspective on their research, as well as at practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems. © 2022, The Author(s)
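    The limitation of a predefined weighted sum can be made concrete with a small sketch (the candidate value vectors and weight sweep below are invented for illustration, not taken from the papers): a non-dominated solution lying in a concave region of the Pareto front is never the argmax of any linear scalarization, so a fixed-weights approach can never recover it, even though it is Pareto-optimal.

    ```python
    import numpy as np

    # Hypothetical candidate policies, each with a 2-D value vector
    # (e.g., [throughput, comfort]); higher is better on both axes.
    values = np.array([
        [1.0, 0.0],
        [0.0, 1.0],
        [0.4, 0.4],   # non-dominated, but in a concave part of the front
    ])

    def linear_scalarize(values, w):
        """Score each candidate by a fixed weighted sum of its objectives."""
        return values @ w

    def pareto_front(values):
        """Return indices of candidates not dominated by any other candidate."""
        keep = []
        for i, v in enumerate(values):
            dominated = any(
                np.all(u >= v) and np.any(u > v)
                for j, u in enumerate(values) if j != i
            )
            if not dominated:
                keep.append(i)
        return keep

    # Sweep the weight simplex: candidate 2 is never the argmax of any
    # linear scalarization, yet the dominance check keeps all three.
    best_under_some_weight = set()
    for w0 in np.linspace(0.0, 1.0, 101):
        w = np.array([w0, 1.0 - w0])
        best_under_some_weight.add(int(np.argmax(linear_scalarize(values, w))))

    print(sorted(best_under_some_weight))  # [0, 1] -- candidate 2 never wins
    print(pareto_front(values))            # [0, 1, 2]
    ```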

    Improving wet clutch engagement with Reinforcement Learning

    A common approach when applying reinforcement learning to control problems is to first learn a policy on an approximated model of the plant, whose behavior can be explored quickly and safely in simulation, and then deploy the learned policy on the actual plant. Here we follow this approach to learn to engage a transmission clutch, with the aim of obtaining a rapid and smooth engagement with a small torque loss. Using an approximated model of a wet clutch, which simulates a portion of the whole engagement, we first learn an open-loop control signal, which is then transferred to the actual wet clutch and improved by further learning with a different reward function based on the actual torque loss observed.
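    The two-stage scheme described above can be sketched in miniature. Everything below is an illustrative stand-in: the surrogate clutch model, both reward functions, and the simple (1+1) random-search optimizer are invented for this sketch and are not the plant model or learning algorithm used in the paper. The structure is the point: optimize an open-loop signal against a simulated reward, then warm-start a second round of learning under a different reward representing measurements on the physical system.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(pressures, slip0=1.0, k=0.8):
        """Toy wet-clutch surrogate: slip decays in proportion to the
        applied pressure. Returns the slip trajectory (illustrative only)."""
        slip, traj = slip0, [slip0]
        for p in pressures:
            slip = max(0.0, slip - k * p * slip)
            traj.append(slip)
        return np.array(traj)

    def reward_sim(pressures):
        """Stage 1 reward: engage fast (low final slip), smooth signal."""
        traj = simulate(pressures)
        smoothness_penalty = np.sum(np.diff(pressures) ** 2)
        return -traj[-1] - 0.1 * smoothness_penalty

    def reward_real(pressures):
        """Stage 2 reward: stand-in for the torque-loss-based reward that
        would be observed on the physical clutch."""
        traj = simulate(pressures, k=0.7)   # "reality" differs from the model
        torque_loss_proxy = np.sum(traj[:-1] * np.asarray(pressures))
        return -traj[-1] - 0.05 * torque_loss_proxy

    def hill_climb(reward_fn, u0, iters=200, step=0.05):
        """(1+1) random search over the open-loop control signal; only
        improving candidates are accepted."""
        u, best = u0.copy(), reward_fn(u0)
        for _ in range(iters):
            cand = np.clip(u + step * rng.standard_normal(u.shape), 0.0, 1.0)
            r = reward_fn(cand)
            if r > best:
                u, best = cand, r
        return u, best

    # Stage 1: learn an open-loop signal on the approximated model ...
    u_sim, r_sim = hill_climb(reward_sim, u0=np.full(20, 0.5))
    # Stage 2: ... then refine it under the "real" reward, warm-started.
    u_real, r_real = hill_climb(reward_real, u0=u_sim)
    print(r_real >= reward_real(u_sim))  # refinement never hurts the warm start
    ```

    Because the second stage only accepts improving candidates, the refined signal is guaranteed to score at least as well as the transferred one under the new reward, which mirrors the paper's motivation for further learning on the actual plant.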