171 research outputs found

    Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism

    Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main categories of methods are used: imitation learning, which is suitable for expert datasets, and vanilla offline RL, which often requires datasets with uniform coverage. In practice, datasets often deviate from these two extremes, and the exact data composition is usually unknown a priori. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, hence unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation of the behavior policy from the expert policy alone. Under this new framework, we further investigate the question of algorithm design: can one develop an algorithm that achieves a minimax optimal rate and also adapts to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm based on pessimism in the face of uncertainty in offline RL. We study finite-sample properties of LCB as well as information-theoretic limits in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. In particular, in all three settings, LCB achieves a faster rate of $1/N$ for nearly-expert datasets compared to the usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in the batch dataset. In the case of contextual bandits with at least two contexts, we prove that LCB is adaptively optimal for the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.
    Comment: 84 pages, 6 figures
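    The abstract does not spell out the LCB rule; for the multi-armed bandit setting it describes, a minimal sketch of pessimism via lower confidence bounds might look like the following (the Hoeffding-style bonus and its constants are illustrative assumptions, not the paper's exact penalty):

```python
import numpy as np

def lcb_policy(rewards_by_arm, delta=0.05):
    """Offline LCB: pick the arm maximizing empirical mean minus a
    confidence penalty, so the policy stays pessimistic about arms
    the batch dataset barely covers.

    rewards_by_arm: list of 1-D arrays of observed rewards per arm
    delta: failure probability used in the confidence bonus
    """
    n_arms = len(rewards_by_arm)
    scores = []
    for obs in rewards_by_arm:
        n = len(obs)
        if n == 0:
            scores.append(-np.inf)  # uncovered arm -> maximal pessimism
            continue
        bonus = np.sqrt(np.log(2 * n_arms / delta) / (2 * n))
        scores.append(obs.mean() - bonus)  # lower confidence bound
    return int(np.argmax(scores))

# A nearly-expert batch concentrated on arm 0: LCB follows the behavior
# policy rather than gambling on the barely-observed arms.
batch = [np.array([0.9, 0.8, 1.0, 0.85]), np.array([0.2]), np.array([])]
print(lcb_policy(batch))  # -> 0
```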

    Automatic optimum order assignment in IIR adaptive filters

    Kanazawa University, Institute of Science and Engineering, Electronics and Information Science

    Design of Sail-Assisted Unmanned Surface Vehicle Intelligent Control System

    To realize the wind-sail-assisted function of an unmanned surface vehicle (USV), this work addresses the design of the sail-assisted USV intelligent control system (SUICS) and describes its implementation. The SUICS consists of the communication system, the sensor system, the PC platform, and the lower machine platform. To make full use of wind energy, we propose the sail angle of attack automatic adjustment (Sail_4A) algorithm for the SUICS and present the realization flow for each of its subsystems. Using a test boat, the design and implementation of the SUICS are carried out systematically, and experiments verify its performance and effectiveness. The SUICS significantly enhances the intelligent use of sustainable wind energy by the sail-assisted USV and helps meet the energy-saving and emission-reduction requirements for shipping issued by the International Maritime Organization (IMO).
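    The abstract names the Sail_4A algorithm but gives no update rule; a minimal sketch, assuming the sail is simply trimmed toward a fixed target angle of attack relative to the measured apparent wind (the target angle, sign convention, and rate limit below are illustrative assumptions):

```python
def sail_4a_step(apparent_wind_deg, sail_angle_deg,
                 target_aoa_deg=20.0, max_step_deg=2.0):
    """One control step of a hypothetical Sail_4A-style loop: rotate the
    sail toward the angle that holds the target angle of attack to the
    apparent wind, with a per-step actuator rate limit.

    apparent_wind_deg: apparent wind direction relative to the hull (deg)
    sail_angle_deg:    current sail angle relative to the hull (deg)
    """
    desired = apparent_wind_deg - target_aoa_deg  # sail trails the wind by the AoA
    error = desired - sail_angle_deg
    step = max(-max_step_deg, min(max_step_deg, error))
    return sail_angle_deg + step

# Apparent wind 90 deg off the bow, sail starting at 45 deg:
angle = 45.0
for _ in range(5):
    angle = sail_4a_step(90.0, angle)
print(angle)  # -> 55.0 after five rate-limited 2-degree steps
```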

    Highly Active Nanostructured CoS2/CoS Heterojunction Electrocatalysts for Aqueous Polysulfide/Iodide Redox Flow Batteries

    Aqueous polysulfide/iodide redox flow batteries are attractive for scalable energy storage due to their high energy density and low cost. However, their energy efficiency and power density are usually limited by the poor electrochemical kinetics of the polysulfide/iodide redox reactions on graphite electrodes, which has become the main obstacle to their practical application. Here, CoS₂/CoS heterojunction nanoparticles with uneven charge distribution, synthesized in situ on graphite felt by a one-step solvothermal process, significantly boost the electrocatalytic activities of the I⁻/I₃⁻ and S²⁻/Sₓ²⁻ redox reactions by improving the adsorption of charged ions and promoting charge transfer. The polysulfide/iodide flow battery with the graphite felt–CoS₂/CoS heterojunction delivers a high energy efficiency of 84.5% at a current density of 10 mA cm⁻², a power density of 86.2 mW cm⁻², and a stable energy efficiency retention of 96% after approximately 1000 h of continuous operation.
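    For context, the 84.5% figure is a round-trip energy efficiency; the standard definition (stated here for the reader, not quoted from the paper) is the ratio of discharge to charge energy over a cycle:

```latex
\eta_{\mathrm{E}}
  = \frac{\int_{\mathrm{discharge}} V(t)\,I(t)\,\mathrm{d}t}
         {\int_{\mathrm{charge}} V(t)\,I(t)\,\mathrm{d}t}
  = \eta_{\mathrm{C}} \times \eta_{\mathrm{V}}
```

    where η_C is the coulombic efficiency and η_V the voltage efficiency.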

    Jump-Start Reinforcement Learning

    Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error. However, efficiently learning policies from scratch can be very difficult, particularly for tasks with exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta-algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach. In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy and an exploration-policy. By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks. We show via experiments that JSRL significantly outperforms existing imitation and reinforcement learning algorithms, particularly in the small-data regime. In addition, we provide an upper bound on the sample complexity of JSRL and show that, with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in the horizon to polynomial.
    Comment: 20 pages, 10 figures
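    The two-policy scheme can be sketched concisely: let the guide-policy drive the first h steps of each episode so the exploration-policy starts from favorable states, then shrink h as performance improves. The Gym-style environment API and the annealing rule below are illustrative assumptions, not the paper's exact procedure:

```python
def jsrl_episode(env, guide_policy, explore_policy, h):
    """One JSRL-style episode: the guide-policy acts for the first h
    steps, handing favorable starting states to the exploration-policy,
    which is the policy actually being trained."""
    obs, _ = env.reset()
    transitions, done, t = [], False, 0
    while not done:
        policy = guide_policy if t < h else explore_policy
        action = policy(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        transitions.append((obs, action, reward, next_obs, done))
        obs, t = next_obs, t + 1
    return transitions

def jsrl_train(env, guide_policy, explore_policy, update, horizon,
               episodes=1000, good_enough=lambda: False):
    """Curriculum over guide horizons: start with h = horizon and move
    the hand-off point earlier whenever the learner performs well."""
    h = horizon
    for _ in range(episodes):
        update(jsrl_episode(env, guide_policy, explore_policy, h))
        if h > 0 and good_enough():  # e.g., eval return above a threshold
            h -= 1
    return explore_policy
```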