
    Reinforcement Learning for Ramp Control: An Analysis of Learning Parameters

    Reinforcement Learning (RL) has been proposed for ramp control problems under dynamic traffic conditions; however, the behaviour and impact of the different learning parameters have not been studied sufficiently. This paper describes a ramp control agent based on the RL mechanism and thoroughly analyses the influence of three learning parameters, namely the learning rate, discount rate and action selection parameter, on algorithm performance. Two indices, one for learning speed and one for convergence stability, were used to measure algorithm performance, and a series of simulation-based experiments were designed and conducted using a macroscopic traffic flow model. Simulation results showed that, compared with the discount rate, the learning rate and the action selection parameter had a more pronounced impact on algorithm performance. Based on this analysis, suggestions are provided on how to select parameter values that achieve superior performance.
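
    The three parameters discussed in the abstract map directly onto the standard tabular Q-learning update and an epsilon-greedy action-selection rule. The sketch below (plain Python; the state/action discretization and the traffic simulator that would drive it are illustrative assumptions, not details from the paper) shows where each parameter enters:

        import numpy as np

        # Learning parameters analysed in the paper (values here are placeholders):
        ALPHA = 0.1    # learning rate: weight given to new information
        GAMMA = 0.8    # discount rate: weight given to future reward
        EPSILON = 0.1  # action-selection parameter: exploration probability

        # Assumed discretization, e.g. traffic-density states x metering-rate actions.
        N_STATES, N_ACTIONS = 50, 5
        Q = np.zeros((N_STATES, N_ACTIONS))
        rng = np.random.default_rng(0)

        def select_action(state):
            """Epsilon-greedy action selection."""
            if rng.random() < EPSILON:
                return int(rng.integers(N_ACTIONS))   # explore
            return int(np.argmax(Q[state]))           # exploit

        def q_update(state, action, reward, next_state):
            """One Q-learning step: Q <- Q + alpha * (TD target - Q)."""
            td_target = reward + GAMMA * np.max(Q[next_state])
            Q[state, action] += ALPHA * (td_target - Q[state, action])

    In this generic form, a larger ALPHA weights recent observations more heavily (typically faster but noisier learning) and EPSILON trades exploration against exploitation, which is consistent with the abstract's finding that these two parameters matter more than the discount rate for this application.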


    Solving Inverse Problems with Reinforcement Learning

    In this paper, we formally introduce, with rigorous derivations, the use of reinforcement learning for inverse problems by designing an iterative algorithm, called REINFORCE-IP, for solving a general class of non-linear inverse problems. By choosing specific probability models for the action-selection rule, we connect our approach to the conventional regularization methods of Tikhonov regularization and iterative regularization. For the numerical implementation of our approach, we parameterize the solution-searching rule with neural networks and iteratively improve the parameters using a reinforcement-learning algorithm, REINFORCE. Under standard assumptions we prove almost sure convergence of the parameters to a locally optimal value. Our work provides two typical examples (non-linear integral equations and parameter-identification problems in partial differential equations) of how reinforcement learning can be applied to non-linear inverse problems. Our numerical experiments show that REINFORCE-IP is an efficient algorithm that can escape local minima and identify multiple solutions for inverse problems with non-uniqueness. (33 pages, 10 figures)
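
    To make the setup concrete, the sketch below applies a REINFORCE-style policy-gradient update to a toy non-linear inverse problem F(x) = y: a small network parameterizes a Gaussian action-selection rule over solution updates and is rewarded by the decrease in the data misfit. The forward operator, reward definition, network architecture and hyperparameters are illustrative assumptions, not the construction used in the paper.

        import torch

        dim = 4
        # Neural parameterization of the solution-searching rule (assumed architecture).
        policy = torch.nn.Sequential(torch.nn.Linear(dim, 32), torch.nn.Tanh(),
                                      torch.nn.Linear(32, dim))
        opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

        def forward_op(x):
            # Hypothetical non-linear forward operator F.
            return x ** 3 + x

        y_obs = forward_op(torch.tensor([0.5, -1.0, 2.0, 0.0]))

        def misfit(x):
            # Data misfit ||F(x) - y||^2.
            return torch.sum((forward_op(x) - y_obs) ** 2)

        x = torch.zeros(dim)
        for step in range(200):
            mean = policy(x)                               # action-selection rule:
            dist = torch.distributions.Normal(mean, 0.1)   # Gaussian over updates
            dx = dist.sample()
            reward = (misfit(x) - misfit(x + dx)).detach() # reward = misfit decrease
            loss = -reward * dist.log_prob(dx).sum()       # REINFORCE gradient estimator
            opt.zero_grad()
            loss.backward()
            opt.step()
            if reward > 0:
                x = (x + dx).detach()                      # accept improving steps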