
    Game Solvable By Backward Reasoning

    The essence of a game of strategy is the interdependence of the players' decisions. The general principle for sequential-move games is that each player should anticipate the other players' future responses and use them when calculating his or her best current move. This idea is important enough to be codified into a basic rule of strategic behavior. Here, the author focuses on games that can be solved by backward reasoning. A book and several journal articles were collected as material for building further structure into games solved by this kind of reasoning. This study aims to explore the importance of Backward Reasoning in solving a business game before determining an interaction strategy.
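
    The abstract's core idea, solving a sequential-move game by reasoning backward from the final moves, can be made concrete with a small worked example. The sketch below is a minimal backward-induction solver for a finite two-player game tree; the entry-game structure and payoffs are illustrative assumptions, not taken from the study.

```python
# A minimal backward-induction sketch for a finite two-player sequential game.
# The game tree, player labels, and payoffs below are illustrative assumptions.

def backward_induction(node):
    """Return (payoffs, chosen_path) for the subgame rooted at `node`.

    A node is either a terminal payoff tuple (leaf) or a dict:
        {"player": 0 or 1, "moves": {move_name: child_node}}
    """
    if isinstance(node, tuple):          # leaf: payoffs for (player 0, player 1)
        return node, []

    player = node["player"]
    best_move, best_payoffs, best_path = None, None, None
    for move, child in node["moves"].items():
        payoffs, path = backward_induction(child)   # solve the subgame first
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_move, best_payoffs, best_path = move, payoffs, path
    return best_payoffs, [best_move] + best_path


# Example: an entry game. Player 0 chooses to enter or stay out; if it enters,
# player 1 (the incumbent) chooses to fight or accommodate.
game = {
    "player": 0,
    "moves": {
        "enter": {"player": 1, "moves": {"fight": (-1, -1), "accommodate": (2, 1)}},
        "stay_out": (0, 3),
    },
}

payoffs, path = backward_induction(game)
print(path, payoffs)   # ['enter', 'accommodate'] (2, 1)
```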

    ALGAMES: A Fast Solver for Constrained Dynamic Games

    Dynamic games are an effective paradigm for dealing with the control of multiple interacting actors. This paper introduces ALGAMES (Augmented Lagrangian GAME-theoretic Solver), a solver that handles trajectory optimization problems with multiple actors and general nonlinear state and input constraints. Its novelty resides in satisfying the first-order optimality conditions with a quasi-Newton root-finding algorithm and rigorously enforcing constraints using an augmented Lagrangian formulation. We evaluate our solver in the context of autonomous driving on scenarios with a high level of interaction between the vehicles. We assess the robustness of the solver using Monte Carlo simulations. It is able to reliably solve complex problems like ramp merging with three vehicles three times faster than a state-of-the-art DDP-based approach. A model predictive control (MPC) implementation of the algorithm demonstrates real-time performance on complex autonomous driving scenarios with an update frequency higher than 60 Hz.
    Comment: 10 pages, 8 figures, submitted to Robotics: Science and Systems Conference (RSS) 202
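
    The solver structure described above, a quasi-Newton root-find on the stacked first-order optimality conditions nested inside an augmented Lagrangian loop that enforces the constraints, can be sketched schematically. The residual interface, the finite-difference Jacobian, and the multiplier/penalty updates below are simplifying assumptions, not the paper's implementation.

```python
# Schematic sketch of the two nested loops the ALGAMES abstract describes:
# an outer augmented-Lagrangian update of multipliers/penalty and an inner
# (quasi-)Newton root-find on the stacked first-order optimality residual.
# All names, signatures, and update rules here are simplifying assumptions.
import numpy as np

def solve_dynamic_game(residual, constraints, z0, lam0, rho=1.0,
                       outer_iters=20, inner_iters=50, tol=1e-6):
    """residual(z, lam, rho) -> stacked optimality residual r(z)
    constraints(z)           -> constraint values c(z) (c <= 0 is feasible)"""
    z, lam = z0.copy(), lam0.copy()
    for _ in range(outer_iters):
        # Inner loop: root-finding on r(z; lam, rho) = 0, with a
        # finite-difference Jacobian standing in for the paper's
        # quasi-Newton linearization.
        for _ in range(inner_iters):
            r = residual(z, lam, rho)
            if np.linalg.norm(r) < tol:
                break
            J = _fd_jacobian(lambda x: residual(x, lam, rho), z)
            z = z - np.linalg.lstsq(J, r, rcond=None)[0]
        # Outer loop: augmented-Lagrangian multiplier and penalty update.
        c = constraints(z)
        lam = np.maximum(0.0, lam + rho * c)
        rho *= 10.0
        if np.all(c <= tol):
            break
    return z, lam

def _fd_jacobian(f, z, eps=1e-6):
    """Forward-difference Jacobian of f at z."""
    r0 = f(z)
    J = np.zeros((r0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - r0) / eps
    return J
```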

    Stackelberg Meta-Learning Based Control for Guided Cooperative LQG Systems

    Guided cooperation allows intelligent agents with heterogeneous capabilities to work together by following a leader-follower type of interaction. However, the associated control problem becomes challenging when the leader agent does not have complete information about follower agents. There is a need for learning and adaptation of cooperation plans. To this end, we develop a meta-learning-based Stackelberg game-theoretic framework to address the challenges in guided cooperative control for linear systems. We first formulate the guided cooperation between agents as a dynamic Stackelberg game and use the feedback Stackelberg equilibrium as the agent-wise cooperation strategy. We further leverage meta-learning to address the incomplete information about follower agents, where the leader agent learns a meta-response model from a prescribed set of followers offline and adapts to a newly arriving cooperation task with a small amount of learning data. We use a case study in robot teaming to corroborate the effectiveness of our framework. Comparison with other learning approaches also shows that our learned cooperation strategy provides better transferability across different cooperation tasks.
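
    To illustrate the leader-follower structure in the abstract, the toy sketch below has the leader fit a response model of follower behaviour from offline data, adapt it with a few samples from a new follower, and then choose its action against the predicted response. The scalar linear response model, the simple blend used for adaptation, and the quadratic cost are illustrative assumptions; they are not the paper's LQG formulation or its meta-learning algorithm.

```python
# Toy sketch of a leader learning a follower-response model offline and
# adapting it to a new follower before choosing its own action.
# All models, costs, and data here are illustrative assumptions.
import numpy as np

def fit_response(leader_actions, follower_actions):
    """Least-squares fit of a scalar linear response u_f ~ a * u_l + b."""
    A = np.column_stack([leader_actions, np.ones_like(leader_actions)])
    coef, *_ = np.linalg.lstsq(A, follower_actions, rcond=None)
    return coef  # (a, b)

# Offline "meta" phase: pool interaction data from a prescribed set of followers.
rng = np.random.default_rng(0)
u_l_offline = rng.uniform(-1, 1, 200)
u_f_offline = 0.8 * u_l_offline + 0.1 + 0.05 * rng.standard_normal(200)
meta_coef = fit_response(u_l_offline, u_f_offline)

# Adaptation phase: a handful of samples from a new follower refines the model
# (a simple weighted blend stands in for meta-adaptation here).
u_l_new = rng.uniform(-1, 1, 5)
u_f_new = 0.6 * u_l_new - 0.2 + 0.05 * rng.standard_normal(5)
adapted = 0.5 * meta_coef + 0.5 * fit_response(u_l_new, u_f_new)

# Leader decision: choose the action minimizing its cost given the predicted response.
a, b = adapted
candidates = np.linspace(-1, 1, 201)
predicted_follower = a * candidates + b
leader_cost = (candidates - 0.5) ** 2 + predicted_follower ** 2
best_u_l = candidates[np.argmin(leader_cost)]
print(f"leader action anticipating follower response: {best_u_l:.3f}")
```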

    Driving in Dense Traffic with Model-Free Reinforcement Learning

    Traditional planning and control methods could fail to find a feasible trajectory for an autonomous vehicle to execute amongst dense traffic on roads. This is because the obstacle-free volume in spacetime is very small in these scenarios for the vehicle to drive through. However, that does not mean the task is infeasible, since human drivers are known to be able to drive amongst dense traffic by leveraging the cooperativeness of other drivers to open a gap. The traditional methods fail to take into account the fact that the actions taken by an agent affect the behaviour of other vehicles on the road. In this work, we rely on the ability of deep reinforcement learning to implicitly model such interactions and learn a continuous control policy over the action space of an autonomous vehicle. The application we consider requires our agent to negotiate and open a gap in the road in order to successfully merge or change lanes. Our policy learns to repeatedly probe into the target road lane while trying to find a safe spot to move into. We compare against two model-predictive-control-based algorithms and show that our policy outperforms them in simulation.
    Comment: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2020. Updated Github repository link
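
    As a point of reference for the approach described above, the sketch below shows a minimal continuous-control policy-gradient loop (plain REINFORCE with a Gaussian policy). The Pendulum-v1 environment and the small network are stand-ins; the paper's dense-traffic simulator, reward design, and training algorithm are not reproduced here.

```python
# Minimal continuous-control policy-gradient loop (REINFORCE, Gaussian policy).
# The environment and network are stand-ins, not the paper's setup.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("Pendulum-v1")
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
log_std = nn.Parameter(torch.zeros(act_dim))
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)

for episode in range(200):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        mean = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Normal(mean, log_std.exp())
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        obs, reward, terminated, truncated, _ = env.step(action.numpy())
        rewards.append(reward)
        done = terminated or truncated
    # REINFORCE update: weight each log-probability by the discounted return.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.append(g)
    returns = torch.as_tensor(list(reversed(returns)), dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```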