AI for Classic Video Games using Reinforcement Learning
Deep reinforcement learning is a technique for teaching machines tasks from trial-and-error experience, in the way humans learn. In this paper, some preliminary research is done to understand how reinforcement learning and deep learning techniques can be combined to train an agent to play Archon, a classic video game. We compare two methods for estimating a Q function, the function used to compute the best action to take at each point in the game. In the first approach, we used a Q table to store the states and the weights of the corresponding actions. In our experiments, this method converged very slowly. Our second approach was similar to that of [1]: we used a convolutional neural network (CNN) to determine a Q function. This deep neural network model successfully learnt to control the Archon player using keyboard events that it generated. We observed that the second approach's Q function converged faster than the first. For the latter method, the neural net was trained only on periodic screenshots taken while it was playing. Experiments were conducted on a machine without a GPU, so our training was slower than in [1].
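As a concrete illustration of the two estimation strategies, the sketch below contrasts a tabular Q-learning update with a small CNN that maps a screenshot to per-action Q-values. It is a minimal sketch only: the action set, the 84x84 grayscale input, the layer sizes, and the hyperparameters are assumptions made for illustration, not the configuration used in this paper.

# Illustrative sketch (not the paper's code): tabular Q-learning vs. a CNN Q-function.
import random
from collections import defaultdict

import torch
import torch.nn as nn

ACTIONS = list(range(8))            # hypothetical keyboard actions
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1  # placeholder hyperparameters

# Approach 1: a Q table over (state, action) pairs.
q_table = defaultdict(float)

def tabular_update(s, a, r, s_next):
    best_next = max(q_table[(s_next, a2)] for a2 in ACTIONS)
    q_table[(s, a)] += ALPHA * (r + GAMMA * best_next - q_table[(s, a)])

# Approach 2: a CNN that maps a screenshot to Q-values for every action.
class QNet(nn.Module):
    def __init__(self, n_actions=len(ACTIONS)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 9 * 9, 256),
                                  nn.ReLU(), nn.Linear(256, n_actions))

    def forward(self, screen):       # screen: (batch, 1, 84, 84) grayscale
        return self.head(self.conv(screen))

def epsilon_greedy(qnet, screen):
    # Explore with probability EPS, otherwise take the greedy action.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    with torch.no_grad():
        return int(qnet(screen).argmax(dim=1).item())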
Playing Atari with Deep Reinforcement Learning
We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them.
Comment: NIPS Deep Learning Workshop 2013
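The training signal behind this approach is the one-step Q-learning target computed from raw-pixel transitions replayed from past experience. The sketch below shows one such training step; it assumes a q_net that maps a batch of frames to per-action Q-values (for instance a CNN like the one sketched earlier), and the replay-buffer layout, batch size, and loss are illustrative assumptions rather than the paper's exact settings.

# Hedged sketch of a DQN-style training step with experience replay.
import random
import torch
import torch.nn.functional as F

GAMMA = 0.99  # placeholder discount factor

def dqn_training_step(q_net, optimizer, replay_buffer, batch_size=32):
    # replay_buffer: list of (state, action, reward, next_state, done) tensors.
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = map(torch.stack, zip(*batch))

    # Q(s, a) for the actions actually taken.
    q_values = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)

    # One-step Q-learning target: r + gamma * max_a' Q(s', a'),
    # with no bootstrap on terminal transitions.
    with torch.no_grad():
        next_max = q_net(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_max * (1.0 - dones.float())

    loss = F.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()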
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
This paper takes a step towards temporal reasoning in a dynamically changing
video, not in the pixel space that constitutes its frames, but in a latent
space that describes the non-linear dynamics of the objects in its world. We
introduce the Kalman variational auto-encoder, a framework for unsupervised
learning of sequential data that disentangles two latent representations: an
object's representation, coming from a recognition model, and a latent state
describing its dynamics. As a result, the evolution of the world can be
imagined and missing data imputed, both without the need to generate high
dimensional frames at each time step. The model is trained end-to-end on videos
of a variety of simulated physical systems, and outperforms competing methods
in generative and missing data imputation tasks.
Comment: NIPS 2017
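The structural idea in this abstract is the split between a recognition model that maps each frame to a low-dimensional object representation and a linear Gaussian state-space model whose latent state carries the dynamics, so that a Kalman filter/smoother can predict and impute without decoding full frames. The sketch below only outlines that two-level structure; the dimensions, layer sizes, and parameterisation are illustrative assumptions, not the paper's.

# Rough sketch of the two disentangled latent representations.
import torch
import torch.nn as nn

class Recognition(nn.Module):
    """Encodes a frame x_t into an object representation a_t (mean only here)."""
    def __init__(self, frame_dim=32 * 32, a_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(frame_dim, 128),
                                 nn.ReLU(), nn.Linear(128, a_dim))

    def forward(self, x):
        return self.net(x)

class LinearGaussianDynamics(nn.Module):
    """Latent dynamics: z_{t+1} = A z_t + noise, with emission a_t = C z_t + noise."""
    def __init__(self, z_dim=4, a_dim=2):
        super().__init__()
        self.A = nn.Parameter(torch.eye(z_dim))           # transition matrix
        self.C = nn.Parameter(torch.zeros(a_dim, z_dim))  # emission matrix

    def predict(self, z):
        return z @ self.A.T  # one-step prediction of the next latent state

    def emit(self, z):
        return z @ self.C.T  # predicted object representation a_t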
Minimax Iterative Dynamic Game: Application to Nonlinear Robot Control Tasks
Multistage decision policies provide useful control strategies in
high-dimensional state spaces, particularly in complex control tasks. However,
they exhibit weak performance guarantees in the presence of disturbance, model
mismatch, or model uncertainties. This brittleness limits their use in
high-risk scenarios. We show how to quantify the sensitivity of such
policies in order to assess their robustness. We also propose a
minimax iterative dynamic game framework for designing robust policies in the
presence of disturbance/uncertainties. We test the quantification hypothesis on
a carefully designed deep neural network policy; we then pose a minimax
iterative dynamic game (iDG) framework for improving policy robustness in the
presence of adversarial disturbances. We evaluate our iDG framework on a
mecanum-wheeled robot, whose goal is to find a locally robust optimal multistage
policy that achieves a given goal-reaching task. The algorithm is simple and
adaptable for designing meta-learning/deep policies that are robust against
disturbances, model mismatch, or model uncertainties, up to a disturbance
bound. Videos of the results are on the author's website,
http://ecs.utdallas.edu/~opo140030/iros18/iros2018.html, while the code for
reproducing our experiments is on GitHub,
https://github.com/lakehanne/youbot/tree/rilqg. A self-contained environment
for reproducing our results is on Docker Hub,
https://hub.docker.com/r/lakehanne/youbotbuntu14/
Comment: 2018 International Conference on Intelligent Robots and Systems (IROS)
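The minimax idea in this abstract can be illustrated with a generic alternating update in which a bounded disturbance ascends the task cost while the policy descends it. The sketch below is only a gradient-based illustration of that game, not the authors' iDG implementation; cost_fn, policy, disturbance, the optimisers, and the disturbance bound are hypothetical placeholders.

# Hedged sketch of one alternating minimax update over policy and disturbance.
import torch

def minimax_idg_step(policy, disturbance, cost_fn, trajectories,
                     opt_policy, opt_dist, dist_bound=0.1):
    # Inner step: the disturbance ascends the task cost (adversary).
    cost = cost_fn(policy, disturbance, trajectories)
    opt_dist.zero_grad()
    (-cost).backward()
    opt_dist.step()
    # Project the disturbance parameters back into their admissible bound.
    with torch.no_grad():
        for p in disturbance.parameters():
            p.clamp_(-dist_bound, dist_bound)

    # Outer step: the policy descends the worst-case cost.
    cost = cost_fn(policy, disturbance, trajectories)
    opt_policy.zero_grad()
    cost.backward()
    opt_policy.step()
    return float(cost.detach())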