Deep Reinforcement Learning for Join Order Enumeration
Join order selection plays a significant role in query performance. However,
modern query optimizers typically employ static join enumeration algorithms
that do not receive any feedback about the quality of the resulting plan.
Hence, optimizers often repeatedly choose the same bad plan, as they do not
have a mechanism for "learning from their mistakes". In this paper, we argue
that existing deep reinforcement learning techniques can be applied to address
this challenge. These techniques, powered by artificial neural networks, can
automatically improve decision making by incorporating feedback from their
successes and failures. Towards this goal, we present ReJOIN, a
proof-of-concept join enumerator, along with preliminary results indicating
that ReJOIN can match or outperform the PostgreSQL optimizer in terms of plan
quality and join enumeration efficiency.
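The abstract frames join ordering as a sequence of decisions with a cost-based reward, but gives no implementation details. The following is a minimal sketch of that general framing under stated assumptions; the class and function names (JoinEnumEnv, estimate_cost) are hypothetical stand-ins, not ReJOIN's actual design.

```python
import random

# Sketch of join-order enumeration as an RL episode: the state is the
# set of subplans not yet joined, an action joins two of them, and the
# reward reflects the optimizer's cost estimate for the finished plan.

def estimate_cost(plan):
    # Stand-in for a call to the DBMS cost model (e.g. PostgreSQL's).
    return float(len(str(plan)))  # placeholder cost

class JoinEnumEnv:
    def __init__(self, relations):
        self.subplans = [(r,) for r in relations]

    def actions(self):
        # Every unordered pair of current subplans is a candidate join.
        n = len(self.subplans)
        return [(i, j) for i in range(n) for j in range(i + 1, n)]

    def step(self, action):
        i, j = action
        joined = (self.subplans[i], self.subplans[j])
        self.subplans = [p for k, p in enumerate(self.subplans)
                         if k not in (i, j)] + [joined]
        done = len(self.subplans) == 1
        # Reward arrives only at episode end: cheaper plan, higher reward.
        reward = -estimate_cost(self.subplans[0]) if done else 0.0
        return reward, done

# One episode under a random policy; a learned policy would replace
# random.choice with the network's action selection.
env = JoinEnumEnv(["A", "B", "C", "D"])
done = False
while not done:
    reward, done = env.step(random.choice(env.actions()))
print("final plan:", env.subplans[0], "reward:", reward)
```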
Sampled Policy Gradient for Learning to Play the Game Agar.io
In this paper, a new offline actor-critic learning algorithm is introduced:
Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an
approximated policy gradient by using the critic to evaluate the samples. This
sampling allows SPG to search the action-Q-value space more globally than
deterministic policy gradient (DPG), enabling it, in theory, to avoid more
local optima. SPG is compared to Q-learning and the actor-critic algorithms
CACLA and DPG in a pellet-collection task and a self-play environment in the
game Agar.io. The online game Agar.io has become massively popular on the
internet due to its intuitive game design and the ability to instantly compete
against players around the world. From the point of view of artificial
intelligence, this game is also intriguing: it has continuous input and action
spaces and allows diverse agents with complex strategies to compete against
each other. The experimental results show that Q-learning and CACLA outperform
a pre-programmed greedy bot in the pellet-collection task, but all algorithms
fail to outperform this bot in a fighting scenario. Analysis shows that SPG is
readily extensible through offline exploration, and it matches DPG in
performance even in its basic form without extensive sampling.
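The abstract states SPG's core mechanism: sample actions, score them with the critic, and use the scores to approximate a policy gradient. Below is a minimal sketch of that idea under stated assumptions; critic_q, actor_out, and all rates are hypothetical stand-ins, and the paper's network details and offline-exploration machinery are omitted.

```python
import numpy as np

def critic_q(state, action):
    # Stand-in critic: a fixed quadratic with its optimum at action = 0.7.
    return -np.sum((action - 0.7) ** 2)

def spg_target(state, actor_out, n_samples=16, sigma=0.2, rng=None):
    rng = rng or np.random.default_rng(0)
    best_a, best_q = actor_out, critic_q(state, actor_out)
    for _ in range(n_samples):
        # Gaussian sampling in the action space around the current policy.
        cand = np.clip(actor_out + rng.normal(0.0, sigma, actor_out.shape),
                       -1.0, 1.0)
        q = critic_q(state, cand)
        if q > best_q:           # keep the sample the critic rates highest
            best_a, best_q = cand, q
    # The actor is then trained (e.g. by squared error) toward best_a,
    # which approximates a step along the policy gradient.
    return best_a

state = np.zeros(4)                      # placeholder state
target = spg_target(state, actor_out=np.array([0.0]))
print("training target for the actor:", target)
```

Because the samples need not lie near the actor's current output, widening sigma or adding offline samples lets the search cover the action-Q-value space more globally than DPG's purely local gradient step.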
Short-term plasticity as cause-effect hypothesis testing in distal reward learning
Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity
as to which stimuli, actions, and rewards are causally related. Only the
repetition of reward episodes helps distinguish true cause-effect relationships
from coincidental occurrences. In the model proposed here, a novel plasticity
rule employs short and long-term changes to evaluate hypotheses on cause-effect
relationships. Transient weights represent hypotheses that are consolidated in
long-term memory only when they consistently predict or cause future rewards.
The main objective of the model is to preserve existing network topologies when
learning with ambiguous information flows. Learning is also improved by biasing
the exploration of the stimulus-response space towards actions that in the past
occurred before rewards. The model indicates under which conditions beliefs can
be consolidated in long-term memory, suggests a solution to the
plasticity-stability dilemma, and proposes an interpretation of the role of
short-term plasticity.
Comment: Biological Cybernetics, September 201
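The abstract describes transient weights as cause-effect hypotheses that are consolidated only when they consistently precede reward. The sketch below illustrates that idea only; the update rule, rates, and threshold are invented for illustration and are not the paper's actual plasticity equations.

```python
# Each connection carries a transient (short-term) weight, representing
# a cause-effect hypothesis, plus a consolidated long-term weight. The
# transient part grows when the connection's activity precedes reward
# and decays otherwise; only consistent predictors are consolidated.

class HypothesisSynapse:
    def __init__(self):
        self.short_term = 0.0   # transient hypothesis weight
        self.long_term = 0.0    # consolidated memory

    def update(self, active_before_reward, reward,
               gain=0.2, decay=0.05, threshold=0.8, rate=0.1):
        if active_before_reward and reward > 0:
            # Evidence for a cause-effect relationship: strengthen it.
            self.short_term += gain * reward
        else:
            # Coincidental pairings wash out over repeated episodes.
            self.short_term *= (1.0 - decay)
        if self.short_term > threshold:
            # Consistently predicts reward: consolidate into long-term
            # memory, leaving the rest of the network topology untouched.
            self.long_term += rate * (self.short_term - self.long_term)

    @property
    def weight(self):
        return self.short_term + self.long_term

syn = HypothesisSynapse()
for episode in range(20):
    syn.update(active_before_reward=True, reward=1.0)
print(f"short-term={syn.short_term:.2f}, long-term={syn.long_term:.2f}")
```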
Learning Policies from Self-Play with Policy Gradients and MCTS Value Estimates
In recent years, state-of-the-art game-playing agents often involve policies
that are trained in self-play processes where Monte Carlo tree search (MCTS)
algorithms and trained policies iteratively improve each other. The strongest
results have been obtained when policies are trained to mimic the search
behaviour of MCTS by minimising a cross-entropy loss. Because MCTS, by design,
includes an element of exploration, policies trained in this manner are also
likely to exhibit a similar extent of exploration. In this paper, we are
interested in learning policies for a project whose future goals include the
extraction of interpretable strategies, rather than state-of-the-art
game-playing performance. For these goals, we argue that such a degree of
exploration is undesirable, and we propose a novel objective function for
training policies that are not exploratory. We derive a policy gradient
expression for maximising this objective function, which can be estimated using
MCTS value estimates, rather than MCTS visit counts. We empirically evaluate
various properties of the resulting policies in a variety of board games.
Comment: Accepted at the IEEE Conference on Games (CoG) 201
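The abstract proposes maximising an objective estimated from MCTS value estimates rather than matching visit counts. A minimal sketch of one such objective, the expected value sum over actions of pi(a|s) * Q_hat(s, a) under a tabular softmax policy, is given below; the Q values and learning rate are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def value_objective_step(logits, q_hat, lr=0.5):
    # Objective: J = sum_a pi_a * q_a, with pi = softmax(logits).
    pi = softmax(logits)
    expected_value = float(pi @ q_hat)
    # dJ/d logits_k = pi_k * (q_k - J), the softmax policy gradient.
    grad = pi * (q_hat - expected_value)
    return logits + lr * grad, expected_value

# Hypothetical MCTS value estimates for three moves in one state.
q_hat = np.array([0.1, 0.9, 0.2])
logits = np.zeros(3)
for _ in range(50):
    logits, ev = value_objective_step(logits, q_hat)
print("policy:", np.round(softmax(logits), 3), "expected value: %.3f" % ev)
```

Iterating this update concentrates probability on the highest-valued move, which illustrates why such an objective yields less exploratory policies than a cross-entropy fit to MCTS visit counts.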