Transfer from Multiple MDPs
Transfer reinforcement learning (RL) methods leverage the experience
collected on a set of source tasks to speed up RL algorithms. A simple and
effective approach is to transfer samples from source tasks and include them
into the training set used to solve a given target task. In this paper, we
investigate the theoretical properties of this transfer method and we introduce
novel algorithms adapting the transfer process on the basis of the similarity
between source and target tasks. Finally, we report illustrative experimental
results in a continuous chain problem.
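The sample-transfer idea described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the transition format, and the similarity-weight interface are assumptions.

```python
import numpy as np

def build_transfer_dataset(target_samples, source_tasks, similarity):
    """Merge target-task samples with samples transferred from source tasks.

    target_samples : list of (s, a, r, s_next) transitions from the target task.
    source_tasks   : dict mapping task id -> list of transitions.
    similarity     : dict mapping task id -> weight in [0, 1], a proxy for how
                     close that source task is to the target (hypothetical
                     interface, not taken from the paper).
    Returns the merged transitions and per-sample importance weights that a
    batch RL algorithm could use when fitting on the combined training set.
    """
    samples = list(target_samples)
    weights = [1.0] * len(target_samples)      # target samples count fully
    for task_id, transitions in source_tasks.items():
        w = similarity.get(task_id, 0.0)
        if w > 0.0:                            # skip dissimilar tasks entirely
            samples.extend(transitions)
            weights.extend([w] * len(transitions))
    return samples, np.asarray(weights)
```

Down-weighting rather than discarding source samples is one simple way to adapt the transfer process to task similarity.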
Smoothing Policies and Safe Policy Gradients
Policy gradient algorithms are among the best candidates for the much
anticipated application of reinforcement learning to real-world control tasks,
such as the ones arising in robotics. However, the trial-and-error nature of
these methods introduces safety issues whenever the learning phase itself must
be performed on a physical system. In this paper, we address a specific safety
formulation, where danger is encoded in the reward signal and the learning
agent is constrained to never worsen its performance. By studying actor-only
policy gradient from a stochastic optimization perspective, we establish
improvement guarantees for a wide class of parametric policies, generalizing
existing results on Gaussian policies. This, together with novel upper bounds
on the variance of policy gradient estimators, allows us to identify those
meta-parameter schedules that guarantee monotonic improvement with high
probability. The two key meta-parameters are the step size of the parameter
updates and the batch size of the gradient estimators. By a joint, adaptive
selection of these meta-parameters, we obtain a safe policy gradient algorithm.
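The joint choice of step size and batch size can be sketched as below. This is an illustrative schedule under assumed names and constants, not the paper's exact formulas: the idea is that the step size shrinks with the relative estimation error of the gradient, and the batch grows when the estimate is too noisy to step safely.

```python
import numpy as np

def safe_meta_parameters(grad_estimate, var_bound, batch_size,
                         delta=0.05, lipschitz=1.0):
    """Illustrative joint selection of step size and next batch size.

    grad_estimate : estimated policy gradient (array).
    var_bound     : upper bound on the variance of the gradient estimator.
    batch_size    : number of trajectories used for the current estimate.
    delta         : allowed failure probability (assumption).
    lipschitz     : smoothness constant of the objective (assumption).
    """
    # Confidence radius of the estimate shrinks as 1/sqrt(batch_size).
    eps = np.sqrt(var_bound / (delta * batch_size))
    grad_norm = np.linalg.norm(grad_estimate)
    if grad_norm <= eps:
        # Estimate is too noisy: do not update; double the batch instead.
        return 0.0, 2 * batch_size
    # Conservative step size: shrinks as the relative error eps/||g|| grows.
    alpha = (1.0 - eps / grad_norm) / (2.0 * lipschitz)
    return alpha, batch_size
```

A schedule of this shape only takes a step when the gradient estimate is significantly nonzero, which is one way to obtain high-probability monotonic improvement.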
Unimodal Thompson Sampling for Graph-Structured Arms
We study, to the best of our knowledge, the first Bayesian algorithm for
unimodal Multi-Armed Bandit (MAB) problems with graph structure. In this
setting, each arm corresponds to a node of a graph and each edge provides a
relationship, unknown to the learner, between two nodes in terms of expected
reward. Furthermore, for any node of the graph there is a path leading to the
unique node providing the maximum expected reward, along which the expected
reward is monotonically increasing. Previous results on this setting describe
the behavior of frequentist MAB algorithms. In our paper, we design a Thompson
Sampling-based algorithm whose asymptotic pseudo-regret matches the lower bound
for the considered setting. We show that, as happens in a wide range of
scenarios, Bayesian MAB algorithms dramatically outperform frequentist ones. In
particular, we provide a thorough experimental evaluation of the performance of
our algorithm and state-of-the-art ones as the properties of the graph vary.
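One round of a Thompson Sampling scheme that exploits graph unimodality can be sketched as follows. Restricting the candidate set to the empirical leader and its neighbors follows the general unimodal-bandit idea; the Bernoulli reward model, Beta priors, and all interface details here are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def uts_step(rng, graph, pulls, rewards):
    """One round of a unimodal Thompson Sampling sketch (Bernoulli arms).

    graph   : dict node -> list of neighbor nodes.
    pulls   : dict node -> number of pulls of that arm so far.
    rewards : dict node -> cumulative (integer) reward of that arm so far.
    Returns the arm to pull next.
    """
    # Empirical leader: arm with the highest empirical mean (0.5 if unpulled).
    means = {n: rewards[n] / pulls[n] if pulls[n] else 0.5 for n in graph}
    leader = max(means, key=means.get)
    # Unimodality: the optimum is reachable through improving neighbors,
    # so it suffices to compare the leader with its neighborhood.
    candidates = [leader] + graph[leader]
    # Beta(1 + successes, 1 + failures) posterior sample per candidate.
    theta = {n: rng.beta(1 + rewards[n], 1 + pulls[n] - rewards[n])
             for n in candidates}
    return max(theta, key=theta.get)
```

Sampling only over the leader's neighborhood is what lets the regret scale with the maximum degree rather than with the total number of arms.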
Coherent Transport of Quantum States by Deep Reinforcement Learning
Some problems in physics can be handled only after a suitable \textit{ansatz}
solution has been guessed. Such a method therefore resists generalization and
is of limited scope. The coherent transport by adiabatic
passage of a quantum state through an array of semiconductor quantum dots
provides a par excellence example of such an approach, where it is necessary to
introduce its so-called counter-intuitive ansatz pulse sequence for the control
gates. By contrast, deep reinforcement learning has proven able to solve
very complex sequential decision-making problems involving competition between
short-term and long-term rewards, despite a lack of prior knowledge. We show
that in the above problem deep reinforcement learning discovers control
sequences outperforming the \textit{ansatz} counter-intuitive sequence. Even
more interestingly, it discovers novel strategies when realistic disturbances
affect the ideal system, with better speed and fidelity when energy detuning
between the ground states of quantum dots or dephasing are added to the master
equation, also mitigating the effects of losses. This method enables online
updates of realistic systems, since policy convergence is accelerated by
exploiting prior knowledge when available. Deep reinforcement learning proves
effective at controlling the dynamics of quantum states, and more generally it
applies whenever an ansatz solution is unknown or insufficient to treat the
problem effectively.
Eating at Home and "Dining" Out? Commensalities in the Neolithic and Late Chalcolithic in the Near East
This paper attempts to draw a picture of different kinds of commensalities in
the Near Eastern Pottery Neolithic (7th millennium BC) through an analysis of
consumption vessels. The case study will be the Syrian and Turkish regions of
the Northern Levant. I shall underline the strong symbolic function of vessels
in distinguishing commensal events and argue that the basic role of
commensality remains largely unmodified until the end of the Ubaid period (2nd
half of 5th millennium BC). The beginning of the Late Chalcolithic then marks
a major change. At this point, the development of different types of
commensalities leads to a decrease in the role of pottery as a symbolic marker
of commensal events.