6,380 research outputs found
Performance Dynamics and Termination Errors in Reinforcement Learning: A Unifying Perspective
In reinforcement learning, a decision needs to be made at some point as to
whether it is worthwhile to carry on with the learning process or to terminate
it. In many such situations, stochastic elements are often present which govern
the occurrence of rewards, with the sequential occurrences of positive rewards
randomly interleaved with negative rewards. For most practical learners, the
learning is considered useful if the number of positive rewards always exceeds
the number of negative ones. A situation that often calls for learning termination is
when the number of negative rewards exceeds the number of positive rewards.
However, while this seems reasonable, the error of premature termination,
whereby learning is terminated and declared a failure even though the positive
rewards would eventually far outnumber the negative ones, can be
significant. In this paper, using combinatorial analysis, we study the
probability of wrongly terminating a reinforcement learning activity, an error
that undermines the effectiveness of an optimal policy, and we show that the
resultant error can be quite high. Whilst we demonstrate mathematically that
such errors can never be eliminated, we propose some practical mechanisms that
can effectively reduce such errors. Simulation experiments have been carried
out, the results of which are in close agreement with our theoretical findings. Comment: Short Paper in AIKE 201
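As a rough illustration of why this premature-termination error can be large (not the paper's combinatorial analysis; a minimal Monte Carlo sketch assuming hypothetical ±1 Bernoulli rewards with success probability p and a naive rule that terminates the first time negatives outnumber positives):

import random

def premature_termination_prob(p=0.6, horizon=2_000, trials=10_000, seed=0):
    """Estimate the probability that cumulative negative rewards ever exceed
    cumulative positive rewards within `horizon` steps, even though the
    reward drift is positive (p > 0.5) so positives win out in the long run."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0
        for _ in range(horizon):
            s += 1 if rng.random() < p else -1
            if s < 0:  # naive rule: negatives outnumber positives -> terminate
                hits += 1
                break
    return hits / trials

if __name__ == "__main__":
    p = 0.6
    print("estimated premature-termination probability:",
          round(premature_termination_prob(p), 3))
    print("random-walk prediction (1 - p) / p:", round((1 - p) / p, 3))

For a ±1 random walk with upward drift (p > 0.5), the chance of ever dipping below zero is (1 - p)/p, so even at p = 0.6 the naive rule wrongly terminates roughly two thirds of the time, consistent with the claim that the resulting error can be quite high.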
Probing Electroweak Symmetry Breaking Mechanism at the LHC: A Guideline from Power Counting Analysis
We formulate the equivalence theorem as a theoretical criterion for
sensitively probing the electroweak symmetry breaking mechanism, and develop a
precise power counting method for the chiral Lagrangian formulated electroweak
theories. Armed with these, we perform a systematic analysis on the
sensitivities of the scattering processes
and for testing all possible effective bosonic
operators in the chiral Lagrangian formulated electroweak theories at the CERN
Large Hadron Collider (LHC). The analysis shows that these two kinds of
processes are "complementary" in probing the electroweak symmetry breaking
sector. Comment: Extended version, 11-page LaTeX file and 3 separate PS figures. To be
published in Mod. Phys. Lett.
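For reference, the equivalence theorem invoked here can be stated schematically as (standard notation rather than the paper's; C is a scheme-dependent multiplicative factor close to unity):

\[
  T\big(V_L^{a_1},\dots,V_L^{a_n};\,\Phi\big)
  \;=\; C\,T\big(-i\pi^{a_1},\dots,-i\pi^{a_n};\,\Phi\big)
  \;+\;\mathcal{O}\!\left(M_W/E\right),
\]

where the V_L are longitudinally polarized W/Z bosons, the pi^a the corresponding would-be Goldstone bosons, Phi any other on-shell particles, and E >> M_W the scattering energy. It is the dominance of the Goldstone amplitudes at high energy that makes longitudinal gauge-boson scattering a sensitive probe of the symmetry-breaking sector.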
Stochastic Reinforcement Learning
In reinforcement learning episodes, the rewards and punishments are often
non-deterministic, and there are invariably stochastic elements governing the
underlying situation. Such stochastic elements are often numerous and cannot be
known in advance, and they have a tendency to obscure the underlying reward
and punishment patterns. Indeed, if stochastic elements were absent, the same
outcome would occur every time and the learning problems involved could be
greatly simplified. In addition, in most practical situations, the cost of an
observation to receive either a reward or punishment can be significant, and
one would wish to arrive at the correct learning conclusion by incurring
minimum cost. In this paper, we present a stochastic approach to reinforcement
learning which explicitly models the variability present in the learning
environment and the cost of observation. Criteria and rules for learning
success are quantitatively analyzed, and probabilities of exceeding the
observation cost bounds are also obtained. Comment: AIKE 201
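As a toy illustration of the observation-cost question (not the paper's model; a hypothetical sketch with ±1 Bernoulli rewards, a fixed cost per observation, and a simple stopping rule that declares success or failure when the cumulative reward first reaches ±threshold):

import random

def cost_overrun_prob(p=0.55, threshold=20, cost_per_obs=1.0,
                      budget=500.0, trials=10_000, seed=0):
    """Rewards are +1 with probability p and -1 otherwise.  Learning is
    declared successful (failed) when the cumulative reward first reaches
    +threshold (-threshold).  Each observation costs `cost_per_obs`; return
    the estimated probability that the total cost exceeds `budget` before a
    decision is reached."""
    rng = random.Random(seed)
    overruns = 0
    for _ in range(trials):
        s, spent = 0, 0.0
        while abs(s) < threshold:
            s += 1 if rng.random() < p else -1
            spent += cost_per_obs
            if spent > budget:
                overruns += 1
                break
    return overruns / trials

if __name__ == "__main__":
    print("P(observation cost exceeds budget):", round(cost_overrun_prob(), 3))

Sweeping p, threshold, and budget in such a sketch gives an empirical feel for how the probability of exceeding an observation-cost bound trades off against the reliability of the learning conclusion.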
Theoretical studies of 63Cu Knight shifts of the normal state of YBa2Cu3O7
The 63Cu Knight shifts and g factors for the normal state of YBa2Cu3O7 in
the tetragonal phase are theoretically studied in a uniform way using
high-order (fourth-order) perturbation formulas of these parameters for a 3d9 ion under
tetragonally elongated octahedra. The calculations are quantitatively
correlated with the local structure of the Cu2+(2) site in YBa2Cu3O7. The
theoretical results show good agreement with the observed values, and the
improvements are achieved by adopting fewer adjustable parameters than in
previous works. It is found that the significant anisotropy of the
Knight shifts is mainly attributed to the anisotropy of the g factors due to
the orbital interactions. Comment: 5 pages
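For context, the origin of the anisotropy can already be seen in the familiar lowest-order crystal-field expressions for a 3d9 (Cu2+) ion in a tetragonally elongated octahedron (the paper works to fourth order; the notation below is the standard textbook one, not necessarily the paper's):

\[
  g_\parallel \simeq g_s - \frac{8\lambda}{\Delta_1},
  \qquad
  g_\perp \simeq g_s - \frac{2\lambda}{\Delta_2},
\]

where g_s ≈ 2.0023 is the free-electron value, lambda (negative for Cu2+) is the spin-orbit coupling constant, and Delta_1 and Delta_2 are the 2B1g -> 2B2g and 2B1g -> 2Eg crystal-field excitation energies. Because lambda is negative and the numerical factor for the parallel direction is four times larger, g_parallel > g_perp > g_s, and this g-factor anisotropy carries over to the Knight shifts through their orbital contributions.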
Coulomb Drag in Graphene
We study the Coulomb drag between two single graphene sheets in intrinsic and
extrinsic graphene systems with no interlayer tunneling. The general expression
for the nonlinear susceptibility appropriate for single-layer graphene systems
is derived using the diagrammatic perturbation theory, and the corresponding
exact zero-temperature expression is obtained analytically. We find that,
despite the existence of a non-zero conductivity in an intrinsic graphene
layer, the Coulomb drag between intrinsic graphene layers vanishes at all
temperatures. In extrinsic systems, we obtain numerical results and an
approximate analytical result for the drag resistivity $\rho_D$, and
find that $\rho_D$ goes as $T^2$ at low temperature $T$, as $d^{-4}$
for large bilayer separation $d$, and as $n^{-3}$ for high carrier density $n$. We
also discuss qualitatively the effect of plasmon-induced enhancement on the
Coulomb drag, which should occur at a temperature of the order of, or higher
than, the Fermi temperature.
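For orientation, the nonlinear susceptibility enters the drag through the standard structure (prefactors and conventions omitted; this is the generic Coulomb-drag expression, not necessarily the paper's exact normalization):

\[
  \sigma_D \;\propto\; \frac{1}{T}
  \int\!\frac{d^2q}{(2\pi)^2}\int_0^{\infty}\! d\omega\;
  \frac{\big|U_{12}(q,\omega)\big|^{2}\,
        \vec{\Gamma}_1(q,\omega)\cdot\vec{\Gamma}_2(q,\omega)}
       {\sinh^{2}\!\big(\hbar\omega/2k_B T\big)},
  \qquad
  \rho_D \simeq \frac{\sigma_D}{\sigma_1\sigma_2},
\]

where Gamma_i is the nonlinear susceptibility of layer i, U_12 the dynamically screened interlayer interaction, and sigma_i the single-layer conductivities. Consistent with the statement above, a particle-hole-symmetric (intrinsic) layer has a vanishing nonlinear susceptibility, which is why the drag between intrinsic graphene layers vanishes at all temperatures even though each layer conducts.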