The Yang-Mills gradient flow and SU(3) gauge theory with 12 massless fundamental fermions in a colour-twisted box
We perform a step-scaling investigation of the running coupling constant,
using the gradient-flow scheme, in SU(3) gauge theory with twelve massless
fermions in the fundamental representation. The Wilson plaquette gauge action
and massless unimproved staggered fermions are used in the simulations. Our
lattice data are prepared at high accuracy, such that the statistical error for
the renormalised coupling, g_GF, is at the sub-percent level. To investigate
the reliability of the continuum extrapolation, we employ two different lattice
discretisations to obtain g_GF. For our simulation setting, the corresponding
gauge-field averaging radius in the gradient flow has to be almost half of the
lattice size, in order to have this extrapolation under control. We can
determine the renormalisation group evolution of the coupling up to g^2_GF ~ 6,
before the onset of the bulk phase structure. In this infrared regime, the
running of the coupling is significantly slower than the two-loop perturbative
prediction, although we cannot draw a definite conclusion regarding possible
infrared conformality of this theory. Furthermore, we comment on the issue
regarding the continuum extrapolation near an infrared fixed point. In addition
to adopting a fit ansatz à la Symanzik for this task, we discuss a
possible alternative procedure inspired by properties derived from low-energy
scale invariance at strong coupling. Based on this procedure, we propose a
finite-size scaling method for the renormalised coupling as a means to search
for an infrared fixed point. Using this method, it can be shown that the behaviour
of the theory around g^2_GF ~ 6 is still not governed by possible infrared
conformality.
Comment: 24 pages, 6 figures; published version; Appendix A added for
tabulating data; one reference included; typos corrected
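The two-loop perturbative running against which the measured coupling is compared can be made concrete with a small numerical sketch. The beta-function coefficients below are the standard universal ones for SU(3) with nf fundamental flavours; the Euler integration, starting value, and step count are illustrative choices, not the paper's analysis.

```python
import math

def beta_coeffs(nf):
    """Universal two-loop beta-function coefficients for SU(3) with
    nf fundamental Dirac flavours: beta(g) = -b0*g^3 - b1*g^5."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (16.0 * math.pi ** 2)
    b1 = (102.0 - 38.0 * nf / 3.0) / (16.0 * math.pi ** 2) ** 2
    return b0, b1

def two_loop_fixed_point(nf):
    """Two-loop IR fixed point g^2* = -b0/b1 (meaningful when b0 > 0, b1 < 0,
    as is the case for nf = 12)."""
    b0, b1 = beta_coeffs(nf)
    return -b0 / b1

def run_toward_ir(g2_start, nf, efolds, steps=20000):
    """Euler-integrate dg^2/dln(mu) = -2*(b0*(g^2)^2 + b1*(g^2)^3) for
    `efolds` e-folds of scale change toward the infrared (decreasing ln mu)."""
    b0, b1 = beta_coeffs(nf)
    g2 = g2_start
    dt = efolds / steps
    for _ in range(steps):
        g2 += 2.0 * (b0 * g2 ** 2 + b1 * g2 ** 3) * dt
    return g2
```

For nf = 12 this predicts an infrared fixed point at g^2 ≈ 9.5, which the coupling approaches from below; the abstract's point is that the measured running up to g^2_GF ~ 6 is slower than this two-loop curve.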
Intellectual Capital Architectures and Bilateral Learning: A Framework For Human Resource Management
Both researchers and managers are increasingly interested in how firms can pursue bilateral learning; that is, simultaneously exploring new knowledge domains while exploiting current ones (cf. March, 1991). To address this issue, this paper introduces a framework of intellectual capital architectures that combine unique configurations of human, social, and organizational capital. These architectures support bilateral learning by helping to create supplementary alignment between human and social capital as well as complementary alignment between people-embodied knowledge (human and social capital) and organization-embodied knowledge (organizational capital). In order to establish the context for bilateral learning, the framework also identifies unique sets of HR practices that may influence the combinations of human, social, and organizational capital.
Learning Equations for Extrapolation and Control
We present an approach to identify concise equations from data using a
shallow neural network approach. In contrast to ordinary black-box regression,
this approach allows understanding functional relations and generalizing them
from observed data to unseen parts of the parameter space. We show how to
extend the class of learnable equations for a recently proposed equation
learning network to include divisions, and we improve the learning and model
selection strategy to be useful for challenging real-world data. For systems
governed by analytical expressions, our method can in many cases identify the
true underlying equation and extrapolate to unseen domains. We demonstrate its
effectiveness by experiments on a cart-pendulum system, where only 2 random
rollouts are required to learn the forward dynamics and successfully achieve
the swing-up task.
Comment: 9 pages, 9 figures, ICML 2018
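As a concrete illustration of the kind of network described above, the sketch below builds a single hidden layer with interpretable base functions (identity, sine, cosine, a two-input multiplication unit) followed by a division output whose denominator is kept away from zero. The layer width, unit mix, and clamping threshold are illustrative assumptions; the paper's actual architecture, regularized division, and training procedure are not reproduced here.

```python
import numpy as np

def eql_layer(x, W, b):
    """One hidden layer of an equation-learning network (sketch): a linear map
    followed by interpretable base functions. Column roles are fixed by hand:
    identity, sin, cos, and a multiplication unit consuming two inputs."""
    z = x @ W + b                      # shape (batch, 5)
    ident = z[:, 0:1]
    sin_u = np.sin(z[:, 1:2])
    cos_u = np.cos(z[:, 2:3])
    mult = z[:, 3:4] * z[:, 4:5]       # multiplication unit: product of two inputs
    return np.concatenate([ident, sin_u, cos_u, mult], axis=1)

def division_output(h, Wn, Wd, theta=1e-3):
    """Division output unit: numerator/denominator, with small denominators
    clamped to a positive threshold. This crude guard stands in for the
    learnable, regularized division discussed in the paper."""
    num = h @ Wn
    den = h @ Wd
    den = np.where(np.abs(den) < theta, theta, den)  # keep denominator away from 0
    return num / den
```

In the paper the base-function assignment is part of the learned architecture and sparsity is enforced during training; here everything is hard-wired purely to show the forward structure.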
Spartan Random Processes in Time Series Modeling
A Spartan random process (SRP) is used to estimate the correlation structure
of a time series and to predict (extrapolate) its values. SRPs are
motivated by statistical physics and can be viewed as Ginzburg-Landau
models. The temporal correlations of the SRP are modeled in terms of
`interactions' between the field values. Model parameter inference employs the
computationally fast modified method of moments, which is based on matching
sample energy moments with the respective stochastic constraints. The
parameters thus inferred are then compared with those obtained by means of the
maximum likelihood method. The performance of the Spartan predictor (SP) is
investigated using real time series of the quarterly S&P 500 index. SP
prediction errors are compared with those of the Kolmogorov-Wiener predictor.
Two predictors, one of which is explicit, are derived and used for extrapolation.
The performance of the predictors is similarly evaluated.
Comment: 10 pages, 3 figures, Proceedings of APFA
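The sample side of the moment matching mentioned above can be sketched as follows: from a series one computes the mean square of the values, of the first differences, and of the second differences, which play the role of the sample "energy moments" that are then equated with their model expectations. The exact Spartan stochastic constraints and the resulting parameter solve are not reproduced here; this shows only the empirical moments.

```python
import numpy as np

def sample_energy_moments(x, dt=1.0):
    """Sample 'energy moments' of a time series: mean square of the values,
    of the (scaled) first differences, and of the (scaled) second differences.
    In a moment-matching scheme these are equated with model expectations."""
    x = np.asarray(x, dtype=float)
    s0 = np.mean(x ** 2)
    d1 = np.diff(x) / dt               # finite-difference estimate of x'
    s1 = np.mean(d1 ** 2)
    d2 = np.diff(x, n=2) / dt ** 2     # finite-difference estimate of x''
    s2 = np.mean(d2 ** 2)
    return s0, s1, s2
```

For a pure sinusoid sampled over whole periods, all three moments come out close to 1/2, which is a quick sanity check on the discretization.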
Traversing non-convex regions
This paper considers a method for dealing with non-convex objective functions in optimization problems. It uses the Hessian matrix and combines features of trust-region techniques and continuous steepest descent trajectory-following in order to construct an algorithm which performs curvilinear searches away from the starting point of each iteration. A prototype implementation yields promising results.
Peer reviewed
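A generic curvilinear step of this flavour can be sketched via the Hessian eigendecomposition: convex directions receive a damped Newton-like component scaled by alpha, while directions of negative (or negligible) curvature contribute a descent-oriented component scaled by alpha squared, so the search path bends as alpha grows. This is a hedged illustration of the general idea, not the paper's algorithm; the thresholds and scalings are assumptions.

```python
import numpy as np

def curvilinear_step(grad, hess, alpha, eps=1e-8):
    """Sketch of a curvilinear search path x(alpha): Newton-like moves along
    positive-curvature eigendirections, negative-curvature exploitation along
    the rest. Returns the displacement from the current iterate."""
    w, V = np.linalg.eigh(hess)              # ascending eigenvalues, columns = eigvecs
    g = V.T @ grad                           # gradient in the eigenbasis
    step = np.zeros_like(grad, dtype=float)
    for i in range(len(w)):
        if w[i] > eps:
            # convex direction: damped Newton component, linear in alpha
            step -= (alpha * g[i] / w[i]) * V[:, i]
        else:
            # non-convex or flat direction: follow the descent side of the
            # eigenvector, entering at second order in alpha
            d = V[:, i] if g[i] <= 0 else -V[:, i]
            step += (alpha ** 2) * d
    return step
```

On a simple saddle such as f(x) = x0^2 - x1^2 this produces a path that descends both the convex and the concave directions for small alpha.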
Probabilistic Line Searches for Stochastic Optimization
In deterministic optimization, line searches are a standard tool ensuring
stability and efficiency. Where only stochastic gradients are available, no
direct equivalent has so far been formulated, because uncertain gradients do
not allow for a strict sequence of decisions collapsing the search space. We
construct a probabilistic line search by combining the structure of existing
deterministic methods with notions from Bayesian optimization. Our method
retains a Gaussian process surrogate of the univariate optimization objective,
and uses a probabilistic belief over the Wolfe conditions to monitor the
descent. The algorithm has very low computational cost, and no user-controlled
parameters. Experiments show that it effectively removes the need to define a
learning rate for stochastic gradient descent.
Comment: Extended version of the NIPS '15 conference paper; includes detailed
pseudo-code; 59 pages, 35 figures
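The probabilistic belief over the Wolfe conditions can be illustrated numerically: treating f(0) and f'(0) as known and placing a bivariate Gaussian belief over (f(t), f'(t)), both Wolfe residuals are jointly Gaussian, and the probability that both hold can be estimated by sampling. The Monte Carlo estimator below is a sketch under those assumptions; the paper instead evaluates this probability in closed form from the GP posterior.

```python
import numpy as np

def prob_wolfe(f0, df0, t, mu_ft, mu_dft, cov, c1=1e-4, c2=0.9,
               n=200000, seed=0):
    """Monte Carlo estimate of P(weak Wolfe conditions hold at step size t),
    given a bivariate Gaussian belief over (f(t), f'(t)) with mean
    (mu_ft, mu_dft) and covariance cov; f(0), f'(0) are treated as known."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal([mu_ft, mu_dft], cov, size=n)
    ft, dft = samples[:, 0], samples[:, 1]
    a = f0 - ft + c1 * t * df0      # sufficient-decrease (Armijo) residual
    b = dft - c2 * df0              # curvature-condition residual
    return np.mean((a >= 0) & (b >= 0))
```

With a tight belief centred on a point that clearly satisfies (or clearly violates) both conditions, the estimate approaches 1 (or 0), matching the deterministic Wolfe check as a limiting case.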