On the Mutual Information in Conformal Field Theory
In this work, we study the universal behavior of the mutual information of
two disjoint spheres in a conformal field theory (CFT). By using the operator
product expansion of the spherical twist operator in terms of the conformal
family, we show that the large-distance expansion of the mutual information can
be cast in terms of conformal blocks. We develop a prescription for computing
the coefficients in front of the conformal blocks. For a single conformal
family, the leading nonvanishing contribution to the mutual information comes
from the bilinear operators. We show that the coefficients of these operators
take universal forms, and this universal behavior persists for the bilinear
operators with derivatives as well. Consequently, the first few leading-order
contributions to the mutual information in CFT take universal forms. To
illustrate our framework, we discuss the free scalars and free fermions in
various dimensions. For the free scalars, we compute the mutual information to
the next-to-leading order and find good agreement with the improved numerical
lattice result. For the free fermion, we compute the leading-order result,
which is of universal form, and find good agreement with the numerical study.
Our formalism could potentially be applied to any CFT.
Comment: 27+14 pages, 8 figures; references added
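To make the structure of such an expansion concrete, here is a schematic form (the notation is assumed for illustration, not taken from the paper): for two spheres of radii R_A and R_B whose centers are separated by a distance r much larger than either radius, the mutual information organizes into a sum over conformal blocks in a small cross-ratio, and a single primary of dimension Delta first contributes through bilinear (two-copy) operators at order (R_A R_B / r^2)^{2 Delta}:

% Schematic large-distance expansion (illustrative notation, not the paper's):
% z is the small cross-ratio controlling the expansion.
\[
  I(A,B) \;=\; \sum_{\mathcal{O}} b_{\mathcal{O}} \, G_{\Delta_{\mathcal{O}},\, s_{\mathcal{O}}}(z),
  \qquad
  z \sim \frac{R_A R_B}{r^2} \ll 1,
\]
\[
  I(A,B) \;\simeq\; C_{\Delta} \left(\frac{R_A R_B}{r^2}\right)^{2\Delta} + \dots
  \qquad \text{(leading bilinear contribution of a primary of dimension } \Delta\text{)}.
\]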
SCOPE: Scalable Composite Optimization for Learning on Spark
Many machine learning models, such as logistic regression (LR) and support
vector machines (SVM), can be formulated as composite optimization problems.
Recently, many distributed stochastic optimization (DSO) methods have been
proposed to solve large-scale composite optimization problems and have shown
better performance than traditional batch methods. However, most of these
DSO methods are not scalable enough. In this paper, we propose a novel DSO
method, called scalable composite optimization for learning (SCOPE), and
implement it on the fault-tolerant distributed platform Spark. SCOPE is both
computation-efficient and communication-efficient. Theoretical analysis shows
that SCOPE converges at a linear rate when the objective
function is convex. Furthermore, empirical results on real datasets show that
SCOPE can outperform other state-of-the-art distributed learning methods on
Spark, including both batch learning methods and DSO methods.
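As a concrete illustration of the kind of composite problem SCOPE targets, the following is a minimal single-machine NumPy sketch (not the authors' SCOPE algorithm or its Spark implementation): L1-regularized logistic regression splits into a smooth average loss plus a nonsmooth regularizer and can be solved with proximal gradient steps.

# Minimal sketch (illustrative only, not SCOPE): L1-regularized logistic
# regression as a composite problem  min_w f(w) + g(w), where f is the smooth
# average logistic loss and g(w) = lam * ||w||_1, solved by proximal gradient.
import numpy as np

def logistic_loss_grad(w, X, y):
    """Smooth part f(w): average logistic loss and gradient (labels in {-1, +1})."""
    z = y * (X @ w)
    loss = np.mean(np.log1p(np.exp(-z)))
    grad = -(X.T @ (y / (1.0 + np.exp(z)))) / X.shape[0]
    return loss, grad

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, handling the nonsmooth part g."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, y, lam=0.1, step=0.1, iters=200):
    """Gradient step on f followed by the prox of g, repeated for a fixed budget."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        _, grad = logistic_loss_grad(w, X, y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage on synthetic data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=200))
w_hat = proximal_gradient(X, y)

A DSO method such as SCOPE distributes the computation of the smooth part's gradients across workers (here, a Spark cluster) while keeping the composite structure; the sketch above only shows the problem formulation, not that distribution.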