Explicit formulas of Euler sums via multiple zeta values
Flajolet and Salvy pointed out that every Euler sum is a Q-linear
combination of multiple zeta values (MZVs). However, in the literature, there is no
formula completely revealing this relation. In this paper, using permutations
and compositions, we establish two explicit formulas for the Euler sums, and
show that all the Euler sums are indeed expressible in terms of MZVs. Moreover,
we apply this method to the alternating Euler sums, and show that all the
alternating Euler sums are reducible to alternating MZVs. Some famous results,
such as the Euler theorem, the Borwein--Borwein--Girgensohn theorems, and the
Flajolet--Salvy theorems can be obtained directly from our theory. Some other
special cases are also presented here explicitly. The corresponding Maple
programs are developed to help us compute all the sums up to a given weight in
both the non-alternating and the alternating cases.
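For orientation, the objects involved can be written in standard notation (which may differ from the paper's own conventions): the classical linear Euler sum and the multiple zeta value are

```latex
S_{p,q} = \sum_{n=1}^{\infty} \frac{H_n^{(p)}}{n^q},
\qquad
H_n^{(p)} = \sum_{k=1}^{n} \frac{1}{k^p},
\qquad
\zeta(s_1,\ldots,s_k) = \sum_{n_1 > n_2 > \cdots > n_k \ge 1}
  \frac{1}{n_1^{s_1} n_2^{s_2} \cdots n_k^{s_k}},
```

and the Euler theorem cited above is the prototype of such reductions, e.g. $S_{1,2} = 2\zeta(3)$.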
Unsupervised Learning of Frustrated Classical Spin Models I: Principal Component Analysis
This work pursues the goal of determining whether artificial intelligence can
recognize phase transitions without prior human knowledge. If this becomes successful,
it can be applied to, for instance, analyze data from quantum simulation of
unsolved physical models. Toward this goal, we first need to apply the machine
learning algorithm to well-understood models and see whether the outputs are
consistent with our prior knowledge, which serves as the benchmark of this
approach. In this work, we feed the computer with data generated by the
classical Monte Carlo simulation of the XY model on frustrated triangular and
union-jack lattices, which has two order parameters and exhibits two phase
transitions. We show that the outputs of the principal component analysis agree
very well with our understanding of different orders in different phases, and
the temperature dependences of the major components detect the nature and the
locations of the phase transitions. Our work offers promise for using machine
learning techniques to study sophisticated statistical models, and our results
can be further improved by using principal component analysis with kernel
tricks and the neural network method.
Comments: 8 pages, 11 figures
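The linear-PCA step described above can be sketched on toy data; everything here (the lattice size, the two synthetic "temperatures", and the (cos, sin) encoding) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_samples = 100, 200      # hypothetical lattice size / sample count

# "Ordered" configurations: site angles narrowly spread around 0 (low T).
low_T = rng.normal(0.0, 0.1, size=(n_samples, n_sites))
# "Disordered" configurations: site angles uniform on [0, 2*pi) (high T).
high_T = rng.uniform(0.0, 2 * np.pi, size=(n_samples, n_sites))

def features(angles):
    """Encode an XY configuration as (cos, sin) of every site angle."""
    return np.concatenate([np.cos(angles), np.sin(angles)], axis=1)

X = np.vstack([features(low_T), features(high_T)])
X -= X.mean(axis=0)                        # center the data before PCA

# PCA via SVD: rows of Vt are the principal components.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]                            # projection on the first component

# The leading component separates the two "phases": the gap between class
# means is much larger than the spread within either class.
ordered, disordered = pc1[:n_samples], pc1[n_samples:]
separation = abs(ordered.mean() - disordered.mean())
spread = max(ordered.std(), disordered.std())
```

Encoding each angle as (cos, sin) respects the periodicity of XY spins, which is what lets a linear method pick up the magnetization signal.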
Machine Learning of Frustrated Classical Spin Models. II. Kernel Principal Component Analysis
In this work we apply the principal component analysis (PCA) method with
kernel trick to study classification of phases and phase transition in
classical XY models on frustrated lattices. Compared with our previous work
using the linear PCA method, kernel PCA can capture non-linear functions. In this
case, the Z2 chiral order of classical spins on these lattices is indeed a
non-linear function of the input spin configurations. In addition to the
principal component revealed by linear PCA, the kernel PCA can find out two
more principal components using data generated by Monte Carlo simulation at
various temperatures as input. One of them relates to the strength of the U(1)
order parameter and the other directly manifests the chiral order parameter
that characterizes the Z2 symmetry breaking. For a temperature resolved study,
the temperature dependence of the principal eigenvalue associated with the Z2
symmetry breaking clearly shows a second-order phase transition behavior.
Measuring Bayesian Robustness Using Rényi Divergence
This paper deals with measuring the Bayesian robustness of classes of
contaminated priors. Two different classes of priors in the neighborhood of the
elicited prior are considered. The first one is the well-known
ε-contaminated class, while the second one is the geometric mixing
class. The proposed measure of robustness is based on computing the curvature
of the Rényi divergence between posterior distributions. Examples using
simulated and real data sets are given to illustrate the results.
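A schematic numerical illustration, not the paper's construction: for two equal-variance Gaussian posteriors the Rényi divergence has the closed form D_alpha = alpha (mu1 - mu2)^2 / (2 sigma^2), and the curvature in the contamination weight eps can be taken by a second difference. The linear mean-shift model below is a hypothetical stand-in for the effect of a contaminated prior:

```python
def renyi_gauss(mu1, mu2, sigma, alpha):
    """Renyi divergence D_alpha(N(mu1, s^2) || N(mu2, s^2)), closed form."""
    return alpha * (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

# Hypothetical model: the contaminated posterior's mean shifts linearly in
# the contamination weight eps (a stand-in for a real posterior update).
mu0, sigma, alpha = 0.0, 1.0, 0.5
shift = 2.0                                 # hypothetical sensitivity

def divergence(eps):
    return renyi_gauss(mu0, mu0 + shift * eps, sigma, alpha)

# Curvature at eps = 0 by a central second difference; for this toy model
# the exact value is alpha * shift**2 / sigma**2.
h = 1e-4
curvature = (divergence(h) - 2 * divergence(0.0) + divergence(-h)) / h ** 2
```

A larger curvature at eps = 0 means the posterior reacts more strongly to contamination of the prior, i.e., the analysis is less robust.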
Label-Removed Generative Adversarial Networks Incorporating with K-Means
Generative Adversarial Networks (GANs) have achieved great success in
generating realistic images. Most of these are conditional models, although
acquisition of class labels is expensive and time-consuming in practice. To
reduce the dependence on labeled data, we propose an unconditional generative
adversarial model, called K-Means-GAN (KM-GAN), which incorporates the idea of
updating centers in K-Means into GANs. Specifically, we redesign the framework
of GANs by applying K-Means on the features extracted from the discriminator.
With obtained labels from K-Means, we propose new objective functions from the
perspective of deep metric learning (DML). Distinct from previous works, the
discriminator is treated as a feature extractor rather than a classifier in
KM-GAN, while the use of K-Means makes the features of the discriminator
more representative. Experiments are conducted on various datasets, such as
MNIST, Fashion-10, CIFAR-10 and CelebA, and show that the quality of samples
generated by KM-GAN is comparable to some conditional generative adversarial
models.
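The K-Means-on-discriminator-features step can be sketched as follows; the random-projection-plus-ReLU "extractor" and the toy blobs are stand-ins for the discriminator network and real images, not the KM-GAN implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def extract_features(images, W):
    """Stand-in feature extractor; a trained discriminator would go here."""
    return np.maximum(images @ W, 0.0)      # linear map + ReLU

def kmeans(feats, k, iters=20):
    """Plain Lloyd's algorithm returning pseudo-labels and centers."""
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):         # leave empty clusters unchanged
                centers[j] = feats[labels == j].mean(axis=0)
    return labels, centers

# Toy "images": two well-separated blobs standing in for two visual classes.
images = np.vstack([rng.normal(0.0, 0.3, (50, 8)),
                    rng.normal(4.0, 0.3, (50, 8))])
W = rng.normal(size=(8, 16))                # fixed random "discriminator"
labels, centers = kmeans(extract_features(images, W), k=2)
```

In KM-GAN these pseudo-labels then feed a deep-metric-learning objective; here they only demonstrate the clustering step.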
Linking invariant for the quench dynamics of a two-dimensional two-band Chern insulator
We discuss the topological invariant in the (2+1)-dimensional quench dynamics
of a two-dimensional two-band Chern insulator starting from a topological
initial state (i.e., with a nonzero Chern number), evolved by a
post-quench Hamiltonian (with its own Chern number). In contrast to the process
starting from a topologically trivial initial state studied in previous works,
this process cannot be characterized by
the Hopf invariant, which is described by the sphere homotopy group
π₃(S²) ≅ ℤ. It is possible, however, to calculate a variant of the
Chern-Simons integral with a complementary part to cancel the Chern number of
the initial spin configuration, which at the same time does not affect the
(2+1)-dimensional topology. We show that the modified Chern-Simons integral
gives rise to a topological invariant of this quench process, namely a linking
invariant. We
give concrete examples to illustrate this result and also show the detailed
derivation of this linking invariant.
Representation Learning for Spatial Graphs
Recently, the topic of graph representation learning has received plenty of
attention. Existing approaches usually focus on structural properties only and
thus they are not sufficient for those spatial graphs where the nodes are
associated with some spatial information. In this paper, we present the first
deep learning approach called s2vec for learning spatial graph representations,
which is based on the denoising autoencoder framework (DAF). We evaluate the
learned representations on real datasets, and the results verify the
effectiveness of s2vec when used for spatial clustering.
Comments: 4 pages, 1 figure, conference
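The denoising-autoencoder idea underlying s2vec can be sketched with a linear toy model (the actual s2vec architecture and inputs are not specified in the abstract, so every choice below is illustrative): corrupt the input, encode to a low dimension, and train to reconstruct the clean input; the code vector is then the learned representation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, h = 500, 20, 5                 # samples, input dim, code dim (all toy)
Z = rng.normal(size=(n, h))
A = rng.normal(size=(h, d))
X = Z @ A                            # clean inputs living on a 5-D subspace

W_enc = rng.normal(scale=0.1, size=(d, h))   # linear encoder
W_dec = rng.normal(scale=0.1, size=(h, d))   # linear decoder
lr = 0.01

def recon_loss():
    """Reconstruction error of the clean inputs under current weights."""
    return float(((X @ W_enc @ W_dec - X) ** 2).mean())

loss_before = recon_loss()
for _ in range(300):
    X_noisy = X + rng.normal(scale=0.5, size=X.shape)   # corrupt the input
    code = X_noisy @ W_enc                              # encode
    err = code @ W_dec - X                              # reconstruct clean X
    g_dec = (code.T @ err) / n                          # MSE gradients by hand
    g_enc = (X_noisy.T @ (err @ W_dec.T)) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_after = recon_loss()
```

In s2vec the encoder would be a deep network over spatial-graph inputs; the code plays the role of the representation used for spatial clustering.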
Precise Box Score: Extract More Information from Datasets to Improve the Performance of Face Detection
For the training of face detection network based on R-CNN framework, anchors
are assigned to be positive samples if intersection-over-unions (IoUs) with
ground-truth are higher than the first threshold (e.g., 0.7), and to be
negative samples if their IoUs are lower than the second threshold (e.g.,
0.3). The face detection model is then trained with these labels. However,
anchors with IoUs between the two thresholds are not used. We
propose a novel training strategy, Precise Box Score(PBS), to train object
detection models. The proposed training strategy uses the anchors with IoUs
between the first and second threshold, which can consistently improve the
performance of face detection. Our proposed training strategy extracts more
information from datasets, making better utilization of existing datasets.
What's more, we also introduce a simple but effective model compression
method (SEMCM), which can further boost the performance of face detectors.
Experimental results show that the performance of face detection network can
consistently be improved based on our proposed scheme.
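The two-threshold labelling that the abstract describes (with its example values 0.7 and 0.3) can be written down directly; the "ignored" band is exactly the set of anchors that PBS turns into extra training signal:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_anchor(anchor, gt, hi=0.7, lo=0.3):
    v = iou(anchor, gt)
    if v >= hi:
        return "positive"
    if v < lo:
        return "negative"
    return "ignored"       # the band that PBS recovers as training signal

gt = (0.0, 0.0, 10.0, 10.0)
print(label_anchor((0.0, 0.0, 10.0, 9.0), gt))   # IoU = 0.9  -> positive
print(label_anchor((8.0, 8.0, 18.0, 18.0), gt))  # IoU ~ 0.02 -> negative
print(label_anchor((0.0, 0.0, 10.0, 5.0), gt))   # IoU = 0.5  -> ignored
```

How exactly PBS scores the in-between anchors is the paper's contribution and is not specified in the abstract.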
Towards thinner convolutional neural networks through Gradually Global Pruning
Deep network pruning is an effective method to reduce the storage and
computation cost of deep neural networks when applying them to resource-limited
devices. Among many pruning granularities, neuron-level pruning removes
redundant neurons and filters from the model, resulting in thinner networks. In
this paper, we propose a gradually global pruning scheme for neuron-level
pruning. In each pruning step, a small percentage of neurons is selected and
dropped across all layers in the model. We also propose a simple method to
eliminate the biases in evaluating the importance of neurons to make the scheme
feasible. Compared with layer-wise pruning schemes, our scheme avoids the
difficulty of determining the redundancy in each layer and is more effective
for deep networks. Our scheme automatically finds a thinner sub-network within
the original network under a given performance requirement.
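A minimal sketch of the gradually global scheme on a toy network: at each step every still-alive neuron across all layers is scored and a small global percentage is dropped. The L1 fan-in score is a common proxy used here for illustration only; the paper's bias-corrected importance measure differs, and for simplicity the toy also allows pruning the output layer:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 3-layer network: one weight matrix per layer, columns = neurons.
layers = [rng.normal(size=(16, 32)), rng.normal(size=(32, 24)),
          rng.normal(size=(24, 10))]
alive = [np.ones(w.shape[1], dtype=bool) for w in layers]

def prune_step(layers, alive, frac=0.05):
    """Drop the globally least-important `frac` of still-alive neurons."""
    scored = []
    for li, (w, a) in enumerate(zip(layers, alive)):
        imp = np.abs(w).sum(axis=0)           # L1 fan-in importance
        scored += [(imp[ni], li, ni) for ni in np.flatnonzero(a)]
    scored.sort()                             # ascending importance
    for _, li, ni in scored[:max(1, int(frac * len(scored)))]:
        alive[li][ni] = False
        layers[li][:, ni] = 0.0               # zero out the pruned neuron

for _ in range(4):                            # gradual: several small steps
    prune_step(layers, alive, frac=0.05)

remaining = int(sum(a.sum() for a in alive))
```

A real run would interleave pruning steps with fine-tuning and stop once the pruned network falls below the target performance.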
NLO Effects for Doubly Heavy Baryon in QCD Sum Rules
With the QCD sum rules approach, we study the newly discovered doubly heavy
baryon \Xi_{cc}^{++}. We analytically calculate the next-to-leading-order
(NLO) contribution to the perturbative part of the correlator of the baryon
current with two identical heavy quarks, and then reanalyze the mass of
\Xi_{cc}^{++} at the NLO level. We find that the NLO correction significantly
improves both scheme dependence and scale dependence, whereas it is hard to
control these theoretical uncertainties at leading order. With the NLO
contribution, the baryon mass estimate is consistent with the LHCb
measurement.
Comments: 13 pages, 6 figures. More detailed calculations are given by adding
(1) Appendix A: Analytical Result; (2) Appendix B: Higher-Dimensional
Operators; (3) an ancillary file for the NLO result with the coefficients
related to the master integral.