
    Explicit formulas of Euler sums via multiple zeta values

    Full text link
    Flajolet and Salvy pointed out that every Euler sum is a $\mathbb{Q}$-linear combination of multiple zeta values (MZVs). However, no formula in the literature completely reveals this relation. In this paper, using permutations and compositions, we establish two explicit formulas for the Euler sums and show that all Euler sums are indeed expressible in terms of MZVs. Moreover, we apply this method to the alternating Euler sums and show that all alternating Euler sums are reducible to alternating MZVs. Some famous results, such as the Euler theorem, the Borwein--Borwein--Girgensohn theorems, and the Flajolet--Salvy theorems, can be obtained directly from our theory. Some other special cases, such as the explicit expressions of $S_{r^m,q}$, $S_{r^m,\bar{q}}$, $S_{\bar{r}^m,q}$ and $S_{\bar{r}^m,\bar{q}}$, are also presented here. The corresponding Maple programs are developed to help us compute all the sums of weight $w\leq 11$ for the non-alternating case and of weight $w\leq 6$ for the alternating case.
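
    For orientation, here are the standard definitions behind this notation, together with Euler's classical theorem mentioned above; these are textbook conventions, not formulas taken from the paper.

```latex
% Linear Euler sum, generalized harmonic number, and multiple zeta value
% (standard definitions; the paper treats more general nonlinear sums).
\[
  S_{p,q} = \sum_{n=1}^{\infty} \frac{H_n^{(p)}}{n^{q}}, \qquad
  H_n^{(p)} = \sum_{k=1}^{n} \frac{1}{k^{p}}, \qquad
  \zeta(s_1,\dots,s_k) = \sum_{n_1 > \cdots > n_k \ge 1}
    \prod_{i=1}^{k} \frac{1}{n_i^{s_i}}.
\]
% Euler's theorem is the simplest such reduction (valid for q >= 2):
\[
  S_{1,q} = \Bigl(1 + \frac{q}{2}\Bigr)\zeta(q+1)
            - \frac{1}{2}\sum_{k=1}^{q-2} \zeta(k+1)\,\zeta(q-k),
  \qquad \text{e.g. } S_{1,2} = 2\zeta(3).
\]
```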

    Unsupervised Learning of Frustrated Classical Spin Models I: Principal Component Analysis

    Full text link
    This work addresses the question of whether artificial intelligence can recognize phase transitions without prior human knowledge. If successful, this approach could be applied, for instance, to analyze data from quantum simulations of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and check whether the outputs are consistent with our prior knowledge, which serves as the benchmark of this approach. In this work, we feed the computer with data generated by classical Monte Carlo simulations of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of the different orders in different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
    Comment: 8 pages, 11 figures
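
    As a minimal illustration of the workflow described here (not the authors' code; the spin configurations below are random placeholders standing in for Monte Carlo data), linear PCA can be run directly on (cos θ, sin θ) features of XY spin configurations:

```python
# Minimal sketch: linear PCA on XY spin configurations.
# Placeholder data; in the paper's setting, configurations come from
# classical Monte Carlo simulations at a range of temperatures.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_sites = 1000, 36 * 36             # hypothetical lattice size
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_sites))
X = np.hstack([np.cos(theta), np.sin(theta)])  # one row per configuration

pca = PCA(n_components=10)
scores = pca.fit_transform(X)                  # projections onto components
print(pca.explained_variance_ratio_)           # dominant weights flag order
```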

    Machine Learning of Frustrated Classical Spin Models. II. Kernel Principal Component Analysis

    Full text link
    In this work we apply the principal component analysis (PCA) method with the kernel trick to study the classification of phases and phase transitions in classical XY models on frustrated lattices. Compared to our previous work with the linear PCA method, kernel PCA can capture non-linear functions; in this case, the $\mathbb{Z}_2$ chiral order of the classical spins on these lattices is indeed a non-linear function of the input spin configurations. In addition to the principal component revealed by linear PCA, kernel PCA can identify two more principal components using data generated by Monte Carlo simulation at various temperatures as input. One of them relates to the strength of the $U(1)$ order parameter, and the other directly manifests the chiral order parameter that characterizes the $\mathbb{Z}_2$ symmetry breaking. For a temperature-resolved study, the temperature dependence of the principal eigenvalue associated with the $\mathbb{Z}_2$ symmetry breaking clearly shows second-order phase transition behavior.
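
    A minimal sketch of the kernel-PCA step (again with placeholder data, not the authors' code): since a chiral order parameter is quadratic in the spins, a degree-2 polynomial kernel is one natural choice for exposing it.

```python
# Kernel PCA with a degree-2 polynomial kernel on spin-configuration features.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 36 * 36))  # placeholder
X = np.hstack([np.cos(theta), np.sin(theta)])

kpca = KernelPCA(n_components=5, kernel="poly", degree=2)
scores = kpca.fit_transform(X)
print(kpca.eigenvalues_[:5])  # leading eigenvalues; track these vs. temperature
```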

    Measuring Bayesian Robustness Using Rényi Divergence

    Full text link
    This paper deals with measuring the Bayesian robustness of classes of contaminated priors. Two different classes of priors in the neighborhood of the elicited prior are considered. The first is the well-known $\epsilon$-contaminated class, while the second is the geometric mixing class. The proposed measure of robustness is based on computing the curvature of the Rényi divergence between posterior distributions. The results are illustrated with examples using simulated and real data sets.
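
    For reference, the standard definitions involved (textbook forms; the paper's contribution is the curvature-based robustness measure built on them):

```latex
% Renyi divergence of order alpha between densities p and q:
\[
  D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha - 1}
    \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx ,
  \qquad \alpha > 0,\ \alpha \neq 1 .
\]
% The two contamination classes around an elicited prior pi_0:
\[
  \Gamma_{\epsilon} = \{\, \pi = (1-\epsilon)\,\pi_0 + \epsilon\, q \;:\; q \in \mathcal{Q} \,\}
  \quad (\epsilon\text{-contaminated}),
\]
\[
  \Gamma_{\epsilon}^{g} = \{\, \pi \propto \pi_0^{\,1-\epsilon}\, q^{\,\epsilon} \;:\; q \in \mathcal{Q} \,\}
  \quad (\text{geometric mixing}).
\]
```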

    Label-Removed Generative Adversarial Networks Incorporating with K-Means

    Full text link
    Generative Adversarial Networks (GANs) have achieved great success in generating realistic images. Most of these are conditional models, although the acquisition of class labels is expensive and time-consuming in practice. To reduce the dependence on labeled data, we propose an unconditional generative adversarial model, called K-Means-GAN (KM-GAN), which incorporates the idea of updating centers in K-Means into GANs. Specifically, we redesign the framework of GANs by applying K-Means to the features extracted from the discriminator. With the labels obtained from K-Means, we propose new objective functions from the perspective of deep metric learning (DML). Distinct from previous works, the discriminator is treated as a feature extractor rather than a classifier in KM-GAN, while the use of K-Means makes the features of the discriminator more representative. Experiments are conducted on various datasets, such as MNIST, Fashion-10, CIFAR-10 and CelebA, and show that the quality of samples generated by KM-GAN is comparable to that of some conditional generative adversarial models.
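
    A minimal sketch of the core loop described above (hypothetical stand-ins, not the authors' implementation): cluster discriminator features with K-Means and use the assignments as pseudo-labels for a metric-learning objective.

```python
# KM-GAN idea in miniature: discriminator-as-feature-extractor + K-Means
# pseudo-labels feeding a simple center-pulling metric loss.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def discriminator_features(batch):
    # Stand-in for the penultimate-layer activations of the discriminator.
    return batch @ rng.standard_normal((batch.shape[1], 64))

images = rng.standard_normal((256, 784))   # placeholder image batch
feats = discriminator_features(images)

km = KMeans(n_clusters=10, n_init=10).fit(feats)
pseudo = km.labels_                        # K-Means pseudo-labels
centers = km.cluster_centers_[pseudo]

# Center-loss-flavored DML objective: pull each feature toward its center.
dml_loss = np.mean(np.sum((feats - centers) ** 2, axis=1))
print(dml_loss)
```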

    Linking invariant for the quench dynamics of a two-dimensional two-band Chern insulator

    Full text link
    We discuss the topological invariant in the (2+1)-dimensional quench dynamics of a two-dimensional two-band Chern insulator starting from a topological initial state (i.e., with a nonzero Chern number $c_i$), evolved by a post-quench Hamiltonian (with Chern number $c_f$). In contrast to the process with $c_i=0$ studied in previous works, this process cannot be characterized by the Hopf invariant that is described by the sphere homotopy group $\pi_3(S^2)=\mathbb{Z}$. It is possible, however, to calculate a variant of the Chern-Simons integral with a complementary part that cancels the Chern number of the initial spin configuration while leaving the (2+1)-dimensional topology unaffected. We show that the modified Chern-Simons integral gives rise to a topological invariant of this quench process, namely the linking invariant in the $\mathbb{Z}_{2c_i}$ class: $\nu = (c_f - c_i) \bmod (2c_i)$. We give concrete examples to illustrate this result and also present the detailed derivation of this linking invariant.
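
    Schematically, the object being modified is the Chern-Simons integral of the Berry connection for the map $\hat{n}(\mathbf{k},t): T^2 \times S^1 \to S^2$ traced out by the quench; this is the standard construction (for $c_i = 0$ it reduces to the Hopf invariant), while the paper's complementary term is what makes it well defined for $c_i \neq 0$.

```latex
% Chern-Simons form of the Berry connection a_mu with curvature f = da;
% for a trivial initial Chern number it evaluates to the Hopf invariant.
\[
  \Theta = \frac{1}{4\pi^{2}} \int dt\, d^{2}k\;
           \epsilon^{\mu\nu\rho}\, a_{\mu}\, \partial_{\nu} a_{\rho},
  \qquad
  f_{\mu\nu} = \partial_{\mu} a_{\nu} - \partial_{\nu} a_{\mu}.
\]
```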

    Representation Learning for Spatial Graphs

    Full text link
    Recently, the topic of graph representation learning has received plenty of attention. Existing approaches usually focus on structural properties only and are thus insufficient for spatial graphs, where the nodes are associated with spatial information. In this paper, we present the first deep learning approach, called s2vec, for learning spatial graph representations, which is based on the denoising autoencoder framework (DAF). We evaluate the learned representations on real datasets, and the results verify the effectiveness of s2vec when used for spatial clustering.
    Comment: 4 pages, 1 figure, conference
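
    A minimal sketch of the denoising-autoencoder idea behind s2vec (hypothetical setup and names; the paper's architecture and node features will differ): corrupt each node's input vector, train a network to reconstruct the clean vector, and read embeddings off a hidden layer.

```python
# Denoising autoencoder in miniature: reconstruct clean node vectors from
# corrupted ones; hidden activations then serve as node embeddings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_clean = rng.standard_normal((500, 16))   # placeholder node feature vectors
X_noisy = X_clean + 0.3 * rng.standard_normal(X_clean.shape)  # corruption

dae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)
dae.fit(X_noisy, X_clean)                  # denoising reconstruction target

# Hidden-layer activations (ReLU is MLPRegressor's default) as embeddings.
H = np.maximum(0.0, X_noisy @ dae.coefs_[0] + dae.intercepts_[0])
print(H.shape)                             # (500, 8) embeddings for clustering
```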

    Precise Box Score: Extract More Information from Datasets to Improve the Performance of Face Detection

    Full text link
    For the training of face detection networks based on the R-CNN framework, anchors are assigned to be positive samples if their intersection-over-unions (IoUs) with the ground truth are higher than a first threshold (such as 0.7), and to be negative samples if their IoUs are lower than a second threshold (such as 0.3); the face detection model is then trained with these labels. However, anchors with IoUs between the first and second thresholds are not used. We propose a novel training strategy, Precise Box Score (PBS), to train object detection models. The proposed training strategy uses the anchors with IoUs between the first and second thresholds, which can consistently improve the performance of face detection. Our proposed training strategy extracts more information from datasets, making better use of existing datasets. Moreover, we also introduce a simple but effective model compression method (SEMCM), which can further boost the performance of face detectors. Experimental results show that the performance of face detection networks can be consistently improved under our proposed scheme.
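
    A sketch of the labeling rule being described (the soft score for in-between anchors is an illustrative guess at the PBS idea, not the paper's exact formula):

```python
# Anchor labeling with two IoU thresholds; instead of ignoring anchors that
# fall between them, assign a soft, IoU-derived target (PBS-style idea).
import numpy as np

def label_anchors(ious, hi=0.7, lo=0.3):
    """ious: max IoU of each anchor with any ground-truth box."""
    labels = np.empty_like(ious)
    labels[ious >= hi] = 1.0                    # positives
    labels[ious < lo] = 0.0                     # negatives
    mid = (ious >= lo) & (ious < hi)            # kept, not discarded
    labels[mid] = (ious[mid] - lo) / (hi - lo)  # soft score in (0, 1)
    return labels

print(label_anchors(np.array([0.82, 0.55, 0.12])))  # -> [1.0, 0.625, 0.0]
```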

    Towards thinner convolutional neural networks through Gradually Global Pruning

    Full text link
    Deep network pruning is an effective method to reduce the storage and computation cost of deep neural networks when applying them to resource-limited devices. Among the many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model, resulting in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning. In each pruning step, a small percentage of neurons is selected and dropped across all layers of the model. We also propose a simple method to eliminate the biases in evaluating the importance of neurons, which makes the scheme feasible. Compared with layer-wise pruning schemes, our scheme avoids the difficulty of determining the redundancy of each layer and is more effective for deep networks. Our scheme automatically finds a thinner sub-network of the original network that meets a given performance requirement.
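
    A minimal sketch of one gradually global pruning step (illustrative importance criterion: the L1 norm of each neuron's weight row; the paper's bias-eliminated measure is more refined):

```python
# One global pruning step: rank all neurons across layers by an importance
# score and drop the bottom few percent, shrinking the network gradually.
import numpy as np

rng = np.random.default_rng(2)
# Weight matrices W[i] of shape (out_i, in_i): a 64 -> 32 -> 16 network.
weights = [rng.standard_normal((32, 64)), rng.standard_normal((16, 32))]

def prune_step(weights, frac=0.05):
    # Importance of each non-output neuron: L1 norm of its weight row.
    scores = [np.abs(W).sum(axis=1) for W in weights[:-1]]
    cut = np.quantile(np.concatenate(scores), frac)  # one global threshold
    for i, s in enumerate(scores):
        keep = s > cut
        weights[i] = weights[i][keep]                # drop pruned neurons
        weights[i + 1] = weights[i + 1][:, keep]     # and their outgoing links
    return weights

for step in range(3):                                # a few percent per step
    weights = prune_step(weights)
    print([W.shape for W in weights])
```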

    NLO Effects for Doubly Heavy Baryon in QCD Sum Rules

    Full text link
    With the QCD sum rules approach, we study the newly discovered doubly heavy baryon $\Xi_{cc}^{++}$. We analytically calculate the next-to-leading order (NLO) contribution to the perturbative part of the $J^{P} = \frac{1}{2}^{+}$ baryon current with two identical heavy quarks, and then reanalyze the mass of $\Xi_{cc}^{++}$ at the NLO level. We find that the NLO correction significantly improves both the scheme dependence and the scale dependence, whereas these theoretical uncertainties are hard to control at leading order. With the NLO contribution, the baryon mass is estimated to be $m_{\Xi_{cc}^{++}} = 3.66_{-0.10}^{+0.08}\ \text{GeV}$, which is consistent with the LHCb measurement.
    Comment: 13 pages, 6 figures. More detailed calculations are given by adding (1) Appendix A: Analytical Result, (2) Appendix B: Higher Dimensional Operators, and (3) an ancillary file for the NLO result with the coefficients related to the master integral.
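
    For context, the generic starting point of a baryon sum-rule analysis (standard textbook setup, not the paper's specific NLO expressions): the two-point correlator of the baryon current $\eta(x)$ is matched, after a Borel transform, against its hadronic spectral representation.

```latex
% Two-point correlator of the interpolating current eta(x):
\[
  \Pi(q) = i \int d^{4}x \; e^{i q \cdot x}\,
    \langle 0 |\, T\{ \eta(x)\, \bar{\eta}(0) \} \,| 0 \rangle .
\]
% Schematic Borel-transformed sum rule used to extract the baryon mass m_H
% (lambda_H: pole residue, s_0: continuum threshold, M_B: Borel mass):
\[
  \lambda_{H}^{2}\, e^{-m_{H}^{2}/M_{B}^{2}}
  = \frac{1}{\pi} \int_{s_{\min}}^{s_{0}} ds\;
    \mathrm{Im}\,\Pi^{\text{OPE}}(s)\, e^{-s/M_{B}^{2}} .
\]
```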