
    Localization of Multi-State Quantum Walk in One Dimension

    We show analytically that particle trapping appears in a quantum process called a "quantum walk", in which a particle moves macroscopically in correlation with its inner states. It is well known that a particle in the "Hadamard walk" with two inner states spreads away quickly on a line. In contrast, we find a one-dimensional quantum walk with multiple inner states in which the particle remains at the starting point with high positive probability. This striking difference is explained by the difference in the degeneracy of the eigenvalues of the time-evolution matrices. Comment: 4 pages RevTeX, 4 figures ep
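    The two-state Hadamard walk that this abstract contrasts against can be simulated directly. A minimal sketch, using the standard textbook coin operator, shift rule, and initial state (none of these choices are taken from the paper itself):

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a two-state
    (Hadamard) coin, starting at the origin in coin state |0>."""
    n = 2 * steps + 1                      # positions -steps..steps
    psi = np.zeros((n, 2), dtype=complex)  # psi[x, c]: amplitude at x, coin c
    psi[steps, 0] = 1.0                    # origin is index `steps`
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                    # apply the coin at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]       # coin |0> steps left
        shifted[1:, 1] = psi[:-1, 1]       # coin |1> steps right
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)  # position probability distribution

p = hadamard_walk(50)
```

    After 50 steps only a small fraction of the probability remains at the origin, illustrating the ballistic spreading of the two-state walk that the multi-state walks constructed in the paper avoid.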

    Improvement of the hot QCD pressure by the minimal sensitivity criterion

    The principle of minimal sensitivity (PMS) criterion is applied to the perturbative free energy density, or pressure, of hot QCD, which includes the $\sim g_s^6 \ln g_s$ term and part of the $\sim g_s^6$ terms. Applications are made separately to the short- and long-distance parts of the pressure. Comparison with the lattice results, at low temperatures, shows that the resultant "optimal" approximants are substantially improved when compared to the $\overline{\rm MS}$ results. In particular, for the realistic case of three quark flavors, the "optimal" approximants are comparable with the lattice results. Comment: 14 pages, 9 figures, LaTe
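    The PMS criterion itself is easy to illustrate numerically: a truncated approximant acquires a spurious dependence on the arbitrary renormalization scale, and PMS selects the scale at which that dependence is stationary. A minimal sketch with a toy quadratic-in-log approximant (the reference scale and coefficients are illustrative only, not the actual hot-QCD pressure):

```python
import math

Q = 2.0  # hypothetical reference scale

def truncated_approximant(mu):
    """Toy truncated expansion: the exact quantity is scale-independent,
    but truncation leaves a residual dependence on the arbitrary scale
    mu (illustrative coefficients, not the hot-QCD series)."""
    L = math.log(mu / Q)
    return 1.0 + 0.10 * L - 0.02 * L * L

def pms_scale(lo=0.5, hi=60.0, n=50001):
    """PMS: pick the scale at which the approximant is least sensitive,
    i.e. where |d(approximant)/d(ln mu)| is minimal."""
    best_mu, best_slope = lo, float("inf")
    for i in range(n):
        mu = lo * (hi / lo) ** (i / (n - 1))          # log-spaced scan
        slope = abs(0.10 - 0.04 * math.log(mu / Q))   # d/dL of the toy series
        if slope < best_slope:
            best_mu, best_slope = mu, slope
    return best_mu

mu_star = pms_scale()  # stationary point at ln(mu/Q) = 2.5
```

    For this toy series the stationary point sits at ln(mu/Q) = 0.10/0.04 = 2.5; the scan recovers it without assuming the closed form.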

    The Mechanism of Additive Composition

    Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell and Lapata, 2010) is a widely used method for computing the meanings of phrases: it takes the average of the vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength: we prove that the more exclusively two successive words tend to occur together, the more accurately their additive composition is guaranteed to approximate the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and that can be theoretically derived from the assumption that the data are generated by a Hierarchical Pitman-Yor Process. The theory endorses additive composition as a reasonable operation for calculating the meanings of phrases, and suggests ways to improve additive compositionality, including: transforming the entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representation that makes additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors. Comment: More explanations on theory and additional experiments added. Accepted by Machine Learning Journa
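    Additive composition itself is a one-line operation. A minimal sketch with hypothetical three-dimensional word vectors (the values are illustrative, not trained embeddings):

```python
import numpy as np

# Hypothetical distributional word vectors (illustrative values only).
word_vec = {
    "machine":  np.array([0.9, 0.1, 0.3]),
    "learning": np.array([0.8, 0.2, 0.5]),
}

def additive_composition(words, vectors):
    """Represent a phrase by the average of its constituent word vectors."""
    return np.mean([vectors[w] for w in words], axis=0)

phrase = additive_composition(["machine", "learning"], word_vec)
# -> array([0.85, 0.15, 0.4 ])
```

    The paper's bias bound concerns how close such an averaged vector can be guaranteed to lie to the vector one would learn directly for the phrase "machine learning" from corpus statistics.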

    Mixture of Expert/Imitator Networks: Scalable Semi-supervised Learning Framework

    The current success of deep neural networks (DNNs) in an increasingly broad range of tasks involving artificial intelligence depends strongly on the quality and quantity of labeled training data. In general, the scarcity of labeled data, often observed in natural language processing tasks, is one of the most important issues to address. Semi-supervised learning (SSL) is a promising approach to overcoming this issue by incorporating large amounts of unlabeled data. In this paper, we propose a novel scalable SSL method for text classification tasks. The unique property of our method, Mixture of Expert/Imitator Networks, is that the imitator networks learn to "imitate" the estimated label distribution of the expert network over the unlabeled data, which potentially contributes a useful set of features for classification. Our experiments demonstrate that the proposed method consistently improves the performance of several types of baseline DNNs. We also demonstrate that our method exhibits the "more data, better performance" property, with promising scalability in the amount of unlabeled data. Comment: Accepted by AAAI 201
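    The imitation objective can be sketched as distillation on unlabeled data: a fixed expert produces an estimated label distribution, and an imitator is trained to match it by cross-entropy. A minimal sketch in which linear models stand in for the paper's DNNs (everything here, including the random data, is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linear models stand in for the expert/imitator DNNs.
D, K, N = 5, 3, 200                     # features, classes, unlabeled examples
W_expert = rng.normal(size=(D, K))      # a fixed, already-trained "expert"
X_unlabeled = rng.normal(size=(N, D))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Expert's estimated label distribution over the unlabeled data.
targets = softmax(X_unlabeled @ W_expert)

# Train an "imitator" to match those soft labels by minimizing
# cross-entropy (plain gradient descent with soft targets).
W_imitator = np.zeros((D, K))
for _ in range(500):
    probs = softmax(X_unlabeled @ W_imitator)
    W_imitator -= 0.5 * X_unlabeled.T @ (probs - targets) / N

final = softmax(X_unlabeled @ W_imitator)
avg_xent = -np.mean(np.sum(targets * np.log(final + 1e-12), axis=1))
```

    Starting from the uniform predictor (cross-entropy ln 3 ≈ 1.10), training drives the imitator's average cross-entropy down toward the entropy of the expert's soft labels; in the paper the imitators' outputs then serve as additional features for the classifier.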