
    Cancellation of divergences in unitary gauge calculation of $H \to \gamma\gamma$ process via one W loop, and application

    Following the thread of R. Gastmans, S. L. Wu and T. T. Wu, we repeat the unitary-gauge calculation of the $H \to \gamma\gamma$ process via one W loop, without fixing a specific choice of the independent integrated loop momentum at the beginning. Starting from the 'original' definition of each Feynman diagram, we show that 4-momentum conservation and the Ward identity of the W-W-photon vertex guarantee the cancellation, among the Feynman diagrams, of all terms whose integration would give divergences higher than logarithmic. The remaining terms are at most logarithmically divergent and hence independent of the choice of integrated loop momentum. The same procedure is applied to the $H \to \gamma Z$ process via one W loop in the unitary gauge: the divergences proportional to $M_Z^2/M^3$, including the quadratic ones, all cancel, and the terms proportional to $M_Z^2/M^3$ are shown to be zero. The way the quadratic divergences proportional to $M_Z^2/M^3$ in $H \to \gamma Z$ are handled has subtle implications for how the Feynman rules are employed, especially when those rules can lead to divergences of high degree. Calculating without integrating over all the $\delta$ functions until one has to is therefore a more proper, and perhaps necessary, way of employing the Feynman rules.
    Comment: 1 figure, 34 pages (updated
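    For reference, the Ward identity invoked here is the standard one for a Yang-Mills-type triple-gauge-boson vertex. The sketch below, written with all momenta incoming and the coupling factor stripped off, shows how contracting the W-W-photon vertex with the photon momentum produces a difference of inverse-propagator structures; sign and momentum conventions may differ from those actually used in the paper.

```latex
% W-W-photon vertex with all momenta incoming, k_1 + k_2 + k_3 = 0
% (electric-charge factor omitted); conventions are illustrative only.
\begin{align}
V_{\mu\nu\rho}(k_1,k_2,k_3)
  &= g_{\mu\nu}\,(k_1-k_2)_\rho
   + g_{\nu\rho}\,(k_2-k_3)_\mu
   + g_{\rho\mu}\,(k_3-k_1)_\nu , \\
% Contraction with the photon momentum k_1 yields a difference of
% inverse-propagator structures -- the identity that lets the
% worse-than-logarithmic pieces cancel between diagrams:
k_1^{\mu}\, V_{\mu\nu\rho}(k_1,k_2,k_3)
  &= \bigl(k_3^{2}\, g_{\nu\rho} - k_{3\nu} k_{3\rho}\bigr)
   - \bigl(k_2^{2}\, g_{\nu\rho} - k_{2\nu} k_{2\rho}\bigr).
\end{align}
```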

    Entanglement-guided architectures of machine learning by quantum tensor network

    It is a fundamental but still elusive question whether schemes based on quantum mechanics, in particular on quantum entanglement, can be used for classical information processing and machine learning. Even a partial answer to this question would bring important insights to both machine learning and quantum mechanics. In this work, we carry out simple numerical experiments, related to pattern/image classification, in which we represent the classifiers by many-qubit quantum states written as matrix product states (MPS). A classical machine learning algorithm is applied to these quantum states to learn the classical data. We explicitly show how quantum entanglement (i.e., single-site and bipartite entanglement) can emerge in images represented this way. Entanglement here characterizes the importance of the data, and this information is used in practice to guide the architecture of the MPS and improve the efficiency. The number of needed qubits can be reduced to less than 1/10 of the original number, which is within reach of state-of-the-art quantum computers. We expect such numerical experiments could open new paths in characterizing classical machine learning algorithms and, at the same time, shed light on generic quantum simulations/computations of machine learning tasks.
    Comment: 10 pages, 5 figures
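    As an illustration of how entanglement could guide such an architecture, the sketch below (a minimal numpy example, not the authors' code) computes the bipartite entanglement entropy across every bond of an MPS by first right-canonicalizing it and then sweeping SVDs from left to right. The tensor shapes, helper names, and the pruning idea mentioned afterwards are assumptions made for illustration.

```python
import numpy as np

def right_canonicalize(mps):
    """Right-canonicalize an MPS given as a list of tensors of shape (D_left, d, D_right).
    Illustrative sketch; shapes and conventions are assumptions, not the paper's code."""
    mps = [np.array(A) for A in mps]
    for i in range(len(mps) - 1, 0, -1):
        Dl, d, Dr = mps[i].shape
        U, S, Vh = np.linalg.svd(mps[i].reshape(Dl, d * Dr), full_matrices=False)
        mps[i] = Vh.reshape(-1, d, Dr)                              # right-isometric tensor
        mps[i - 1] = np.tensordot(mps[i - 1], U * S, axes=(2, 0))   # absorb remainder leftwards
    return mps

def bond_entanglement_entropies(mps):
    """Entanglement entropy (in bits) across every bond of the MPS."""
    mps = right_canonicalize(mps)
    entropies = []
    carry = np.ones((1, 1))                                # matrix carried along the sweep
    for A in mps[:-1]:
        theta = np.tensordot(carry, A, axes=(1, 0))        # shape (D', d, D_right)
        M = theta.reshape(-1, A.shape[2])                  # (left bond x physical) vs right bond
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        p = (S / np.linalg.norm(S)) ** 2                   # normalized Schmidt probabilities
        entropies.append(float(-np.sum(p * np.log2(p + 1e-12))))
        carry = (S / np.linalg.norm(S))[:, None] * Vh      # push the remainder to the right
    return entropies

# Tiny usage example: a random 8-site MPS with bond dimension 4.
rng = np.random.default_rng(0)
N, D = 8, 4
mps = [rng.standard_normal((1 if i == 0 else D, 2, 1 if i == N - 1 else D)) for i in range(N)]
print(bond_entanglement_entropies(mps))
```

    Bonds or sites with consistently low entropy contribute little to the classifier and are natural candidates for pruning when shrinking the qubit count, in the spirit of the roughly tenfold reduction reported in the abstract.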