6,351 research outputs found

    Glueball Masses from Hamiltonian Lattice QCD

    Full text link
    We calculate the masses of the $0^{++}$, $0^{--}$ and $1^{+-}$ glueballs from QCD in 3+1 dimensions using an eigenvalue equation method for Hamiltonian lattice QCD developed and described elsewhere by the authors. The mass ratios become approximately constant in the coupling region $6/g^2 \in [6.0, 6.4]$, from which we estimate $M(0^{--})/M(0^{++}) = 2.44 \pm 0.05 \pm 0.20$ and $M(1^{+-})/M(0^{++}) = 1.91 \pm 0.05 \pm 0.12$.
    Comment: 12 pages, LaTeX, figures to be sent upon request

    The Relationship Between Nitrogen Content in Soybean Leaves and Infestation Severity of Aphis glycines Mutsumura

    Get PDF
    Changes in the nitrogen content of leaves in different soybean species during infestation by Aphis glycines Mutsumura were determined. A correlation between the nitrogen content of soybean leaves and the infestation severity of Aphis glycines Mutsumura was found. Therefore, the nitrogen content of soybean leaves could be regarded as one of the ecological factors used in predicting the infestation severity of Aphis glycines Mutsumura. Originating text in Chinese.
    Citation: Hu, Qi, Zhang, Weiqun, Yao, Yuxia, Yan, Shuqin. (1992). The Relationship Between Nitrogen Content in Soybean Leaves and Infestation Severity of Aphis glycines Mutsumura. Journal of Jilin Agricultural University, 14(4), 103-104.

    Improved Visual Fine-tuning with Natural Language Supervision

    Full text link
    Fine-tuning a visual pre-trained model can leverage the semantic information from large-scale pre-training data and mitigate the over-fitting problem on downstream vision tasks with limited training examples. While the problem of catastrophic forgetting in the pre-trained backbone has been extensively studied for fine-tuning, the potential bias inherited from the corresponding pre-training task and data attracts less attention. In this work, we investigate this problem by demonstrating that the classifier obtained after fine-tuning will be close to that induced by the pre-trained model. To reduce this bias in the classifier effectively, we introduce a reference distribution obtained from a fixed text classifier, which can help regularize the learned vision classifier. The proposed method, Text Supervised fine-tuning (TeS), is evaluated with diverse pre-trained vision models, including ResNet and ViT, and text encoders, including BERT and CLIP, on 11 downstream tasks. The consistent improvement with a clear margin over distinct scenarios confirms the effectiveness of our proposal. Code is available at \url{https://github.com/idstcv/TeS}.
    Comment: accepted by ICCV'2
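    The regularization idea described in this abstract can be sketched as a loss that combines standard cross-entropy with a divergence term pulling the vision classifier's predictive distribution toward a fixed text-derived reference distribution. This is a minimal NumPy sketch under stated assumptions: the function names, the KL formulation, and the weight `lam` are illustrative choices, not the paper's exact objective.

    ```python
    import numpy as np

    def softmax(z):
        # numerically stable softmax over the last axis
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def text_supervised_loss(vision_logits, labels, text_ref_probs, lam=0.1):
        """Cross-entropy plus a KL(text_ref || vision) regularizer (illustrative).

        vision_logits : (N, C) logits from the fine-tuned vision classifier
        labels        : (N,) integer class labels
        text_ref_probs: (N, C) reference distribution from a fixed text classifier
        lam           : regularization weight (assumed value, not from the paper)
        """
        p = softmax(vision_logits)
        n = len(labels)
        # standard supervised cross-entropy on the ground-truth labels
        ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
        # KL divergence from the fixed text-classifier reference to the
        # vision classifier's predictions, averaged over the batch
        kl = (text_ref_probs
              * np.log((text_ref_probs + 1e-12) / (p + 1e-12))).sum(axis=-1).mean()
        return ce + lam * kl
    ```

    Because the KL term is nonnegative, setting `lam > 0` can only add to the plain cross-entropy; the reference distribution acts as a soft anchor that discourages the vision classifier from drifting too far from the text-derived one.
    
    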