
    VBF vs. GGF Higgs with Full-Event Deep Learning: Towards a Decay-Agnostic Tagger

    We study the benefits of jet- and event-level deep learning methods in distinguishing vector boson fusion (VBF) from gluon-gluon fusion (GGF) Higgs production at the LHC. We show that a variety of classifiers (CNNs, attention-based networks) trained on the complete low-level inputs of the full event achieve significant performance gains over shallow machine learning methods (BDTs) trained on jet kinematics and jet shapes, and we elucidate the reasons for these performance gains. Finally, we take initial steps towards a VBF vs. GGF tagger that is agnostic to the Higgs decay mode, by demonstrating that the performance of our event-level CNN does not change when the Higgs decay products are removed. These results highlight the potentially powerful benefits of event-level deep learning at the LHC.
    Comment: 21 pages + appendices, 16 figures; added references, updated Pythia shower scheme for VBF, and added Appendix C for version
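    The "low-level inputs of the full event" consumed by an event-level CNN are typically built by binning the particles' transverse momenta into a fixed-size eta-phi grid. A minimal sketch of that preprocessing step is below; the function name, grid size, and eta range are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def event_image(eta, phi, pt, n_bins=32, eta_max=5.0):
        """Bin particle transverse momenta into a fixed eta-phi grid,
        producing a single-channel 'event image' a CNN can consume."""
        img, _, _ = np.histogram2d(
            eta, phi, bins=n_bins,
            range=[[-eta_max, eta_max], [-np.pi, np.pi]],
            weights=pt,
        )
        return img

    # toy event: a handful of random particles
    rng = np.random.default_rng(0)
    eta = rng.uniform(-5.0, 5.0, 50)
    phi = rng.uniform(-np.pi, np.pi, 50)
    pt = rng.exponential(10.0, 50)
    img = event_image(eta, phi, pt)
    ```

    Because every particle falls inside the grid range here, the image preserves the total transverse momentum of the event, which is one reason such representations lose little information relative to high-level jet observables.
    
    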

    The $D\to \rho$ semileptonic and radiative decays within the light-cone sum rules

    The measured branching ratio of the $D$ meson semileptonic decay $D \to \rho e^+ \nu_e$, which is based on the $0.82~{\rm fb^{-1}}$ CLEO data taken at the peak of the $\psi(3770)$ resonance, disagrees with the traditional SVZ sum rules analysis by about a factor of three. In this paper, we show that this discrepancy can be eliminated by applying the QCD light-cone sum rules (LCSR) approach to calculate the $D\to \rho$ transition form factors $A_{1,2}(q^2)$ and $V(q^2)$. After extrapolating the LCSR predictions of these TFFs to the whole $q^2$ region, we obtain $1/|V_{\rm cd}|^2 \times \Gamma(D \to \rho e \nu_e) = (55.45^{+13.34}_{-9.41})\times 10^{-15}~{\rm GeV}$. Using the CKM matrix element and the $D^0$ ($D^+$) lifetime from the Particle Data Group, we obtain ${\cal B}(D^0\to \rho^- e^+ \nu_e) = (1.749^{+0.421}_{-0.297}\pm 0.006)\times 10^{-3}$ and ${\cal B}(D^+ \to \rho^0 e^+ \nu_e) = (2.217^{+0.534}_{-0.376}\pm 0.015)\times 10^{-3}$, which agree with the CLEO measurements within errors. We also calculate the branching ratios of the two $D$ meson radiative processes and obtain ${\cal B}(D^0\to \rho^0 \gamma) = (1.744^{+0.598}_{-0.704})\times 10^{-5}$ and ${\cal B}(D^+ \to \rho^+ \gamma) = (5.034^{+0.939}_{-0.958})\times 10^{-5}$, which also agree with the Belle measurements within errors. We therefore conclude that the LCSR approach is applicable to $D$ meson decays.
    Comment: 12 pages, 7 figures, version to be published in EPJ
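    Extrapolating form-factor predictions from the low-$q^2$ region where sum rules are reliable to the whole kinematic range is often done with a simple pole-type parameterization. The sketch below fits a single-pole ansatz $f(q^2) = f(0)/(1 - q^2/m_{\rm pole}^2)$ to low-$q^2$ points; it is a generic illustration of the extrapolation idea, not the parameterization used in the paper, and the numerical values are invented for the example.

    ```python
    import numpy as np

    def single_pole(q2, f0, m_pole2):
        """Single-pole form factor ansatz f(q^2) = f(0) / (1 - q^2 / m_pole^2)."""
        return f0 / (1.0 - q2 / m_pole2)

    # illustrative low-q^2 "sum rule points" generated from known parameters
    f0_true, m_pole2_true = 0.6, 4.0   # hypothetical values, not from the paper
    q2_pts = np.linspace(0.0, 0.8, 9)
    ff_pts = single_pole(q2_pts, f0_true, m_pole2_true)

    # the ansatz is linear in 1/f:  1/f = 1/f0 - q^2 / (f0 * m_pole^2),
    # so an ordinary linear least-squares fit recovers both parameters
    slope, intercept = np.polyfit(q2_pts, 1.0 / ff_pts, 1)
    f0_fit = 1.0 / intercept
    m_pole2_fit = -intercept / slope
    ```

    With the fitted parameters in hand, the form factor can be evaluated at any $q^2$ and the differential decay width integrated over the full phase space.
    
    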

    Positive solution for singular boundary value problems

    A sufficient condition is constructed for the existence of positive solutions of the nonlinear boundary value problem $u''(t) + f(t, u(t)) = 0$, $0 < t < 1$, $u'(0) = u(1) = 0$, where $f : [0, 1) \times (0, \infty) \to (0, \infty)$ is continuous, $f(t, u)$ is locally Lipschitz continuous, and $f(t, u)/u$ is strictly decreasing in $u > 0$ for each $t \in (0, 1)$.
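    A positive solution of such a boundary value problem can be computed numerically by shooting: integrate from $t=0$ with $u'(0)=0$ and an unknown initial height $u(0)=s$, then adjust $s$ until $u(1)=0$. The sketch below uses the simple positive nonlinearity $f(t,u)=\sqrt{u}$ as an illustration; the choice of $f$ and the bracketing interval are assumptions for the example, not from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(t, y):
        """First-order system for u'' = -f(t, u) with the sample choice f(t, u) = sqrt(u)."""
        u, up = y
        # clip at zero so the integrator survives if u dips below 0 during shooting
        return [up, -np.sqrt(max(u, 0.0))]

    def shoot(s):
        """Integrate from t = 0 with u(0) = s, u'(0) = 0; return the mismatch u(1)."""
        sol = solve_ivp(rhs, (0.0, 1.0), [s, 0.0], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    # find the initial height s for which the boundary condition u(1) = 0 holds
    s_star = brentq(shoot, 0.01, 1.0)
    ```

    Since $u'' = -\sqrt{u} < 0$, the solution is concave and decreasing from its maximum at $t=0$, so it stays positive on $(0,1)$, consistent with the positivity the theorem guarantees.
    
    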

    Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning

    Domain adaptation problems arise in a variety of applications, where a training dataset from the \textit{source} domain and a test dataset from the \textit{target} domain typically follow different distributions. The primary difficulty in designing effective learning models to solve such problems lies in how to bridge the gap between the source and target distributions. In this paper, we provide a comprehensive analysis of feature learning algorithms used in conjunction with linear classifiers for domain adaptation. Our analysis shows that in order to achieve good adaptation performance, the second moments of the source domain distribution and the target domain distribution should be similar. Based on our new analysis, a novel extremely easy feature learning algorithm for domain adaptation is proposed. Furthermore, our algorithm is extended by leveraging multiple layers, leading to a deep linear model. We evaluate the effectiveness of the proposed algorithms on domain adaptation tasks using the Amazon review dataset and the spam dataset from the ECML/PKDD 2006 discovery challenge.
    Comment: ijca
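    The requirement that the second moments of the two domains be similar can be enforced directly: whiten the source features with their own covariance, then re-color them with the target covariance. The sketch below is a generic second-moment alignment in this spirit (the whiten-then-recolor idea also known from CORAL), not the paper's algorithm; function names and the regularizer `eps` are illustrative.

    ```python
    import numpy as np

    def match_second_moments(Xs, Xt, eps=1e-6):
        """Transform source features Xs so their covariance (and mean)
        match those of the target features Xt."""
        d = Xs.shape[1]
        Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)  # regularized source covariance
        Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)  # regularized target covariance

        def msqrt(C, inv=False):
            """Matrix square root (or inverse square root) via eigendecomposition."""
            w, V = np.linalg.eigh(C)
            p = -0.5 if inv else 0.5
            return (V * w**p) @ V.T

        # whiten with Cs^{-1/2}, re-color with Ct^{1/2}, shift to the target mean
        return (Xs - Xs.mean(0)) @ msqrt(Cs, inv=True) @ msqrt(Ct) + Xt.mean(0)

    # toy source and target domains with different covariances and means
    rng = np.random.default_rng(1)
    Xs = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))
    Xt = rng.normal(size=(800, 3)) @ rng.normal(size=(3, 3)) + 2.0
    Xs_aligned = match_second_moments(Xs, Xt)
    ```

    A linear classifier trained on `Xs_aligned` then sees training features whose first and second moments match the test distribution, which is exactly the condition the analysis identifies.
    
    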