
    Peak shifts due to $B^{(*)}\bar{B}^{(*)}$ rescattering in $\Upsilon(5S)$ dipion transitions

    We study the energy distributions of the dipion transitions $\Upsilon(5S)\to\Upsilon(1S,2S,3S)\pi^+\pi^-$ in the final-state rescattering model. Since the $\Upsilon(5S)$ is well above the open-bottom thresholds, the dipion transitions are expected to proceed mainly through the real processes $\Upsilon(5S)\to B^{(*)}\bar{B}^{(*)}$ and $B^{(*)}\bar{B}^{(*)}\to\Upsilon(1S,2S,3S)\pi^+\pi^-$. We find that the energy distributions of $\Upsilon(1S,2S,3S)\pi^+\pi^-$ differ markedly from that of $\Upsilon(5S)\to B^{(*)}\bar{B}^{(*)}$. In particular, the resonance peak is pushed up by about 7-20 MeV for these dipion transitions relative to the main hadronic decay modes. These predictions can be used to test the final-state rescattering mechanism in hadronic transitions of heavy quarkonia above the open-flavor thresholds.
    Comment: Version published in PRD; energy dependence of the total width in Eq. (12) restored and corresponding figure changed; more discussion and clarification added

    QCD radiative correction to color-octet $J/\psi$ inclusive production at B Factories

    In nonrelativistic quantum chromodynamics (NRQCD), we study the next-to-leading-order (NLO) QCD radiative correction to color-octet $J/\psi$ inclusive production at B Factories. Compared with the leading-order (LO) result, the NLO QCD corrections are found to enhance the short-distance coefficients in the color-octet $J/\psi$ production $e^+e^-\to c\bar{c}({}^1S_0^{(8)}\ \mathrm{or}\ {}^3P_J^{(8)})g$ by a factor of about 1.9. Moreover, the peak at the endpoint of the $J/\psi$ energy distribution predicted at LO is smeared by the NLO corrections, but the major color-octet contribution still comes from the large-energy region of the $J/\psi$. By fitting the latest data on $\sigma(e^+e^-\to J/\psi+X_{\mathrm{non-}c\bar{c}})$ observed by Belle, we find that the values of the color-octet matrix elements are much smaller than expected earlier from the naive velocity-scaling rules or from fits of LO calculations to experimental data. Setting the color-singlet contribution to zero in $e^+e^-\to J/\psi+X_{\mathrm{non-}c\bar{c}}$ as the most stringent constraint, we obtain an upper limit on the color-octet matrix elements, $\langle 0|\mathcal{O}^{J/\psi}[{}^1S_0^{(8)}]|0\rangle + 4.0\,\langle 0|\mathcal{O}^{J/\psi}[{}^3P_0^{(8)}]|0\rangle/m_c^2 < (2.0\pm 0.6)\times 10^{-2}\ \mathrm{GeV}^3$, at NLO in $\alpha_s$.
    Comment: 18 pages, 8 figures

    Learning from Multi-View Multi-Way Data via Structural Factorization Machines

    Real-world relations among entities can often be observed and determined from different perspectives/views. For example, the decision a user makes on whether to adopt an item relies on multiple aspects, such as the contextual information of the decision, the item's attributes, the user's profile, and the reviews given by other users. Different views may exhibit multi-way interactions among entities and provide complementary information. In this paper, we introduce a multi-tensor-based approach that can preserve the underlying structure of multi-view data in a generic predictive model. Specifically, we propose structural factorization machines (SFMs) that learn the common latent spaces shared by multi-view tensors and automatically adjust the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which makes SFMs suitable for large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost.
    Comment: 10 pages
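    SFMs build on the classic second-order factorization machine, whose pairwise-interaction score can be evaluated in O(kn) time via a well-known algebraic identity. The sketch below shows that generic FM scoring rule only; it is not the paper's SFM model, and all names in it are ours.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization-machine score:
    y = w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j,
    computed in O(k*n) via the identity
    sum_{i<j} <v_i,v_j> x_i x_j = 0.5 * (||V^T x||^2 - sum_{i,f} V[i,f]^2 x_i^2).
    """
    linear = w0 + w @ x
    Vx = V.T @ x                                   # shape (k,)
    pair = 0.5 * (Vx @ Vx - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pair

rng = np.random.default_rng(0)
x = rng.standard_normal(6)                         # one dense feature vector
w0, w = 0.1, rng.standard_normal(6)                # bias and linear weights
V = rng.standard_normal((6, 3))                    # rank-3 latent factors per feature
print(fm_score(x, w0, w, V))
```

    The linear-time trick is what makes FM-style models (and, per the abstract, SFMs) practical at large scale: the quadratic number of pairwise terms never has to be enumerated.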

    Online Unsupervised Multi-view Feature Selection

    In the era of big data, it is becoming common to have data with multiple modalities or coming from multiple sources, known as "multi-view data". Multi-view data are usually unlabeled and come from high-dimensional spaces (such as language vocabularies), so unsupervised multi-view feature selection is crucial to many applications. However, it is nontrivial due to the following challenges. First, there are too many instances or the feature dimensionality is too large, so the data may not fit in memory. How can useful features be selected with limited memory space? Second, how can features be selected from streaming data while handling concept drift? Third, how can the consistent and complementary information from different views be leveraged to improve feature selection when the data are too big or arrive as streams? To the best of our knowledge, none of the previous works can solve all these challenges simultaneously. In this paper, we propose an online unsupervised multi-view feature selection method, OMVFS, which deals with large-scale/streaming multi-view data in an online fashion. OMVFS embeds unsupervised feature selection into a clustering algorithm via NMF with sparse learning. It further incorporates graph regularization to preserve the local structure information and help select discriminative features. Instead of storing all the historical data, OMVFS processes the multi-view data chunk by chunk and aggregates all the necessary information into several small matrices. By using the buffering technique, the proposed OMVFS can reduce the computational and storage cost while taking advantage of the structure information. Furthermore, OMVFS can capture the concept drifts in the data streams. Extensive experiments on four real-world datasets show the effectiveness and efficiency of the proposed OMVFS method. More importantly, OMVFS is about 100 times faster than the off-line methods.
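    The key memory trick described above is that sufficient statistics of an unbounded stream can be aggregated chunk by chunk into small fixed-size matrices. The sketch below illustrates that idea with a plain Gram-matrix accumulator; it is a generic illustration of the aggregation pattern, not the paper's actual NMF update rules.

```python
import numpy as np

def stream_gram(chunks, d):
    """Accumulate the Gram matrix X^T X of a data stream chunk by chunk.
    Only a d x d matrix is kept in memory regardless of stream length --
    the same pattern OMVFS-style online methods use to avoid storing
    historical data. (Generic sketch, not the paper's exact updates.)"""
    G = np.zeros((d, d))
    for X in chunks:                   # X: (n_chunk, d) arrives, is used, discarded
        G += X.T @ X
    return G

rng = np.random.default_rng(1)
chunks = [rng.standard_normal((50, 4)) for _ in range(3)]
G = stream_gram(chunks, 4)

# The streamed result matches a one-shot computation on all the data.
full = np.concatenate(chunks)
assert np.allclose(G, full.T @ full)
```

    Because the accumulator is exact, any downstream update that depends on the data only through such products (as many NMF and regression solvers do) gives the same answer as an offline pass, at a fraction of the memory.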

    Multi-view Graph Embedding with Hub Detection for Brain Network Analysis

    Multi-view graph embedding has become a widely studied problem in the area of graph learning. Most existing works on multi-view graph embedding aim to find a shared common node embedding across all the views of the graph by combining the different views in a specific way. Hub detection, another essential topic in graph mining, has also drawn extensive attention in recent years, especially in the context of brain network analysis. Both graph embedding and hub detection relate to the node clustering structure of graphs. Multi-view graph embedding usually implies the node clustering structure of the graph based on the multiple views, while the hubs are the boundary-spanning nodes across different node clusters and thus may potentially influence the clustering structure of the graph. However, none of the existing works on multi-view graph embedding consider the hubs when learning the multi-view embeddings. In this paper, we propose to incorporate the hub detection task into the multi-view graph embedding framework so that the two tasks can benefit each other. Specifically, we propose an auto-weighted framework of Multi-view Graph Embedding with Hub Detection (MVGE-HD) for brain network analysis. The MVGE-HD framework learns a unified graph embedding across all the views while reducing the potential influence of the hubs on blurring the boundaries between node clusters in the graph, thus leading to a clear and discriminative node clustering structure. We apply MVGE-HD to two real multi-view brain network datasets (i.e., HIV and Bipolar). The experimental results demonstrate the superior performance of the proposed framework in brain network analysis for clinical investigation and application.
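    The notion of a unified embedding across weighted views can be illustrated with a plain multi-view spectral embedding: combine the views' normalized Laplacians with per-view weights and take the smallest eigenvectors. This is a generic sketch under that standard construction, not the MVGE-HD objective itself (which additionally down-weights hub nodes); the adjacency matrices and weights below are toy values.

```python
import numpy as np

def multiview_embedding(adjs, weights, k):
    """Unified node embedding from several graph views: form a weighted sum
    of the views' symmetric normalized Laplacians and return the k
    eigenvectors with smallest eigenvalues as the shared embedding."""
    n = adjs[0].shape[0]
    L = np.zeros((n, n))
    for A, a in zip(adjs, weights):
        deg = A.sum(axis=1)
        Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
        L += a * (np.eye(n) - Dinv @ A @ Dinv)     # a * L_sym for this view
    vals, vecs = np.linalg.eigh(L)                 # ascending eigenvalues
    return vecs[:, :k]                             # (n, k) unified embedding

# two toy views over the same 4 nodes
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
Z = multiview_embedding([A1, A2], [0.6, 0.4], 2)
print(Z.shape)  # (4, 2)
```

    In MVGE-HD the view weights are learned automatically rather than fixed, and hub nodes are handled explicitly so they do not blur the cluster boundaries that the embedding is meant to reveal.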