4,459 research outputs found

    $b\to c\tau\nu$ Transitions in the Standard Model Effective Field Theory

    The $R(D^{(\ast)})$ anomalies observed in $B\to D^{(\ast)}\tau\nu$ decays have attracted much attention in recent years. In this paper, we study the $B\to D^{(\ast)}\tau\nu$, $\Lambda_b\to\Lambda_c\tau\nu$, $B_c\to (J/\psi,\,\eta_c)\tau\nu$, $B\to X_c\tau\nu$, and $B_c\to\tau\nu$ decays, all mediated by the same quark-level $b\to c\tau\nu$ transition, in the Standard Model Effective Field Theory. The most relevant dimension-six operators for these processes are $Q_{lq}^{(3)}$, $Q_{ledq}$, $Q^{(1)}_{lequ}$, and $Q^{(3)}_{lequ}$ in the Warsaw basis. The evolution of the corresponding Wilson coefficients from the new-physics scale $\Lambda=1$ TeV down to the characteristic scale $\mu_b\simeq m_b$ is performed at three loops in QCD and one loop in EW/QED. We find that, after taking into account the constraint ${\cal B}(B_c\to\tau\nu)\lesssim 10\%$, a single $\left[C_{lq}^{(3)}\right]_{3323}(\Lambda)$ or $\left[C^{(3)}_{lequ}\right]_{3332}(\Lambda)$ can still resolve the $R(D^{(\ast)})$ anomalies at $1\sigma$, while a single $\left[C^{(1)}_{lequ}\right]_{3332}(\Lambda)$ is already ruled out by the measured $R(D^{(\ast)})$ at more than $3\sigma$. By minimizing the $\chi^2(C_i)$ function constructed from the current data on $R(D)$, $R(D^\ast)$, $P_\tau(D^\ast)$, $R(J/\psi)$, and $R(X_c)$, we obtain the eleven most trustworthy scenarios, each of which provides a good explanation of the $R(D^{(\ast)})$ anomalies at $1\sigma$. To further discriminate among these scenarios, we predict thirty-one observables associated with the processes considered under each NP scenario. Most of the scenarios can be differentiated from one another by using these observables and their correlations.
    Comment: 43 pages, 3 figures and 5 tables; references updated and more discussions added; final version to be published in the journal
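    For intuition, here is a minimal sketch of the kind of $\chi^2$ fit described above, for the single-coefficient case in which the new-physics operator shares the SM chirality structure and therefore rescales all $b\to c\tau\nu$ rates coherently. The numerical inputs are illustrative placeholders, not the experimental averages or SM predictions used in the paper:

```python
from scipy.optimize import minimize_scalar

# Illustrative placeholder inputs -- replace with the current experimental
# averages and SM predictions; these are NOT the values used in the paper.
R_D_EXP,  R_D_ERR  = 0.34, 0.03
R_DS_EXP, R_DS_ERR = 0.29, 0.015
R_D_SM,   R_DS_SM  = 0.30, 0.25

def chi2(c_v):
    """chi^2 for a single left-handed vector coefficient c_v with the same
    chirality structure as the SM, so every b -> c tau nu rate rescales
    coherently: R -> R_SM * |1 + c_v|^2 (valid for this operator only)."""
    r_d  = R_D_SM  * abs(1.0 + c_v) ** 2
    r_ds = R_DS_SM * abs(1.0 + c_v) ** 2
    return ((r_d - R_D_EXP) / R_D_ERR) ** 2 + ((r_ds - R_DS_EXP) / R_DS_ERR) ** 2

best = minimize_scalar(chi2, bounds=(-0.5, 0.5), method="bounded")
print(f"best-fit c_v = {best.x:.3f}, chi2_min = {best.fun:.2f}")
```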

    Revisiting the $B$-physics anomalies in $R$-parity violating MSSM

    In recent years, several deviations from the Standard Model predictions observed in semileptonic $B$-meson decays suggest the existence of new physics that would break lepton-flavour universality. In this work, we explore the possibility of using muon sneutrinos and right-handed sbottoms to solve these $B$-physics anomalies simultaneously in the $R$-parity violating minimal supersymmetric standard model. We find, for the first time, that the photonic penguin induced by sneutrino exchange can provide a sizable lepton-flavour-universal contribution, owing to a logarithmic enhancement. This prompts us to use the two-parameter scenario $(C^{\rm V}_9,\,C^{\rm U}_9)$ to explain the $b \to s \ell^+ \ell^-$ anomaly. Finally, the numerical analyses show that the muon sneutrinos and right-handed sbottoms can explain the $b \to s \ell^+ \ell^-$ and $R(D^{(\ast)})$ anomalies simultaneously while satisfying the constraints from other related processes, such as $B \to K^{(\ast)} \nu\bar\nu$ decays, $B_s$-$\bar B_s$ mixing, $Z$ decays, as well as $D^0 \to \mu^+\mu^-$, $\tau \to \mu\rho^0$, $B \to \tau\nu$, $D_s \to \tau\nu$, $\tau \to K\nu$, $\tau \to \mu\gamma$, and $\tau \to \mu\mu\mu$ decays.
    Comment: 10 pages, 8 figures; matches the version published in EPJC
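    For reference, a common convention in the literature for the $(C_9^{\rm V}, C_9^{\rm U})$ decomposition, with $C_9^{\rm U}$ the lepton-flavour-universal part and $C_9^{\rm V}$ a muon-specific shift (the paper's exact normalization may differ):

```latex
\mathcal{H}_{\rm eff} \supset -\frac{4G_F}{\sqrt{2}}\,V_{tb}V_{ts}^*\,\frac{\alpha_e}{4\pi}
\sum_{\ell=e,\mu} C_9^{\ell}\,(\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell) + \text{h.c.},
\qquad C_9^{\mu}=C_9^{\rm U}+C_9^{\rm V},\quad C_9^{e}=C_9^{\rm U}.
```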

    Efficient Multimodal Fusion via Interactive Prompting

    Large-scale pre-training has brought unimodal fields such as computer vision and natural language processing into a new era. Following this trend, the size of multimodal learning models constantly increases, leading to an urgent need to reduce the massive computational cost of fine-tuning these models for downstream tasks. In this paper, we propose an efficient and flexible multimodal fusion method, namely PMF, tailored for fusing unimodally pre-trained transformers. Specifically, we first present a modular multimodal fusion framework that exhibits high flexibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimizing objectives for multimodal learning. Notably, we propose to add prompt vectors only to the deep layers of the unimodal transformers, thus significantly reducing training memory usage. Experimental results show that our proposed method achieves performance comparable to several other multimodal fine-tuning methods with fewer than 3% trainable parameters and up to 66% savings in training memory usage.
    Comment: Camera-ready version for CVPR 2023
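    A minimal sketch of the deep-layer prompting idea (learnable prompt tokens prepended only from a given layer onward, with the backbone frozen); this illustrates the mechanism only, not the authors' PMF implementation, and all names and hyper-parameters are hypothetical:

```python
import torch
import torch.nn as nn

class DeepPromptedEncoder(nn.Module):
    """Prepend learnable prompt tokens only from layer `prompt_from` onward,
    keeping the pre-trained backbone frozen (hypothetical sketch)."""

    def __init__(self, num_layers=12, d_model=768, n_heads=12,
                 prompt_len=4, prompt_from=8):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(num_layers)
        ])
        for p in self.layers.parameters():      # frozen unimodal backbone
            p.requires_grad = False
        self.prompt_from = prompt_from
        self.prompt_len = prompt_len
        # one fresh set of prompt vectors per deep layer; only these train,
        # so backprop never traverses layers 0 .. prompt_from-1, which is
        # where the training-memory saving comes from
        self.prompts = nn.ParameterList([
            nn.Parameter(0.02 * torch.randn(prompt_len, d_model))
            for _ in range(num_layers - prompt_from)
        ])

    def forward(self, x):                       # x: (batch, seq_len, d_model)
        for i, layer in enumerate(self.layers):
            if i >= self.prompt_from:
                p = self.prompts[i - self.prompt_from]
                p = p.unsqueeze(0).expand(x.size(0), -1, -1)
                x = torch.cat([p, x], dim=1)    # prepend this layer's prompts
            x = layer(x)
            if i >= self.prompt_from:
                x = x[:, self.prompt_len:]      # strip prompts before next layer
        return x
```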

    Action Sensitivity Learning for the Ego4D Episodic Memory Challenge 2023

    This report presents the ReLER submission to two tracks of the Ego4D Episodic Memory Benchmark at CVPR 2023: Natural Language Queries and Moment Queries. The solution builds on our proposed Action Sensitivity Learning framework (ASL) to better capture the discrepant information carried by individual frames. Further, we incorporate a series of stronger video features and fusion strategies. Our method achieves an average mAP of 29.34, ranking 1st in the Moment Queries Challenge, and attains 19.79 mean R1, ranking 2nd in the Natural Language Queries Challenge. Our code will be released.
    Comment: Accepted to the CVPR 2023 Ego4D Workshop; 1st in the Ego4D Moment Queries Challenge; 2nd in the Ego4D Natural Language Queries Challenge

    Action Sensitivity Learning for Temporal Action Localization

    Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding. Most existing approaches directly predict action classes and regress offsets to boundaries, overlooking the discrepant importance of individual frames. In this paper, we propose an Action Sensitivity Learning framework (ASL) to tackle this task, which aims to assess the value of each frame and then leverage the generated action sensitivity to recalibrate the training procedure. We first introduce a lightweight Action Sensitivity Evaluator to learn the action sensitivity at both the class level and the instance level. The outputs of the two branches are combined to reweight the gradients of the two sub-tasks. Moreover, based on the action sensitivity of each frame, we design an Action Sensitive Contrastive Loss to enhance features, in which action-aware frames are sampled as positive pairs and action-irrelevant frames are pushed away. Extensive studies on various action localization benchmarks (i.e., MultiThumos, Charades, Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14, and ActivityNet1.3) show that ASL surpasses the state of the art in terms of average mAP under multiple types of scenarios, e.g., single-labeled, densely-labeled, and egocentric.
    Comment: Accepted to ICCV 2023
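    A rough sketch of a sensitivity-driven contrastive loss of the kind described above; the anchor construction and sampling here are simplified guesses, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def action_sensitive_contrastive_loss(feats, sensitivity, action_mask,
                                       tau=0.1, k=8):
    """Pull the most action-sensitive in-action frames together and push
    them away from out-of-action frames (multi-positive InfoNCE sketch).
      feats:       (T, D) per-frame features
      sensitivity: (T,)   learned per-frame action sensitivity
      action_mask: (T,)   bool, True inside the action instance
    """
    feats = F.normalize(feats, dim=-1)
    k = min(k, int(action_mask.sum()))
    # positives: the k most action-sensitive frames inside the instance
    pos_idx = sensitivity.masked_fill(~action_mask, float("-inf")).topk(k).indices
    neg_idx = (~action_mask).nonzero(as_tuple=True)[0]
    anchor = feats[pos_idx].mean(dim=0, keepdim=True)          # (1, D) prototype
    pos_sim = (feats[pos_idx] @ anchor.t()).squeeze(1) / tau   # (k,)
    neg_sim = (feats[neg_idx] @ anchor.t()).squeeze(1) / tau   # (N,)
    all_sim = torch.cat([pos_sim, neg_sim], dim=0)
    # -log( sum_pos exp / sum_all exp ), computed stably in log space
    return -(torch.logsumexp(pos_sim, 0) - torch.logsumexp(all_sim, 0))
```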

    Light-LOAM: A Lightweight LiDAR Odometry and Mapping based on Graph-Matching

    Simultaneous Localization and Mapping (SLAM) plays an important role in robot autonomy, and reliability and efficiency are the two most valued features for deploying SLAM on robots. In this paper, we consider achieving a reliable LiDAR-based SLAM function on computation-limited platforms, such as quadrotor UAVs, based on graph-based point cloud association. First, contrary to most works that select salient features for point cloud registration, we propose a non-conspicuous feature selection strategy for reliability and robustness. Then a two-stage correspondence selection method is used to register the point cloud: a KD-tree-based coarse matching followed by a graph-based matching method that uses geometric consistency to vote out incorrect correspondences. Additionally, we propose an odometry approach in which the weight optimizations are guided by the vote results from the aforementioned geometric-consistency graph. In this way, the optimization of the LiDAR odometry converges rapidly and yields a fairly accurate transformation, allowing the back-end module to finish the mapping task efficiently. Finally, we evaluate our proposed framework on the KITTI odometry dataset and in real-world environments. Experiments show that our SLAM system achieves comparable or higher accuracy with more balanced computational efficiency than mainstream LiDAR-based SLAM solutions.
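    A compact sketch of the two-stage correspondence selection described above, with a KD-tree for coarse matching and pairwise distance preservation as the geometric-consistency voting criterion; function names and thresholds are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

def select_correspondences(src, dst, dist_eps=0.2, vote_ratio=0.5):
    """Two-stage matching sketch: KD-tree nearest neighbours give coarse
    pairs, then pairwise geometric consistency votes out wrong matches.
    src, dst: (N, 3) / (M, 3) point clouds; keep N small, since the
    voting step below is O(N^2)."""
    # stage 1: coarse matching -- nearest neighbour in the target cloud
    tree = cKDTree(dst)
    _, nn_idx = tree.query(src)
    p, q = src, dst[nn_idx]                  # candidate pairs (p_i, q_i)
    # stage 2: a rigid transform preserves pairwise distances, so for two
    # correct pairs |d(p_i, p_j) - d(q_i, q_j)| must be small; each pair
    # votes for every other pair it is geometrically consistent with
    dp = np.linalg.norm(p[:, None] - p[None], axis=-1)
    dq = np.linalg.norm(q[:, None] - q[None], axis=-1)
    consistent = np.abs(dp - dq) < dist_eps
    votes = consistent.sum(axis=1) - 1       # discard the self-vote
    keep = votes >= vote_ratio * votes.max()
    return p[keep], q[keep], votes[keep]
```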