
    Multiple scattering effects on heavy meson production in p+A collisions at backward rapidity

    We study the incoherent multiple scattering effects on heavy meson production in the backward rapidity region of p+A collisions within the generalized high-twist factorization formalism. We calculate explicitly the double scattering contributions to the heavy meson differential cross sections by taking into account both initial-state and final-state interactions, and find that these corrections are positive. We further evaluate the nuclear modification factor for muons that come from the semi-leptonic decays of heavy flavor mesons. Phenomenological applications in d+Au collisions at a center-of-mass energy $\sqrt{s}=200$ GeV at RHIC and in p+Pb collisions at $\sqrt{s}=5.02$ TeV at the LHC are presented. We find that incoherent multiple scattering can describe rather well the observed nuclear enhancement in the intermediate $p_T$ region for such reactions. Comment: 10 pages, 6 figures, published version in PL
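    The nuclear modification factor compared against data in this abstract is the per-binary-collision ratio of the p+A to p+p yields, $R_{pA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}}\rangle}\,\frac{dN_{pA}/dp_T}{dN_{pp}/dp_T}$, with $R_{pA}>1$ indicating the nuclear enhancement attributed to incoherent multiple scattering. A minimal sketch of that ratio, using entirely hypothetical toy spectra in place of the measured decay-muon yields:

    ```python
    import numpy as np

    # Hypothetical per-p_T-bin yields; real analyses use measured
    # (or calculated) heavy-flavor decay-muon spectra.
    p_t = np.array([2.0, 3.0, 4.0, 5.0])                 # GeV, bin centers
    dN_pA = np.array([1.1e-3, 4.0e-4, 1.6e-4, 7.0e-5])   # yield in p+A
    dN_pp = np.array([1.0e-4, 3.6e-5, 1.5e-5, 6.8e-6])   # yield in p+p
    n_coll = 10.5   # assumed <N_coll> for the chosen centrality class

    # R_pA > 1 in a bin signals nuclear enhancement in that p_T region.
    r_pA = dN_pA / (n_coll * dN_pp)
    print(r_pA)
    ```

    With these toy numbers the ratio sits slightly above unity at intermediate $p_T$, mimicking the enhancement pattern the abstract describes.
    
    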

    Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos

    Despite the recent progress, 3D multi-person pose estimation from monocular videos is still challenging due to the commonly encountered problem of missing information caused by occlusion, partially out-of-frame target persons, and inaccurate person detection. To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses without requiring camera parameters. In particular, we introduce a human-joint GCN, which unlike existing GCNs is based on a directed graph that employs the 2D pose estimator's confidence scores to improve the pose estimation results. We also introduce a human-bone GCN, which models the bone connections and provides information beyond the human joints. The two GCNs work together to estimate the spatial frame-wise 3D poses and can make use of both visible joint and bone information in the target frame to estimate the occluded or missing human-part information. To further refine the 3D pose estimation, we use our temporal convolutional networks (TCNs) to enforce the temporal and human-dynamics constraints. We use a joint-TCN to estimate person-centric 3D poses across frames, and propose a velocity-TCN to estimate the speed of 3D joints to ensure the consistency of the 3D pose estimation in consecutive frames. Finally, to estimate the 3D human poses for multiple persons, we propose a root-TCN that estimates camera-centric 3D poses without requiring camera parameters. Quantitative and qualitative evaluations demonstrate the effectiveness of the proposed method. Comment: 10 pages, 3 figures, Accepted to AAAI 202
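    The core idea of the human-joint GCN, weighting messages on a directed skeleton graph by the 2D detector's per-joint confidence so that occluded joints contribute less, can be illustrated with a generic message-passing step. This is a sketch of the general technique, not the paper's implementation; the edge list, confidence values, and layer shapes below are assumptions.

    ```python
    import numpy as np

    def gcn_layer(x, edges, conf, w):
        """One message-passing step on a directed human-joint graph.

        x:     (J, F) per-joint input features
        edges: list of (src, dst) directed edges along the skeleton
        conf:  (J,) 2D-detector confidence per joint, assumed in [0, 1]
        w:     (F, F_out) learnable projection matrix
        """
        J = x.shape[0]
        a = np.zeros((J, J))
        for s, d in edges:
            # Weight each incoming message by the source joint's 2D
            # confidence, so occluded (low-confidence) joints matter less.
            a[d, s] = conf[s]
        a += np.eye(J)                       # self-loops keep own features
        a /= a.sum(axis=1, keepdims=True)    # row-normalize the adjacency
        return np.maximum(a @ x @ w, 0.0)    # aggregate, project, ReLU

    # Toy example: 3 joints on a chain 0 -> 1 -> 2, joint 1 occluded.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    out = gcn_layer(x, [(0, 1), (1, 2)],
                    np.array([0.9, 0.2, 0.8]),
                    rng.normal(size=(4, 4)))
    print(out.shape)
    ```

    In the paper's full framework such spatial layers are followed by temporal convolutions (joint-TCN, velocity-TCN, root-TCN) over the per-frame outputs; the sketch covers only the confidence-weighted spatial aggregation.
    
    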