    Hybrid Weyl-type bound for p-power twisted GL(2) L-functions

    Let $g$ be a fixed holomorphic cusp form of arbitrary level and nebentypus, and let $\chi$ be a primitive character of prime-power modulus $q = p^{\gamma}$. In this paper, we prove the following hybrid Weyl-type subconvexity bound: $L(1/2 + it, g \otimes \chi) \ll_{g, p, \varepsilon} \bigl((1+|t|)\,q\bigr)^{1/3+\varepsilon}$ for any $\varepsilon > 0$. Comment: 24 pages; updated reference
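    For orientation, the short LaTeX note below (not quoted from the paper) records why the exponent 1/3 is called Weyl-type: under the standard analytic-conductor normalization, which is assumed here rather than stated in the abstract, convexity gives exponent 1/2 in the same parametrization, and 1/3 is the classical Weyl-strength improvement.

        % Hedged sketch (assumes the standard analytic-conductor normalization):
        % convexity exponent vs. the Weyl-type exponent proved in the paper.
        \[
          \text{convexity:}\quad
            L(\tfrac12+it,\,g\otimes\chi)\ \ll_{g,\varepsilon}\ \bigl((1+|t|)\,q\bigr)^{1/2+\varepsilon},
          \qquad
          \text{Weyl-type:}\quad
            L(\tfrac12+it,\,g\otimes\chi)\ \ll_{g,p,\varepsilon}\ \bigl((1+|t|)\,q\bigr)^{1/3+\varepsilon}.
        \]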

    Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation

    Accurately estimating the 3D pose of humans in video sequences requires both accuracy and a well-structured architecture. With the success of transformers, we introduce the Refined Temporal Pyramidal Compression-and-Amplification (RTPCA) transformer. Exploiting the temporal dimension, RTPCA extends intra-block temporal modeling via its Temporal Pyramidal Compression-and-Amplification (TPCA) structure and refines inter-block feature interaction with a Cross-Layer Refinement (XLR) module. In particular, the TPCA block exploits a temporal pyramid paradigm, reinforcing key and value representation capabilities and seamlessly extracting spatial semantics from motion sequences. We stitch these TPCA blocks together with XLR, which promotes rich semantic representation through continuous interaction of queries, keys, and values. This strategy carries early-stage information into the current flow, addressing the deficits in detail and stability seen in other transformer-based methods. We demonstrate the effectiveness of RTPCA by achieving state-of-the-art results on the Human3.6M, HumanEva-I, and MPI-INF-3DHP benchmarks with minimal computational overhead. The source code is available at https://github.com/hbing-l/RTPCA. Comment: 11 pages, 5 figures
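    To make the compression-and-amplification idea concrete, here is a minimal PyTorch sketch of a TPCA-style attention block: keys and values are temporally pooled (compression) and interpolated back (amplification) before attending. All names, dimensions, and the pooling choice are illustrative assumptions, not the actual RTPCA implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TPCABlockSketch(nn.Module):
            """Hypothetical temporal pyramid compression-and-amplification block."""
            def __init__(self, dim=256, heads=8, pool=2):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm = nn.LayerNorm(dim)
                self.pool = pool                       # temporal compression factor

            def forward(self, x):                      # x: (batch, frames, dim)
                q = self.norm(x)
                # Compression: shrink the temporal axis of keys/values.
                kv = F.avg_pool1d(q.transpose(1, 2), self.pool)
                # Amplification: restore the original temporal length.
                kv = F.interpolate(kv, size=x.shape[1], mode="linear",
                                   align_corners=False).transpose(1, 2)
                out, _ = self.attn(q, kv, kv, need_weights=False)
                return x + out                         # residual connection

        seq = torch.randn(2, 81, 256)                  # 81-frame pose token sequence
        print(TPCABlockSketch()(seq).shape)            # torch.Size([2, 81, 256])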

    DAMO-StreamNet: Optimizing Streaming Perception in Autonomous Driving

    Real-time perception, or streaming perception, is a crucial aspect of autonomous driving that has yet to be thoroughly explored in existing research. To address this gap, we present DAMO-StreamNet, an optimized framework that combines recent advances from the YOLO series with a comprehensive analysis of spatial and temporal perception mechanisms, delivering a cutting-edge solution. The key innovations of DAMO-StreamNet are: (1) a robust neck structure incorporating deformable convolution, enhancing the receptive field and feature alignment capabilities; (2) a dual-branch structure that integrates short-path semantic features and long-path temporal features, improving motion state prediction accuracy; (3) logits-level distillation for efficient optimization, aligning the logits of teacher and student networks in semantic space; and (4) a real-time forecasting mechanism that updates support-frame features with the current frame, ensuring seamless streaming perception during inference. Our experiments demonstrate that DAMO-StreamNet surpasses existing state-of-the-art methods, achieving 37.8% sAP at the normal input size (600, 960) and 43.3% sAP at the large input size (1200, 1920) without using extra data. This work not only sets a new benchmark for real-time perception but also provides valuable insights for future research. Additionally, DAMO-StreamNet can be applied to other autonomous systems, such as drones and robots, paving the way for real-time perception in those settings. The code is available at https://github.com/zhiqic/DAMO-StreamNet.
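    As an illustration of point (3), here is a minimal, generic logits-level distillation loss in PyTorch: a temperature-softened KL divergence between teacher and student logits. The function name, temperature, and tensor shapes are assumptions for the sketch, not DAMO-StreamNet's actual training code.

        import torch
        import torch.nn.functional as F

        def logits_distillation_loss(student_logits, teacher_logits, temperature=2.0):
            # KL(teacher || student) on temperature-softened class distributions.
            log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
            p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
            # The T^2 factor keeps gradient magnitudes comparable across temperatures.
            return F.kl_div(log_p_student, p_teacher,
                            reduction="batchmean") * temperature ** 2

        student = torch.randn(4, 100, 8)               # (batch, anchors, classes)
        teacher = torch.randn(4, 100, 8)
        print(logits_distillation_loss(student, teacher).item())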

    DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs

    Existing nighttime unmanned aerial vehicle (UAV) trackers follow an "Enhance-then-Track" architecture: first a light enhancer brightens the nighttime video, then a daytime tracker locates the object. This separation of enhancement and tracking fails to build an end-to-end trainable vision system. To address this, we propose a novel architecture called Darkness Clue-Prompted Tracking (DCPT) that achieves robust UAV tracking at night by efficiently learning to generate darkness clue prompts. Without a separate enhancer, DCPT directly encodes anti-dark capabilities into prompts using a darkness clue prompter (DCP). Specifically, DCP iteratively learns emphasizing and undermining projections for darkness clues. It then injects these learned visual prompts into a daytime tracker with fixed parameters across transformer layers. Moreover, a gated feature aggregation mechanism enables adaptive fusion between prompts, and between prompts and the base model. Extensive experiments show state-of-the-art performance for DCPT on multiple dark-scenario benchmarks. The unified end-to-end learning of enhancement and tracking in DCPT yields a more trainable system, and the darkness clue prompting efficiently injects anti-dark knowledge without extra modules. Code and models will be released. Comment: Under review
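    A minimal PyTorch sketch of the kind of gated aggregation described above, fusing a learned darkness-clue prompt with frozen tracker features; the module name, shapes, and the sigmoid-gate design are assumptions for illustration, not DCPT's released code.

        import torch
        import torch.nn as nn

        class GatedPromptFusionSketch(nn.Module):
            """Hypothetical gate deciding how much prompt signal to inject."""
            def __init__(self, dim=256):
                super().__init__()
                self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

            def forward(self, base_feat, prompt_feat):  # both (batch, tokens, dim)
                g = self.gate(torch.cat([base_feat, prompt_feat], dim=-1))
                # Element-wise gate g in (0, 1) modulates the injected prompt.
                return base_feat + g * prompt_feat

        fusion = GatedPromptFusionSketch()
        out = fusion(torch.randn(1, 320, 256), torch.randn(1, 320, 256))
        print(out.shape)                                # torch.Size([1, 320, 256])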

    Detection of STAT2 in early stage of cervical premalignancy and in cervical cancer

    Objective: To measure the expression pattern of STAT2 during cervical cancer initiation and progression in tissue sections from patients with cervicitis, dysplasia, and cervical cancer. Methods: An antibody against human STAT2 was confirmed by transient plasmid transfection and Western blot. Immunohistochemistry with the confirmed antibody as the primary antibody was used to detect STAT2 expression in cervical biopsies. Results: The overall rates of positive STAT2 expression in the cervicitis, dysplasia, and cervical cancer groups were 38.5%, 69.4%, and 76.9%, respectively. STAT2 levels were significantly increased in premalignant dysplasia and cervical cancer compared with cervicitis (P < 0.05). Notably, STAT2 signals were found mainly in the cytoplasm, implying that STAT2 was not biologically active. Conclusions: These findings reveal an association between cervical cancer progression and augmented STAT2 expression; an increase in STAT2 appears to be an early detectable cellular event in cervical cancer development.
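    For the reported rate comparison, here is a small Python sketch of the standard chi-square test one would run on such counts; the group sizes below are hypothetical placeholders chosen only to reproduce the quoted percentages, not the study's actual sample sizes.

        from scipy.stats import chi2_contingency

        # Rows: cervicitis, dysplasia, cervical cancer; columns: STAT2+, STAT2-.
        # Counts are hypothetical, chosen to match 38.5%, 69.4%, and 76.9% positivity.
        table = [[10, 16],   # 10/26 ~ 38.5% positive
                 [25, 11],   # 25/36 ~ 69.4% positive
                 [30,  9]]   # 30/39 ~ 76.9% positive
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")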

    Entanglement Structure: Entanglement Partitioning in Multipartite Systems and Its Experimental Detection Using Optimizable Witnesses

    Creating large-scale entanglement lies at the heart of many quantum information processing protocols and of investigations of fundamental physics. For multipartite quantum systems, it is crucial to identify not only the presence of entanglement but also its detailed structure, because in a generic experimental situation with sufficiently many subsystems involved, the production of so-called genuine multipartite entanglement remains a formidable challenge. Consequently, focusing exclusively on the identification of this strongest type of entanglement may result in an all-or-nothing situation in which some inherently quantum aspects of the resource are overlooked. On the contrary, even if the system is not genuinely multipartite entangled, there may still be many-body entanglement present in the system. An identification of the entanglement structure may thus provide a hint about where imperfections in the setup occur, as well as where groups of subsystems can still exhibit strong quantum-information-processing capabilities. However, there are no known efficient methods to identify the underlying entanglement structure. Here, we propose two complementary families of witnesses for the identification of such structures. They are based on the detection of entanglement intactness and entanglement depth, each requiring the implementation of only two local measurements. Our method is also robust against noise and other imperfections, as reflected by our experimental implementation of these tools to verify the entanglement structure of five different eight-photon entangled states. We demonstrate how their entanglement structure can be precisely and systematically inferred from the experimental data. In achieving this goal, we also illustrate how the same set of data can be classically postprocessed to learn the most about the measured system. Comment: 21 pages, 13 figures
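    As a minimal illustration of witness-based detection (not the paper's optimizable two-setting witnesses), the Python sketch below evaluates the standard projector witness W = I/2 - |GHZ><GHZ| on a noisy eight-qubit GHZ state; a negative expectation value certifies genuine multipartite entanglement. The white-noise model and visibility are assumptions.

        import numpy as np

        n = 8                                       # eight qubits, cf. the 8-photon states
        dim = 2 ** n
        ghz = np.zeros(dim)
        ghz[0] = ghz[-1] = 1 / np.sqrt(2)           # (|0...0> + |1...1>) / sqrt(2)
        proj = np.outer(ghz, ghz)

        p = 0.7                                     # illustrative white-noise visibility
        rho = p * proj + (1 - p) * np.eye(dim) / dim
        fidelity = float(np.trace(proj @ rho))      # <GHZ| rho |GHZ>
        witness_value = 0.5 - fidelity              # Tr(W rho) with W = I/2 - |GHZ><GHZ|
        print(f"fidelity = {fidelity:.3f}, Tr(W rho) = {witness_value:.3f}")
        # Negative value (fidelity > 1/2) flags genuine 8-partite entanglement.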

    Responsiveness of voltage-gated calcium channels in SH-SY5Y human neuroblastoma cells on quasi-three-dimensional micropatterns formed with poly (l-lactic acid)

    Introduction: In this study, quasi-three-dimensional (3D) microwell patterns were fabricated with poly (l-lactic acid) for the development of cell-based assays targeting voltage-gated calcium channels (VGCCs). Methods and materials: SH-SY5Y human neuroblastoma cells were interfaced with the microwell patterns and found to grow as two-dimensional (2D), three-dimensional (3D), or near-two-dimensional (N2D) cultures, categorized on the basis of the cells’ location in the pattern. The capability of the microwell patterns to support 3D cell growth was evaluated in terms of the percentage of cells in each growth category. Cell spreading was analyzed in terms of projection areas under light microscopy. The VGCC responsiveness of SH-SY5Y cells was evaluated with confocal microscopy and a fluorescent calcium indicator, Calcium Green™-1. The expression of L-type calcium channels was evaluated using immunofluorescence staining with DM-BODIPY. Results: Cells within the microwells, whether N2D or 3D, showed more rounded shapes and smaller projection areas than 2D cells on flat poly (l-lactic acid) substrates. Cells in microwells also showed significantly lower VGCC responsiveness than cells on flat substrates, in terms of both response magnitudes and percentages of responsive cells, upon depolarization with 50 mM K+. This lower VGCC responsiveness could not be explained by the difference in L-type calcium channel expression. For the two patterns addressed in this study, N2D cells consistently exhibited intermediate values of both projection area and VGCC responsiveness between those of 2D and 3D cells, suggesting a correlative relation between cell morphology and VGCC responsiveness. Conclusion: These results suggest that the pattern structure, and therefore the cell growth characteristics, were critical factors in determining VGCC responsiveness, and they provide an approach for engineering cell functionality in cell-based assay systems and tissue engineering scaffolds.
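    A small Python sketch of the generic dF/F0 analysis underlying such measurements: compute each cell's peak fractional fluorescence change after depolarization and count responsive cells. The synthetic traces and the 10% response threshold are illustrative assumptions, not the study's data or criteria.

        import numpy as np

        rng = np.random.default_rng(0)
        n_cells, n_frames, stim_frame = 50, 120, 60
        # Synthetic fluorescence traces (arbitrary units); half the cells respond.
        traces = rng.normal(1000.0, 20.0, size=(n_cells, n_frames))
        traces[: n_cells // 2, stim_frame:] += 300.0

        f0 = traces[:, :stim_frame].mean(axis=1)                    # per-cell baseline F0
        peak_dff = (traces[:, stim_frame:].max(axis=1) - f0) / f0   # peak dF/F0
        responsive = peak_dff > 0.10                                # assumed 10% threshold
        print(f"responsive cells: {responsive.mean():.0%}, "
              f"mean peak dF/F0: {peak_dff.mean():.2f}")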

    PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation

    Existing 3D human pose estimators face challenges in adapting to new datasets due to the lack of 2D-3D pose pairs in training sets. To overcome this issue, we propose the Multi-Hypothesis Pose Synthesis Domain Adaptation (PoSynDA) framework to bridge this data disparity gap in the target domain. Typically, PoSynDA uses a diffusion-inspired structure to simulate the 3D pose distribution in the target domain. By incorporating a multi-hypothesis network, PoSynDA generates diverse pose hypotheses and aligns them with the target domain. To do this, it first uses target-specific source augmentation to obtain the target-domain distribution data from the source domain by decoupling the scale and position parameters. The process is then further refined through a teacher-student paradigm and low-rank adaptation. In extensive comparisons on benchmarks such as Human3.6M and MPI-INF-3DHP, PoSynDA demonstrates competitive performance, even comparable to the target-trained MixSTE model (Zhang et al., 2022). This work paves the way for the practical application of 3D human pose estimation in unseen domains. The code is available at https://github.com/hbing-l/PoSynDA. Comment: Accepted to ACM Multimedia 2023; 10 pages, 4 figures, 8 tables; the code is at https://github.com/hbing-l/PoSynDA
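    Since the abstract explicitly mentions low-rank adaptation, here is a minimal generic LoRA-style wrapper in PyTorch showing the idea of adapting a frozen linear layer with a trainable low-rank delta; the class name, rank, and scaling are assumptions, not PoSynDA's configuration.

        import torch
        import torch.nn as nn

        class LoRALinearSketch(nn.Module):
            """Hypothetical low-rank adapter around a frozen linear layer."""
            def __init__(self, base: nn.Linear, rank=4, alpha=8):
                super().__init__()
                self.base = base
                for p in self.base.parameters():
                    p.requires_grad_(False)            # keep source-domain weights frozen
                self.down = nn.Linear(base.in_features, rank, bias=False)
                self.up = nn.Linear(rank, base.out_features, bias=False)
                nn.init.zeros_(self.up.weight)         # adapter starts as a zero delta
                self.scale = alpha / rank

            def forward(self, x):
                return self.base(x) + self.scale * self.up(self.down(x))

        layer = LoRALinearSketch(nn.Linear(256, 256))
        tokens = torch.randn(2, 17, 256)               # e.g. 17 joint tokens
        print(layer(tokens).shape)                     # torch.Size([2, 17, 256])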