
    ParticleNet and its application on CEPC Jet Flavor Tagging

    Identification of quark flavor is essential for collider experiments in high-energy physics and relies on flavor tagging algorithms. In this study, using a full simulation of the Circular Electron Positron Collider (CEPC), we investigated the flavor tagging performance of two different algorithms: ParticleNet, originally developed at CMS, and LCFIPlus, the current flavor tagging algorithm employed at CEPC. Compared to LCFIPlus, ParticleNet significantly enhances flavor tagging performance, substantially improving benchmark measurement accuracy: a 36% improvement for the $\nu\bar{\nu}H\to c\bar{c}$ measurement and a 75% improvement for the $|V_{cb}|$ measurement via W boson decay when CEPC operates as a Higgs factory at a center-of-mass energy of 240 GeV and an integrated luminosity of 5.6 ab$^{-1}$. We compared the performance of ParticleNet and LCFIPlus for different vertex detector configurations, observing that the inner radius is the most sensitive parameter, followed by material budget and spatial resolution.

    See More and Know More: Zero-shot Point Cloud Segmentation via Multi-modal Visual Data

    Zero-shot point cloud segmentation aims to make deep models capable of recognizing novel objects in point clouds that are unseen during the training phase. Recent trends favor pipelines that transfer knowledge from seen classes with labels to unseen classes without labels. They typically align visual features with semantic features obtained from word embeddings under the supervision of seen-class annotations. However, point clouds contain limited information with which to fully match semantic features. In fact, the rich appearance information of images is a natural complement to textureless point clouds, which has not been well explored in previous literature. Motivated by this, we propose a novel multi-modal zero-shot learning method that better utilizes the complementary information of point clouds and images for more accurate visual-semantic alignment. Extensive experiments are performed on two popular benchmarks, SemanticKITTI and nuScenes, and our method outperforms current SOTA methods with 52% and 49% improvement on average for unseen-class mIoU, respectively.
    Comment: Accepted by ICCV 202
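The unseen-class mIoU metric reported above is the standard intersection-over-union averaged over the unseen classes only. A minimal sketch of that computation from a confusion matrix (the toy matrix and the choice of which class is "unseen" are ours, purely for illustration):

```python
import numpy as np

def miou(conf: np.ndarray, class_ids) -> float:
    """Mean IoU over the given class indices, from a confusion matrix
    where conf[t, p] counts points of true class t predicted as p."""
    ious = []
    for c in class_ids:
        tp = conf[c, c]
        fp = conf[:, c].sum() - tp   # predicted c, but true class differs
        fn = conf[c, :].sum() - tp   # true c, but predicted otherwise
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / denom)
    return float(np.mean(ious))

# Toy 3-class example: class 2 plays the role of an "unseen" class.
conf = np.array([[50,  5,  5],
                 [ 4, 40,  6],
                 [10, 10, 20]])
print(miou(conf, [2]))        # IoU of the unseen class alone
print(miou(conf, [0, 1, 2]))  # mIoU over all classes
```

Evaluating seen and unseen classes separately, as above, is what makes the zero-shot comparison meaningful: the overall mIoU can stay high even when unseen classes are never recognized.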

    Towards Label-free Scene Understanding by Vision Foundation Models

    Vision foundation models such as Contrastive Vision-Language Pre-training (CLIP) and Segment Anything (SAM) have demonstrated impressive zero-shot performance on image classification and segmentation tasks. However, the incorporation of CLIP and SAM for label-free scene understanding has yet to be explored. In this paper, we investigate the potential of vision foundation models to enable networks to comprehend 2D and 3D worlds without labelled data. The primary challenge lies in effectively supervising networks under extremely noisy pseudo labels, which are generated by CLIP and further exacerbated during propagation from the 2D to the 3D domain. To tackle these challenges, we propose a novel Cross-modality Noisy Supervision (CNS) method that leverages the strengths of CLIP and SAM to supervise 2D and 3D networks simultaneously. In particular, we introduce a prediction consistency regularization to co-train the 2D and 3D networks, and then further impose latent-space consistency between the networks using SAM's robust feature representation. Experiments conducted on diverse indoor and outdoor datasets demonstrate the superior performance of our method in understanding 2D and 3D open environments. Our 2D and 3D networks achieve label-free semantic segmentation with 28.4% and 33.5% mIoU on ScanNet, improvements of 4.7% and 7.9%, respectively. On the nuScenes dataset, our performance is 26.8%, an improvement of 6%. Code will be released (https://github.com/runnanchen/Label-Free-Scene-Understanding)
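The abstract does not spell out the form of the prediction consistency regularization between the 2D and 3D branches. A minimal sketch of one common choice for such a term, a symmetrized KL divergence between the two branches' per-point class distributions (function names and the loss form are our assumption, not necessarily the CNS formulation):

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_2d, logits_3d, eps=1e-8):
    """Symmetrized KL divergence between the per-point class
    distributions predicted by the 2D and the 3D network."""
    p = softmax(logits_2d)
    q = softmax(logits_3d)
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(-1)
    return float(0.5 * (kl_pq + kl_qp).mean())

# Identical predictions give zero loss; diverging predictions a positive one.
a = np.random.default_rng(0).normal(size=(4, 10))
print(consistency_loss(a, a))        # ~0.0
print(consistency_loss(a, a + 1.0))  # still ~0: softmax is shift-invariant
print(consistency_loss(a, -a) > 0)   # True
```

Minimizing a term like this pushes both branches toward agreeing predictions, which is what lets each network act as a (noisy) teacher for the other.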

    The association between neuroendocrine/glucose metabolism and clinical outcomes and disease course in different clinical states of bipolar disorders

    Objective: The treatment of bipolar disorder (BD) remains challenging. This study evaluated the impact of the hypothalamic–pituitary–adrenal (HPA) axis, the hypothalamic–pituitary–thyroid (HPT) axis, and glucose metabolism on clinical outcomes in patients with bipolar depression (BD-D) and bipolar mania (BD-M).
    Methods: The research design was a longitudinal prospective study. A total of 500 BD patients aged 18 to 65 years, treated in 15 hospitals in Western China, were enrolled. The Young Mania Rating Scale (YMRS) and the Montgomery–Asberg Depression Rating Scale (MADRS) were used to assess BD symptoms. An effective treatment response was defined as a reduction in symptom score of more than 25% after 12 weeks of treatment. Symptom scores were correlated with the homeostatic model assessment of insulin resistance (HOMA-IR) index, HPA axis hormone levels (adrenocorticotropic hormone (ACTH) and cortisol), and HPT axis hormone levels (thyroid-stimulating hormone (TSH), triiodothyronine (T3), thyroxine (T4), free triiodothyronine (fT3), and free thyroxine (fT4)).
    Results: In the BD-M group, the YMRS score was positively correlated with baseline T4 (r = 0.349, p = 0.010) and fT4 (r = 0.335, p = 0.013) and negatively correlated with fasting insulin (r = −0.289, p = 0.013). Pre-treatment HOMA-IR was significantly correlated with an adverse course (p = 0.045, OR = 0.728). In the BD-D group, the baseline MADRS score was significantly positively correlated with baseline fT3 (r = 0.223, p = 0.032) and fT4 (r = 0.315, p = 0.002), while baseline T3 (p = 0.032, OR = 5.071) was significantly positively related to treatment response.
    Conclusion: The HPT axis and glucose metabolism were closely associated with clinical outcomes at 12 weeks in both the BD-D and BD-M groups. If confirmed in further longitudinal studies, monitoring T3 in BD-D patients and HOMA-IR in BD-M patients could serve as potential treatment response biomarkers.
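The two quantitative criteria in the Methods above are straightforward to state in code. A sketch using the standard HOMA1-IR formula (fasting glucose in mmol/L times fasting insulin in µU/mL, divided by 22.5) and the study's >25% symptom-score-reduction response criterion; the function and variable names are ours:

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Standard HOMA1-IR index of insulin resistance."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def responded(baseline_score: float, week12_score: float) -> bool:
    """Effective response per the study: symptom score reduced by
    more than 25% after 12 weeks of treatment."""
    return (baseline_score - week12_score) / baseline_score > 0.25

print(round(homa_ir(5.0, 10.0), 2))  # 2.22
print(responded(30, 20))             # True (33% reduction)
print(responded(30, 24))             # False (20% reduction)
```

Note that HOMA-IR is unit-sensitive: glucose reported in mg/dL must first be converted to mmol/L (divide by 18) before applying the formula.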

    Jet origin identification and measurement of rare hadronic decays of Higgs boson at e+e−e^+e^- collider

    We propose to identify the jet origin using deep learning tools for experiments at the high-energy frontier, where jet origins are categorized into 5 species of quarks ($b, c, s, u, d$), 5 species of anti-quarks ($\bar{b}, \bar{c}, \bar{s}, \bar{u}, \bar{d}$), and gluons. Using simulated physics events at the Circular Electron Positron Collider and the ParticleNet algorithm, we quantify the performance of jet origin identification using an 11-dimensional confusion matrix. This matrix exhibits flavor tagging efficiencies of 91% for $b$ and $\bar{b}$, 80% for $c$ and $\bar{c}$, and 64% for $s$ and $\bar{s}$ quarks, as well as jet charge misidentification rates of 18% for $b$ and $\bar{b}$, 7% for $c$ and $\bar{c}$, and 16% for $s$ and $\bar{s}$ quarks, respectively. We use this method to determine upper limits on the branching ratios of rare hadronic Higgs decays, specifically $s\bar{s}$, $u\bar{u}$, and $d\bar{d}$, as well as decays via flavor-changing neutral currents, such as $sb$, $sd$, $db$, and $cu$. We conclude that these Higgs decay branching ratios could be measured with typical upper limits of 0.02%-0.1% at 95% confidence level at CEPC nominal parameters. For the $H\rightarrow s\bar{s}$ decay, this upper limit corresponds to three times the standard model prediction.
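The efficiencies and charge flip rates above are read off an 11x11 confusion matrix. A sketch under one plausible reading (our assumption, not necessarily the paper's exact definition): flavor efficiency is the probability that a jet of a given quark species is assigned the right flavor with either charge, and the charge misidentification rate is the fraction of those flavor-correct jets given the wrong sign. The category ordering and the toy matrix are ours:

```python
import numpy as np

# Hypothetical category order: 5 quarks, 5 anti-quarks, gluon.
LABELS = ["b", "c", "s", "u", "d",
          "bbar", "cbar", "sbar", "ubar", "dbar", "g"]

def flavor_metrics(M: np.ndarray, quark: str):
    """Flavor efficiency and charge flip rate for one quark species,
    from a row-normalized confusion matrix M[true, predicted]."""
    iq = LABELS.index(quark)
    iqbar = LABELS.index(quark + "bar")
    row = M[iq]
    eff = row[iq] + row[iqbar]   # right flavor, either charge
    flip = row[iqbar] / eff      # wrong charge among flavor-correct jets
    return eff, flip

# Toy matrix: diagonal-dominant, with some b <-> bbar charge confusion.
M = np.full((11, 11), 0.01)
np.fill_diagonal(M, 0.8)
M[0, 5] = M[5, 0] = 0.11
M /= M.sum(axis=1, keepdims=True)    # normalize rows to probabilities

eff, flip = flavor_metrics(M, "b")
print(round(eff, 3), round(flip, 3))
```

With this toy matrix the $b$ efficiency happens to come out at 0.91, matching the number quoted above purely by construction; the real matrix is estimated from the simulated CEPC events.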