
    RAHNet: Retrieval Augmented Hybrid Network for Long-tailed Graph Classification

    Graph classification is a crucial task in many real-world multimedia applications, where graphs can represent various multimedia data types such as images, videos, and social networks. Previous efforts have applied graph neural networks (GNNs) under the assumption of a balanced class distribution. However, real-world data typically exhibit long-tailed class distributions, biasing GNNs towards the head classes and limiting their generalization over the tail classes. Recent approaches mainly focus on re-balancing the classes during model training, which fails to explicitly introduce new knowledge and sacrifices the performance of the head classes. To address these drawbacks, we propose a novel framework called Retrieval Augmented Hybrid Network (RAHNet) to jointly learn a robust feature extractor and an unbiased classifier in a decoupled manner. In the feature extractor training stage, we develop a graph retrieval module to search for relevant graphs that directly enrich the intra-class diversity for the tail classes. Moreover, we optimize a category-centered supervised contrastive loss to obtain discriminative representations, which is more suitable for long-tailed scenarios. In the classifier fine-tuning stage, we balance the classifier weights with two weight regularization techniques, i.e., Max-norm and weight decay. Experiments on various popular benchmarks verify the superiority of the proposed method against state-of-the-art approaches.
    Comment: Accepted by the ACM International Conference on Multimedia (MM) 202
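    The two classifier regularizers named in the abstract, Max-norm and weight decay, can be sketched in a few lines. This is a hypothetical illustration (function names and values are not from the paper): Max-norm caps the L2 norm of each class's weight vector, which counteracts the tendency of head classes to accumulate larger norms than tail classes.

```python
import numpy as np

def max_norm_constrain(W, max_norm=1.0):
    """Clip each class's weight vector (a row of W) to a maximum L2 norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return W * scale

def weight_decay_step(W, lr=0.1, decay=0.01):
    """One step of plain weight decay: shrink all weights toward zero."""
    return W * (1.0 - lr * decay)

# Toy example: a head class with a large weight norm vs. a tail class
# with a small one; the constraint clips only the oversized row.
W = np.array([[3.0, 4.0],   # head class, norm 5.0
              [0.3, 0.4]])  # tail class, norm 0.5
W = max_norm_constrain(W, max_norm=1.0)
print(np.linalg.norm(W, axis=1))  # head clipped to 1.0, tail left at 0.5
```

    Applied during classifier fine-tuning, both operations equalize per-class weight magnitudes without touching the feature extractor.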

    ALEX: Towards Effective Graph Transfer Learning with Noisy Labels

    Graph Neural Networks (GNNs) have garnered considerable interest due to their exceptional performance in a wide range of graph machine learning tasks. Nevertheless, the majority of GNN-based approaches have been examined using well-annotated benchmark datasets, leading to suboptimal performance in real-world graph learning scenarios. To bridge this gap, the present paper investigates the problem of graph transfer learning in the presence of label noise, which transfers knowledge from a noisy source graph to an unlabeled target graph. We introduce a novel technique termed Balance Alignment and Information-aware Examination (ALEX) to address this challenge. ALEX first employs singular value decomposition to generate different views with crucial structural semantics, which help provide robust node representations using graph contrastive learning. To mitigate both label shift and domain shift, we estimate a prior distribution to build subgraphs with balanced label distributions. Building on this foundation, an adversarial domain discriminator is incorporated for the implicit domain alignment of complex multi-modal distributions. Furthermore, we project node representations into a different space, optimizing the mutual information between the projected features and labels. Subsequently, the inconsistency of similarity structures is evaluated to identify noisy samples with potential overfitting. Comprehensive experiments on various benchmark datasets demonstrate the superiority of the proposed ALEX in different settings.
    Comment: Accepted by the ACM International Conference on Multimedia (MM) 202
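    The SVD-based view generation mentioned above can be sketched with a truncated SVD of the adjacency matrix; a low-rank reconstruction keeps the dominant structural pattern while smoothing weaker connections. This is a minimal sketch under that reading, not the paper's implementation:

```python
import numpy as np

def svd_view(adj, rank):
    """Low-rank reconstruction of an adjacency matrix via truncated SVD.
    Different ranks yield different structural 'views' of the same graph."""
    U, s, Vt = np.linalg.svd(adj)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Toy 4-node path graph: 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

view = svd_view(A, rank=2)   # smoothed view for contrastive learning
exact = svd_view(A, rank=4)  # full rank recovers A exactly
```

    In a contrastive setup, the original graph and one or more low-rank views would serve as augmented positives for the same nodes.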

    Concept for a Future Super Proton-Proton Collider

    Following the discovery of the Higgs boson at the LHC, new large colliders are being studied by the international high-energy community to explore Higgs physics in detail and new physics beyond the Standard Model. In China, a two-stage circular collider project, CEPC-SPPC, is proposed, with the first stage CEPC (Circular Electron Positron Collider, a so-called Higgs factory) focused on Higgs physics, and the second stage SPPC (Super Proton-Proton Collider) focused on new physics beyond the Standard Model. This paper discusses this second stage.
    Comment: 34 pages, 8 figures, 5 tables

    Neutrino Physics with JUNO

    The Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton multi-purpose underground liquid scintillator detector, was proposed with the determination of the neutrino mass hierarchy as a primary physics goal. It is also capable of observing neutrinos from terrestrial and extra-terrestrial sources, including supernova burst neutrinos, diffuse supernova neutrino background, geoneutrinos, atmospheric neutrinos, and solar neutrinos, as well as performing exotic searches such as nucleon decays, dark matter, sterile neutrinos, etc. We present the physics motivations and the anticipated performance of the JUNO detector for various proposed measurements. By detecting reactor antineutrinos from two power plants at a 53-km distance, JUNO will determine the neutrino mass hierarchy at a 3-4 sigma significance with six years of running. The measurement of the antineutrino spectrum will also lead to the precise determination of three out of the six oscillation parameters to an accuracy of better than 1%. A neutrino burst from a typical core-collapse supernova at 10 kpc would lead to ~5000 inverse-beta-decay events and ~2000 all-flavor neutrino-proton elastic scattering events in JUNO. Detection of the DSNB would provide valuable information on the cosmic star-formation rate and the average core-collapse neutrino energy spectrum. Geoneutrinos can be detected in JUNO at a rate of ~400 events per year, significantly improving the statistics of existing geoneutrino samples. The JUNO detector is sensitive to several exotic searches, e.g. proton decay via the p → K⁺ + ν̄ decay channel. The JUNO detector will provide a unique facility to address many outstanding crucial questions in particle physics and astrophysics. It holds great potential for further advancing our quest to understand the fundamental properties of neutrinos, one of the building blocks of our Universe.
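    The quoted event counts scale with the inverse square of the supernova distance, since the neutrino flux from a fixed-energy burst falls as 1/d². A quick illustrative calculation (the scaling law is standard; the reference numbers are those quoted in the abstract):

```python
def scaled_events(n_ref, d_ref_kpc, d_kpc):
    """Scale an expected event count from a reference distance to another,
    using the inverse-square falloff of neutrino flux."""
    return n_ref * (d_ref_kpc / d_kpc) ** 2

# ~5000 inverse-beta-decay events are quoted for a supernova at 10 kpc:
print(scaled_events(5000, 10.0, 5.0))   # 20000.0 events at half the distance
print(scaled_events(5000, 10.0, 20.0))  # 1250.0 events at twice the distance
```

    Detector effects (energy threshold, efficiency) would modify these numbers in practice; the sketch captures only the geometric flux scaling.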

    Potential of Core-Collapse Supernova Neutrino Detection at JUNO

    JUNO is an underground neutrino observatory under construction in Jiangmen, China. It uses 20 kton of liquid scintillator as the target, which enables it to detect supernova burst neutrinos with large statistics for the next galactic core-collapse supernova (CCSN), as well as pre-supernova neutrinos from nearby CCSN progenitors. All flavors of supernova burst neutrinos can be detected by JUNO via several interaction channels, including inverse beta decay, elastic scattering on electrons and protons, interactions on 12C nuclei, etc. This gives JUNO the possibility of reconstructing the energy spectra of supernova burst neutrinos of all flavors. Real-time monitoring systems based on FPGA and DAQ are under development in JUNO, which allow prompt alerts and trigger-less data acquisition of CCSN events. The alert performances of both monitoring systems have been thoroughly studied using simulations. Moreover, once a CCSN is tagged, the system can give fast characterizations, such as the directionality and the light curve.

    Detection of the Diffuse Supernova Neutrino Background with JUNO

    As an underground multi-purpose neutrino detector with 20 kton of liquid scintillator, the Jiangmen Underground Neutrino Observatory (JUNO) is competitive with and complementary to the water-Cherenkov detectors in the search for the diffuse supernova neutrino background (DSNB). Typical supernova models predict 2-4 events per year within the optimal observation window in the JUNO detector. The dominant background is from the neutral-current (NC) interaction of atmospheric neutrinos with 12C nuclei, which surpasses the DSNB by more than one order of magnitude. We evaluated the systematic uncertainty of the NC background from the spread of a variety of data-driven models and further developed a method to determine the NC background to within 15% with in situ measurements after ten years of running. In addition, NC-like backgrounds can be effectively suppressed by the intrinsic pulse-shape discrimination (PSD) capabilities of liquid scintillators. In this talk, I will present in detail the improvements in NC background uncertainty evaluation, the PSD discriminator development, and finally, the potential DSNB sensitivity of JUNO.
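    Pulse-shape discrimination in liquid scintillators commonly exploits the fact that heavier recoils deposit a larger fraction of their scintillation light in the pulse tail. A minimal sketch of the classic tail-to-total discriminator (the function and the toy waveforms are illustrative, not JUNO's actual discriminator):

```python
import numpy as np

def tail_to_total(waveform, tail_start):
    """Pulse-shape discriminator: fraction of collected charge in the
    tail of the scintillation pulse, starting at sample `tail_start`."""
    waveform = np.asarray(waveform, dtype=float)
    return waveform[tail_start:].sum() / waveform.sum()

# Toy pulses: a fast electron-like pulse vs. a slower proton-recoil-like
# pulse with more charge in the tail (values are purely illustrative).
electron_like = [10, 50, 20, 5, 2, 1]
proton_like   = [10, 40, 20, 10, 8, 6]

r_e = tail_to_total(electron_like, tail_start=3)
r_p = tail_to_total(proton_like, tail_start=3)
# r_p > r_e, so a simple cut on the ratio separates the two populations.
```

    In a real analysis the cut value would be tuned on calibration data, and more powerful discriminators (e.g. likelihood-based) build on the same tail-vs-prompt idea.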

    Real-time Monitoring for the Next Core-Collapse Supernova in JUNO

    A core-collapse supernova (CCSN) is one of the most energetic astrophysical events in the Universe. The early and prompt detection of neutrinos before (pre-SN) and during the SN burst is a unique opportunity to realize multi-messenger observation of CCSN events. In this work, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector under construction in South China. The real-time monitoring system is designed with both prompt monitors on the electronic boards and online monitors at the data acquisition stage, in order to ensure both the alert speed and the alert coverage of progenitor stars. Assuming a false alert rate of 1 per year, this monitoring system can be sensitive to pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and to SN neutrinos up to about 370 (360) kpc for a progenitor mass of 30 M☉ in the case of normal (inverted) mass ordering. The pointing ability for the CCSN is evaluated using the accumulated event anisotropy of the inverse beta decay interactions from pre-SN or SN neutrinos, which, along with the early alert, can play an important role in the follow-up multi-messenger observations of the next Galactic or nearby extragalactic CCSN.
    Comment: 24 pages, 9 figures

    Real-time visualization of 3D city models at street-level based on visual saliency

    Street-level visualization is an important application of 3D city models. Challenges to street-level visualization include the cluttering of buildings due to fine detail, as well as visualization performance. In this paper, a novel method is proposed for street-level visualization based on visual saliency evaluation. The basic idea of the method is to preserve the salient buildings in a scene while removing those that are non-salient. The method can be divided into pre-processing procedures and real-time visualization. The first step in pre-processing is to convert 3D building models at higher Levels of Detail (LoDs) into LoD1 models with simplified ground plans. Then, a number of index viewpoints are created along the streets; these indices record both the position and the direction of each street site. A visual saliency value is computed for each building, with respect to the index site, based on the visual difference between the original model and the generalized model. We calculate and evaluate three measures of visual saliency: local difference, global difference, and minimum projection area. The real-time visualization process begins by mapping the observer to its closest indices. The street view is then generated based on the building information stored in those indices. A user study shows that the local visual saliency method performs better than the global visual saliency, area, and image-based methods, and that the framework proposed in this paper may improve the performance of 3D visualization.
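    The "visual difference between the original model and the generalized model" can be illustrated as a per-pixel comparison of two renders from the same index viewpoint. This is a hypothetical sketch of that idea (the paper's actual saliency measures are more elaborate):

```python
import numpy as np

def local_saliency(original_img, generalized_img):
    """Saliency of a building at one viewpoint: mean absolute per-pixel
    difference between renders of the original and the generalized model."""
    diff = np.abs(original_img.astype(float) - generalized_img.astype(float))
    return diff.mean()

# Toy 4x4 grayscale renders: a striped, detailed facade vs. its flat
# LoD1 box rendered in a uniform mid-gray.
original    = np.array([[200, 50, 200, 50]] * 4)
generalized = np.full((4, 4), 125)

s = local_saliency(original, generalized)
# A high saliency value means simplification is visually noticeable,
# so the detailed model should be kept at street level.
```

    Thresholding such a score per building, per index viewpoint, yields the keep/remove decision used during real-time rendering.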

    A multiple representation data structure for dynamic visualisation of generalised 3D city models

    In this paper, a novel multiple representation data structure for dynamic visualisation of 3D city models, called CityTree, is proposed. To create a CityTree, the ground plans of the buildings are generated and simplified. Then, the buildings are divided into clusters by the road network and one CityTree is created for each cluster. The leaf nodes of the CityTree represent the original 3D objects of each building, and the intermediate nodes represent groups of nearby buildings. By utilising the CityTree, it is possible to provide dynamic zoom functionality in real time. The CityTree methodology is implemented in a framework where the original city model is stored in CityGML and the CityTree is stored as X3D scenes. A case study confirms the applicability of the CityTree for dynamic visualisation of 3D city models.
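    The leaf/intermediate-node scheme described above can be sketched as a small tree whose rendering routine picks either a merged group or its individual children depending on a zoom level. This is a hypothetical simplification (class and method names are invented), not the paper's data structure:

```python
class CityNode:
    """One node of a CityTree-like hierarchy: leaves are individual
    buildings, inner nodes are merged groups of nearby buildings."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def render(self, zoom):
        """Return the node names to draw: descend while zoom budget
        remains, otherwise draw this node's merged representation."""
        if not self.children or zoom <= 0:
            return [self.name]
        out = []
        for child in self.children:
            out.extend(child.render(zoom - 1))
        return out

block = CityNode("block", [
    CityNode("group_A", [CityNode("bldg_1"), CityNode("bldg_2")]),
    CityNode("bldg_3"),
])
print(block.render(zoom=0))  # ['block'] -- far away, one merged shape
print(block.render(zoom=2))  # ['bldg_1', 'bldg_2', 'bldg_3'] -- street level
```

    Because the zoom level only selects a cut through a precomputed tree, switching representations at runtime costs a tree traversal rather than a re-generalisation of the model.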