8 research outputs found

    More Interpretable Graph Similarity Computation via Maximum Common Subgraph Inference

    Full text link
    Graph similarity measurement, which computes the distance/similarity between two graphs, arises in various graph-related tasks. Recent learning-based methods lack interpretability, as they directly transform interaction information between two graphs into one hidden vector and then map it to a similarity score. To address this problem, this study proposes a more interpretable end-to-end paradigm for graph similarity learning, named Similarity Computation via Maximum Common Subgraph Inference (INFMCS). The key insight behind INFMCS is the strong correlation between the similarity score and the Maximum Common Subgraph (MCS). We implicitly infer the MCS to obtain the normalized MCS size, with the similarity score as the only supervision during training. To capture more global information, we also stack vanilla transformer encoder layers with graph convolution layers and propose a novel permutation-invariant node positional encoding. The entire model is simple yet effective. Comprehensive experiments demonstrate that INFMCS consistently outperforms state-of-the-art baselines on graph-graph classification and regression tasks. Ablation experiments verify the effectiveness of the proposed computation paradigm and other components, and visualizations and statistics of the results reveal the interpretability of INFMCS.
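
    As a rough, non-learned illustration of the correlation INFMCS exploits, the sketch below computes an exact maximum common induced subgraph for two tiny graphs with networkx's ISMAGS implementation (assumed available in networkx >= 2.4) and turns its node count into a similarity. The 2*|MCS| / (|G1| + |G2|) normalization and the function name are illustrative assumptions, not the paper's exact formulation.

```python
# Illustration only: exact MCS on tiny graphs via networkx's ISMAGS,
# then a normalized-size similarity. The normalization choice is an
# assumption for illustration, not necessarily INFMCS's training target.
import networkx as nx
from networkx.algorithms.isomorphism import ISMAGS

def mcs_similarity(g1: nx.Graph, g2: nx.Graph) -> float:
    # ISMAGS can enumerate largest common induced subgraph mappings.
    mappings = list(ISMAGS(g1, g2).largest_common_subgraph())
    mcs_size = max((len(m) for m in mappings), default=0)
    # Normalize the MCS node count by the average of the two graph sizes.
    return 2.0 * mcs_size / (g1.number_of_nodes() + g2.number_of_nodes())

if __name__ == "__main__":
    g1 = nx.cycle_graph(5)  # 5-node cycle
    g2 = nx.path_graph(4)   # 4-node path; shares a 4-node path with the cycle
    print(f"normalized MCS similarity: {mcs_similarity(g1, g2):.3f}")  # ~0.889
```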

    AEDNet: Adaptive Edge-Deleting Network For Subgraph Matching

    Full text link
    Subgraph matching aims to find all subgraphs of a data graph that are isomorphic to a given query graph. It is an NP-hard problem, yet it has applications in many areas. Many learning-based methods have been proposed for graph matching, whereas few have been designed for subgraph matching. The subgraph matching problem is generally more challenging, mainly because the two graphs differ in size, which results in a considerably larger solution space. In addition, the extra edges in the data graph that connect to matched nodes can give two matched nodes of the two graphs different adjacency structures, so they are often identified as distinct objects. Because of these extra edges, existing learning-based methods often fail to generate sufficiently similar node-level embeddings for matched nodes. This study proposes a novel Adaptive Edge-Deleting Network (AEDNet) for subgraph matching, trained in an end-to-end fashion. In AEDNet, a novel sample-wise adaptive edge-deleting mechanism removes extra edges to keep the adjacency structures of matched nodes consistent, while a unidirectional cross-propagation mechanism keeps the features of matched nodes consistent. We applied the proposed method to six open datasets with graph sizes varying from 20 to 2,300. Our evaluations demonstrate that AEDNet outperforms six state-of-the-art methods and is much faster than exact methods on large graphs.
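
    The edge-deleting idea behind AEDNet can be illustrated without any learning: given a candidate node correspondence, data-graph edges between matched nodes that have no counterpart edge in the query graph are the "extra" edges that distort adjacency. The sketch below is a minimal, non-learned version of that principle; the function name and the assumption that a correspondence is already available are illustrative, not AEDNet's actual mechanism, which learns the deletions sample-wise.

```python
# Minimal, non-learned illustration of the edge-deleting principle:
# given a candidate mapping query_node -> data_node, drop data-graph
# edges between matched data nodes that have no counterpart edge in
# the query graph, so matched nodes end up with consistent adjacency.
import networkx as nx

def delete_extra_edges(query: nx.Graph, data: nx.Graph, mapping: dict) -> nx.Graph:
    matched = set(mapping.values())
    inverse = {v: k for k, v in mapping.items()}
    pruned = data.copy()
    for u, v in list(pruned.edges()):
        if u in matched and v in matched and not query.has_edge(inverse[u], inverse[v]):
            pruned.remove_edge(u, v)  # "extra" edge with no query counterpart
    return pruned

if __name__ == "__main__":
    query = nx.path_graph(3)      # query: 0-1-2
    data = nx.complete_graph(4)   # data graph with extra edges
    mapping = {0: 0, 1: 1, 2: 2}  # candidate correspondence
    print(sorted(delete_extra_edges(query, data, mapping).edges()))  # (0, 2) removed
```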

    UAV-Pose: A Dual Capture Network Algorithm for Low Altitude UAV Attitude Detection and Tracking

    No full text
    This paper presents a low-altitude unmanned aerial vehicle (UAV) attitude detection and tracking algorithm, named UAV-Pose. In low-altitude UAV countermeasure tasks, precise attitude detection and tracking are crucial for laser-guided precision strikes. To meet the varying requirements of the different tracking stages, this study designs two capture networks with different resolutions. First, a lightweight bottleneck structure, GhostNeck, is introduced to accelerate detection. Second, detection accuracy is significantly improved by integrating an attention mechanism and the SimCC loss. Additionally, a data augmentation method is proposed to adapt attitude detection to atmospheric turbulence. A self-collected dataset, named UAV-ADT (UAV Attitude Detection and Tracking), is constructed for training and evaluating the detection algorithm. Deployed with TensorRT and tested on the UAV-ADT dataset, the algorithm reaches a detection speed of 300 frames per second (FPS), an mAP75 of 97.8%, and a PCK (Percentage of Correct Keypoints) of 99.3%. Real-world field experiments further validate the accurate detection and continuous tracking of UAV attitudes, providing essential support for counter-UAV operations.
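
    For reference, PCK (Percentage of Correct Keypoints) counts predicted keypoints that fall within a distance threshold of the ground truth, with the threshold expressed as a fraction of an object scale. The sketch below is a generic implementation of that metric, not code from UAV-Pose; the bounding-box-diagonal normalization and the 0.05 threshold are assumed example values.

```python
# Generic PCK (Percentage of Correct Keypoints) computation, not taken
# from UAV-Pose. A keypoint is correct when its error is below a
# threshold given as a fraction (alpha) of a normalization scale; here
# the scale is the bounding-box diagonal and alpha = 0.05 is an example.
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, bbox_diag: float, alpha: float = 0.05) -> float:
    """pred, gt: (num_keypoints, 2) arrays of (x, y) coordinates."""
    errors = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(errors <= alpha * bbox_diag))

if __name__ == "__main__":
    gt = np.array([[10.0, 10.0], [50.0, 40.0], [90.0, 80.0]])
    pred = gt + np.array([[1.0, 0.5], [8.0, 6.0], [0.2, 0.1]])
    print(f"PCK@0.05: {pck(pred, gt, bbox_diag=120.0):.2f}")  # 2 of 3 keypoints correct
```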

    A Dynamic Effective Class Balanced Approach for Remote Sensing Imagery Semantic Segmentation of Imbalanced Data

    No full text
    The wide application and rapid development of satellite remote sensing technology place higher demands on remote sensing image segmentation methods. Because the images are large, the data volume is high, and the segmentation background is complex, traditional image segmentation methods are difficult to apply effectively, and deep learning-based segmentation methods face extremely unbalanced data across categories. To address this problem, this work first proposes, based on existing effective sample theory, an effective-sample calculation method for semantic segmentation on highly unbalanced datasets. It then proposes a dynamic weighting method based on the effective sample concept that can be applied to the semantic segmentation of remote sensing images. Finally, the applicability of the method to different loss functions and network structures is verified on a self-built three-class forest fire burning area dataset derived from Landsat8-OLI remote sensing images and on the LoveDA dataset for land-cover semantic segmentation. Across these two tasks, land use and land cover (LULC) segmentation and forest fire burning area segmentation, the weighting algorithm improves minority-class segmentation accuracy while preserving overall segmentation performance. In addition, the proposed method improves the recall of forest fire burning area segmentation by about 30%, which is of great reference value for forest fire research based on remote sensing images.
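
    The effective-sample idea follows the effective-number-of-samples formulation of Cui et al. (2019): a class with n samples has effective number (1 - beta^n) / (1 - beta), and class weights are taken inversely proportional to it. The sketch below applies that formula to per-class pixel counts, as one might for semantic segmentation; the counts and beta value are illustrative assumptions, and the weights are computed once rather than dynamically as in the paper's method.

```python
# Class-balanced weights from the effective number of samples
# (Cui et al., 2019): E_n = (1 - beta**n) / (1 - beta), weight = 1 / E_n.
# Applied to per-class pixel counts as one might for semantic
# segmentation; counts and beta are illustrative, and the paper's
# method updates the weights dynamically rather than fixing them once.
import numpy as np

def effective_number_weights(pixel_counts: np.ndarray, beta: float = 0.9999) -> np.ndarray:
    effective_num = (1.0 - np.power(beta, pixel_counts)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights * len(pixel_counts) / weights.sum()

if __name__ == "__main__":
    # e.g. background, unburned forest, burning area (heavily imbalanced)
    counts = np.array([5_000_000, 800_000, 2_000])
    print(effective_number_weights(counts))  # rare class gets the largest weight
```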

    CEPC Conceptual Design Report: Volume 2 - Physics & Detector

    No full text
    The Circular Electron Positron Collider (CEPC) is a large international scientific facility proposed by the Chinese particle physics community to explore the Higgs boson and provide critical tests of the underlying fundamental physics principles of the Standard Model that might reveal new physics. The CEPC, to be hosted in China in a circular underground tunnel of approximately 100 km in circumference, is designed to operate as a Higgs factory producing electron-positron collisions with a center-of-mass energy of 240 GeV. The collider will also operate at around 91.2 GeV, as a Z factory, and at the WW production threshold (around 160 GeV). The CEPC will produce close to one trillion Z bosons, 100 million W bosons and over one million Higgs bosons. The vast amount of bottom quarks, charm quarks and tau-leptons produced in the decays of the Z bosons also makes the CEPC an effective B-factory and tau-charm factory. The CEPC will have two interaction points where two large detectors will be located. This document is the second volume of the CEPC Conceptual Design Report (CDR). It presents the physics case for the CEPC, describes conceptual designs of possible detectors and their technological options, highlights the expected detector and physics performance, and discusses future plans for detector R&D and physics investigations. The final CEPC detectors will be proposed and built by international collaborations but they are likely to be composed of the detector technologies included in the conceptual designs described in this document. A separate volume, Volume I, recently released, describes the design of the CEPC accelerator complex, its associated civil engineering, and strategic alternative scenarios.
