    Rumor Detection with Diverse Counterfactual Evidence

    The growth of social media has exacerbated the threat of fake news to individuals and communities, drawing increasing attention to the development of efficient and timely rumor detection methods. The prevailing approaches resort to graph neural networks (GNNs) to exploit the propagation patterns of posts during the rumor-spreading process. However, these methods lack inherent interpretability due to the black-box nature of GNNs, and they yield less robust results because they employ all propagation patterns for rumor detection. In this paper, we address these issues with the proposed Diverse Counterfactual Evidence framework for Rumor Detection (DCE-RD). Our intuition is to exploit diverse counterfactual evidence from an event graph to serve as multi-view interpretations, which are further aggregated for robust rumor detection. Specifically, our method first designs a subgraph generation strategy to efficiently generate different subgraphs of the event graph, constrained so that removing a subgraph changes the rumor detection result; these subgraphs therefore naturally serve as counterfactual evidence. To achieve multi-view interpretation, we design a diversity loss inspired by Determinantal Point Processes (DPP) to encourage diversity among the counterfactual evidence. A GNN-based rumor detection model then aggregates the diverse counterfactual evidence discovered by DCE-RD to achieve interpretable and robust rumor detection. Extensive experiments on two real-world datasets show the superior performance of our method. Our code is available at https://github.com/Vicinity111/DCE-RD
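
    The abstract does not detail the DPP-inspired diversity loss. A common way to realize one is to maximize the log-determinant of a similarity kernel over the candidate subgraph embeddings, since a determinantal point process assigns higher probability to diverse sets. The sketch below illustrates that general idea only; the embedding source and the cosine kernel are assumptions, not the authors' implementation.

    ```python
    # Sketch of a DPP-style diversity loss: maximize log det of a similarity
    # kernel over subgraph embeddings, pushing the set toward diversity.
    # `subgraph_embs` and the cosine kernel are illustrative assumptions.
    import torch

    def dpp_diversity_loss(subgraph_embs: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
        """subgraph_embs: (k, d) embeddings of k candidate counterfactual subgraphs."""
        embs = torch.nn.functional.normalize(subgraph_embs, dim=-1)
        K = embs @ embs.T                                    # (k, k) cosine-similarity kernel
        K = K + eps * torch.eye(K.size(0), device=K.device)  # jitter for numerical stability
        return -torch.logdet(K)                              # minimizing this maximizes diversity

    # Usage: combine with the detection loss in the total training objective.
    embs = torch.randn(4, 64, requires_grad=True)  # 4 subgraphs, 64-dim embeddings
    dpp_diversity_loss(embs).backward()
    ```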

    Event-Based Fusion for Motion Deblurring with Cross-modal Attention

    Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times. As a bio-inspired alternative, the event camera records intensity changes asynchronously with high temporal resolution, providing valid image-degradation information within the exposure time. In this paper, we rethink event-based image deblurring and unfold it into an end-to-end two-stage image restoration network. To effectively fuse event and image features, we design an event-image cross-modal attention module, applied at multiple levels of our network, which allows the network to focus on relevant features from the event branch and to filter out noise. We also introduce a novel symmetric cumulative event representation designed specifically for image deblurring, as well as an event-mask gated connection between the two stages of our network that helps avoid information loss. At the dataset level, to foster event-based motion deblurring and to facilitate evaluation on challenging real-world images, we introduce the Real Event Blur (REBlur) dataset, captured with an event camera in an illumination-controlled optical laboratory. Our Event Fusion Network (EFNet) sets a new state of the art in motion deblurring, surpassing both the prior best-performing image-based method and all event-based methods with public implementations on the GoPro dataset (by up to 2.47 dB) and on our REBlur dataset, even under extreme blur. The code and our REBlur dataset will be made publicly available
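
    The cross-modal attention module is described only at a high level. One standard way to realize event-image cross-attention is to let image features form the queries and event features the keys and values. The sketch below is a minimal illustration of that pattern; shapes and layer choices are assumed rather than taken from the paper.

    ```python
    # Sketch of an event-image cross-modal attention block: image tokens
    # attend to event tokens, so the fused features keep image structure
    # while pulling in motion cues from the event branch.
    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        def __init__(self, dim: int = 64, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, img_feat: torch.Tensor, evt_feat: torch.Tensor) -> torch.Tensor:
            # img_feat, evt_feat: (B, N, dim) flattened spatial feature maps
            fused, _ = self.attn(query=img_feat, key=evt_feat, value=evt_feat)
            return self.norm(img_feat + fused)  # residual keeps the image pathway intact

    block = CrossModalAttention()
    img = torch.randn(2, 256, 64)  # batch of 2, 16x16 spatial tokens, 64 channels
    evt = torch.randn(2, 256, 64)
    out = block(img, evt)          # (2, 256, 64)
    ```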

    Convolutional Recurrent Neural Network for Fault Diagnosis of High-Speed Train Bogie

    Timely detection and efficient recognition of faults in the bogie of a high-speed train (HST) are challenging, because different types of fault signals share similar characteristics in the same frequency range. Convolutional neural networks (CNNs) are powerful at extracting high-level local features, while recurrent neural networks (RNNs) are capable of learning long-term context dependencies in vibration signals. In this paper, by combining a CNN and an RNN, a convolutional recurrent neural network (CRNN) is proposed to diagnose various faults of the HST bogie, inheriting the capabilities of both. Within this architecture, the CRNN first extracts features from the raw data through convolutional layers; four recurrent layers with simple recurrent cells then model the context information in the extracted features. Comparing the CRNN with a CNN, an RNN, and ensemble learning, experimental results show that the CRNN achieves not only the best performance, with an accuracy of 97.8%, but also the shortest training time
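
    As a minimal sketch of the CNN-into-RNN pattern the abstract describes (layer sizes, the number of fault classes, and the use of PyTorch's tanh nn.RNN as the "simple recurrent cell" are assumptions for illustration):

    ```python
    # Sketch of the CNN-into-RNN pattern: convolutional layers extract local
    # features from raw vibration signals, recurrent layers model context.
    # Layer sizes and the 7 fault classes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        def __init__(self, n_classes: int = 7):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=16, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            )
            # Four recurrent layers with simple (tanh) recurrent cells.
            self.rnn = nn.RNN(input_size=64, hidden_size=64, num_layers=4, batch_first=True)
            self.fc = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = self.conv(x)                       # (B, 1, T) -> (B, 64, T')
            out, _ = self.rnn(feats.transpose(1, 2))   # (B, T', 64)
            return self.fc(out[:, -1])                 # classify from the last time step

    logits = CRNN()(torch.randn(8, 1, 1024))  # 8 vibration signals of length 1024
    ```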

    Method Validation, Residues and Dietary Risk Assessment for Procymidone in Green Onion and Garlic Plant

    Procymidone is used as a preventive and curative fungicide to control fungal growth on edible crops and ornamental plants. It is one of the most frequently used pesticides and has a high detection rate, but its residue behavior remains unclear in green onion and garlic plants (including garlic, garlic chive, and serpent garlic). In this study, the dissipation and terminal residues of procymidone in these four matrices were investigated, together with method validation and a dietary risk assessment. An analytical method for the target compound was developed using gas chromatography-tandem mass spectrometry (GC-MS/MS) preceded by a Florisil cleanup. The method showed satisfactory linearity for procymidone in green onion, garlic, garlic chive, and serpent garlic over the range 0.010–2.5 mg/L, with R2 > 0.9985. The limit of quantification in all four matrices was 0.020 mg/kg, and fortified recoveries of procymidone ranged from 86% to 104%, with relative standard deviations of 0.92% to 13%. The dissipation of procymidone in green onion and garlic chive followed first-order kinetics, with half-lives of less than 8.35 days and 5.73 days, respectively. Terminal residue levels in garlic chive were much higher than those in green onion and serpent garlic because of morphological characteristics. The risk quotients for procymidone among different Chinese consumer groups for green onion, garlic chive, and serpent garlic ranged from 5.79% to 25.07%, which is acceptable. These data provide valuable information for the safe and reasonable use of procymidone in its growing range of applications
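
    As a quick worked example of the first-order dissipation model the study fits, C(t) = C0·exp(-kt) with half-life t1/2 = ln(2)/k, the snippet below fits the model to made-up residue values (not the paper's data):

    ```python
    # Worked example: fit C(t) = C0 * exp(-k * t) and derive t_1/2 = ln(2) / k.
    # The residue values are made up for illustration, not the paper's data.
    import numpy as np
    from scipy.optimize import curve_fit

    days = np.array([0.0, 1.0, 3.0, 5.0, 7.0, 10.0])
    residue = np.array([1.00, 0.82, 0.55, 0.37, 0.25, 0.14])  # mg/kg, hypothetical

    def first_order(t, c0, k):
        return c0 * np.exp(-k * t)

    (c0, k), _ = curve_fit(first_order, days, residue, p0=(1.0, 0.1))
    print(f"k = {k:.3f} per day, half-life = {np.log(2) / k:.2f} days")
    ```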

    Coral aggregate concrete: Numerical description of physical, chemical and morphological properties of coral aggregate

    With the increasing development of oceanic resources, coral aggregate concrete has wide potential in the construction of islands and reefs, as well as of flood embankments, roads, and airports in coastal areas. However, the complex particle composition, shape, surface structure, and pores of coral aggregates lead to unusual microstructure, workability, mechanical properties, and durability in the resulting concrete. In this paper, the physical and chemical characteristics of coral aggregates are studied and their particle shape features analyzed. Quantitative parameters such as sphericity (ψ), angular number (AN), and the index of aggregate particle shape and texture (IAPST) are used to characterize the aggregates, and a new indicator, the texture index (TI), is proposed to characterize the surface microstructure of coral aggregate. The results show that coral aggregates, with rough surfaces and porous interiors, contain distinctive tree-shaped and rod-shaped particles, the former accounting for 41.3% of the total weight. Coral aggregates exhibit a typical ‘concave hole’ characteristic, a porosity of 48.2–55.6%, and water absorption above 12%. The average sphericity and AN of coral aggregates are 0.5–0.6 and 27.5–30.3, respectively. At the same particle size, the ψ of natural limestone aggregates is significantly larger than that of coral aggregates, while the AN of coral aggregates is 2.4–3.0 times that of limestone aggregates at a single grain size. At the same single-grain level, the IAPST of coral aggregates and natural limestone aggregates is 31.6–34.3 and 18.6–19.6, respectively. The TI of coral aggregates of 4.75–16 mm and 16–31.5 mm is 16.2 and 15.9, while that of limestone aggregates is only 1.22 and 1.17, respectively. Compared with IAPST, TI is more suitable for characterizing the ‘concave hole’ feature of coral aggregate
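
    One common definition of the sphericity ψ reported above is Wadell sphericity: the surface area of a sphere having the particle's volume, divided by the particle's actual surface area. The sketch below computes it under that assumption; the paper may instead measure ψ from 2-D projections, and the input numbers are illustrative only.

    ```python
    # Wadell sphericity: surface area of the volume-equivalent sphere divided
    # by the particle's actual surface area; equals 1 for a perfect sphere.
    # Input values are illustrative, not measurements from the paper.
    import math

    def wadell_sphericity(volume: float, surface_area: float) -> float:
        return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

    # A rough, rod-like particle: low volume relative to its surface area.
    print(wadell_sphericity(volume=120.0, surface_area=220.0))  # ~0.54
    ```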