
    CG-fusion CAM: Online segmentation of laser-induced damage on large-aperture optics

    Online segmentation of laser-induced damage on large-aperture optics in high-power laser facilities is challenged by complicated damage morphology, uneven illumination and stray-light interference. Fully supervised semantic segmentation algorithms achieve state-of-the-art performance but rely on large numbers of pixel-level labels, which are time-consuming and labor-intensive to produce. LayerCAM, an advanced weakly supervised semantic segmentation algorithm, can generate pixel-accurate results from image-level labels alone, but its scattered and partially under-activated class activation regions degrade segmentation performance. In this paper, we propose a weakly supervised semantic segmentation method based on Continuous Gradient CAM and its nonlinear multi-scale fusion (CG-fusion CAM). The method redesigns the back-propagation of gradients and nonlinearly activates the multi-scale fused heatmaps to generate finer-grained class activation maps with an activation degree appropriate to damage sites of different sizes. Experiments on our dataset show that the proposed method achieves segmentation performance comparable to that of fully supervised algorithms.
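
    The abstract gives no implementation, but the multi-scale fusion idea can be illustrated with a LayerCAM-style sketch in PyTorch. The snippet below uses a stock resnet18 as a stand-in backbone, computes gradient-weighted activation maps at three feature stages, fuses them by elementwise maximum, and applies a power transform as an assumed stand-in for the paper's nonlinear activation; the authors' redesigned gradient back-propagation is not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}

def register(name):
    # Cache activations on the forward pass and gradients on the backward pass.
    layer = getattr(model, name)
    layer.register_forward_hook(lambda m, i, o: acts.__setitem__(name, o))
    layer.register_full_backward_hook(
        lambda m, gi, go: grads.__setitem__(name, go[0]))

for name in ("layer2", "layer3", "layer4"):   # three feature scales
    register(name)

x = torch.randn(1, 3, 224, 224)               # stand-in for an optics image
score = model(x)[0].max()                     # score of the top-scoring class
model.zero_grad()
score.backward()

def layercam(name, size=(224, 224)):
    # LayerCAM: element-wise positive gradients weight the activations,
    # summed over channels, keeping positive evidence only.
    cam = F.relu((F.relu(grads[name]) * acts[name]).sum(1, keepdim=True))
    cam = F.interpolate(cam, size=size, mode="bilinear", align_corners=False)
    return cam / (cam.amax() + 1e-8)          # scale each map to [0, 1]

# Nonlinear multi-scale fusion: element-wise max across scales, then a power
# transform (assumed stand-in for the paper's activation-degree adjustment,
# boosting weakly activated small damage sites).
fused = torch.stack([layercam(n) for n in ("layer2", "layer3", "layer4")])
fused = fused.amax(0).pow(0.5)
```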

    Deep Learning-Based Remaining Useful Life Estimation of Bearings with Time-Frequency Information

    In modern industrial production, the ability to predict the remaining useful life (RUL) of bearings directly affects the safety and stability of the system. Traditional methods require rigorous physical modeling and perform poorly on complex systems. In this paper, an end-to-end RUL prediction method is proposed that uses the short-time Fourier transform (STFT) for preprocessing. To exploit the temporal correlation of the signal sequences, a long short-term memory (LSTM) network is combined with the CNN, the convolutional block attention module (CBAM) is incorporated, and the network's decision-making process is examined at the interpretability level. Experiments on the PHM 2012 dataset, with comparisons against other methods, demonstrate the effectiveness of the approach.
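
    As a rough illustration of the described pipeline, the sketch below chains an STFT front end, a small CNN encoder, and an LSTM into a scalar RUL regressor. All layer sizes are assumptions, and the CBAM module mentioned in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn

class RULNet(nn.Module):
    def __init__(self, n_fft=256):
        super().__init__()
        self.n_fft = n_fft
        self.cnn = nn.Sequential(                 # spectrogram encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                 # pool frequency, keep time
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, None)),      # 8 freq bands, all frames
        )
        # The paper's CBAM attention would slot in between the conv blocks;
        # it is left out to keep the sketch short.
        self.lstm = nn.LSTM(32 * 8, 64, batch_first=True)
        self.head = nn.Linear(64, 1)              # scalar RUL estimate

    def forward(self, sig):                       # sig: (batch, samples)
        window = torch.hann_window(self.n_fft, device=sig.device)
        spec = torch.stft(sig, self.n_fft, hop_length=self.n_fft // 2,
                          window=window, return_complex=True).abs()
        x = self.cnn(spec.unsqueeze(1))           # (B, 32, 8, frames)
        x = x.flatten(1, 2).transpose(1, 2)       # (B, frames, 256)
        out, _ = self.lstm(x)                     # temporal modeling
        return self.head(out[:, -1])              # predict from last frame

rul = RULNet()(torch.randn(2, 4096))              # toy batch of raw vibration
```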

    Weakly-Supervised Video Anomaly Detection with MTDA-Net

    Weakly supervised anomalous-behavior detection is currently a popular research area. Compared with semi-supervised anomaly detection, weakly supervised learning both eliminates the need to crop videos and avoids semi-supervised learning's difficulty with long videos. Previous work has used graph convolutions or self-attention mechanisms to model temporal relationships. However, these methods tend to model temporal relationships at a single scale and give little consideration to aggregating different temporal relationships. In this paper, we propose a weakly supervised anomaly detection framework, MTDA-Net, that emphasizes modeling diverse temporal relationships and enhancing semantic discrimination. To this end, we construct a new plug-and-play module, MTDA, which uses three branches, Multi-headed Attention (MHA), Temporal Shift (TS), and Dilated Aggregation (DA), to extract different temporal sequences. Specifically, the MHA branch models video information globally and projects the features into different semantic spaces to enhance their expressiveness and discrimination. The DA branch extracts temporal information at different scales via dilated convolution and captures temporal features of local regions in the video. The TS branch fuses the features of adjacent frames at a local scale and enhances information flow. MTDA-Net learns the temporal relationships between video segments on the different branches and builds powerful video representations from those relationships. Experimental results on the XD-Violence dataset show that MTDA-Net significantly improves the detection accuracy of abnormal behaviors.
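
    The abstract names the three branches but not their wiring, so the following PyTorch sketch is speculative: nn.MultiheadAttention for the MHA branch, stacked dilated Conv1d layers for DA, a channel-slice temporal shift for TS, and a linear layer to fuse the branch outputs. None of the dimensions or fusion choices come from the paper.

```python
import torch
import torch.nn as nn

class MTDA(nn.Module):
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.da = nn.Sequential(                  # dilated aggregation branch
            nn.Conv1d(dim, dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.fuse = nn.Linear(3 * dim, dim)       # merge the three branches

    @staticmethod
    def temporal_shift(x, frac=8):
        # Shift one slice of channels a step forward in time and another
        # slice a step backward, so each segment mixes neighbour features.
        c = x.size(-1) // frac
        out = torch.zeros_like(x)
        out[:, 1:, :c] = x[:, :-1, :c]            # forward shift
        out[:, :-1, c:2 * c] = x[:, 1:, c:2 * c]  # backward shift
        out[:, :, 2 * c:] = x[:, :, 2 * c:]       # remaining channels as-is
        return out

    def forward(self, x):                         # x: (B, segments, dim)
        g, _ = self.mha(x, x, x)                  # global MHA branch
        l = self.da(x.transpose(1, 2)).transpose(1, 2)  # local DA branch
        s = self.temporal_shift(x)                # TS branch
        return self.fuse(torch.cat([g, l, s], dim=-1))

feats = MTDA()(torch.randn(2, 32, 512))           # toy clip of 32 segments
```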

    MTR-SAM: Visual Multimodal Text Recognition and Sentiment Analysis in Public Opinion Analysis on the Internet

    Existing methods for monitoring internet public opinion rely primarily on regularly crawling the textual information on web pages; they cannot quickly and accurately acquire the text embedded in images and videos or discriminate its sentiment. This makes multimodal information detection in internet public-opinion scenarios a challenging research problem. In this paper, we examine how to dynamically monitor the opinion-related information, mostly images and videos, that different websites post. Building on recent advances in text recognition, this paper proposes a new visual multimodal text recognition and sentiment analysis method (MTR-SAM) for internet public-opinion analysis scenarios. In the detection module, an LK-PAN network with large receptive fields is proposed to enhance the CML distillation strategy, and an RSE-FPN with a residual attention mechanism is used to improve feature-map representation. Second, the original CTC decoder is replaced with a GTC method to address earlier problems with text detection at arbitrary rotation angles, and detection of scene text at arbitrary rotation angles is further improved with a sinusoidal loss function for rotation recognition. Finally, an improved sentiment analysis model predicts the sentiment polarity of the text recognition results. The experimental results show that the proposed method improves recognition speed by 31.77% and recognition accuracy by 10.78% on the video dataset, and raises the F1 score of the multimodal sentiment analysis model by 4.42% on the self-built internet public-opinion dataset (lab dataset). The proposed method provides significant technical support for internet public-opinion analysis in multimodal domains.
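
    The abstract does not give the loss formula, but one common way to realize a sinusoidal loss for rotation recognition is to regress the unit vector (sin θ, cos θ) instead of the raw angle, which removes the wrap-around discontinuity at 0°/360°. The sketch below is that assumed reading, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def rotation_loss(pred, theta_gt):
    # pred: (B, 2) raw network outputs interpreted as (sin, cos);
    # theta_gt: (B,) ground-truth rotation angles in radians.
    target = torch.stack([theta_gt.sin(), theta_gt.cos()], dim=-1)
    pred = F.normalize(pred, dim=-1)              # project onto the unit circle
    # Dot product of unit vectors is cos(delta), so this is 1 - cos(delta):
    # zero when the predicted and true angles coincide, smooth everywhere.
    return (1.0 - (pred * target).sum(dim=-1)).mean()

# At inference, recover the angle from the two components:
pred = torch.randn(4, 2)
theta = torch.atan2(pred[:, 0], pred[:, 1])       # atan2(sin, cos)
loss = rotation_loss(pred, torch.rand(4) * 6.28)
```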