
    The Verification of Rail Thermal Stress Measurement System

    Continuous Welded Rail (CWR) is widely used in modern railways. In the absence of expansion joints, CWR cannot expand freely when the temperature changes, which can cause buckling in hot weather or breakage in cold weather. A rail thermal stress measurement system therefore plays an important role in the safe operation of railways. This paper presents a thermal stress measurement system based on the acoustoelastic effect of ultrasonic guided waves. A large-scale rail testbed was built to simulate thermal stress in the rail track and to establish the relationship between the time delay of the guided wave and the thermal stress. After laboratory testing, the system was installed on several railway lines in China for field tests. The results showed that the system was stable and accurate in stress measurement. The performance and potential of the system are discussed.
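    A minimal illustrative sketch of the delay-to-stress conversion described above, assuming the common linear acoustoelastic relation dt / t0 = K * sigma; the function name, example numbers, and coefficient K are hypothetical stand-ins for the testbed calibration the paper performs:

        # Minimal sketch: convert a guided-wave time-of-flight delay into an
        # estimate of longitudinal thermal stress. Assumes the linear
        # acoustoelastic relation dt / t0 = K * sigma; K must come from a
        # calibration such as the paper's rail testbed (the value below is
        # made up for illustration).

        def thermal_stress_from_delay(t_measured: float,
                                      t_reference: float,
                                      k_acoustoelastic: float) -> float:
            """Return the estimated thermal stress in Pa.

            t_measured:       time of flight in the current (stressed) state, s
            t_reference:      time of flight in the stress-free reference state, s
            k_acoustoelastic: acoustoelastic coefficient, 1/Pa (from calibration)
            """
            relative_delay = (t_measured - t_reference) / t_reference
            return relative_delay / k_acoustoelastic

        # Hypothetical example: a 50 ns delay over a 200 us flight time with
        # K = 2.5e-12 1/Pa corresponds to roughly 100 MPa of thermal stress.
        sigma = thermal_stress_from_delay(200.05e-6, 200.0e-6, 2.5e-12)
        print(f"estimated thermal stress: {sigma / 1e6:.1f} MPa")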

    Network Binarization via Contrastive Learning

    Neural network binarization accelerates deep models by quantizing their weights and activations into 1-bit. However, there is still a huge performance gap between Binary Neural Networks (BNNs) and their full-precision (FP) counterparts. As the quantization error caused by weight binarization has been reduced in earlier works, activation binarization has become the major obstacle to further accuracy improvement. BNNs exhibit a unique and interesting structure, in which the binary and latent FP activations exist in the same forward pass (i.e., $\text{Binarize}(\mathbf{a}_F) = \mathbf{a}_B$). To mitigate the information degradation caused by the binarization operation from FP to binary activations, we establish a novel contrastive learning framework for training BNNs through the lens of Mutual Information (MI) maximization. MI is introduced as the metric to measure the information shared between binary and FP activations, which assists binarization with contrastive learning. Specifically, the representation ability of the BNNs is greatly strengthened by pulling together positive pairs of binary and FP activations from the same input sample, and pushing apart negative pairs from different samples (the number of negative pairs can be exponentially large). This benefits downstream tasks, not only classification but also segmentation, depth estimation, etc. The experimental results show that our method can be implemented as a pile-up module on existing state-of-the-art binarization methods and remarkably improves performance over them on CIFAR-10/100 and ImageNet, in addition to strong generalization ability on NYUD-v2.
    Comment: Accepted to ECCV 2022
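    For the shape of the contrastive objective described in this abstract, here is a hedged PyTorch sketch of an InfoNCE-style loss between latent FP and binary activations; the function name and temperature value are illustrative, and the paper's exact MI estimator may differ:

        import torch
        import torch.nn.functional as F

        def binary_fp_contrastive_loss(a_fp: torch.Tensor,
                                       a_bin: torch.Tensor,
                                       temperature: float = 0.1) -> torch.Tensor:
            """InfoNCE-style loss between FP and binary activations.

            Positive pairs: (a_fp[i], a_bin[i]) from the same input sample.
            Negative pairs: (a_fp[i], a_bin[j]) for j != i, so the number of
            negatives grows with the batch size.
            """
            z_fp = F.normalize(a_fp.flatten(1), dim=1)
            z_bin = F.normalize(a_bin.flatten(1), dim=1)
            logits = z_fp @ z_bin.t() / temperature  # pairwise cosine similarities
            targets = torch.arange(logits.size(0), device=logits.device)
            # Minimizing this cross-entropy maximizes a lower bound on the
            # mutual information between the two activation views.
            return F.cross_entropy(logits, targets)

    In training, a term like this would be added to the task loss for the activations of selected layers, alongside whatever binarization scheme produces a_bin from a_fp.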

    Lipschitz Continuity Retained Binary Neural Network

    Relying on the premise that the performance of a binary neural network can be largely restored by eliminating the quantization error between full-precision weight vectors and their corresponding binary vectors, existing works on network binarization frequently adopt the idea of model robustness to reach this objective. However, robustness remains an ill-defined concept without solid theoretical support. In this work, we introduce Lipschitz continuity, a well-defined functional property, as the rigorous criterion for defining model robustness for BNNs. We then propose to retain Lipschitz continuity as a regularization term to improve model robustness. In particular, while popular Lipschitz-based regularization methods often collapse in BNNs due to their extreme sparsity, we design Retention Matrices to approximate the spectral norms of the targeted weight matrices, which can be deployed as an approximation of the Lipschitz constant of BNNs without the exact Lipschitz constant computation (NP-hard). Our experiments show that our BNN-specific regularization method can effectively strengthen the robustness of BNNs (verified on ImageNet-C), achieving state-of-the-art performance on CIFAR and ImageNet.
    Comment: Accepted to ECCV 2022
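    To make the regularization idea concrete, the sketch below estimates per-layer spectral norms with generic power iteration and sums them into a penalty. This is not the paper's Retention Matrix construction, only the standard spectral-norm approximation that, per the abstract, the Retention Matrices replace for extremely sparse binary weights:

        import torch
        import torch.nn.functional as F

        def spectral_norm_power_iter(w: torch.Tensor, n_iter: int = 5) -> torch.Tensor:
            """Estimate the largest singular value of a weight matrix w (out, in)."""
            v = torch.randn(w.size(1), device=w.device)
            for _ in range(n_iter):
                u = F.normalize(w @ v, dim=0)        # left singular vector estimate
                v = F.normalize(w.t() @ u, dim=0)    # right singular vector estimate
            return u @ w @ v                         # approx. spectral norm of w

        def lipschitz_regularizer(weight_matrices, coeff: float = 1e-4) -> torch.Tensor:
            # The product of per-layer spectral norms upper-bounds the network's
            # Lipschitz constant, so penalizing each norm keeps that bound small
            # without computing the exact constant (which is NP-hard).
            return coeff * sum(spectral_norm_power_iter(w) for w in weight_matrices)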