28 research outputs found

    Masked Autoencoder for Pre-Training on 3D Point Cloud Object Detection

    No full text
    In autonomous driving, the 3D LiDAR (Light Detection and Ranging) point cloud of a target is often incomplete due to long distance and occlusion, which makes object detection more difficult. This paper proposes the Point Cloud Masked Autoencoder (PCMAE), which provides pre-training for most voxel-based point cloud object detection algorithms. PCMAE improves the feature representation ability of the 3D backbone for long-distance and occluded objects through self-supervised learning. First, a point cloud masking strategy for autonomous driving scenes, named PC-Mask, is proposed to simulate the information loss caused by occlusion and distance. Then, a symmetrical encoder–decoder architecture is designed for pre-training: the encoder extracts high-level features from the point cloud after PC-Mask, and the decoder reconstructs the complete point cloud. Finally, the proposed pre-training method is applied to the SECOND (Sparsely Embedded Convolutional Detection) and Part-A2-Net (Part-aware and Aggregate Neural Network) object detection algorithms. The experimental results show that our method speeds up model convergence and improves detection accuracy, especially for long-distance and occluded objects.
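The distance-aware masking idea behind PC-Mask can be illustrated with a minimal sketch. Everything here (function name, thresholds, masking ratios) is an illustrative assumption, not the paper's actual implementation: the only point carried over from the abstract is that distant points should be dropped more aggressively to mimic LiDAR sparsity from range and occlusion.

```python
import math
import random

def pc_mask(points, far_thresh=30.0, far_ratio=0.7, near_ratio=0.3, seed=0):
    """Hypothetical distance-aware masking sketch: points beyond
    far_thresh metres are masked with probability far_ratio, nearer
    points with probability near_ratio. An encoder would see `kept`
    and a decoder would be trained to reconstruct `masked`."""
    rng = random.Random(seed)
    kept, masked = [], []
    for p in points:
        r = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)  # range from sensor
        drop = far_ratio if r > far_thresh else near_ratio
        (masked if rng.random() < drop else kept).append(p)
    return kept, masked
```

With this split, a symmetrical encoder–decoder pair (as the abstract describes) would encode `kept` and reconstruct the full cloud as the self-supervised objective.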

    A Novel Effective Vehicle Detection Method Based on Swin Transformer in Hazy Scenes

    No full text
    In bad weather, the ability of intelligent vehicles to perceive the environment accurately is an important research topic in many practical applications such as smart cities and unmanned driving. To improve vehicle environment perception in real hazy scenes, we propose an effective detection algorithm based on the Swin Transformer for hazy vehicle detection. The algorithm addresses two difficulties. First, to handle the difficulty of extracting haze features under poor visibility, a dehazing network is designed that produces high-quality haze-free output through encoding and decoding with Swin Transformer blocks. Second, to handle the difficulty of vehicle detection in hazy images, a new end-to-end hazy-scene vehicle detection model is constructed by fusing the dehazing module with a Swin Transformer detection module. In the training stage, the self-made Haze-Car dataset is used, and the haze detection model parameters are initialized from the dehazing model and Swin-T via transfer learning; the final haze detection model is then obtained by fine-tuning. Through joint learning of dehazing and object detection, and comparative experiments on the self-made real hazy image dataset, the detection performance of the model in real-world scenes improves by 12.5%.
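The Swin Transformer blocks used in both the dehazing and detection modules operate on non-overlapping local windows before self-attention. A minimal sketch of that window partitioning step (on a plain 2D grid, with illustrative names; the real model works on multi-channel feature maps):

```python
def window_partition(img, ws):
    """Split an H x W grid (list of lists) into non-overlapping ws x ws
    windows, the partitioning Swin Transformer blocks apply before
    computing self-attention locally within each window. Assumes H and
    W are divisible by ws, as Swin arranges via padding."""
    H, W = len(img), len(img[0])
    windows = []
    for i in range(0, H, ws):
        for j in range(0, W, ws):
            windows.append([row[j:j + ws] for row in img[i:i + ws]])
    return windows
```

Restricting attention to such windows keeps the cost linear in image size, which is why the same block can serve both the pixel-level dehazing encoder–decoder and the detection backbone described above.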

    PVformer: Pedestrian and Vehicle Detection Algorithm Based on Swin Transformer in Rainy Scenes

    No full text
    Pedestrian and vehicle detection plays a key role in the safe driving of autonomous vehicles. Although transformer-based object detection algorithms have made great progress, detection accuracy in rainy scenarios remains challenging. Based on the Swin Transformer, this paper proposes an end-to-end pedestrian and vehicle detection algorithm (PVformer) with a deraining module, which improves image quality and detection accuracy in rainy scenes. Built on Transformer blocks, a four-branch feature mapping model is introduced to derain a single image, mitigating the influence of rain-streak occlusion on detector performance. To address the difficulty pure vision transformers have with small object detection, we designed a local enhancement perception block that combines a CNN and a Transformer. In addition, the deraining module and the detection module were combined to train the PVformer model through transfer learning. The experimental results show that the algorithm performs well on rainy days and significantly improves the accuracy of pedestrian and vehicle detection.
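The local enhancement idea, combining a CNN-like local aggregation with a transformer-like global context term, can be sketched in one dimension. This is a hand-made stand-in, not PVformer's actual block: the function name, kernel size, and the simple additive fusion are all illustrative assumptions.

```python
def local_enhance(feats, k=3):
    """Sketch of a local-enhancement idea: a small sliding-window average
    (CNN-like, kernel size k, edge padding) is added to a global mean
    (a crude transformer-like context term). Small objects benefit from
    the local term, which pure global attention tends to wash out."""
    n = len(feats)
    pad = k // 2
    padded = [feats[0]] * pad + feats + [feats[-1]] * pad  # replicate edges
    local = [sum(padded[i:i + k]) / k for i in range(n)]   # local aggregation
    g = sum(feats) / n                                     # global context
    return [l + g for l in local]
```

In the real block the two terms would be learned feature maps rather than scalars, but the structure, local detail plus global context, is the point the abstract makes.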

    A Novel VHH Antibody Targeting the B Cell-Activating Factor for B-Cell Lymphoma

    No full text
    Objective: To construct an immune alpaca phage display library and obtain single-domain anti-BAFF (B cell-activating factor) antibodies. Methods: Using phage display technology, we constructed an immune alpaca phage display library, selected anti-BAFF single-domain antibodies (sdAbs), cloned three anti-BAFF sdAb genes into the expression vector pSJF2, and expressed them efficiently in Escherichia coli. The affinities of the different anti-BAFF sdAbs were measured by bio-layer interferometry, and the in vitro biological function of the three sdAbs was investigated by a cell counting kit-8 (CCK-8) assay and a competitive enzyme-linked immunosorbent assay (ELISA). Results: We obtained three anti-BAFF single-domain antibodies (anti-BAFF64, anti-BAFF52 and anti-BAFFG3), which were produced in high yield in Escherichia coli and inhibited tumor cell proliferation in vitro. Conclusion: The selected anti-BAFF antibodies are candidates for B-cell lymphoma therapies.
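For the competitive ELISA mentioned in the Methods, blocking activity is conventionally reported as percent inhibition relative to an uninhibited control well. The formula below is the standard readout, not a value or method taken from this paper; the optical-density variable names are illustrative.

```python
def percent_inhibition(od_sample, od_control):
    """Conventional competitive-ELISA readout: the fraction of signal
    lost when the sdAb competes with BAFF binding, as a percentage.
    od_sample: absorbance with the candidate antibody present.
    od_control: absorbance of the uninhibited control."""
    return (1.0 - od_sample / od_control) * 100.0
```

A sample well reading 0.25 against a control of 1.0 would correspond to 75% inhibition.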

    Regional Time-Series Coding Network and Multi-View Image Generation Network for Short-Time Gait Recognition

    No full text
    Gait recognition is an important research direction in biometric authentication. In practical applications, however, the available gait footage is often short, while successful recognition typically requires a long, complete gait video; gait images captured from different views also strongly affect recognition performance. To address these problems, we designed a gait data generation network that expands the cross-view image data required for gait recognition, providing sufficient input for the feature extraction branch that uses the gait silhouette as its criterion. In addition, we propose a gait motion feature extraction network based on regional time-series coding: joint motion data within different regions of the body are time-series coded independently, and the time-series features of each region are then combined through secondary coding to capture the motion relationships between body regions. Finally, bilinear matrix decomposition pooling fuses the spatial silhouette features and the motion time-series features to achieve complete gait recognition from shorter video input. We validate the silhouette image branch and the motion time-series branch on the OUMVLP-Pose and CASIA-B datasets, respectively, using evaluation metrics such as the IS entropy value and Rank-1 accuracy to demonstrate the effectiveness of the designed network. We also collected gait-motion data in the real world and tested it with the complete two-branch fusion network. The experimental results show that our network effectively extracts the time-series features of human motion and expands multi-view gait data; the real-world tests also confirm the feasibility of our method for gait recognition with short-time video input.
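The bilinear matrix decomposition pooling used to fuse the two branches is commonly realized as low-rank bilinear pooling: project both feature vectors into a shared low-dimensional space and take the elementwise product, avoiding the full outer product. The sketch below shows that factorized form under stated assumptions; the random projections stand in for learned weight matrices, and the names and rank are illustrative, not the paper's.

```python
import random

def bilinear_fuse(x, y, rank=4, seed=0):
    """Low-rank bilinear pooling sketch: fuse a silhouette feature x and
    a motion time-series feature y. U and V (here random, normally
    learned) project both to a rank-dimensional space; the elementwise
    product captures pairwise interactions between the two branches."""
    rng = random.Random(seed)
    U = [[rng.gauss(0.0, 1.0) for _ in x] for _ in range(rank)]
    V = [[rng.gauss(0.0, 1.0) for _ in y] for _ in range(rank)]
    px = [sum(u * a for u, a in zip(row, x)) for row in U]  # project x
    py = [sum(v * b for v, b in zip(row, y)) for row in V]  # project y
    return [a * b for a, b in zip(px, py)]                  # interaction
```

The full outer product of a d1-dim and d2-dim feature would cost d1*d2 parameters per output; the factorized version costs only rank*(d1+d2), which is why decomposition-based pooling is the usual choice for fusing heterogeneous branches.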