    Keypoints-based deep feature fusion for cooperative vehicle detection of autonomous driving

    Sharing collective perception messages (CPMs) between vehicles is investigated as a way to reduce occlusions and thereby improve the perception accuracy and safety of autonomous driving. However, achieving highly accurate data sharing with low communication overhead is a major challenge for collective perception, especially when real-time communication is required among connected and automated vehicles. In this paper, we propose an efficient and effective keypoints-based deep feature fusion framework built on the 3D object detector PV-RCNN, called Fusion PV-RCNN (FPV-RCNN for short), for collective perception. We introduce a high-performance bounding box proposal matching module and a keypoint selection strategy to compress the CPM size and solve the multi-vehicle data fusion problem. In addition, we propose an effective localization error correction module based on the maximum consensus principle to increase the robustness of the data fusion. Compared to a bird's-eye view (BEV) keypoints feature fusion, FPV-RCNN improves detection accuracy by about 9% at a strict evaluation criterion (IoU 0.7) on the synthetic dataset COMAP, which is dedicated to collective perception. In addition, its performance is comparable to two raw data fusion baselines that have no data loss in sharing. Moreover, our method significantly decreases the CPM size to less than 0.3 KB, which is about 50 times smaller than the BEV feature map sharing used in previous works. Even when the number of CPM feature channels is further reduced from 128 to 32, detection performance shows no apparent drop. The code of our method is available at https://github.com/YuanYunshuang/FPV_RCNN
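    To make the proposal matching step concrete, below is a minimal sketch that pairs box proposals from the ego vehicle and a cooperating vehicle by nearest BEV center distance in a common reference frame. It is an illustrative stand-in rather than FPV-RCNN's actual matching module; the greedy strategy and the 2 m distance threshold are assumptions.

        # Minimal proposal-matching sketch (assumed greedy nearest-center strategy,
        # not FPV-RCNN's actual module). Proposals are given as BEV (x, y) centers
        # already transformed into a common reference frame.
        import numpy as np

        def match_proposals(ego_centers: np.ndarray,
                            coop_centers: np.ndarray,
                            max_dist: float = 2.0):
            """Greedily pair proposals whose BEV centers are closest.

            ego_centers: (N, 2), coop_centers: (M, 2), both in metres.
            Returns a list of (ego_idx, coop_idx) pairs within max_dist.
            """
            dists = np.linalg.norm(
                ego_centers[:, None, :] - coop_centers[None, :, :], axis=-1)
            matches, used = [], set()
            for i in np.argsort(dists.min(axis=1)):      # ego proposals, best match first
                j = int(np.argmin(dists[i]))
                if dists[i, j] <= max_dist and j not in used:
                    matches.append((int(i), j))
                    used.add(j)
            return matches

        ego = np.array([[10.2, 3.1], [25.0, -4.0]])
        coop = np.array([[10.5, 3.0], [40.0, 8.0]])
        print(match_proposals(ego, coop))                # [(0, 0)]

    Matched pairs like these are what a fusion step would merge (for example, by fusing their keypoint features); unmatched proposals are kept as they are.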

    Generating evidential BEV maps in continuous driving space

    Safety is critical for autonomous driving, and one aspect of improving safety is to accurately capture the uncertainties of the perception system, especially knowing the unknown. Unlike approaches that provide only deterministic or partially probabilistic results, e.g., probabilistic object detection, which convey incomplete information about the perception scenario, we propose a complete probabilistic model named GevBEV. It interprets the 2D driving space as a probabilistic Bird's Eye View (BEV) map with point-based spatial Gaussian distributions, from which one can draw evidence as the parameters of the categorical Dirichlet distribution of any new sample point in the continuous driving space. The experimental results show that GevBEV not only provides more reliable uncertainty quantification but also outperforms previous works on the BEV map interpretation benchmarks OPV2V and V2V4Real for cooperative perception in simulated and real-world driving scenarios, respectively. A critical factor in cooperative perception is the size of the data transmitted through the communication channels. GevBEV helps reduce communication overhead by selecting only the most important information to share based on the learned uncertainty, reducing the average information communicated by 87% with only a slight performance drop. Our code is published at https://github.com/YuanYunshuang/GevBEV
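    As a rough illustration of the evidential mechanism described above, the sketch below aggregates per-point class evidence into a Dirichlet distribution at an arbitrary query location using spatial Gaussian weights. The weighting scheme and the subjective-logic uncertainty u = K / sum(alpha) follow the generic evidential deep learning recipe and are assumptions here, not GevBEV's exact formulation.

        # Sketch: Dirichlet evidence at a continuous BEV query point, drawn from
        # point-based spatial Gaussians (assumed generic evidential formulation).
        import numpy as np

        def dirichlet_at(query_xy, point_xy, point_evidence, sigma=0.5):
            """Accumulate class evidence at query_xy from nearby observation points.

            point_xy:       (N, 2) BEV coordinates of observation points.
            point_evidence: (N, K) non-negative per-class evidence at those points.
            Returns (alpha, expected_probs, uncertainty) for a K-class Dirichlet.
            """
            d2 = np.sum((point_xy - np.asarray(query_xy)) ** 2, axis=1)
            w = np.exp(-0.5 * d2 / sigma ** 2)        # spatial Gaussian weights
            evidence = w @ point_evidence             # (K,) aggregated evidence
            alpha = evidence + 1.0                    # Dirichlet parameters
            probs = alpha / alpha.sum()               # expected categorical probabilities
            uncertainty = len(alpha) / alpha.sum()    # vacuity: 1 where no evidence exists
            return alpha, probs, uncertainty

        points = np.array([[0.0, 0.0], [1.0, 0.0]])
        evidence = np.array([[4.0, 0.5],              # mostly class 0, e.g. drivable
                             [0.2, 3.0]])             # mostly class 1, e.g. occupied
        print(dirichlet_at((0.2, 0.0), points, evidence))

    The uncertainty term is the kind of quantity a sender could threshold against to decide which regions are informative enough to be worth sharing.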

    Leveraging Dynamic Objects for Relative Localization Correction in a Connected Autonomous Vehicle Network

    Highly accurate localization is crucial for the safety and reliability of autonomous driving, especially for the information fusion of collective perception, which aims to further improve road safety by sharing information in a communication network of Connected Autonomous Vehicles (CAVs). In this scenario, small localization errors can impose additional difficulty on fusing the information from different CAVs. In this paper, we propose a RANSAC-based (RANdom SAmple Consensus) method to correct the relative localization errors between two CAVs in order to ease the information fusion among the CAVs. Different from previous LiDAR-based localization algorithms that only take static environmental information into consideration, this method also leverages dynamic objects for localization, thanks to the real-time data sharing between CAVs. Specifically, in addition to static objects such as poles, fences, and facades, the object centers of the detected dynamic vehicles are also used as keypoints for matching the two point sets. Experiments on the synthetic dataset COMAP show that the proposed method can greatly decrease the relative localization error between two CAVs to less than 20 cm, as long as enough vehicles and poles are correctly detected by both CAVs. Moreover, the proposed method is highly efficient at runtime and can be used in real-time scenarios of autonomous driving. Comment: ISPRS congress 202
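    For illustration, the following is a generic RANSAC sketch that estimates the 2D rigid transform correcting the relative pose between two CAVs from matched keypoints such as pole positions and detected vehicle centers. The minimal sample size, inlier threshold, and iteration count are assumptions, and the final refit is a plain Kabsch alignment rather than the paper's exact procedure.

        # RANSAC sketch for 2D rigid alignment of two matched keypoint sets
        # (assumed generic formulation, not the paper's implementation).
        import numpy as np

        def rigid_transform_2d(src, dst):
            """Least-squares R, t with dst ≈ src @ R.T + t (Kabsch algorithm)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                  # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dst_c - R @ src_c

        def ransac_align(src, dst, iters=200, inlier_thresh=0.3, seed=0):
            """src, dst: (N, 2) matched keypoints in the two CAVs' frames."""
            rng = np.random.default_rng(seed)
            best = (np.eye(2), np.zeros(2), np.zeros(len(src), dtype=bool))
            for _ in range(iters):
                idx = rng.choice(len(src), size=2, replace=False)   # minimal sample
                R, t = rigid_transform_2d(src[idx], dst[idx])
                residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
                inliers = residuals < inlier_thresh
                if inliers.sum() > best[2].sum():
                    best = (R, t, inliers)
            R, t, inliers = best
            if inliers.sum() >= 2:                    # refit on the consensus set
                R, t = rigid_transform_2d(src[inliers], dst[inliers])
            return R, t, inliers

    Applying the estimated (R, t) to the cooperating CAV's shared detections before fusion removes most of the relative localization error, which is what eases the subsequent information fusion.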

    Nutrition-Related Knowledge, Attitudes, and Practices (KAP) among Kindergarten Teachers in Chongqing, China: A Cross-Sectional Survey

    Kindergarten teachers play an important role in providing kindergarten children with education on nutrition. However, few studies have been published on the nutrition-related knowledge, attitudes, and practices (KAP) of Chinese kindergarten teachers. This study aimed to assess the nutrition-related KAP of kindergarten teachers in Chongqing, China. A cross-sectional survey was conducted using a structured KAP model questionnaire administered to 222 kindergarten teachers, who were senior teachers from 80 kindergartens in 19 districts and 20 counties in Chongqing. Multiple regression analysis was used to analyze the influential factors. Among the participants, 54.2% were familiar with basic nutrition-related knowledge; only 9.9% were satisfied with their knowledge of childhood nutrition; and 97.7% had a positive attitude toward learning nutrition-related knowledge. Only 38.7% of the participants had attended pediatric nutrition knowledge courses or training. Multiple regression analysis confirmed significant independent effects (p < 0.0001) of age, type of residence, type of kindergarten, body mass index (BMI), professional training of kindergarten teachers, prior participation in childhood nutrition education courses or training, and prior attention to children's nutrition knowledge on respondents' nutrition knowledge scores. The model indicated that these independent variables explained 45.4% (adjusted R²) of the variance in respondents' knowledge scores. Although levels of nutrition knowledge and training were low, it is encouraging that kindergarten teachers in Chongqing, China showed positive attitudes towards acquiring nutrition-related knowledge. These findings imply that training measures need to be carried out to improve the nutrition-related knowledge level among kindergarten teachers in China.

    Additional file 1: Table S1. of Genome-wide survey and expression analysis of the OSCA gene family in rice

    General information and sequence characterisation of 45 OSCA genes from Oryza sativa L. ssp. Japonica, Oryza sativa L. ssp. Indica, Oryza glaberrima, and Oryza brachyantha. (DOC 89 kb)