3 research outputs found

    Informed anytime fast marching tree for asymptotically-optimal motion planning

    Get PDF
    In many applications, motion planners must find high-quality solutions to high-dimensional, complex problems. In this paper, we propose an anytime, asymptotically-optimal sampling-based algorithm, the Informed Anytime Fast Marching Tree (IAFMT*), for solving motion planning problems. Combining a hybrid incremental search with a dynamic optimal search, IAFMT* quickly finds a feasible solution and, if time permits, efficiently improves it toward the optimal solution. The paper also presents a theoretical analysis of the proposed algorithm's probabilistic completeness, asymptotic optimality, and computational complexity. Its ability to converge efficiently, stably, and self-adaptively to a high-quality solution has been validated in challenging simulations and on a humanoid mobile robot.
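    The "informed" ingredient of such planners can be illustrated with a minimal sketch (an illustrative rejection-sampling version, not the paper's implementation): once a solution of cost c_best exists, only states whose start-to-goal path through them could be shorter than c_best are worth sampling.

```python
import math
import random

def informed_sample(start, goal, c_best, bounds):
    """Rejection-sample a state that could improve the current best
    solution: keep only points x with dist(start, x) + dist(x, goal)
    <= c_best (an ellipsoidal subset of the sampling bounds)."""
    while True:
        x = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        if math.dist(start, x) + math.dist(x, goal) <= c_best:
            return x  # this state can potentially shorten the path

random.seed(0)
start, goal = (0.0, 0.0), (10.0, 0.0)
pt = informed_sample(start, goal, c_best=12.0, bounds=[(-5, 15), (-10, 10)])
# By construction the returned point satisfies the cost bound:
assert math.dist(start, pt) + math.dist(pt, goal) <= 12.0
```

    As c_best shrinks with each improved solution, the accepted region shrinks with it, which is what lets anytime planners spend late samples only where they can still help.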

    PGA-Net: pyramid feature fusion and global context attention network for automated surface defect detection

    No full text
    Surface defect detection is a critical task in industrial production. Many computer-vision-based detection methods have been applied successfully in industry and achieve good results. However, fully automated surface defect detection remains a challenge because of the complexity of surface defects: defects within the same class can differ greatly in appearance, while defects from different classes often share similar parts. To address these issues, this paper proposes a pyramid feature fusion and global context attention network for pixel-wise surface defect detection, called PGA-Net. In the framework, multi-scale features are first extracted from a backbone network. The pyramid feature fusion module then fuses these features into five resolutions through efficient dense skip connections. Finally, the global context attention module is applied to the fusion feature maps of adjacent resolutions, allowing effective information to propagate from low-resolution fusion feature maps to high-resolution ones. In addition, a boundary refinement block is added to the framework to refine defect boundaries and improve the prediction. The final prediction is the fusion of the fusion feature maps at the five resolutions. Evaluation on four real-world defect datasets demonstrates that the proposed method outperforms state-of-the-art methods on mean Intersection over Union and mean Pixel Accuracy (NEU-Seg: 82.15%, DAGM 2007: 74.78%, MT_defect: 71.31%, Road_defect: 79.54%).
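    The adjacent-resolution fusion idea can be sketched with NumPy (a toy illustration under assumed shapes, not PGA-Net's actual layers): upsample the coarse map, derive channel-attention weights from its global context, and use them to gate the fine map before summing.

```python
import numpy as np

def upsample(feat, factor):
    """Nearest-neighbour upsample a (C, H, W) map by an integer factor."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def global_context_attention(coarse, fine):
    """Gate `fine` with channel weights from the global average pool of
    `coarse`, so low-resolution context steers high-resolution detail."""
    ctx = coarse.mean(axis=(1, 2))          # (C,) global context vector
    w = np.exp(ctx - ctx.max())
    w /= w.sum()                            # softmax over channels
    return fine * (1.0 + w[:, None, None])  # residual channel gating

def fuse(coarse, fine):
    """Fuse an adjacent-resolution pair: upsample, attend, then sum."""
    up = upsample(coarse, fine.shape[1] // coarse.shape[1])
    return global_context_attention(up, fine) + up

coarse = np.random.rand(8, 4, 4)   # (C, H, W) low-resolution features
fine = np.random.rand(8, 8, 8)     # adjacent higher-resolution features
assert fuse(coarse, fine).shape == (8, 8, 8)
```

    Repeating this pairwise fusion from the coarsest to the finest level is one simple way to realize the coarse-to-fine information flow the abstract describes.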

    Learning object-centric complementary features for zero-shot learning

    No full text
    Zero-shot learning (ZSL) aims to recognize objects that have never been seen before by associating categories with semantic knowledge. Existing works mainly focus on learning a better visual-semantic mapping to align the visual and semantic spaces, while the value of learning discriminative visual features is neglected. In this paper, we propose an object-centric complementary features (OCF) learning model that takes full advantage of the visual information of objects under the guidance of semantic knowledge. The model automatically discovers the object region and obtains fine-scale samples without any human annotation. An attention mechanism is then used to capture long-range visual features corresponding to semantic knowledge such as ‘four legs’, as well as subtle visual differences between similar categories. Finally, the model is trained end-to-end under the guidance of semantic knowledge. Our method is evaluated on three widely used ZSL datasets, CUB, AwA2, and FLO, and the experimental results demonstrate the efficacy of the object-centric complementary features; the proposed method outperforms state-of-the-art methods.
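    The visual-semantic mapping at the heart of ZSL can be sketched as follows (a toy nearest-attribute classifier with a made-up projection and attribute vectors, not the OCF model): project a visual feature into the semantic space, then pick the class whose attribute vector is most similar.

```python
import numpy as np

def zero_shot_predict(visual_feat, W, class_attributes):
    """Map a visual feature into the semantic space with projection W,
    then return the class with the highest cosine similarity between
    the projected feature and the class attribute vector."""
    sem = W @ visual_feat  # visual -> semantic space
    scores = {c: float(sem @ a) / (np.linalg.norm(sem) * np.linalg.norm(a))
              for c, a in class_attributes.items()}
    return max(scores, key=scores.get)

# Toy setup: 2-D semantic space, identity projection for illustration.
W = np.eye(2)
attrs = {"zebra": np.array([1.0, 0.0]),   # hypothetical attribute: 'striped'
         "horse": np.array([0.0, 1.0])}   # hypothetical attribute: 'plain coat'
print(zero_shot_predict(np.array([0.9, 0.1]), W, attrs))  # -> zebra
```

    Unseen classes are handled by adding their attribute vectors to the dictionary; no visual training examples of those classes are required, which is the defining property of ZSL.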