    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling

    Boundary-semantic collaborative guidance network with dual-stream feedback mechanism for salient object detection in optical remote sensing imagery

    With the increasing application of deep learning in various domains, salient object detection in optical remote sensing images (ORSI-SOD) has attracted significant attention. However, most existing ORSI-SOD methods rely predominantly on local information from low-level features to infer salient boundary cues and supervise them with boundary ground truth, yet they fail to sufficiently optimize and protect that local information, and almost all approaches ignore the potential advantage offered by the last layer of the decoder for maintaining the integrity of saliency maps. To address these issues, we propose a novel method named boundary-semantic collaborative guidance network (BSCGNet) with a dual-stream feedback mechanism. First, we propose a boundary protection calibration (BPC) module, which effectively reduces the loss of edge position information during forward propagation and suppresses noise in low-level features without relying on boundary ground truth. Second, building on the BPC module, a dual feature feedback complementary (DFFC) module aggregates boundary-semantic dual features and provides effective feedback to coordinate features across different layers, thereby enhancing cross-scale knowledge communication. Finally, to obtain more complete saliency maps, we consider the uniqueness of the last layer of the decoder for the first time and propose the adaptive feedback refinement (AFR) module, which further refines feature representation and eliminates differences between features through a unique feedback mechanism. Extensive experiments on three benchmark datasets demonstrate that BSCGNet exhibits distinct advantages in challenging scenarios and outperforms 17 state-of-the-art (SOTA) approaches proposed in recent years. Codes and results have been released on GitHub: https://github.com/YUHsss/BSCGNet. Comment: Accepted by TGR
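    The abstract describes a pipeline in which calibrated low-level boundary features and high-level semantic features are fused and fed back before a final refinement stage. The following is a minimal, hypothetical PyTorch sketch of how three modules with those roles (here named BPC, DFFC, AFR after the abstract) could be wired together; all internals, channel sizes, and the simple additive feedback are assumptions for illustration and do not reproduce the actual BSCGNet design.

```python
# Hypothetical sketch of a boundary-semantic guidance pipeline, loosely
# following the module roles named in the BSCGNet abstract. Not the paper's
# implementation; all design details below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BPC(nn.Module):
    """Boundary protection calibration (sketch): calibrate low-level features
    so edge cues survive forward propagation, without boundary ground truth."""
    def __init__(self, ch):
        super().__init__()
        self.calib = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                   nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    def forward(self, low_feat):
        # Residual calibration: add the filtered cues back onto the input.
        return low_feat + self.calib(low_feat)


class DFFC(nn.Module):
    """Dual feature feedback complementary (sketch): fuse boundary and
    semantic streams into one feature map for downstream layers."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, boundary_feat, semantic_feat):
        # Upsample the semantic stream to the boundary stream's resolution.
        semantic_feat = F.interpolate(semantic_feat,
                                      size=boundary_feat.shape[2:],
                                      mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([boundary_feat, semantic_feat], dim=1))


class AFR(nn.Module):
    """Adaptive feedback refinement (sketch): refine the last decoder features
    and predict the final saliency map."""
    def __init__(self, ch):
        super().__init__()
        self.refine = nn.Conv2d(ch, ch, 3, padding=1)
        self.head = nn.Conv2d(ch, 1, 1)

    def forward(self, last_decoder_feat):
        refined = F.relu(self.refine(last_decoder_feat)) + last_decoder_feat
        return torch.sigmoid(self.head(refined))


if __name__ == "__main__":
    ch = 64
    low = torch.randn(1, ch, 64, 64)    # low-level (boundary-bearing) features
    high = torch.randn(1, ch, 16, 16)   # high-level (semantic) features
    fused = DFFC(ch)(BPC(ch)(low), high)
    saliency = AFR(ch)(fused)
    print(saliency.shape)               # torch.Size([1, 1, 64, 64])
```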

    ADAPTIVE IMAGE SEGMENTATION BASED ON SALIENCY DETECTION
