
    Learning How to Detect Salient Objects in Nighttime Scenes

    The detection of salient objects in nighttime scenes is an essential research issue in computer vision. Existing approaches cannot accurately detect salient objects in nighttime scenes: due to the lack of visible light, spatial visual information cannot be accurately perceived by either traditional models or deep networks. This paper proposed a Mountain Basin Network (MBNet) to identify salient objects by distinguishing the pixel-level saliency of low-light images. To improve object localization and pixel-classification performance, the proposed model incorporated a High-Low Feature Aggregation Module (HLFA) to synchronize information from a high-level branch (named Bal-Net) and a low-level branch (named Mol-Net), fusing global and local context, and embedded a Hierarchical Supervision Module (HSM) to aid in recovering accurate salient objects, particularly small ones. In addition, a multi-supervised integration technique was explored to optimize the structure and borders of salient objects. Meanwhile, to facilitate further investigation of nighttime scenes and the assessment of visual saliency models, we created a new nighttime dataset consisting of thirteen categories and one thousand low-light images in total. Our experimental results demonstrated that the proposed MBNet model outperforms seven current state-of-the-art methods for salient object detection in nighttime scenes.
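The high/low feature aggregation described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, feature shapes, and element-wise fusion rule are all assumptions; the abstract only states that a coarse global branch and a fine local branch are combined.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def hlfa_fuse(low_feat, high_feat):
    """Hypothetical sketch of high-low feature aggregation:
    bring the coarse high-level (global-context) map up to the
    low-level (local-detail) resolution, then fuse element-wise.
    Additive fusion is an assumption for illustration."""
    return low_feat + upsample2x(high_feat)

low = np.ones((8, 4, 4))   # local branch output (shape assumed)
high = np.ones((8, 2, 2))  # global branch output (shape assumed)
fused = hlfa_fuse(low, high)
print(fused.shape)  # (8, 4, 4)
```

The point of the sketch is only that fusion happens at the finer resolution, so local detail is preserved while global context is broadcast over it.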

    Automated polyp segmentation based on a multi-distance feature dissimilarity-guided fully convolutional network

    Colorectal malignancies often arise from adenomatous polyps, which typically begin as solitary, asymptomatic growths before progressing to malignancy. Colonoscopy is widely recognized as a highly efficacious clinical polyp detection method, offering valuable visual data that facilitates precise identification and subsequent removal of these tumors. Nevertheless, accurately segmenting individual polyps poses a considerable difficulty because polyps exhibit intricate and changeable characteristics, including shape, size, color, quantity and growth context, across different stages. The presence of similar contextual structures around polyps significantly hampers the ability of commonly used convolutional neural network (CNN)-based automatic detection models to capture valid polyp features, and these large-receptive-field CNN models often overlook the details of small polyps, leading to false and missed detections. To tackle these challenges, we introduce a novel approach for automatic polyp segmentation, known as the multi-distance feature dissimilarity-guided fully convolutional network. This approach comprises three essential components, i.e., an encoder-decoder, a multi-distance difference (MDD) module and a hybrid loss (HL) module. Specifically, the MDD module primarily employs a multi-layer feature subtraction (MLFS) strategy to propagate features from the encoder to the decoder, extracting information differences both between neighboring layers' features at short distances and between features across layers at long distances. Drawing inspiration from pyramids, the MDD module continuously acquires discriminative features from neighboring or more distant layers, which helps to strengthen feature complementarity across layers. The HL module supervises the feature maps extracted at each layer of the network to improve prediction accuracy. Our experimental results on four challenging datasets demonstrate that the proposed approach exhibits superior automatic polyp segmentation performance on six evaluation criteria compared to five current state-of-the-art approaches.
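The multi-layer feature subtraction idea can be sketched as follows. This is an illustration under stated assumptions, not the paper's method: the abstract says only that feature differences are taken between adjacent layers (short distance) and across layers (long distance), so the pairwise absolute difference and nearest-neighbour resizing used here are assumptions, and `mlfs` is a hypothetical name.

```python
import numpy as np

def resize_to(x, hw):
    # Nearest-neighbour resize of a (C, H, W) map to target (H, W).
    c, h, w = x.shape
    H, W = hw
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return x[:, rows][:, :, cols]

def mlfs(features):
    """Hypothetical sketch of multi-distance feature subtraction:
    absolute differences between every pair of layer features,
    covering adjacent pairs (short distance) and non-adjacent
    pairs (long distance), computed at the earlier (finer) scale."""
    diffs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            a = features[i]
            b = resize_to(features[j], a.shape[1:])
            diffs.append(np.abs(a - b))
    return diffs

# Toy pyramid of three layer features (shapes assumed).
features = [np.zeros((2, 4, 4)), np.ones((2, 2, 2)), np.full((2, 1, 1), 3.0)]
diffs = mlfs(features)
print(len(diffs))  # 3 pairs: (0,1), (0,2), (1,2)
```

The difference maps emphasize what changes between scales, which is the "dissimilarity" signal the decoder is said to be guided by.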