
    Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement

    Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics. However, the limited number of datasets and imperfect hand-crafted ground truths weaken its robustness to unseen scenarios and hamper its application to high-level vision tasks. To address these limitations, we develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model, aiming to exploit its hierarchical feature representation as an auxiliary for low-level underwater image enhancement. Specifically, we characterize the shallow-layer features of the semantic-aware model as textures and the deep-layer features as structures, and propose a multi-path Contextual Feature Refinement Module (CFRM) to refine features at multiple scales and model the correlations between different features. In addition, a feature dominative network is devised to perform channel-wise modulation on the aggregated texture and structure features, adapting them to the different feature patterns of the enhancement network. Extensive experiments on benchmarks demonstrate that the proposed algorithm achieves more appealing results and outperforms state-of-the-art methods by large margins. We also apply the proposed algorithm to the underwater salient object detection task to reveal its favorable semantic-aware ability for high-level vision tasks. The code is available at STSC.

    Comment: Accepted by ICRA202
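    The abstract does not specify how the feature dominative network's channel-wise modulation is implemented; the sketch below illustrates the generic idea only, using hypothetical per-channel scale and shift vectors (in the actual network these would be predicted from the aggregated features).

```python
import numpy as np

def channel_wise_modulation(features, scale, shift):
    """Modulate a (C, H, W) feature map per channel: out_c = scale_c * f_c + shift_c."""
    c = features.shape[0]
    return features * scale.reshape(c, 1, 1) + shift.reshape(c, 1, 1)

# Toy example: 2 channels over a 2x2 spatial grid.
feats = np.ones((2, 2, 2))
scale = np.array([2.0, 0.5])   # hypothetical per-channel gains
shift = np.array([0.0, 1.0])   # hypothetical per-channel offsets
out = channel_wise_modulation(feats, scale, shift)
# channel 0 becomes 2.0 everywhere, channel 1 becomes 1.5 everywhere
```

    This FiLM-style affine form is one common realization of channel-wise modulation; the paper's actual module may differ.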

    Facile and Label-Free Electrochemical Biosensors for MicroRNA Detection based on DNA Origami Nanostructures

    MicroRNAs (miRNAs) have emerged as promising molecular biomarkers for early diagnosis and for an enhanced understanding of the molecular pathogenesis of cancers and certain other diseases. Here, a facile, label-free, and amplification-free electrochemical biosensor was developed to detect miRNA using DNA origami nanostructure-supported DNA probes, with methylene blue (MB) serving as the hybridization redox indicator, for the first time. Specifically, the use of cross-shaped DNA origami nanostructures containing multiple single-stranded DNA probes at preselected locations on each DNA nanostructure could increase the accessibility and recognition efficiency of the probes, owing to the rationally controlled probe density. The successful immobilization of the DNA origami probes and their hybridization with target miRNA-21 molecules were confirmed by electrochemical impedance spectroscopy and cyclic voltammetry. A differential pulse voltammetry technique was employed to record the oxidation peak current of MB before and after target hybridization. The linear detection range of this biosensor was from 0.1 pM to 10.0 nM, with a lower detection limit of 79.8 fM. The selectivity of the miRNA biosensor was also studied by observing its ability to discriminate single-base mismatched sequences. Because of the larger surface area and unprecedented customizability of DNA origami nanostructures, this strategy demonstrated great potential for the sensitive, selective, and label-free determination of miRNA in translational biomedical research and clinical applications.
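    A detection limit like the 79.8 fM reported here is conventionally estimated from a calibration curve via the 3-sigma criterion. The sketch below demonstrates that standard calculation on entirely synthetic numbers (the calibration responses and blank noise are invented for illustration, not taken from the paper).

```python
import numpy as np

# Hypothetical calibration: DPV signal change (uA) vs. log10 concentration (mol/L).
# Chosen to roughly mimic a pM-nM linear range; values are synthetic.
log_conc = np.array([-13.0, -12.0, -11.0, -10.0, -9.0, -8.0])
delta_i = np.array([0.42, 0.81, 1.23, 1.60, 2.02, 2.41])

# Linear least-squares fit of signal against log concentration.
slope, intercept = np.polyfit(log_conc, delta_i, 1)

# 3-sigma rule: limit of detection is where the signal equals 3x the
# standard deviation of the blank (sigma_blank is synthetic here).
sigma_blank = 0.05
lod_log = (3 * sigma_blank - intercept) / slope
lod = 10 ** lod_log  # detection limit in mol/L
```

    With these synthetic inputs the estimate lands in the tens-of-femtomolar range, the same order of magnitude as the reported 79.8 fM limit.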

    Improving Misaligned Multi-modality Image Fusion with One-stage Progressive Dense Registration

    Misalignments between multi-modality images pose challenges for image fusion, manifesting as structural distortions and edge ghosting. Existing efforts commonly resort to registering first and fusing later, typically employing two cascaded stages for registration, i.e., coarse registration and fine registration, with each stage directly estimating its respective target deformation field. In this paper, we argue that the separated two-stage registration is not compact, and that direct estimation of the target deformation fields is not accurate enough. To address these challenges, we propose a Cross-modality Multi-scale Progressive Dense Registration (C-MPDR) scheme, which accomplishes coarse-to-fine registration in a single one-stage optimization, thus improving the fusion performance on misaligned multi-modality images. Specifically, two pivotal components are involved: a dense Deformation Field Fusion (DFF) module and a Progressive Feature Fine (PFF) module. The DFF aggregates the predicted multi-scale deformation sub-fields at the current scale, while the PFF progressively refines the remaining misaligned features; together they accurately estimate the final deformation fields. In addition, we develop a Transformer-Conv-based Fusion (TCF) subnetwork that considers both local and long-range feature dependencies, allowing us to capture more informative features from the registered infrared and visible images for the generation of high-quality fused images. Extensive experimental analysis demonstrates the superiority of the proposed method in the fusion of misaligned cross-modality images.
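    The abstract does not detail how the DFF module aggregates multi-scale deformation sub-fields; a common baseline is to upsample each coarse sub-field to the finest resolution (rescaling displacements to pixel units) and sum. The numpy sketch below shows only that generic scheme, under assumed power-of-two scale factors; the paper's actual module is likely learned and more sophisticated.

```python
import numpy as np

def upsample_field(field, factor):
    """Nearest-neighbour upsample a (2, H, W) displacement field and rescale
    the displacements so they remain in pixel units at the new resolution."""
    up = field.repeat(factor, axis=1).repeat(factor, axis=2)
    return up * factor

def fuse_deformation_fields(subfields):
    """Aggregate coarse-to-fine sub-fields (coarsest first) into a single
    field at the finest resolution by upsampling each and summing."""
    target_h = subfields[-1].shape[1]
    total = np.zeros_like(subfields[-1])
    for f in subfields:
        factor = target_h // f.shape[1]
        total += upsample_field(f, factor) if factor > 1 else f
    return total

# Toy example: a 2x2 coarse sub-field plus a 4x4 fine sub-field.
coarse = np.full((2, 2, 2), 1.0)  # 1-pixel shift at coarse scale -> 2 pixels at fine
fine = np.full((2, 4, 4), 0.5)
fused = fuse_deformation_fields([coarse, fine])  # 2.0 + 0.5 = 2.5 everywhere
```

    Summing sub-fields approximates composition only for small displacements; a full implementation would compose the warps instead of adding them.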