7 research outputs found

    Multi-view Self-supervised Disentanglement for General Image Denoising

    Full text link
    With its significant performance improvements, the deep learning paradigm has become a standard tool for modern image denoisers. While promising performance has been shown on seen noise distributions, existing approaches often generalise poorly to unseen noise types and to general, real-world noise. This is understandable, as such models are designed to learn a paired mapping (e.g. from a noisy image to its clean version). In this paper, we instead propose to learn to disentangle the noisy image, under the intuitive assumption that different corrupted versions of the same clean image share a common latent space. A self-supervised learning framework is proposed to achieve this goal without ever observing the latent clean image. By taking two different corrupted versions of the same image as input, the proposed Multi-view Self-supervised Disentanglement (MeD) approach learns to disentangle the latent clean features from the corruptions and consequently recover the clean image. Extensive experimental analysis on both synthetic and real noise shows the superiority of the proposed method over prior self-supervised approaches, especially on unseen novel noise types. On real noise, the proposed method even outperforms its supervised counterparts by over 3 dB. Comment: International Conference on Computer Vision 2023 (ICCV 2023).
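
    The abstract describes the training signal only at a high level. Below is a minimal illustrative sketch of what a two-view disentanglement objective of this kind could look like in PyTorch; the tiny convolutional encoder/decoder, the additive feature fusion, and the L1 losses are assumptions made for illustration, not the paper's actual architecture or losses.

# Minimal sketch of two-view self-supervised disentanglement in the spirit of MeD.
# The small conv architecture, additive feature fusion, and L1 losses are
# illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a noisy image to a shared 'clean content' code and a view-specific 'corruption' code."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.to_content = nn.Conv2d(dim, dim, 1)
        self.to_corruption = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.to_content(h), self.to_corruption(h)

class Decoder(nn.Module):
    """Maps latent features back to image space."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, channels, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def training_step(enc, dec, x1, x2):
    """x1, x2: two independently corrupted views of the same (never observed) clean image."""
    c1, n1 = enc(x1)
    c2, n2 = enc(x2)
    # The two views should agree on the latent clean content.
    consistency = (c1 - c2).abs().mean()
    # Cross reconstruction: content from one view plus corruption from the other
    # should reproduce that other view's noisy observation.
    rec1 = dec(c2 + n1)
    rec2 = dec(c1 + n2)
    reconstruction = (rec1 - x1).abs().mean() + (rec2 - x2).abs().mean()
    return consistency + reconstruction

# Toy usage with synthetic Gaussian corruptions; at inference, decoding the
# content code alone would serve as the clean estimate.
if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    clean = torch.rand(4, 3, 32, 32)            # stand-in for an unknown clean image
    x1 = clean + 0.1 * torch.randn_like(clean)  # corrupted view 1
    x2 = clean + 0.1 * torch.randn_like(clean)  # corrupted view 2
    loss = training_step(enc, dec, x1, x2)
    loss.backward()
    print(float(loss))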

    Prompt-Enhanced Software Vulnerability Detection Using ChatGPT

    Full text link
    With the increase in software vulnerabilities that cause significant economic and social losses, automatic vulnerability detection has become essential in software development and maintenance. Recently, large language models (LLMs) like GPT have received considerable attention due to their impressive capabilities, and some studies consider using ChatGPT for vulnerability detection. However, they do not fully exploit the characteristics of LLMs, as the questions they pose to ChatGPT are simple and lack prompt designs tailored to vulnerability detection. This paper presents a study on the performance of software vulnerability detection using ChatGPT with different prompt designs. First, we complement previous work by applying various improvements to the basic prompt. Moreover, we incorporate structural and sequential auxiliary information to improve the prompt design. In addition, we leverage ChatGPT's ability to memorize multi-round dialogue to design suitable prompts for vulnerability detection. We conduct extensive experiments on two vulnerability datasets to demonstrate the effectiveness of prompt-enhanced vulnerability detection using ChatGPT. We also analyze the merits and demerits of using ChatGPT for vulnerability detection. Comment: 13 pages, 4 figures.
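
    As a concrete illustration of how a basic prompt might be enhanced with structural and sequential auxiliary information across several conversational turns, the sketch below assembles a chat-style message list; the prompt wording, the auxiliary fields, and the build_messages helper are hypothetical and do not reproduce the paper's exact prompt designs.

# Hedged sketch of a prompt-enhanced query for vulnerability detection.
# The prompt wording, auxiliary fields, and helper function are illustrative
# assumptions, not the paper's actual prompts.

def build_messages(code: str, api_calls: list[str], data_flow: str) -> list[dict]:
    """Assemble a multi-turn conversation that adds structural and sequential
    auxiliary information to a basic vulnerability-detection prompt."""
    system = ("You are a security expert who decides whether a code snippet "
              "contains a vulnerability. Answer 'yes' or 'no' with a brief reason.")
    # Turn 1: sequential auxiliary information (e.g. the API call sequence).
    aux_sequence = "The code invokes these APIs in order: " + ", ".join(api_calls)
    # Turn 2: structural auxiliary information (e.g. a summarised data-flow description).
    aux_structure = "Key data flow: " + data_flow
    # Final turn: the basic prompt plus the code under analysis.
    question = "Is the following function vulnerable?\n" + code
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": aux_sequence},
        {"role": "user", "content": aux_structure},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    msgs = build_messages(
        code="char buf[8]; strcpy(buf, user_input);",
        api_calls=["strcpy"],
        data_flow="user_input flows unchecked into a fixed-size stack buffer",
    )
    for m in msgs:
        print(m["role"].upper() + ": " + m["content"])
    # Sending `msgs` to ChatGPT (e.g. via the openai Python client) is omitted here.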

    360+x : A Panoptic Multi-modal Scene Understanding Dataset

    No full text
    The 360+x dataset introduces a unique panoptic perspective to scene understanding, differentiating itself from existing datasets by offering multiple viewpoints and modalities captured from a variety of scenes. Our dataset contains: 1. 2,152 multi-modal videos captured by 360° cameras and Spectacles cameras (8,579k frames in total); 2. captures in 17 cities across 5 countries; 3. captures of 28 scenes ranging from artistic spaces to natural landscapes; 4. temporal activity localisation labels for 38 action instances for each video. IMPORTANT NOTICE: Due to the large volume of the data files, they have been stored in the BEAR Research Data Store space. If you wish to access the data, please contact [email protected] to request and receive the relevant link to the data folder.

    Product-specific active site motifs of Cu for electrochemical CO₂ reduction

    No full text
    Electrochemical CO₂ reduction (CO₂R) to fuels is a promising route to close the anthropogenic carbon cycle and store renewable energy. Cu is the only metal catalyst that produces C₂₊ fuels, yet challenges remain in improving electrosynthesis pathways for highly selective fuel production. To achieve this, mechanistically understanding CO₂R on Cu, and in particular identifying the product-specific active sites, is crucial. We rationally designed and fabricated nine large-area single-crystal Cu foils with various surface orientations as electrocatalysts and monitored their surface reconstructions using operando grazing-incidence X-ray diffraction (GIXRD) and electron back-scattered diffraction (EBSD). We quantitatively established correlations between the Cu atomic configurations and the selectivities toward multiple products, providing a paradigm for understanding the structure-function correlation in catalysis. This research was supported by the National Natural Science Foundation of China (grants 21872039, 51991340, and 51991342), Science and Technology Commission of Shanghai Municipality (grant 18JC1411700), National Key Research and Development Program of China (2016YFA0300903 and 2016YFA0300804), Beijing Natural Science Foundation (JQ19004), Beijing Excellent Talents Training Support (2017000026833ZK11), Beijing Municipal Science & Technology Commission (Z191100007219005), Beijing Graphene Innovation Program (Z181100004818003), and the Key Research and Development Program of Guangdong Province (2020B010189001, 2019B010931001, and 2018B030327001). We sincerely thank Dr. Bing Deng, Prof. Hailin Peng, and Prof. Zhongfan Liu for providing some low-index single-crystal Cu electrodes when initiating this work.