
    Progressive Poisoned Data Isolation for Training-time Backdoor Defense

    Deep Neural Networks (DNNs) are susceptible to backdoor attacks, in which malicious attackers manipulate the model's predictions via data poisoning. It is hence imperative to develop a strategy for training a clean model from a potentially poisoned dataset. Previous training-time defense mechanisms typically employ a one-time isolation process, often leading to suboptimal isolation outcomes. In this study, we present a novel and effective defense method, termed Progressive Isolation of Poisoned Data (PIPD), which progressively isolates poisoned data to improve isolation accuracy and mitigate the risk of benign samples being misclassified as poisoned. Once the poisoned portion of the dataset has been identified, we introduce a selective training process to train a clean model. Through these techniques, we ensure that the trained model exhibits a significantly diminished attack success rate on the poisoned data. Extensive experiments on multiple benchmark datasets and DNN models, assessed against nine state-of-the-art backdoor attacks, demonstrate the superior performance of our PIPD method for backdoor defense. For instance, PIPD achieves an average True Positive Rate (TPR) of 99.95% and an average False Positive Rate (FPR) of 0.06% across diverse attacks on the CIFAR-10 dataset, markedly surpassing state-of-the-art methods.
    Comment: Accepted to AAAI202
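
    The abstract does not spell out the isolation criterion, so the following is only a minimal sketch of the "progressive" idea, i.e., growing the isolated set over several rounds instead of making a one-time split; the suspicion score, round count, and per-round isolation ratio are hypothetical placeholders, not the paper's actual design.

        import numpy as np

        def progressive_isolation(score_fn, dataset_size, num_rounds=5, frac_per_round=0.02):
            """Grow the set of suspected-poisoned indices over several rounds.

            score_fn: callable(trusted_idx) -> per-sample suspicion scores
            (higher = more likely poisoned), recomputed each round with a model
            trained only on the currently trusted samples (hypothetical choice).
            """
            trusted = np.arange(dataset_size)
            isolated = np.array([], dtype=int)
            for _ in range(num_rounds):
                scores = score_fn(trusted)                 # re-score with the current trusted pool
                k = int(frac_per_round * dataset_size)     # isolate only a small batch per round
                worst = trusted[np.argsort(scores)[-k:]]   # most suspicious samples this round
                isolated = np.concatenate([isolated, worst])
                trusted = np.setdiff1d(trusted, worst)     # shrink the trusted pool gradually
            return trusted, isolated

    After isolation, the selective training step would train the final model only on the returned trusted indices.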

    Rethinking Image Forgery Detection via Contrastive Learning and Unsupervised Clustering

    Image forgery detection aims to detect and locate forged regions in an image. Most existing forgery detection algorithms formulate the task as a classification problem, classifying pixels as forged or pristine. However, the definition of forged and pristine pixels is only relative within a single image; e.g., a forged region in image A is actually a pristine one in its source image B (splicing forgery). This relative definition has been severely overlooked by existing methods, which unnecessarily mix forged (pristine) regions across different images into the same category. To resolve this dilemma, we propose the FOrensic ContrAstive cLustering (FOCAL) method, a novel, simple yet very effective paradigm based on contrastive learning and unsupervised clustering for image forgery detection. Specifically, FOCAL 1) utilizes pixel-level contrastive learning to supervise high-level forensic feature extraction in an image-by-image manner, explicitly reflecting the above relative definition; 2) employs an on-the-fly unsupervised clustering algorithm (instead of a trained one) to cluster the learned features into forged/pristine categories, further suppressing cross-image influence from the training data; and 3) allows detection performance to be further boosted via simple feature-level concatenation without retraining. Extensive experimental results over six public testing datasets demonstrate that the proposed FOCAL significantly outperforms state-of-the-art competing algorithms by large margins: +24.3% on Coverage, +18.6% on Columbia, +17.5% on FF++, +14.2% on MISD, +13.5% on CASIA, and +10.3% on NIST in terms of IoU. The FOCAL paradigm could bring fresh insights and serve as a new benchmark for the image forgery detection task. The code is available at https://github.com/HighwayWu/FOCAL
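
    As a reading aid for step 2), one way to realize "on-the-fly unsupervised clustering" of per-pixel features is sketched below; the use of k-means with two clusters and the minority-cluster-is-forged rule are assumptions for illustration, not necessarily the clustering algorithm FOCAL actually adopts.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_forgery_mask(features):
            """features: (H, W, C) per-pixel forensic features from the extractor.

            Clusters each image's own features into two groups on the fly (no
            trained classifier), then labels the smaller cluster as forged on
            the assumption that forged regions occupy a minority of pixels.
            """
            h, w, c = features.shape
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
                features.reshape(-1, c))
            forged = np.argmin(np.bincount(labels))   # hypothetical minority rule
            return (labels == forged).reshape(h, w).astype(np.uint8)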

    Under-Display Camera Image Restoration with Scattering Effect

    The under-display camera (UDC) provides consumers with a full-screen visual experience without any obstruction from notches or punched holes. However, the semi-transparent nature of the display inevitably introduces severe degradation into UDC images. In this work, we address the UDC image restoration problem with specific consideration of the scattering effect caused by the display. We explicitly model the scattering effect by treating the display as a piece of homogeneous scattering medium. With this physical model of the scattering effect, we improve the image formation pipeline for image synthesis to construct a realistic UDC dataset with ground truths. To suppress the scattering effect for the eventual UDC image recovery, a two-branch restoration network is designed. More specifically, the scattering branch leverages the global modeling capability of channel-wise self-attention to estimate the parameters of the scattering effect from degraded images, while the image branch exploits the local representation advantage of CNNs to recover clear scenes, implicitly guided by the scattering branch. Extensive experiments are conducted on both real-world and synthesized data, demonstrating the superiority of the proposed method over state-of-the-art UDC restoration techniques. The source code and dataset are available at https://github.com/NamecantbeNULL/SRUDC.
    Comment: Accepted to ICCV202
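
    The abstract models the display as a homogeneous scattering medium; a natural (but assumed) way to express such degradation for data synthesis is an atmospheric-scattering-style mix of the clean scene and scattered ambient light, sketched below. The paper's actual synthesis pipeline is more involved and is not reproduced here.

        import numpy as np

        def scatter_degrade(clean, transmission=0.8, ambient=0.3):
            """Hypothetical scattering-style degradation for a clean image in [0, 1]:
            degraded = clean * t + A * (1 - t), with medium transmission t and
            scattered ambient light A (both values are illustrative)."""
            degraded = clean * transmission + ambient * (1.0 - transmission)
            return np.clip(degraded, 0.0, 1.0)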

    CNN Injected Transformer for Image Exposure Correction

    Capturing images with incorrect exposure settings fails to deliver a satisfactory visual experience; only when the exposure is properly set can the color and details of an image be appropriately preserved. Previous convolution-based exposure correction methods often produce exposure deviations in images as a consequence of the restricted receptive field of convolutional kernels, since convolutions cannot accurately capture long-range dependencies in images. To overcome this challenge, the Transformer can be applied to the exposure correction problem, leveraging its capability of modeling long-range dependencies to capture global representations. However, relying solely on a window-based Transformer leads to visually disturbing blocking artifacts due to the application of self-attention within small patches. In this paper, we propose a CNN Injected Transformer (CIT) to harness the individual strengths of CNNs and Transformers simultaneously. Specifically, we construct the CIT using a window-based Transformer to exploit long-range interactions among different regions of the entire image. Within each CIT block, we incorporate a channel attention block (CAB) and a half-instance normalization block (HINB) to assist the window-based self-attention in acquiring global statistics and refining local features. In addition to the hybrid architecture design for exposure correction, we apply a set of carefully formulated loss functions to improve spatial coherence and rectify potential color deviations. Extensive experiments demonstrate that our image exposure correction method outperforms state-of-the-art approaches in terms of both quantitative and qualitative metrics.
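
    To make the CAB/HINB terminology concrete, the two auxiliary modules mentioned above are commonly implemented as below (PyTorch); this is a generic sketch of channel attention and half-instance normalization, not the paper's exact CIT block.

        import torch
        import torch.nn as nn

        class HalfInstanceNorm(nn.Module):
            """Apply InstanceNorm to half of the channels and keep the rest
            untouched (HINet-style); assumed to be what HINB refers to."""
            def __init__(self, channels):
                super().__init__()
                self.half = channels // 2
                self.norm = nn.InstanceNorm2d(self.half, affine=True)

            def forward(self, x):
                a, b = torch.split(x, [self.half, x.shape[1] - self.half], dim=1)
                return torch.cat([self.norm(a), b], dim=1)

        class ChannelAttentionBlock(nn.Module):
            """Squeeze-and-excitation style channel attention (CAB): reweight
            channels using global average statistics."""
            def __init__(self, channels, reduction=4):
                super().__init__()
                self.gate = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1),
                    nn.Conv2d(channels, channels // reduction, 1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels // reduction, channels, 1),
                    nn.Sigmoid(),
                )

            def forward(self, x):
                return x * self.gate(x)   # scale each channel by its learned weight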

    Gaussian entanglement witness and refined Werner-Wolf criterion for continuous variables

    We use matched quantum entanglement witnesses to study separability criteria for continuous-variable states. The witness can be written as an identity operator minus a Gaussian operator. The optimization of the witness is then transformed into an eigenvalue problem of a Gaussian kernel integral equation. A separability criterion follows, not only for symmetric Gaussian quantum states but also for non-Gaussian states prepared by adding photons to and/or subtracting photons from symmetric Gaussian states. Based on numerical calculations in Fock space, we obtain an entanglement witness for more general two-mode states. A necessary criterion of separability follows for two-mode states, and it is shown to be necessary and sufficient for a two-mode squeezed thermal state and the related two-mode non-Gaussian states. We also connect the witness-based criterion with the Werner-Wolf criterion and refine the Werner-Wolf criterion.
    Comment: 11 pages, 2 figures
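
    Schematically, the witness structure described above can be written as follows (standard entanglement-witness conventions; the explicit Gaussian operator and its optimization are given in the paper and not reproduced here):

        W = \mathbb{1} - \hat{G}, \qquad
        \operatorname{Tr}(W\rho) \ge 0 \ \ \text{for every separable } \rho, \qquad
        \operatorname{Tr}(W\rho) < 0 \ \Rightarrow\ \rho \ \text{is entangled},

    where optimizing over the Gaussian operator reduces to an eigenvalue problem of a Gaussian kernel integral equation, as stated in the abstract.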

    Distributed Spectrum and Power Allocation for D2D-U Networks: A Scheme based on NN and Federated Learning

    In this paper, a Device-to-Device communication on unlicensed bands (D2D-U) enabled network is studied. To improve spectrum efficiency (SE) on the unlicensed bands and fit their distributed structure, while ensuring fairness among D2D-U links and harmonious coexistence with WiFi networks, a distributed joint power and spectrum allocation scheme is proposed. In particular, a parameter named price is defined, which is updated at each D2D-U pair by an online-trained neural network (NN) according to the channel state and traffic load. In addition, the parameters of the NN are updated in two ways, unsupervised self-iteration and federated learning, to guarantee fairness and harmonious coexistence. Then, a non-convex optimization problem with respect to spectrum and power is formulated and solved on each D2D-U link to maximize its own data rate. Numerical simulation results verify the effectiveness of the proposed scheme.
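
    The federated-learning component mentioned above can be illustrated with a minimal FedAvg-style aggregation of the per-link pricing-NN parameters; the actual update rule, weighting, and synchronization schedule used in the paper may differ.

        import numpy as np

        def federated_average(local_weights):
            """local_weights: one entry per D2D-U link, each a list of np.ndarray
            layer parameters from that link's locally trained pricing NN.
            Returns the element-wise average, broadcast back to all links."""
            num_links = len(local_weights)
            return [sum(layer_group) / num_links for layer_group in zip(*local_weights)]

    Each link would then continue its unsupervised self-iteration from the averaged parameters.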

    LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching

    Recent advancements in text-to-3D generation mark a significant milestone in generative models, unlocking new possibilities for creating imaginative 3D assets across various real-world scenarios. While recent text-to-3D methods have shown promise, they often fall short of rendering detailed and high-quality 3D models. This problem is especially prevalent because many methods are based on Score Distillation Sampling (SDS). This paper identifies a notable deficiency of SDS: it produces inconsistent and low-quality updating directions for the 3D model, causing an over-smoothing effect. To address this, we propose a novel approach called Interval Score Matching (ISM). ISM employs deterministic diffusing trajectories and utilizes interval-based score matching to counteract over-smoothing. Furthermore, we incorporate 3D Gaussian Splatting into our text-to-3D generation pipeline. Extensive experiments show that our model largely outperforms the state-of-the-art in quality and training efficiency.
    Comment: The first two authors contributed equally to this work. Our code will be available at: https://github.com/EnVision-Research/LucidDreame
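
    For context, the Score Distillation Sampling objective that ISM refines has the well-known gradient below; according to the abstract, ISM replaces this noisy one-step residual with an interval-based score comparison along deterministic diffusion trajectories (its exact formulation is given in the paper and is not reproduced here):

        \nabla_{\theta}\,\mathcal{L}_{\mathrm{SDS}}(\theta)
          = \mathbb{E}_{t,\epsilon}\!\left[\,\omega(t)\,
            \bigl(\epsilon_{\phi}(x_t;\,y,\,t)-\epsilon\bigr)\,
            \tfrac{\partial x}{\partial \theta}\right].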

    Genetic code expansion in Pseudomonas putida KT2440

    Pseudomonas putida KT2440 is an emerging microbial chassis for bio-based chemical production from renewable feedstocks and for environmental bioremediation. However, tools for studying, engineering, and modulating protein complexes and biosynthetic enzymes in this organism are largely underdeveloped. Genetic code expansion for the incorporation of unnatural amino acids (unAAs) into proteins can advance such efforts and, furthermore, enable additional control over the strain's biological processes. In this work, we established the orthogonality of two widely used archaeal tRNA synthetase and tRNA pairs in KT2440. Following optimization of the decoding systems, four unAAs were incorporated into proteins in response to a UAG stop codon at 34.6-78% efficiency. In addition, we demonstrated the utility of genetic code expansion by incorporating a photocrosslinking amino acid, p-benzoyl-L-phenylalanine (pBpa), into glutathione S-transferase (GstA) and a chemosensory response regulator (CheY) for protein-protein interaction studies in KT2440. This work reports the first successful genetic code expansion in KT2440. Given the diverse structures and functions of the unAAs that have been incorporated into proteins using archaeal systems, our work lays a solid foundation for future studies to probe and enhance the biological functions of KT2440.