
    NegDL: Privacy-Preserving Deep Learning Based on Negative Database

    In the era of big data, deep learning has become an increasingly popular topic. It has achieved outstanding results in fields such as image recognition, object detection, and natural language processing. The first priority of deep learning is exploiting valuable information from a large amount of data, which inevitably raises privacy issues that deserve attention. Several privacy-preserving deep learning methods have been proposed, but most of them suffer from a non-negligible degradation of either efficiency or accuracy. A negative database (NDB) is a new type of data representation that protects data privacy by storing and using the complementary form of the original data. In this paper, we propose a privacy-preserving deep learning method named NegDL based on NDB. Specifically, private data are first converted to NDB as the input of deep learning models by a generation algorithm called the QK-hidden algorithm, and then sketches of the NDB are extracted for training and inference. We demonstrate that the computational complexity of NegDL is the same as that of the original deep learning model without privacy protection. Experimental results on the Breast Cancer, MNIST, and CIFAR-10 benchmark datasets show that the accuracy of NegDL is comparable to that of the original deep learning model in most cases, and that it outperforms a method based on differential privacy.
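    The abstract does not spell out the QK-hidden generation algorithm, so the snippet below is only a minimal Python sketch of the classic prefix-style negative-database construction, included to illustrate the general idea an NDB builds on: for a binary record, the database stores entries that together match every string except the original, using '*' as a wildcard. The function name and representation are illustrative assumptions, not the paper's implementation.

```python
def prefix_ndb(record: str) -> list[str]:
    """Illustrative prefix-style negative database for a binary string.

    For each position i, emit an entry whose first i bits match the record,
    whose bit at position i is flipped, and whose remaining positions are
    wildcards ('*').  Together these entries cover every binary string
    except `record` itself, so only the complement of the private value
    is stored.  (Sketch only; not the QK-hidden algorithm of the paper.)
    """
    entries = []
    for i, bit in enumerate(record):
        flipped = '1' if bit == '0' else '0'
        entries.append(record[:i] + flipped + '*' * (len(record) - i - 1))
    return entries


# Example: the private record never appears in its own negative database.
print(prefix_ndb("1011"))
# ['0***', '11**', '100*', '1010']
```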

    Iris Template Protection Based on Local Ranking

    Biometrics have been widely studied in recent years, and they are increasingly employed in real-world applications. Meanwhile, a number of potential threats to the privacy of biometric data have arisen. Iris template protection demands that the privacy of iris data be protected while performing iris recognition. According to the international standard ISO/IEC 24745, iris template protection should satisfy irreversibility, revocability, and unlinkability. However, existing work on iris template protection shows that it is difficult to satisfy these three privacy requirements simultaneously while supporting effective iris recognition. In this paper, we propose an iris template protection method based on local ranking. Specifically, the iris data are first XORed (exclusive-OR operation) with an application-specific string; next, we divide the result into blocks and then partition the blocks into groups. The blocks in each group are ranked according to their decimal values, and the original blocks are replaced by their rank values for storage. We also extend the basic method to support the shifting strategy and the masking strategy, two important strategies for iris recognition. We demonstrate that the proposed method satisfies irreversibility, revocability, and unlinkability. Experimental results on typical iris datasets (i.e., CASIA-IrisV3-Interval, CASIA-IrisV4-Lamp, UBIRIS-V1-S1, and MMU-V1) show that the proposed method maintains recognition performance while protecting the privacy of iris data.
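    To make the XOR/block/group/ranking steps concrete, here is a minimal Python sketch of the basic transformation described above. The block size, group size, and function names are illustrative assumptions, and the shifting and masking extensions mentioned in the abstract are omitted.

```python
import numpy as np


def local_ranking_template(iris_bits: np.ndarray,
                           app_string: np.ndarray,
                           block_size: int = 8,
                           group_size: int = 4) -> np.ndarray:
    """Toy version of the local-ranking protection idea.

    1. XOR the binary iris code with an application-specific string.
    2. Split the result into fixed-size blocks and gather blocks into groups.
    3. Inside each group, replace every block by the rank of its decimal
       value, so only relative order (not the bits themselves) is stored.
    """
    xored = np.bitwise_xor(iris_bits, app_string)
    n_blocks = xored.size // block_size
    blocks = xored[:n_blocks * block_size].reshape(n_blocks, block_size)

    # Decimal value of each block (most significant bit first).
    weights = 2 ** np.arange(block_size - 1, -1, -1)
    values = blocks @ weights

    protected = np.empty(n_blocks, dtype=int)
    for start in range(0, n_blocks, group_size):
        group = values[start:start + group_size]
        # argsort of argsort yields each block's rank within its group.
        protected[start:start + len(group)] = np.argsort(np.argsort(group))
    return protected


# Example with a random 64-bit "iris code" and application-specific string.
rng = np.random.default_rng(0)
code = rng.integers(0, 2, 64)
app = rng.integers(0, 2, 64)
print(local_ranking_template(code, app))
```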

    Genomic signatures and prognosis of advanced stage Chinese pediatric T cell lymphoblastic lymphoma by whole exome sequencing

    Objective: To investigate the genomic signatures and prognosis of advanced-stage T cell lymphoblastic lymphoma (T-LBL) and to examine the relationship between T-LBL and T cell acute lymphoblastic leukemia (T-ALL). Methods: 35 Chinese children with stage III or IV T-LBL were recruited for this study. They were treated with combination chemotherapy, and whole exome sequencing was performed. The relationships among clinical features, prognosis, and specific gene mutations were analyzed. Gene chips of T-LBL and T-ALL were downloaded from a database, and differential gene expression was analyzed. Results: Germline causal gene mutations (CARS or MAP2K2) were detected in 2 patients; a mean of 3.06 ± 2.21 somatic causal gene mutations per patient was identified in the 35 patients, and somatic mutations were observed in the NOTCH1, FBXW7, PHF6, and JAK3 genes. NOTCH1 mutations were significantly associated with FBXW7 mutations, and the age at diagnosis of patients with NOTCH1-FBXW7 mutations was lower than that of patients without such mutations (P < 0.05). 32 patients achieved complete remission (CR), and 14 and 18 patients were classified into the intermediate-risk (IR) and high-risk (HR) groups, respectively. During a median follow-up of 44 months, 3 patients relapsed. Three-year prospective event-free survival (pEFS) was 82.286%, and no significant differences in pEFS were found for sex, age, or NOTCH1-FBXW7 mutation status (P > 0.05); however, the mean survival time of the IR group was longer than that of the HR group (P < 0.05). Differential expression of genes in the T-LBL and/or T-ALL datasets was analyzed using the R package limma, and one-third of the differentially expressed genes were found in both the T-ALL and T-LBL datasets. High expression of PI3K-Akt signaling pathway genes and the USP34 gene was found in the T-LBL dataset. Conclusion: Although T-ALL and T-LBL both originate from precursor T cells, are considered different manifestations of the same disease, and the outcome of T-LBL is favorable when using T-ALL-based chemotherapy, there are differences in the gene distribution between T-LBL and T-ALL. The PI3K-Akt signaling pathway and the USP34 gene appear to play important roles in T-LBL, but medicines targeting the USP34 gene or the PI3K-Akt pathway may be ineffective.

    RLIPv2: Fast Scaling of Relational Language-Image Pre-training

    Relational Language-Image Pre-training (RLIP) aims to align vision representations with relational texts, thereby advancing the capability of relational reasoning in computer vision tasks. However, hindered by the slow convergence of the RLIPv1 architecture and the limited availability of existing scene graph data, scaling RLIPv1 is challenging. In this paper, we propose RLIPv2, a fast-converging model that enables the scaling of relational pre-training to large-scale pseudo-labelled scene graph data. To enable fast scaling, RLIPv2 introduces Asymmetric Language-Image Fusion (ALIF), a mechanism that facilitates earlier and deeper gated cross-modal fusion with sparsified language encoding layers. ALIF leads to comparable or better performance than RLIPv1 in a fraction of the pre-training and fine-tuning time. To obtain scene graph data at scale, we extend object detection datasets with free-form relation labels by introducing a captioner (e.g., BLIP) and a designed Relation Tagger. The Relation Tagger assigns BLIP-generated relation texts to region pairs, thus enabling larger-scale relational pre-training. Through extensive experiments on Human-Object Interaction Detection and Scene Graph Generation, RLIPv2 shows state-of-the-art performance on three benchmarks under fully fine-tuned, few-shot, and zero-shot settings. Notably, the largest RLIPv2 achieves 23.29 mAP on HICO-DET without any fine-tuning, yields 32.22 mAP with just 1% of the data, and yields 45.09 mAP with 100% of the data. Code and models are publicly available at https://github.com/JacobYuan7/RLIPv2. Comment: Accepted to ICCV 2023.
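    To make the pseudo-labelling pipeline easier to picture, here is a heavily simplified, hypothetical Python sketch of the captioner-plus-relation-tagger idea: caption an image, extract subject/relation/object phrases, and attach them to pairs of detected regions. The helpers `caption_image`, `extract_triplets`, and `match_phrase_to_box` are placeholders, not RLIPv2's actual components; see the linked repository for the real implementation.

```python
from typing import Callable

# Hypothetical interfaces standing in for the real components:
#   caption_image(image)                    -> free-form caption (e.g., from BLIP)
#   extract_triplets(caption)               -> list of (subject, relation, object) phrases
#   match_phrase_to_box(phrase, detections) -> best-matching detection dict or None


def pseudo_label_relations(image,
                           detections: list[dict],
                           caption_image: Callable,
                           extract_triplets: Callable,
                           match_phrase_to_box: Callable) -> list[dict]:
    """Assign caption-derived relation texts to pairs of detected regions.

    A coarse illustration of how an object-detection dataset can be extended
    with free-form relation labels: generate a caption, pull out relational
    phrases, ground the subject and object of each phrase to detected boxes,
    and keep only the pairs where both sides are matched.
    """
    caption = caption_image(image)
    pseudo_labels = []
    for subj, rel, obj in extract_triplets(caption):
        subj_box = match_phrase_to_box(subj, detections)
        obj_box = match_phrase_to_box(obj, detections)
        if subj_box is not None and obj_box is not None and subj_box is not obj_box:
            pseudo_labels.append({
                "subject_box": subj_box["bbox"],
                "object_box": obj_box["bbox"],
                "relation_text": rel,
            })
    return pseudo_labels
```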