
    DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks

    This paper proposes DeepMarks, a novel end-to-end framework for systematic fingerprinting in the context of Deep Learning (DL). Remarkable progress has been made in deep learning, and sharing trained DL models has become ubiquitous in fields ranging from biomedical diagnosis to stock prediction. As the availability and popularity of pre-trained models increase, it is critical to protect the Intellectual Property (IP) of the model owner. DeepMarks introduces the first fingerprinting methodology that enables the model owner to embed unique fingerprints within the parameters (weights) of her model and later identify undesired usage of her distributed models. The proposed framework embeds the fingerprints in the probability density function (pdf) of the trainable weights by leveraging the extra capacity available in contemporary DL models. DeepMarks is robust against fingerprint collusion as well as network transformation attacks, including model compression and model fine-tuning. Extensive proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of deep neural network architectures such as Wide Residual Networks (WRNs) and Convolutional Neural Networks (CNNs), corroborate the effectiveness and robustness of the DeepMarks framework.
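The core idea of embedding a fingerprint in model weights via a regularizer can be illustrated with a toy sketch. The function names, the plain cross-entropy regularizer, and the direct gradient-descent loop below are my simplifications, not the DeepMarks implementation (which embeds anti-collusion codes during training): a secret projection matrix X maps the weight vector to bit probabilities, and the weights are nudged until thresholding recovers the fingerprint.

```python
import numpy as np

def embed_fingerprint(w, X, bits, lam=0.5, steps=200, lr=0.1):
    """Toy sketch: embed fingerprint bits into weights w via a secret
    projection X, by descending a binary cross-entropy regularizer so
    that sigmoid(X @ w) approaches the bit string."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # projected "bit" probabilities
        w -= lr * lam * X.T @ (p - bits)   # gradient of BCE w.r.t. w
    return w

def extract_fingerprint(w, X):
    """Recover the embedded bits by projecting and thresholding."""
    return (X @ w > 0).astype(int)
```

In a real setting the regularizer is added to the task loss so the model keeps its accuracy while the fingerprint is absorbed into the weight distribution.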

    On the role of distance transformations in Baddeley’s Delta Metric

    Comparison and similarity measurement have been a key topic in computer vision for a long time. There is, indeed, an extensive list of algorithms and measures for image or subimage comparison. The superiority or inferiority of different measures is hard to scrutinize, especially considering the dimensionality of their parameter space and their many different configurations. In this work, we focus on the comparison of binary images, and study different variations of Baddeley's Delta Metric, a popular metric for such images. We study the possible parameterizations of the metric, stressing the numerical and behavioural impact of different settings. Specifically, we consider the parameter settings proposed by the original author, as well as the substitution of distance transformations by regularized distance transformations, as recently presented by Brunet and Sills. We take a qualitative perspective on the effects of the settings, and also perform quantitative experiments on separability of datasets for boundary evaluation. The authors gratefully acknowledge the financial support by the Spanish Ministry of Science (project PID2019-108392GB-I00 AEI/FEDER, UE), as well as that by Navarra Servicios y Tecnologías S.A. (NASERTIC).
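Baddeley's Delta Metric compares two binary images through their distance transforms. A minimal sketch, assuming the common parameterization with a cutoff w(t) = min(t, c) and an L^p average (the function name and defaults are mine):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, p=2, c=5.0):
    """Sketch of Baddeley's Delta Metric for binary images a, b.

    d(x, A) is the Euclidean distance from pixel x to the nearest
    foreground pixel of A; the cutoff w(t) = min(t, c) bounds the
    influence of pixels far from both shapes.
    """
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the masks to get distance to the foreground
    da = distance_transform_edt(~a.astype(bool))
    db = distance_transform_edt(~b.astype(bool))
    wa, wb = np.minimum(da, c), np.minimum(db, c)
    return np.mean(np.abs(wa - wb) ** p) ** (1.0 / p)
```

The regularized distance transformations studied in the paper would replace `distance_transform_edt` in this sketch; everything else stays the same.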

    InterFace: Adjustable Angular Margin Inter-class Loss for Deep Face Recognition

    In the field of face recognition, improving the loss function so that the face features extracted by the network have greater discriminative power has always been a hot research topic. Research in recent years has improved the discriminative power of face models by normalizing softmax into the cosine space step by step and then adding a fixed penalty margin, reducing the intra-class distance and increasing the inter-class distance. Although a great deal of previous work has optimized the boundary penalty to improve the discriminative power of the model, adding a fixed margin penalty only between the deep feature and the corresponding weight is not consistent with the patterns of data in real scenarios. To address this issue, in this paper we propose a novel loss function, InterFace, which releases the constraint of adding a margin penalty only between the deep feature and its corresponding weight, and instead pushes class separability by adding corresponding margin penalties between the deep features and all weights. To illustrate the advantages of InterFace over a fixed penalty margin, we provide a geometric explanation and comparisons on a set of mainstream benchmarks. From a wider perspective, InterFace advances state-of-the-art face recognition performance on five out of thirteen mainstream benchmarks. All training code, pre-trained models, and training logs are publicly released \footnote{https://github.com/iamsangmeng/InterFace}. Comment: arXiv admin note: text overlap with arXiv:2109.09416 by other authors
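The fixed-margin baseline that InterFace generalizes (ArcFace-style) can be sketched as follows; the function name and defaults are mine, and this shows only the baseline where the margin is applied to the target class alone, which InterFace extends to margins between features and all class weights:

```python
import numpy as np

def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    """Fixed angular-margin logits (ArcFace-style baseline sketch).

    The margin m is added only to the angle between each feature and
    its *own* class weight; InterFace instead applies margin penalties
    between the features and all class weights.
    """
    # cosine similarity between L2-normalized features and class weights
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # add the fixed penalty margin to the target-class angle only
    theta[np.arange(len(labels)), labels] += m
    return s * np.cos(theta)   # scaled logits fed to softmax
```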

    POOR MAN’S TRACE CACHE: A VARIABLE DELAY SLOT ARCHITECTURE

    We introduce a novel fetch architecture called Poor Man’s Trace Cache (PMTC). PMTC constructs taken-path instruction traces via instruction replication in static code and inserts them after unconditional direct and select conditional direct control transfer instructions. These traces extend to the end of the cache line. Since the available space for trace insertion may vary with the position of the control transfer instruction within the line, we refer to these fetch slots as variable delay slots. This approach ensures traces are fetched along with the control transfer instruction that initiated the trace. Branch, jump and return instruction semantics, as well as the fetch unit, are modified to utilize traces in delay slots. PMTC yields the following benefits: (1) average fetch bandwidth increases, as the front end can fetch across taken control transfer instructions in a single cycle; (2) the dynamic number of instruction cache lines fetched by the processor is reduced, as multiple non-contiguous basic blocks along a given path are encountered in one fetch cycle; (3) replication of a branch instruction along multiple paths provides path separability for branches, which positively impacts branch prediction accuracy. The PMTC mechanism requires minimal modifications to the processor’s fetch unit, and the trace insertion algorithm can easily be implemented within the assembler without compiler support.
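The trace-insertion step can be sketched as a toy simulation. The instruction encoding, and the omission of address retargeting and conditional branches, are my simplifications of the assembler-level algorithm the paper describes: after each unconditional direct jump, the remaining slots of its cache line are filled with replicated taken-path instructions.

```python
def insert_traces(instrs, line_size=4):
    """Toy sketch of PMTC trace insertion.

    instrs: list of instruction strings; "J<k>" denotes an unconditional
    direct jump to index k of the original list. After each jump, the
    remaining slots of its cache line are filled with replicated
    instructions from the taken path (the variable delay slots), so the
    fetch unit receives them in the same cycle as the jump.
    """
    out = []
    for ins in instrs:
        out.append(ins)
        if ins.startswith("J"):
            k = int(ins[1:])
            # fill variable delay slots up to the cache-line boundary
            while len(out) % line_size != 0:
                out.append(instrs[k])
                k += 1
    return out
```

Note the number of replicated instructions depends on where the jump lands within the line, which is exactly why the slots are "variable" delay slots.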

    On the Role of Context-Awareness in Binary Image Comparison

    The quantification of image similarity has been a key topic in the computer vision literature for the past few years. Different mathematical theories have been used in the development of these measures, which we will refer to as comparison measures. An interesting aspect in the study of comparison measures is the natural requirement to replicate human behavior. In almost all cases, it is appropriate for a comparison measure to produce results that are consistent with how humans would perform that assessment. However, despite accepting this premise, most of the proposals in the literature ignore a fundamental characteristic of the way in which humans carry out this evaluation: the context of comparison. In this work, we present a comparison measure for binary images that incorporates the context of comparison; more precisely, we introduce an approach for the generation of ultrametrics for the context-aware comparison of binary images.

    Rethinking Domain Generalization for Face Anti-spoofing: Separability and Alignment

    This work studies the generalization issue of face anti-spoofing (FAS) models under domain gaps, such as image resolution, blurriness and sensor variations. Most prior works regard domain-specific signals as a negative influence, and apply metric learning or adversarial losses to remove them from the feature representation. Though learning a domain-invariant feature space is viable for the training data, we show that the feature shift still exists in an unseen test domain, which backfires on the generalizability of the classifier. In this work, instead of constructing a domain-invariant feature space, we encourage domain separability while aligning the live-to-spoof transition (i.e., the trajectory from live to spoof) to be the same for all domains. We formulate this FAS strategy of separability and alignment (SA-FAS) as a problem of invariant risk minimization (IRM), and learn a domain-variant feature representation but a domain-invariant classifier. We demonstrate the effectiveness of SA-FAS on challenging cross-domain FAS datasets and establish state-of-the-art performance. Comment: Accepted in CVPR202