Watermarking security part II: practice
This second part focuses on the estimation of secret parameters of some practical watermarking techniques. The first part revealed theoretical bounds on the information about secret keys that leaks from observations. However, as usual in information theory, nothing was said about the practical algorithms pirates use in real-life applications. Whereas Part One deals with the number of observations necessary to disclose secret keys (see the definitions of security levels), this part focuses on the complexity, or computing power, of practical estimators. Again, we are inspired by the work of Shannon, who in his famous article [15] already drew a clear distinction between the unicity distance and the workload of the opponent's algorithm. Our experimental work also illustrates how Blind Source Separation (especially Independent Component Analysis) algorithms help the opponent exploit this information leakage to disclose the secret carriers in the spread spectrum case. Simulations assess the security levels theoretically derived in Part One.
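As a toy illustration of the carrier-disclosure attack the abstract describes (a minimal sketch, not the authors' code: the signal length, carrier count, observation count, and noise level below are all invented, and scikit-learn's FastICA stands in for whatever ICA variant the paper uses): each watermarked observation is modeled as weak host noise plus binary symbols spread on secret carriers, and because the symbols are non-Gaussian, ICA can recover the carriers up to sign and permutation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, n_carriers, n_obs = 64, 2, 2000   # signal length, carriers, observations

# Secret carriers: two orthonormal directions in R^n (the watermarking key)
U, _ = np.linalg.qr(rng.normal(size=(n, n_carriers)))

# Each observation: weak host noise plus +/-1 symbols spread on the carriers
B = rng.choice([-1.0, 1.0], size=(n_obs, n_carriers))
X = B @ U.T + 0.1 * rng.normal(size=(n_obs, n))

# The binary symbols are non-Gaussian, so ICA separates the carrier directions
ica = FastICA(n_components=n_carriers, random_state=0)
ica.fit(X)
est = ica.mixing_ / np.linalg.norm(ica.mixing_, axis=0)  # columns ~ carriers

# Each true carrier should match some estimated column up to sign flip
match = np.abs(U.T @ est).max(axis=1)
```

With enough observations relative to the host noise, each entry of `match` approaches 1, i.e. the opponent has estimated the secret carriers without knowing the key.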
Lime: Data Lineage in the Malicious Environment
Intentional or unintentional leakage of confidential data is undoubtedly one
of the most severe security threats that organizations face in the digital era.
The threat now extends to our personal lives: a plethora of personal
information is available to social networks and smartphone providers and is
indirectly transferred to untrustworthy third-party and fourth-party
applications.
In this work, we present a generic data lineage framework LIME for data flow
across multiple entities that take two characteristic, principal roles (i.e.,
owner and consumer). We define the exact security guarantees required by such a
data lineage mechanism toward identification of a guilty entity, and identify
the simplifying non-repudiation and honesty assumptions. We then develop and
analyze a novel accountable data transfer protocol between two entities within
a malicious environment by building upon oblivious transfer, robust
watermarking, and signature primitives. Finally, we perform an experimental
evaluation to demonstrate the practicality of our protocol, and apply our
framework to the important data leakage scenarios of data outsourcing and
social networks. Overall, we consider LIME, our lineage framework for data
transfer, a key step towards achieving accountability by design.
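A minimal sketch of the identification step only (the oblivious-transfer and digital-signature components of the protocol are omitted, HMAC-derived pseudorandom carriers stand in for the paper's robust watermark, and every name and parameter below is invented): the owner gives each consumer a copy watermarked with a consumer-specific pattern, and when a copy leaks, correlating it against each consumer's pattern identifies the guilty entity.

```python
import hashlib
import hmac
import numpy as np

def carrier(consumer_id, n, secret=b"owner-key"):
    # Consumer-specific pseudorandom watermark pattern derived from a secret
    seed = int.from_bytes(hmac.new(secret, consumer_id.encode(),
                                   hashlib.sha256).digest()[:8], "big")
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=n)

n = 4096
document = np.random.default_rng(42).normal(0.0, 1.0, n)  # the shared data

consumers = ["alice", "bob", "carol"]
copies = {c: document + 0.5 * carrier(c, n) for c in consumers}

# A copy leaks; the owner correlates it against every consumer's carrier
leaked = copies["bob"]
scores = {c: float(leaked @ carrier(c, n)) / n for c in consumers}
guilty = max(scores, key=scores.get)
```

Only the true recipient's correlation score is far from zero, so `guilty` points at the leaker; in the full protocol, signatures would additionally make this attribution non-repudiable.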
Evading Classifiers by Morphing in the Dark
Learning-based systems have been shown to be vulnerable to evasion through
adversarial data manipulation. These attacks have been studied under
assumptions that the adversary has certain knowledge of either the target model
internals, its training dataset or at least classification scores it assigns to
input samples. In this paper, we investigate a much more constrained and
realistic attack scenario wherein the target classifier is minimally exposed to
the adversary, revealing only its final classification decision (e.g., reject or
accept an input sample). Moreover, the adversary can only manipulate malicious
samples using a blackbox morpher. That is, the adversary has to evade the
target classifier by morphing malicious samples "in the dark". We present a
scoring mechanism that can assign a real-valued score reflecting evasion
progress to each sample based on the limited information available. Leveraging
this scoring mechanism, we propose an evasion method -- EvadeHC -- and
evaluate it against two PDF malware detectors, namely PDFRate and Hidost. The
experimental evaluation demonstrates that the proposed evasion attacks are
effective on the evaluation dataset.
Interestingly, EvadeHC outperforms the known classifier evasion technique that
operates based on classification scores output by the classifiers. Although our
evaluations are conducted on PDF malware classifiers, the proposed approaches
are domain-agnostic and of wider applicability to other learning-based
systems.
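A toy sketch of hill-climbing evasion with only binary detector feedback (all of the detector, morpher, and numeric parameters below are invented stand-ins, not the paper's PDF pipeline; the proxy score mimics the abstract's idea of deriving a real-valued signal from accept/reject decisions alone): the score of a candidate is the number of random morph steps needed before the detector flips to accept, and the attacker hill-climbs toward score zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector(x):
    """Blackbox binary detector: True means 'rejected as malicious'."""
    return x.sum() > 5.0

def morpher(x, rng):
    """Blackbox morpher: a small random perturbation of the sample."""
    return x + rng.normal(-0.05, 0.1, size=x.shape)

def score(x, rng, cap=200):
    """Proxy score from binary decisions only: number of random morph
    steps until the detector flips to accept (fewer = closer to evasion)."""
    y = x
    for steps in range(cap):
        if not detector(y):
            return steps
        y = morpher(y, rng)
    return cap

x = np.ones(10)                      # malicious seed: sum 10, rejected
best = score(x, rng)
for _ in range(500):
    cand = morpher(x, rng)
    s = score(cand, rng)
    if s <= best:                    # hill-climb on the proxy score
        x, best = cand, s
    if best == 0:                    # score 0 means x itself is accepted
        break
```

Even though the detector never reveals a confidence value, the morph-count score gives the search a gradient to follow, which is the key insight behind attacking a minimally exposed classifier.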
Joint Detection-Estimation Games for Sensitivity Analysis Attacks
Sensitivity analysis attacks aim at estimating a watermark from multiple observations of the detector's output. Subsequently, the attacker removes the estimated watermark from the watermarked signal. In order to measure the vulnerability of a detector against such attacks, we evaluate the fundamental performance limits of the attacker's estimation problem. The inverse of the Fisher information matrix provides a bound on the covariance matrix of the estimation error. A general strategy for the attacker is to select the distribution of auxiliary test signals that minimizes the trace of the inverse Fisher information matrix. The watermark detector must trade off two conflicting requirements: (1) reliability, and (2) security against sensitivity attacks. We explore this tradeoff and design the detection function that maximizes the trace of the attacker's inverse Fisher information matrix while simultaneously guaranteeing a bound on the error probability. Game theory is the natural framework to study this problem, and considerable insights emerge from this analysis.
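A toy scalar instance of the Fisher-information bound (not the paper's detector model; the secret offset, threshold, noise scale, probe grid, and probe count below are all invented): suppose each probe with offset t yields a binary decision D = 1 with probability p = Φ(t + w − τ). The per-probe Fisher information about the watermark parameter w is I(t) = φ(z)² / (p(1 − p)) with z = t + w − τ, and the Cramér-Rao lower bound on the attacker's estimator variance after M probes is 1/(M · I).

```python
import numpy as np
from scipy.stats import norm

w, tau = 0.3, 0.0                # secret watermark offset, detection threshold
t = np.arange(-3.0, 3.0, 0.01)   # candidate probe offsets
z = t + w - tau
p = norm.cdf(z)                  # P(detector says "watermark present")

# Per-probe Fisher information about w from one Bernoulli observation
I = norm.pdf(z) ** 2 / (p * (1 - p))

t_best = t[np.argmax(I)]         # optimal probe sits on the decision boundary
crlb = 1.0 / (1000 * I.max())    # variance bound for M = 1000 probes there
```

The information peaks at z = 0 (probe exactly on the boundary), where I = φ(0)²/(1/4) = 2/π, illustrating why the attacker concentrates auxiliary test signals near the detection boundary and why the detector designer, conversely, wants to flatten that peak.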
Watermarking security: theory and practice
This article proposes a theory of watermarking security based on a cryptanalysis point of view. The main idea is that information about the secret key leaks from the observations, for instance watermarked pieces of content, available to the opponent. Tools from information theory (Shannon's mutual information and Fisher's information matrix) can measure this leakage of information. The security level is then defined as the number of observations the attacker needs to successfully estimate the secret key. This theory is applied to two common watermarking methods: the substitutive scheme and the spread spectrum based techniques. Their security levels are calculated against three kinds of attack. The experimental work illustrates how Blind Source Separation (especially Independent Component Analysis) algorithms help the opponent exploit this information leakage to disclose the secret carriers in the spread spectrum case. Simulations assess the security levels derived in the theoretical part of the article.
Secure Detection of Image Manipulation by means of Random Feature Selection
We address the problem of data-driven image manipulation detection in the
presence of an attacker with limited knowledge about the detector.
Specifically, we assume that the attacker knows the architecture of the
detector, the training data and the class of features V the detector can rely
on. In order to gain an advantage in the arms race with the attacker, the
analyst designs the detector by relying on a subset of features chosen at
random in V. Given its ignorance about the exact feature set, the adversary
attacks a version of the detector based on the entire feature set. In this way,
the effectiveness of the attack diminishes since there is no guarantee that
attacking a detector working in the full feature space will result in a
successful attack against the reduced-feature detector. We theoretically prove
that, thanks to random feature selection, the security of the detector
increases significantly at the expense of a negligible loss of performance in
the absence of attacks. We also provide an experimental validation of the
proposed procedure by focusing on the detection of two specific kinds of image
manipulations, namely adaptive histogram equalization and median filtering. The
experiments confirm the gain in security at the expense of a negligible loss of
performance in the absence of attacks.
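A toy numerical sketch of why random feature selection helps (all numbers invented; real detectors and features are far richer): suppose a manipulation leaves a unit trace on each of d = 20 features, every detector flags a sample when the mean response over its own features reaches 0.5, and the attacker, knowing the full class V but not the secret subset, cleans just enough features to evade the full-feature detector.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, tau = 20, 8, 0.5           # full feature set, secret subset size, threshold

manipulated = np.ones(d)         # the manipulation marks every feature
benign = np.zeros(d)

def detect(x, S):                # flag as manipulated if mean response >= tau
    return x[S].mean() >= tau

full = np.arange(d)

# Attacker knows V but not the secret subset: clean just enough features
# to push the full-feature statistic below the threshold (9/20 = 0.45)
attacked = manipulated.copy()
attacked[:11] = 0.0

# Detection rate of a random-subset detector against the same attacked sample
trials = 2000
hits = sum(detect(attacked, rng.choice(d, size=k, replace=False))
           for _ in range(trials))
rate = hits / trials
```

The attack that evades the full-feature detector still gets caught by the randomly reduced detector a substantial fraction of the time, while benign samples are never flagged, mirroring the abstract's claim of a security gain with negligible cost absent attacks.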