87 research outputs found
Optimization techniques and new methods for broadcast encryption and traitor tracing schemes
Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references.
In the last few decades, the use of digital content has increased dramatically. Many forms of digital products, such as CDs, DVDs, TV broadcasts, and data over the Internet, have entered our lives. Classical cryptography, where encryption is done for only one recipient, cannot handle this change, since its direct use leads to intolerably expensive transmissions. Moreover, new commercial concerns arose: since digital commercial content is sold to many customers, unauthorized copying by malicious actors became a major concern that must be carefully prevented. A new research area called digital rights management (DRM) has therefore emerged. Within the scope of DRM, new cryptographic primitives have been proposed. In this thesis, we consider three of these, broadcast encryption (BE), traitor tracing (TT), and trace and revoke (T&R) schemes, and propose methods to improve their performance and capabilities. In particular, we first consider profiling the recipient set in order to reduce the transmission size of the most popular BE schemes. We then investigate and solve the optimal free-rider assignment problem for one of the most efficient BE schemes to date. Next, we attempt to close the non-trivial gap between BE and T&R schemes by proposing a generic method for adding traitor-tracing capability to a BE scheme, thus obtaining a T&R scheme. Finally, we investigate an overlooked problem: the privacy of the recipient set in T&R schemes. At present, most schemes do not keep the recipient set anonymous, and anyone can see who received a particular piece of content. As a generic solution to this problem, we propose a method for obtaining an anonymous T&R scheme by using an anonymous BE scheme as a primitive.
Ak, Murat. Ph.D.
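The transmission-size savings that subset-based BE schemes target can be illustrated with the complete-subtree method, one of the popular subset-cover schemes: the broadcaster encrypts the content key once per maximal subtree of users containing no revoked leaf. The interval-based encoding below is an illustrative simplification of the idea, not the thesis's construction.

```python
# Complete-subtree cover: users are leaves of a full binary tree, and
# each tree node holds a key shared by all leaves below it. To broadcast,
# the center encrypts the content key once per maximal subtree that
# contains no revoked user; each authorized user decrypts with the key
# of the one subtree it falls in.

def subtree_cover(lo, hi, revoked):
    """Return maximal revocation-free leaf intervals [lo, hi) that
    together cover every non-revoked user in [lo, hi)."""
    if not any(lo <= r < hi for r in revoked):
        return [(lo, hi)]          # whole subtree is clean: one ciphertext
    if hi - lo == 1:
        return []                  # a single revoked leaf: nothing to send
    mid = (lo + hi) // 2
    return subtree_cover(lo, mid, revoked) + subtree_cover(mid, hi, revoked)

# 8 users with user 5 revoked: three ciphertexts instead of seven
# individual encryptions for the seven authorized users.
print(subtree_cover(0, 8, {5}))    # [(0, 4), (4, 5), (6, 8)]
```

The cover size grows roughly as r·log(n/r) for r revocations among n users, which is the kind of transmission overhead the thesis's profiling and free-rider techniques aim to shrink further.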
A Joint Coding and Embedding Framework for Multimedia Fingerprinting
Advances in technology have made multimedia content widely available and easy to process. The same capabilities, however, let unauthorized users duplicate and manipulate multimedia content and redistribute it to a large audience. Unauthorized distribution of information has posed serious threats to government and commercial operations. Digital fingerprinting is an emerging technology that protects multimedia content from such illicit redistribution by uniquely marking every copy of the content distributed to each user. One of the most powerful attacks available to adversaries is the collusion attack, in which several differently fingerprinted copies of the same content are combined to attenuate or even remove the fingerprints. An ideal fingerprinting system should resist such collusion attacks while keeping embedding and detection complexity low and requiring little transmission bandwidth.
To meet the aforementioned requirements, this thesis presents a joint coding and embedding framework that employs a code layer for efficient fingerprint construction and leverages the embedding layer to achieve high collusion resistance. Based on this framework, we propose two new joint-coding-embedding techniques, namely, permuted subsegment embedding and group-based joint-coding-embedding fingerprinting. We show that the proposed fingerprinting framework provides an excellent balance between collusion resistance, efficient construction, and efficient detection. The proposed joint coding and embedding techniques allow us to model both coded and non-coded fingerprinting under the same theoretical model, which can be used to provide guidelines for choosing parameters.
Based on the proposed joint coding and embedding techniques, we then consider real-world applications, such as DVD movie mass distribution and cable TV, and develop practical algorithms to fingerprint video in challenging practical settings that must accommodate more than ten million users and resist collusion among hundreds of users. Our studies show a high potential of joint coding and embedding to meet the needs of real-world large-scale fingerprinting applications. The popularity of subscription-based content services, such as cable TV, inspires us to study content protection in scenarios where users have access to multiple contents and the colluders may therefore pirate multiple movie signals. To address this issue, we exploit the temporal dimension and propose a dynamic fingerprinting scheme that adjusts the fingerprint design based on the detection results for previously pirated signals. We demonstrate the advantages of the proposed dynamic fingerprinting over conventional static fingerprinting. Other issues related to multimedia fingerprinting, such as fingerprinting via QIM embedding, are also discussed in this thesis.
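The averaging collusion attack that such systems must survive is easy to reproduce on a toy spread-spectrum embedder. The signal length, embedding strength, and non-blind correlation detector below are illustrative choices, not the thesis's parameters.

```python
import random

random.seed(1)
N = 2000                                               # signal samples
host = [random.gauss(0, 1) for _ in range(N)]          # original content
# Each user's copy carries a distinct antipodal fingerprint, strength 0.1.
fps = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(4)]
copies = [[h + 0.1 * f for h, f in zip(host, fp)] for fp in fps]

# Two colluders average their copies, halving both embedded fingerprints.
colluded = [(a + b) / 2 for a, b in zip(copies[0], copies[1])]

def detect(signal, fp):
    """Non-blind correlation detector: subtract the host, correlate
    the residual with a suspect's fingerprint."""
    return sum((s - h) * f for s, h, f in zip(signal, host, fp)) / N

print(round(detect(colluded, fps[0]), 3))  # ~0.05: colluder still visible
print(round(detect(colluded, fps[3]), 3))  # ~0.0: innocent user
```

With k colluders the correlation statistic shrinks as 1/k, which is exactly why resistance degrades with coalition size and why coded constructions are needed to scale to hundreds of colluders.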
Secure fingerprinting on sound foundations
The rapid development and advancement of digital technologies open a variety of opportunities to consumers and content providers for using and trading digital goods. In this context, the Internet in particular has gained major ground as a worldwide platform for exchanging and distributing digital goods. Besides all its possibilities and advantages, digital technology can be misused to breach copyright regulations: unauthorized use and illegal distribution of intellectual property cause authors and content providers considerable losses. Protection of intellectual property has therefore become one of the major challenges of our information society. Fingerprinting is a key technology in the copyright protection of intellectual property. Its goal is to deter people from copyright violation by making it possible to provably identify the source of illegally copied and redistributed content. As one of its focuses, this thesis considers the design and construction of various fingerprinting schemes and presents the first explicit, secure, and reasonably efficient construction of a fingerprinting scheme that fulfills advanced security requirements such as collusion tolerance, asymmetry, anonymity, and direct non-repudiation. Crucial for the security of such schemes is a careful study of the underlying cryptographic assumptions. For the fingerprinting scheme presented here, these are mainly assumptions related to discrete logarithms. The study and analysis of these assumptions is a further focus of this thesis. Based on the first thorough classification of assumptions related to discrete logarithms, this thesis gives novel insights into the relations between these assumptions. In particular, depending on the underlying probability space, we present new results on the reducibility between some of these assumptions as well as on their reduction efficiency.
Advances in digital technologies offer consumers, authors, and providers great potential for innovative business models for trading in and using digital goods. The Internet in particular provides an attractive means of exchanging and distributing digital goods. Alongside its many advantages, however, digital technology can also be misused, for example to violate copyright through the illegal use and distribution of content, which can cause the parties involved considerable damage. The protection of intellectual property has therefore become one of the particular challenges of our digital age. Fingerprinting is a key technology for copyright protection. Its aim is to deter the illegal copying and distribution of digital works by making it possible to identify a cheater and prove his misconduct. As one of its results, this dissertation provides the first explicit, secure, and efficient construction that supports particularly advanced security properties such as collusion tolerance, asymmetry, anonymity, and direct non-repudiation. A precise analysis of the underlying cryptographic assumptions is crucial for the security of cryptographic systems. The fingerprinting systems constructed in this dissertation rest mainly on cryptographic assumptions based on discrete logarithms. The study of these assumptions is a further focus of this dissertation. Based on a classification of these assumptions, undertaken here for the first time in the literature, new and far-reaching insights into their interrelations are obtained. In particular, depending on the underlying probability space, new results on the reducibility of these assumptions and on their reduction efficiency are achieved.
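The discrete-logarithm assumptions classified here all concern the hardness of inverting modular exponentiation. A toy baby-step giant-step solver makes this concrete: even the best generic algorithms need roughly √p group operations, which is why the assumption is only plausible for very large primes. The 7-digit prime and exponent below are illustrative choices.

```python
from math import isqrt

def bsgs(g, y, p):
    """Solve g**x ≡ y (mod p) by baby-step giant-step, using
    O(sqrt(p)) time and memory instead of O(p) brute force."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = y
    for i in range(m):                           # giant steps: y * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None

p, g, x = 1000003, 2, 918273
y = pow(g, x, p)                 # exponentiation: cheap, one-way in practice
found = bsgs(g, y, p)
assert pow(g, found, p) == y     # an exponent matching y is recovered
```

For a 2048-bit prime the √p factor is astronomically large, so the forward direction stays easy while inversion is infeasible; the reductions studied in the thesis relate variants of exactly this hardness assumption.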
Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction
The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties, and their definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes.
We propose an interaction mechanism between the semantic and the region partitions which makes it possible to cope with multiple simultaneous objects. Experimental results show that the proposed method extracts semantic video objects with high spatial accuracy and temporal coherence.
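A minimal version of noise-robust change detection thresholds the inter-frame difference at a multiple of the sensor-noise standard deviation, so that small fluctuations are not mistaken for object motion. The 3-sigma rule and the synthetic frames below are illustrative, not the paper's exact detector.

```python
def change_mask(prev, curr, sigma, k=3.0):
    """Flag pixels whose inter-frame difference exceeds k standard
    deviations of the sensor noise; smaller differences are treated
    as noise or local illumination flicker."""
    return [[1 if abs(c - p) > k * sigma else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Synthetic 6x6 frames: a bright 2x2 object appears in the second frame,
# on top of mild per-pixel fluctuation that stays below the threshold.
prev = [[100] * 6 for _ in range(6)]
curr = [[100 + (2 if (r + c) % 2 else -2) for c in range(6)] for r in range(6)]
for r in (2, 3):
    for c in (2, 3):
        curr[r][c] = 160

mask = change_mask(prev, curr, sigma=4.0)   # threshold = 12 gray levels
print(sum(map(sum, mask)))                  # 4: only the object is flagged
```

In the full framework this mask would define the semantic partition, to be reconciled with the region partition obtained by feature-space clustering.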
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments, which may be affected by various sources of perturbation (e.g., sensor noise in low-light conditions). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and this challenge has attracted wide interest in the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of noise unseen during training. Concretely, the delineation maps of given images are computed using the CORF push-pull inhibition operator. This operation transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
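The pipeline change itself is small: insert a transform that maps each image to an edge-like delineation representation before the CNN sees it. The sketch below uses plain Sobel gradient magnitude as a stand-in for the CORF push-pull operator, which additionally models push-pull inhibition and is far more noise-robust; the point here is the pipeline wiring, not the operator.

```python
import math

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def delineation_map(img):
    """Gradient-magnitude edge map: a simple stand-in for the CORF
    push-pull delineation operator applied before classification."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = math.hypot(gx, gy)
    return out

# A vertical step edge: strong response on the boundary, zero elsewhere.
img = [[0, 0, 10, 10, 10] for _ in range(5)]
edges = delineation_map(img)
# The classifier (e.g. AlexNet) would then be trained and evaluated on
# `edges` rather than on the raw intensities.
```

Because such a map keeps object contours while discarding absolute intensities, a classifier trained on it is less sensitive to additive perturbations of the raw pixels, which is the effect the paper measures under Gaussian and uniform test-time noise.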