19 research outputs found

    Collusion-resistant fingerprinting for multimedia in a broadcast channel environment

    Digital fingerprinting is a method by which a copyright owner can uniquely embed a buyer-dependent, inconspicuous serial number (the fingerprint) into every copy of digital data that is legally sold. The buyer of a legal copy is deterred from distributing further copies, because the unique fingerprint can be used to trace the piracy back to its origin. The major challenge in fingerprinting is collusion, an attack in which a coalition of pirates compares several of their uniquely fingerprinted copies in order to detect and remove the fingerprints. The objectives of this work are two-fold. First, we investigate the need for robustness against large coalitions of pirates by introducing the concept of a malicious distributor, a threat that has been overlooked in prior work. A novel fingerprinting code is developed that achieves shorter codewords than existing work under this malicious-distributor scenario. In addition, the ideas behind the proposed fingerprinting design can easily be applied to existing fingerprinting schemes, making them more robust to collusion attacks. Second, a new framework termed Joint Source Fingerprinting, which integrates watermarking and codebook design, is introduced. The need for this new paradigm is motivated by the fact that existing fingerprinting methods leave the multimedia perceptually undistorted after collusion is applied. In contrast, the new paradigm equates collusion amongst a coalition of pirates with degrading the perceptual characteristics, and hence the commercial value, of the multimedia in question. By ensuring that collusion diminishes the commercial value of the content, the pirates are deterred from attacking the fingerprints. A fingerprinting algorithm for video, as well as an efficient means of broadcasting or distributing fingerprinted video, is also presented. Simulation results are provided to verify our theoretical and empirical observations.
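
    To make the collusion problem concrete, the following is a minimal sketch of spread-spectrum fingerprinting, an averaging collusion attack, and correlation-based tracing. It illustrates the general attack model only, not the fingerprinting code proposed in this work; all parameters are invented for the example.

```python
# Minimal sketch: spread-spectrum fingerprints, averaging collusion, and
# correlation-based tracing. Illustrative parameters, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000          # host signal length (e.g. transform coefficients)
num_buyers = 20     # number of fingerprinted copies sold
alpha = 0.05        # embedding strength

host = rng.normal(0, 1, N)                                # original content
fingerprints = rng.choice([-1.0, 1.0], (num_buyers, N))   # one pseudo-random code per buyer
copies = host + alpha * fingerprints                      # each buyer gets a marked copy

# Averaging collusion: a coalition averages its copies to weaken each mark.
coalition = [2, 5, 11]
pirate_copy = copies[coalition].mean(axis=0)

# Tracing: correlate the residual with every buyer's fingerprint.
residual = pirate_copy - host
scores = fingerprints @ residual / N       # ~alpha/len(coalition) for colluders, ~0 otherwise
suspects = np.argsort(scores)[::-1][:len(coalition)]
print(sorted(suspects.tolist()), "vs actual coalition", coalition)
```

    The averaging attack shrinks each colluder's correlation score by the coalition size, which is why code length must grow with the number of colluders to be resisted.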

    AN INVESTIGATION OF DIFFERENT VIDEO WATERMARKING TECHNIQUES

    Watermarking is a technology intended to solve the problem of illegal manipulation and distribution of digital data. It is the art of hiding copyright information in a host signal such that the embedded data are imperceptible; the host can be any digital multimedia object, namely an image, audio, or video. This paper critically reviews the extensive literature on performance improvements in video watermarking techniques. It also presents a comprehensive review of the evolution of video watermarking techniques that aim to achieve robustness while maintaining the quality of the watermarked video sequences.
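
    As background for the surveyed techniques, the sketch below shows the basic additive transform-domain embedding that many image and video watermarking schemes build on (video schemes often apply it frame by frame). It is a generic, non-blind illustration, not any specific scheme from the review; the embedding strength and coefficient selection are arbitrary choices.

```python
# Generic additive DCT-domain watermark (non-blind detection); illustrative only.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64)).astype(float)    # stand-in for one video frame

# Embed: add a keyed pseudo-random sequence to the largest AC DCT coefficients.
coeffs = dctn(frame, norm='ortho')
flat = np.argsort(-np.abs(coeffs), axis=None)[1:257]    # skip the DC term
mark = rng.choice([-1.0, 1.0], flat.size)
np.put(coeffs, flat, coeffs.flat[flat] + 0.8 * mark)
watermarked = idctn(coeffs, norm='ortho')

# Detect (non-blind): subtract the original, correlate the residual with the mark.
residual = dctn(watermarked, norm='ortho') - dctn(frame, norm='ortho')
score = np.dot(residual.flat[flat], mark) / flat.size
print("detection score:", round(score, 3))              # ~0.8 when the mark is present
```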

    Digital Watermarking for Verification of Perception-based Integrity of Audio Data

    In certain application fields digital audio recordings contain sensitive content. Examples are historical archival material in public archives that preserve our cultural heritage, or digital evidence in the context of law enforcement and civil proceedings. Because of the powerful capabilities of modern editing tools for multimedia, such material is vulnerable to doctoring of the content and forgery of its origin with malicious intent. Inadvertent data modification and mistaken origin can also be caused by human error. Hence, the credibility and provenance of such audio content, in terms of an unadulterated and genuine state, and confidence about its origin are critical factors. To address this issue, this PhD thesis proposes a mechanism for verifying the integrity and authenticity of digital sound recordings. It is designed and implemented to be insensitive to common post-processing operations that influence the subjective acoustic perception of the audio only marginally, if at all. Examples of such operations include lossy compression that maintains a high sound quality of the audio media, or lossless format conversions. The objective is to avoid the de facto false alarms that standard crypto-based authentication protocols would be expected to raise in the presence of such legitimate post-processing. To achieve this, a feasible combination of digital watermarking and audio-specific hashing is investigated. First, a suitable secret-key-dependent audio hashing algorithm is developed. It incorporates and enhances so-called audio fingerprinting technology from the state of the art in content-based audio identification. The presented algorithm (denoted as "rMAC" message authentication code) allows "perception-based" verification of integrity, meaning that integrity breaches are classified as such only once they become audible. As another objective, this rMAC is embedded and stored silently inside the audio media by means of audio watermarking technology. This approach maintains the authentication code across the above-mentioned admissible post-processing operations and makes it available for integrity verification at a later date. For this, an existing secret-key-dependent audio watermarking algorithm is used and enhanced in this thesis work. To some extent, the dependency of the rMAC and of the watermarking process on a secret key also allows authenticating the origin of a protected audio file. To elaborate on this security aspect, this work also estimates the brute-force effort of an adversary attacking the combined rMAC-watermarking approach. The experimental results show that the proposed method provides good distinction and classification performance for authentic versus doctored audio content. It also allows the temporal localization of audible data modifications within a protected audio file. The experimental evaluation finally provides recommendations for technical configuration settings of the combined watermarking-hashing approach. Beyond the main topic of perception-based data integrity and authenticity for audio, this PhD work provides new general findings in the fields of audio fingerprinting and digital watermarking. The main contributions of this PhD were published and presented mainly at conferences on multimedia security. These publications have been cited by a number of other authors and hence had some impact on their work.
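
    The following is a loose sketch of the general construction described above: a secret-key-dependent, perception-oriented hash computed from robust spectral features, whose bits change only when band energies change audibly. It is not the thesis's actual rMAC algorithm; the band pairing, frame size, and bit derivation are illustrative assumptions, and the watermark embedding step is omitted.

```python
# Illustrative keyed, perception-oriented audio hash in the spirit of rMAC:
# robust band energies reduced to bits, a secret key selecting the comparisons.
# NOT the thesis's algorithm; frame size, band count, pairing are assumptions.
import numpy as np

def perceptual_mac(samples, key, frame=1024, bands=16):
    rng = np.random.default_rng(key)            # the secret key seeds the band pairing
    pairs = rng.permutation(bands)
    bits = []
    for start in range(0, len(samples) - frame, frame):
        spec = np.abs(np.fft.rfft(samples[start:start + frame])) ** 2
        edges = np.linspace(0, spec.size, bands + 1, dtype=int)
        energy = np.array([spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
        # One bit per keyed band pair: which of the two bands holds more energy.
        bits.append(energy[pairs[::2]] > energy[pairs[1::2]])
    return np.concatenate(bits)

sr = 16000
t = np.arange(sr) / sr
rng = np.random.default_rng(7)
audio = np.sin(2 * np.pi * 440 * t) + 0.05 * rng.normal(size=sr)
edited = audio + 0.005 * rng.normal(size=sr)    # a perceptually negligible change

h1, h2 = perceptual_mac(audio, key=42), perceptual_mac(edited, key=42)
print("bit error rate:", (h1 != h2).mean())     # stays small for inaudible edits
```

    In the thesis the resulting bits would then be hidden inside the audio itself via the watermarking component, so that the authentication code travels with the file through admissible post-processing.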

    Image watermarking schemes, watermarking schemes joint with compression, and data-hiding schemes

    In this manuscript we address data hiding in images and videos: robust watermarking for images, robust watermarking jointly with compression, and finally non-robust data hiding. The first part of the manuscript deals with high-rate robust watermarking. After briefly recalling the concept of informed watermarking, we study the two major watermarking families: trellis-based watermarking and quantization-based watermarking. We propose, first, to reduce the computational complexity of trellis-based watermarking with a rotation-based embedding, and second, to introduce trellis-based quantization into a quantization-based watermarking system. The second part of the manuscript addresses watermarking jointly with a JPEG2000 or H.264 compression step. The quantization step and the watermarking step are performed simultaneously, so that the two steps do not work against each other. Watermarking in JPEG2000 is achieved by using the trellis quantization from part 2 of the standard. Watermarking in H.264 is performed on the fly, after the quantization stage, by choosing the best prediction through the rate-distortion optimization process. We also propose to integrate a Tardos code to build a traitor-tracing application. The last part of the manuscript describes mechanisms for hiding color information in a grayscale image. We propose two approaches based on hiding a color palette in its index image. The first approach relies on the optimization of an energy function to obtain a decomposition of the color image that allows easy embedding. The second approach consists in quickly obtaining a color palette of larger size and then embedding it in a reversible way.
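
    As a concrete reference point for the quantization-based family discussed above, here is a minimal quantization index modulation (QIM) sketch; the trellis-coded quantization exploited in JPEG2000 part 2 elaborates on this basic mechanism. The step size and host values are illustrative, and this is not the manuscript's specific embedding.

```python
# Minimal QIM sketch: two interleaved quantizer lattices carry one bit each.
# Illustrative step size and host values, not the manuscript's embedding.
import numpy as np

def qim_embed(x, bits, delta=4.0):
    # Even multiples of delta encode bit 0, odd half-steps encode bit 1.
    shift = (bits * delta) / 2.0
    return np.round((x - shift) / delta) * delta + shift

def qim_extract(y, delta=4.0):
    # Decide which of the two lattices each sample is closer to.
    return np.round(2.0 * y / delta).astype(int) % 2

rng = np.random.default_rng(3)
host = rng.normal(0, 10, 8)                 # e.g. transform coefficients before coding
bits = rng.integers(0, 2, 8)
marked = qim_embed(host, bits, delta=4.0)
print(bits, qim_extract(marked + rng.normal(0, 0.3, 8)))   # survives small distortion
```

    Performing this quantization as part of the codec's own quantization stage is precisely what lets the watermark and the compressor cooperate instead of fighting each other.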

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines depend on images captured in real-world environments, which may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
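
    The sketch below conveys the push-pull idea in a deliberately simplified form: an excitatory (push) response is suppressed by the response of the opposite-polarity (pull) channel, which fires strongly on noise. The real CORF operator combines many oriented difference-of-Gaussians responses and is considerably more involved; the sigma values and inhibition factor here are illustrative assumptions.

```python
# Loose sketch of push-pull inhibition for noise-robust edge responses.
# NOT the CORF operator itself; sigmas and inhibition factor are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def push_pull_response(img, sigma=1.0, inhibition=0.8):
    # "Push": on-center difference-of-Gaussians response to the image.
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma)
    push = np.maximum(dog, 0.0)
    # "Pull": the opposite-polarity channel, which responds strongly to noise;
    # subtracting its (smoothed) response suppresses isolated noisy spots.
    pull = np.maximum(-dog, 0.0)
    return np.maximum(push - inhibition * gaussian_filter(pull, sigma), 0.0)

rng = np.random.default_rng(5)
img = np.zeros((64, 64)); img[:, 32:] = 1.0     # a vertical step edge
noisy = img + 0.3 * rng.normal(size=img.shape)
delineation = push_pull_response(noisy)          # fed to the CNN instead of raw pixels
print(delineation.shape, round(float(delineation.max()), 3))
```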

    Cloud-based Indoor Positioning Platform for Context-adaptivity in GNSS-denied Scenarios

    The demand for positioning, localisation and navigation services is on the rise, largely owing to the fact that such services form an integral part of applications in areas such as human activity recognition, robotics, and eHealth. Depending on the field of application, these services must provide high accuracy, massive device connectivity, real-time response, flexibility, and integrability. Although many current solutions fulfil these requirements, numerous challenges remain in providing robust and reliable indoor positioning solutions. This dissertation focuses on improving computing efficiency, data pre-processing, and software architecture for Indoor Positioning Systems (IPSs) without sacrificing position and location accuracy. Fingerprinting is the main positioning technique used in this dissertation, as it is one of the approaches used most frequently in indoor positioning solutions. The dissertation begins with a systematic review of current cloud-based indoor positioning solutions for Global Navigation Satellite System (GNSS)-denied scenarios. This first contribution identifies the challenges and trends in indoor positioning applications over the last seven years (from January 2015 to May 2022). Second, we study data optimisation techniques, namely data cleansing and data augmentation. This second contribution is devoted to reducing the number of outlier fingerprints in radio maps and, therefore, the error in position estimation. The data cleansing algorithm relies on the correlation between fingerprints, taking into account the maximum Received Signal Strength (RSS) values, whereas a Generative Adversarial Network (GAN) is used for data augmentation to generate synthetic fingerprints that are barely distinguishable from real ones. The positioning error is reduced by more than 3.5% after applying the data cleansing, and it is similarly reduced in 8 of the 11 datasets after generating new synthetic fingerprints. The third contribution proposes two algorithms that group similar fingerprints into clusters. A new post-processing algorithm for Density-based Spatial Clustering of Applications with Noise (DBSCAN) redistributes noisy fingerprints to the formed clusters, enhancing the mean positioning accuracy by more than 20% in comparison with plain DBSCAN. A new lightweight clustering algorithm is also introduced, which joins similar fingerprints based on the maximum RSS values and Access Point (AP) identifiers; it reduces the time required to form the clusters by more than 60% compared with two traditional clustering algorithms. The fourth contribution explores the use of Machine Learning (ML) models, based on Deep Neural Networks (DNNs) and Extreme Learning Machines (ELMs), to enhance the accuracy of position estimation. The first model combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to learn the complex patterns in fingerprinting radio maps and improve position accuracy. The second model uses a CNN and an ELM to provide a fast and accurate solution for classifying fingerprints into buildings and floors. Both models offer a better floor hit rate than the baseline (more than 8% on average) and also outperform several machine learning models from the literature.
Finally, this dissertation summarises the key findings of the previous chapters in an open-source cloud platform for indoor positioning. The software developed in this dissertation follows the guidelines provided by current standards in positioning, mapping, and software architecture to provide a reliable and scalable system.
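
    For readers unfamiliar with fingerprinting-based positioning, the sketch below shows the basic scheme the contributions above refine: a query RSS vector is matched against a radio map by nearest neighbours in signal space. The radio map here is synthetic (a simple log-distance model) and stands in for the real datasets used in the dissertation.

```python
# Minimal Wi-Fi fingerprinting sketch: k-nearest neighbours in RSS space over
# a radio map. The synthetic radio map is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(9)
n_refs, n_aps = 200, 8
positions = rng.uniform(0, 50, (n_refs, 2))     # reference points (metres)
aps = rng.uniform(0, 50, (n_aps, 2))            # access point locations

def rss(p):
    # Log-distance path-loss model with shadowing noise (illustrative).
    d = np.linalg.norm(p[None, :] - aps, axis=1)
    return -40 - 20 * np.log10(d + 1) + rng.normal(0, 2, n_aps)

radio_map = np.array([rss(p) for p in positions])

def locate(sample, k=3):
    dist = np.linalg.norm(radio_map - sample, axis=1)   # similarity in RSS space
    nearest = np.argsort(dist)[:k]
    return positions[nearest].mean(axis=0)              # centroid of the k matches

true = np.array([20.0, 30.0])
print("estimate:", locate(rss(true)), "true:", true)
```

    Cleansing outlier rows of the radio map and clustering similar rows, as in the second and third contributions, directly shrink both the error and the search cost of this matching step.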

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the 'cyber world' of computing and communications with the physical world. Such applications are called cyber-physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems. We therefore speak of "system trustworthiness", meaning the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) are a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including their design, modeling, simulation, and dependability. It is imperative to address the issues that are critical to their mobility, to report significant advances in the underlying science, and to discuss the challenges of development and implementation in various applications of MCPS.

    A comprehensive survey of V2X cybersecurity mechanisms and future research paths

    Recent advancements in vehicle-to-everything (V2X) communication have notably improved existing transport systems by enabling increased connectivity and higher levels of driving autonomy. The remarkable benefits of V2X connectivity inevitably come with challenges involving security vulnerabilities and breaches. Addressing security concerns is essential for the seamless and safe operation of mission-critical V2X use cases. This paper surveys the current literature on V2X security and provides a systematic and comprehensive review of the most relevant security enhancements to date. An in-depth classification of V2X attacks is first performed according to key security and privacy requirements. Our methodology continues with a taxonomy of security mechanisms based on their proactive/reactive defensive approach, which helps identify the strengths and limitations of state-of-the-art countermeasures for V2X attacks. In addition, this paper delves into the potential of emerging security approaches that leverage artificial intelligence tools to meet security objectives. Promising data-driven solutions tailored to tackle security, privacy and trust issues are thoroughly discussed, along with the new threat vectors these enablers inevitably introduce. The lessons learned from the detailed review of existing works are also compiled and highlighted. We conclude this survey with a structured synthesis of open challenges and future research directions to foster contributions in this prominent field. This work is supported by the H2020-INSPIRE-5Gplus project (under Grant agreement No. 871808), the "Ministerio de Asuntos Económicos y Transformación Digital" and the European Union-NextGenerationEU in the frameworks of the "Plan de Recuperación, Transformación y Resiliencia" and the "Mecanismo de Recuperación y Resiliencia" under references TSI-063000-2021-39/40/41, and the CHIST-ERA-17-BDSI-003 FIREMAN project funded by the Spanish National Foundation (Grant PCI2019-103780).

    A Risk And Trust Security Framework For The Pervasive Mobile Environment

    A pervasive mobile computing environment is typically composed of multiple fixed and mobile entities that interact autonomously with each other with very little central control. Many of these interactions may occur between entities that have not interacted with each other previously. Conventional security models are inadequate for regulating access to data and services, especially when the identities of a dynamic and growing community of entities are not known in advance. To cope with this drawback, entities may rely on context data to make security and trust decisions. However, this process introduces risk owing to the variability and uncertainty of context information. Moreover, by the time the decisions are made, the context data may have already changed, in which case the security decisions become invalid. With this in mind, our goal is to develop mechanisms and models to aid trust decision-making by an entity or agent (the truster) when the consequences of its decisions depend on context information from other agents (the trustees). To achieve this, in this dissertation we have developed ContextTrust, a framework that not only computes the risk associated with a context variable but also derives a trust measure for the agents producing the context data. To compute the context-data risk, ContextTrust uses a Monte Carlo based method to model the behavior of a context variable. Moreover, ContextTrust makes use of time series classifiers and other simple statistical measures to derive an entity trust value. We conducted empirical analyses to evaluate the performance of ContextTrust using two real-life data sets. The evaluation results show that ContextTrust can be effective in helping entities render security decisions.
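
    The following is a minimal sketch of the Monte Carlo idea behind context-data risk as described above: simulate plausible evolutions of a context variable from its observed history and estimate the probability that a decision based on the last reported value has become invalid. The Gaussian random-walk model and the threshold are illustrative assumptions, not ContextTrust's actual model.

```python
# Monte Carlo sketch of context-data risk: probability that a stale context
# reading has crossed a decision threshold. Random-walk model is an assumption.
import numpy as np

rng = np.random.default_rng(11)
history = np.array([21.0, 21.4, 20.8, 21.1, 21.6])   # e.g. reported temperature readings
drift_sigma = np.diff(history).std(ddof=1)           # variability estimated from history

def context_risk(last_value, threshold, steps, n_sims=100_000):
    # Simulate n_sims random-walk futures and count threshold crossings.
    walks = last_value + rng.normal(0, drift_sigma, (n_sims, steps)).cumsum(axis=1)
    return (walks.max(axis=1) > threshold).mean()

risk = context_risk(history[-1], threshold=23.0, steps=10)
print(f"risk that the context reading is now invalid: {risk:.3f}")
```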