
    Sensor Data Integrity Verification for Real-time and Resource Constrained Systems

    Sensors are used in multiple applications that touch our lives and have become an integral part of modern life. They are used to build intelligent control systems in industries such as healthcare, transportation, consumer electronics, and the military. Many mission-critical applications require sensor data to be secure and authentic. Sensor data security can be achieved with traditional solutions such as cryptography and digital signatures, but these techniques are computationally intensive and cannot easily be applied to resource-constrained systems. Low-complexity data hiding techniques, by contrast, are easy to implement and do not need substantial processing power or memory. In this applied research, we configure established low-complexity data hiding techniques from the multimedia forensics domain to secure sensor data transmissions in resource-constrained, real-time environments such as an autonomous vehicle. We identify the areas in an autonomous vehicle that require sensor data integrity, propose suitable watermarking techniques to verify the integrity of the data, and evaluate the performance of the proposed method against different attack vectors. In our proposed method, sensor data is embedded with application-specific metadata, a process that introduces some distortion. We analyze this embedding-induced distortion and its impact on overall sensor data quality, and conclude that watermarking techniques, when properly configured, can solve the sensor data integrity verification problem in an autonomous vehicle.
    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167387/3/Raghavendar Changalvala Final Dissertation.pdf
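The dissertation's own algorithms are not reproduced here; as a rough, hypothetical sketch of the general approach it describes (low-complexity embedding of application-specific metadata into sensor readings, verified at the receiver), one might overwrite the low bits of each integer sample with a keyed digest of the frame metadata. All names and parameters below are illustrative, not taken from the thesis.

```python
import hashlib

def _digest_bits(metadata, key):
    # Keyed digest of the metadata, expanded to a bit list (LSB-first per byte).
    digest = hashlib.sha256(key + metadata).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

def embed(samples, metadata, key, lsb=2):
    """Overwrite the `lsb` low bits of each integer sample with digest bits."""
    bits = _digest_bits(metadata, key)
    marked = []
    for i, s in enumerate(samples):
        chunk = 0
        for j in range(lsb):
            chunk |= bits[(i * lsb + j) % len(bits)] << j
        marked.append((s & ~((1 << lsb) - 1)) | chunk)
    return marked

def verify(samples, metadata, key, lsb=2):
    """Recompute the digest and check the low bits of every sample."""
    bits = _digest_bits(metadata, key)
    return all(
        (s >> j) & 1 == bits[(i * lsb + j) % len(bits)]
        for i, s in enumerate(samples)
        for j in range(lsb)
    )
```

With `lsb=2` the embedding perturbs each reading by at most 3 quantization steps, mirroring the embedding-induced distortion the thesis analyses against the overall sensor data quality.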

    Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions

    This paper presents an analysis of steganographic systems subject to the following perfect undetectability condition: after the message is embedded into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would merely require matching of first-order marginals of the covertext and stegotext distributions. Furthermore, no loss occurs if the covertext distribution is uniform and the distortion metric is cyclically symmetric; steganographic capacity is then achieved by randomized linear codes. Our framework may also be useful for developing computationally secure steganographic systems that have near-optimal communication performance.
    Comment: To appear in IEEE Trans. on Information Theory, June 2008; ignore Version 2, as the file was corrupted.
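As a toy illustration of the special case the abstract highlights (uniform covertext, where perfect security costs nothing), note that adding a uniform secret key modulo the alphabet size yields a stegotext whose marginal distribution is exactly that of the covertext. This is a sketch of the principle only, not the paper's binning construction:

```python
import random

M = 8  # alphabet size; covertext symbols are assumed uniform over 0..M-1

def embed(message, key):
    """Stegotext = (message + key) mod M, symbol-wise.
    If the key symbols are uniform and independent, each stegotext symbol
    is uniform over 0..M-1 regardless of the message -- the same
    distribution as the covertext, so no statistical test can detect it."""
    return [(m + k) % M for m, k in zip(message, key)]

def extract(stego, key):
    # The shared key inverts the modular shift exactly.
    return [(s - k) % M for s, k in zip(stego, key)]
```

This one-time-pad-style scheme needs as much key as message; the paper's contribution is positive-rate perfectly secure codes, with capacity and error exponents, for general i.i.d. covertexts.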

    New Digital Audio Watermarking Algorithms for Copyright Protection

    This thesis investigates the development of digital audio watermarking to address issues such as copyright protection. Over the past two decades, many digital watermarking algorithms have been developed, each with its own advantages and disadvantages. The main aim of this thesis was to develop a new watermarking algorithm within an existing Fast Fourier Transform framework. This resulted in the development of a Complex Spectrum Phase Evolution based watermarking algorithm. In this new implementation, the embedding positions were generated dynamically, thereby making it more difficult for an attacker to remove the watermark, and watermark information was embedded by manipulating the spectral components in the time domain, thereby reducing any audible distortion. Further improvements were attained when the embedding criterion was based on bin location comparison instead of magnitude, rendering it more robust against attacks that interfere with the spectral magnitudes. However, this new audio watermarking algorithm was found to have some disadvantages, such as a relatively low capacity and inconsistent robustness across different audio files. Therefore, a further aim of this thesis was to improve the algorithm from a different perspective. Improvements were investigated within a Singular Value Decomposition framework, wherein a novel observation was made. Furthermore, a psychoacoustic model was incorporated to suppress any audible distortion. This resulted in a watermarking algorithm with a higher capacity and more consistent robustness. The overall result was two new digital audio watermarking algorithms, complementary in their performance, thereby opening more opportunities for further research.
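The thesis's CSPE algorithm operates on FFT spectra; purely to illustrate the idea of embedding by bin location comparison rather than magnitude, the sketch below encodes one bit per coefficient pair by ordering the pair, which leaves the multiset of magnitudes untouched. It is a simplification of mine, not the thesis's algorithm:

```python
def embed_bits(coeffs, bits):
    """Encode one bit per adjacent coefficient pair: bit 1 -> the first
    element of the pair is the larger, bit 0 -> the smaller. Swapping a
    pair preserves the set of magnitudes, so attacks that perturb
    magnitudes uniformly have no specific target. Assumes the two values
    in each used pair differ."""
    out = list(coeffs)
    for i, b in enumerate(bits):
        a, c = out[2 * i], out[2 * i + 1]
        if (a >= c) != bool(b):
            out[2 * i], out[2 * i + 1] = c, a
    return out

def extract_bits(coeffs, n):
    """Read back n bits from the ordering of the first n pairs."""
    return [1 if coeffs[2 * i] >= coeffs[2 * i + 1] else 0 for i in range(n)]
```

A location-based decision like this survives any attack that scales or mildly perturbs all magnitudes, since only the relative order within each pair carries information.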

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view. The data to be embedded becomes the signal to be transmitted through a watermark channel. The channel could be the image itself or some manipulation of the image. The image self-noise and noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self-noise has to be suppressed, while noise due to attacks may sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to 2-dimensional images. These novel techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data is repeated and scaled with different amplitudes at each level and used as the wavelet decomposition pyramid. The binary data is appended with special marking data, used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the data embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image, generating the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining. Neither the original image nor the original binary sequence (or watermark signal) is required during extraction. A prediction of the original image is obtained using a cross-shaped window and is used to suppress the image self-noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform, with the same number of levels and the same wavelet as used in the watermark signal generation stage.
    A thresholding process similar to wavelet de-noising is used to decide whether a particular coefficient is reliable. A decision is made as to whether a block is reliable based on the marking data present in each block, and corrections are sometimes applied to the blocks. Finally, the selected blocks are combined according to the diversity combining strategy to extract the embedded binary data.
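The combining step at the end of this pipeline can be as simple as a majority vote: once thresholding has flagged which repeated copies of the embedded bits are reliable, the surviving copies vote on each bit. A minimal sketch with hypothetical names, not the thesis's exact combining rule:

```python
def diversity_combine(copies, reliable):
    """Majority-vote each bit position over the copies flagged as reliable.
    `copies` is a list of equal-length bit lists (one per wavelet level or
    block); `reliable` is a parallel list of booleans produced by the
    thresholding stage."""
    n = len(copies[0])
    out = []
    for j in range(n):
        votes = [c[j] for c, ok in zip(copies, reliable) if ok]
        out.append(1 if sum(votes) * 2 > len(votes) else 0)
    return out
```

Because the same bits are repeated at every pyramid level with different amplitudes, a corrupted level simply loses its vote rather than corrupting the extracted message.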

    Digital Watermarking, Fingerprinting and Compression: An Information-Theoretic Perspective

    The ease with which digital data can be duplicated and distributed over the media and the Internet has raised many concerns about copyright infringement. In many situations, multimedia data (e.g., images, music, movies, etc.) are illegally circulated, thus violating intellectual property rights. In an attempt to overcome this problem, watermarking has been suggested in the literature as the most effective means for copyright protection and authentication. Watermarking is the procedure whereby information (pertaining to owner and/or copyright) is embedded into host data, such that it is: (i) hidden, i.e., not perceptually visible; and (ii) recoverable, even after a (possibly malicious) degradation of the protected work. In this thesis, we prove some theoretical results that establish the fundamental limits of a general class of watermarking schemes. The main focus of this thesis is the problem of joint watermarking and compression of images, which can be briefly described as follows: due to bandwidth or storage constraints, a watermarked image is distributed in quantized form, using R_Q bits per image dimension, and is subject to some additional degradation (possibly due to malicious attacks). The hidden message carries R_W bits per image dimension. Our main result is the determination of the region of allowable rates (R_Q, R_W) such that: (i) an average distortion constraint between the original and the watermarked/compressed image is satisfied, and (ii) the hidden message is detected from the degraded image with very high probability. Using notions from information theory, we prove coding theorems that establish the rate region in the following cases: (a) general i.i.d. image distributions, distortion constraints, and memoryless attacks; (b) memoryless attacks combined with collusion (for fingerprinting applications); and (c) general (not necessarily stationary or ergodic) Gaussian image distributions and attacks, and average quadratic distortion constraints.
    Moreover, we prove a multi-user version of a result by Costa on the capacity of a Gaussian channel with known interference at the encoder.
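For context, the single-user result by Costa referenced above ("writing on dirty paper", 1983) states that additive Gaussian interference known non-causally at the encoder causes no loss of capacity:

```latex
% Costa's result: interference known at the encoder is harmless.
C \;=\; \frac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right)
```

Here $P$ is the encoder's power constraint and $N$ the variance of the additive Gaussian noise; the capacity is independent of the interference power. The notation here is generic, not the thesis's.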

    Method for copyright protection of deep neural networks using digital watermarking

    This article proposes a new method for protecting the copyright of deep neural networks. The core idea is to embed digital watermarks into the protected model by fine-tuning it on a unique set of pseudo-holographic images (pseudo-holograms). A pseudo-hologram is a two-dimensional sinusoidal signal encoding a binary sequence of arbitrary length. By varying the phase of each sinusoid, different pseudo-hologram images can be generated from the same bit sequence. The proposed embedding scheme generates a training set in which pseudo-holograms formed from the same sequence fall into the same class, with a distinct bit sequence corresponding to each class. Watermark verification is performed by feeding various pseudo-holograms to the model and checking that the sequence hidden in them maps to the expected class. Experimental studies confirm that the method works and meets all the quality criteria imposed on methods for embedding digital watermarks into neural networks. The research was supported by Russian Science Foundation grant No. 21-71-00106, https://rscf.ru/project/21-71-00106/
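The paper's training pipeline is not reproduced here; as an illustrative sketch of the pseudo-hologram itself (a 2-D sum of sinusoids whose per-carrier phases encode a bit string), one might write the following, with all sizes and frequencies chosen arbitrarily:

```python
import math

SIZE = 32   # image side, illustrative
FREQ0 = 2   # lowest carrier frequency (cycles per image width), illustrative

def pseudo_hologram(bits, size=SIZE):
    """Sum one horizontal sinusoid per bit: carrier k has integer frequency
    FREQ0 + k, with phase 0 encoding bit 0 and phase pi encoding bit 1.
    Adding a shared random phase offset per image would yield distinct
    images for the same bit sequence, as the paper describes."""
    img = [[0.0] * size for _ in range(size)]
    for k, b in enumerate(bits):
        f = FREQ0 + k
        phase = math.pi * b
        for y in range(size):
            for x in range(size):
                img[y][x] += math.cos(2 * math.pi * f * x / size + phase)
    return img

def read_bit(img, k, size=SIZE):
    """Correlate the image with the k-th carrier; the sign of the
    correlation recovers the embedded phase, hence the bit."""
    f = FREQ0 + k
    corr = sum(img[y][x] * math.cos(2 * math.pi * f * x / size)
               for y in range(size) for x in range(size))
    return 0 if corr > 0 else 1
```

The carriers are orthogonal over the grid, so each bit is read back independently; in the paper the reading is done implicitly by the fine-tuned network's classification, not by explicit correlation.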

    Secure covert communications over streaming media using dynamic steganography

    Streaming technologies such as VoIP are widely embedded in commercial and industrial applications, so it is imperative to address data security issues before the problems become serious. This thesis describes a theoretical and experimental investigation of secure covert communications over streaming media using dynamic steganography. A covert VoIP communications system was developed in C++ to enable the implementation of the work carried out. A new information-theoretic model of secure covert communications over streaming media was constructed to depict the security scenarios in streaming media-based steganographic systems under passive attacks. The model involves a stochastic process that models an information source for covert VoIP communications, and the theory of hypothesis testing to analyse the adversary's detection performance. The potential of hardware-based true random key generation and chaotic interval selection for innovative applications in covert VoIP communications was explored. The CPU's read time stamp counter was used as an entropy source to generate true random numbers as secret keys for streaming media steganography. A novel interval selection algorithm was devised to randomly choose data embedding locations in VoIP streams using random sequences generated from a chaotic process. A dynamic key updating and transmission based steganographic algorithm, which integrates a one-way cryptographic accumulator into dynamic key exchange, was devised to provide secure key exchange for covert communications over streaming media. Analysis based on the discrete logarithm problem, together with steganalysis using the t-test, showed the algorithm to be a solid method of key distribution over a public channel.
    The effectiveness of the new steganographic algorithm for covert communications over streaming media was examined by means of security analysis, steganalysis using the non-parametric Mann-Whitney-Wilcoxon statistical test, and performance and robustness measurements. The algorithm achieved an average data embedding rate of 800 bps, comparable to other related algorithms. The results indicated that the algorithm has little or no impact on real-time VoIP communications in terms of speech quality (< 5% change in PESQ with hidden data), signal distortion (6% change in SNR after steganography), and imperceptibility, and that it is more secure and effective in addressing the security problems than other related algorithms.
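The thesis's interval selection algorithm is its own; as a generic stand-in illustrating how a chaotic process can yield key-dependent yet reproducible embedding positions, the sketch below iterates the logistic map from a seed derived from the shared key and embeds in frames where the orbit lands in the upper half. All parameters are illustrative:

```python
def logistic_positions(seed, n_frames, skip=64, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x) and mark a frame for
    embedding whenever the orbit falls in the upper half of (0, 1).
    `skip` burns in transients; `seed` (derived from the shared secret
    key) must lie strictly between 0 and 1."""
    x = seed
    for _ in range(skip):
        x = r * x * (1.0 - x)
    chosen = []
    for i in range(n_frames):
        x = r * x * (1.0 - x)
        if x > 0.5:
            chosen.append(i)
    return chosen
```

Both endpoints recompute identical positions from the shared seed, while the chaotic orbit's sensitivity to the seed makes the positions unpredictable to an eavesdropper.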

    An improved randomization of a multi-blocking JPEG-based steganographic system.

    Thesis (M.Sc.), University of KwaZulu-Natal, Durban, 2010. Steganography is classified as the art of hiding information. In a digital context, this refers to our ability to hide secret messages within innocent digital cover data. The digital domain offers many possible cover mediums, such as cloud-based hiding (saving secret information within the internet and its structure), image-based hiding, video- and audio-based hiding, text documents, and potentially any set of compressed data. This dissertation focuses on the image domain and investigates currently available image-based steganographic techniques. After a review of the history of the field and a detailed survey of currently available JPEG-based steganographic systems, the dissertation focuses on the systems currently considered secure and introduces mechanisms that have been developed to detect them. The dissertation presents a newly developed system designed to counteract the current weakness in the YASS JPEG-based steganographic system. By introducing two new levels of randomization to the embedding process, the proposed system offers security benefits over YASS. Randomizing the B-block sizes as well as the E-block sizes used in the embedding process increases security, and the potential for new, larger E-block sizes provides a larger set of candidate coefficients for embedding. The dissertation also introduces a new embedding scheme that hides in medium-frequency coefficients. Hiding in these coefficients allows more aggressive embedding without risking more visual distortion, traded off against a risk of higher error rates due to compression losses. Finally, the dissertation presents simulations that test the proposed system's performance against other JPEG-based steganographic systems with similar embedding properties.
    We show that the new system achieves an embedding capacity of 1.6, around a 7-fold improvement over YASS. We also show that the new system, although introducing more bits in error per B-block, successfully allows the embedding of up to 2 bits per B-block more than YASS at a similar error rate per B-block. We conclude by demonstrating the new system's ability to resist detection both by human observation, via a survey, and by computer-aided analysis.
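The dissertation specifies its two randomization levels precisely; the sketch below only illustrates the generic mechanism, a keyed pseudorandom choice of B-block and E-block sizes so that both endpoints reproduce the same embedding grid while an attacker cannot anticipate it. The size ranges here are invented for illustration:

```python
import random

def block_plan(key, n_blocks, b_sizes=(8, 9, 10), e_sizes=(4, 5, 6)):
    """Derive a reproducible per-block (B-size, E-size) plan from a shared
    key. The E-block must fit inside its B-block, so E is capped at B.
    Seeding a PRNG with the key lets sender and receiver regenerate the
    identical plan independently."""
    rng = random.Random(key)
    plan = []
    for _ in range(n_blocks):
        b = rng.choice(b_sizes)
        e = min(rng.choice(e_sizes), b)
        plan.append((b, e))
    return plan
```

Because the plan depends on the key, a steganalyst cannot align an analysis grid with the embedding grid, which is the security benefit the two randomization levels aim for.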