30 research outputs found

    Secure covert communications over streaming media using dynamic steganography

    Streaming technologies such as VoIP are widely embedded in commercial and industrial applications, so it is imperative to address their data security issues before they become serious. This thesis describes a theoretical and experimental investigation of secure covert communications over streaming media using dynamic steganography. A covert VoIP communications system was developed in C++ to enable the implementation of the work carried out. A new information-theoretical model of secure covert communications over streaming media was constructed to describe the security scenarios of streaming media-based steganographic systems under passive attacks. The model combines a stochastic process, which models the information source for covert VoIP communications, with hypothesis testing, which analyses the adversary's detection performance. The potential of hardware-based true random key generation and chaotic interval selection for innovative applications in covert VoIP communications was explored. A scheme using the CPU's read time stamp counter as an entropy source was designed to generate true random numbers as secret keys for streaming media steganography, and a novel interval selection algorithm was devised to randomly choose data embedding locations in VoIP streams using random sequences generated from a chaotic process. A steganographic algorithm based on dynamic key updating and transmission, which integrates a one-way cryptographic accumulator into dynamic key exchange, was devised to provide secure key exchange for covert communications over streaming media. Analysis based on the discrete logarithm problem and steganalysis using the t-test indicated that the algorithm offers a solid method of key distribution over a public channel. The effectiveness of the new steganographic algorithm for covert communications over streaming media was examined by means of security analysis, steganalysis using non-parametric Mann-Whitney-Wilcoxon statistical testing, and performance and robustness measurements. The algorithm achieved an average data embedding rate of 800 bps, comparable to other related algorithms. The results indicated that the algorithm has little or no impact on real-time VoIP communications in terms of speech quality (< 5% change in PESQ with hidden data), signal distortion (6% change in SNR after steganography) and imperceptibility, and that it is more secure and effective in addressing the security problems than other related algorithms.
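
    The abstract points to two concrete building blocks: harvesting entropy from the CPU's time stamp counter and selecting embedding positions with a chaotic sequence. The sketch below illustrates both ideas under stated assumptions: the function names, the sleep-based jitter sampling and the logistic-map parameters are illustrative rather than the thesis's implementation, and a real key generator would still need whitening and statistical health tests.

```cpp
#include <x86intrin.h>   // __rdtsc() on x86 with GCC/Clang
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Collect nbytes of raw entropy from timing jitter between TSC reads.
// Illustrative only: the output must still be whitened and tested.
std::vector<uint8_t> tsc_entropy(std::size_t nbytes) {
    std::vector<uint8_t> out(nbytes, 0);
    for (std::size_t i = 0; i < nbytes; ++i) {
        for (int b = 0; b < 8; ++b) {
            uint64_t t1 = __rdtsc();
            std::this_thread::sleep_for(std::chrono::microseconds(1));
            uint64_t t2 = __rdtsc();
            // Keep the least significant bit of the timing delta.
            out[i] = static_cast<uint8_t>((out[i] << 1) | ((t2 - t1) & 1u));
        }
    }
    return out;
}

// Chaotic interval selection: iterate the logistic map x <- r*x*(1-x)
// and map each state to an embedding offset inside a VoIP frame.
std::vector<std::size_t> chaotic_positions(double seed, std::size_t count,
                                           std::size_t frame_len) {
    const double r = 3.99;             // parameter in the chaotic regime
    std::vector<std::size_t> pos(count);
    double x = seed;                   // key-derived value in (0, 1)
    for (std::size_t i = 0; i < count; ++i) {
        x = r * x * (1.0 - x);
        pos[i] = static_cast<std::size_t>(x * frame_len) % frame_len;
    }
    return pos;
}
```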

    Digital Watermarking for Verification of Perception-based Integrity of Audio Data

    In certain application fields, digital audio recordings contain sensitive content. Examples are historical archival material in public archives that preserve our cultural heritage, or digital evidence in the context of law enforcement and civil proceedings. Because of the powerful capabilities of modern editing tools for multimedia, such material is vulnerable to doctoring of the content and forgery of its origin with malicious intent; inadvertent data modification and mistaken origin can also be caused by human error. Hence, the credibility and provenance of such audio content, in terms of an unadulterated and genuine state and confidence about its origin, are critical factors. To address this issue, this PhD thesis proposes a mechanism for verifying the integrity and authenticity of digital sound recordings. It is designed and implemented to be insensitive to common post-processing operations that influence the subjective acoustic perception of the audio data only marginally, if at all. Examples of such operations include lossy compression that maintains a high sound quality of the audio media, or lossless format conversions. The objective is to avoid the de facto false alarms that standard crypto-based authentication protocols would be expected to raise in the presence of such legitimate post-processing. To achieve this, a feasible combination of digital watermarking and audio-specific hashing is investigated. First, a suitable secret-key dependent audio hashing algorithm is developed. It incorporates and enhances audio fingerprinting technology from the state of the art in content-based audio identification. The presented algorithm (denoted as the "rMAC" message authentication code) allows "perception-based" verification of integrity, meaning that integrity breaches are classified as such only once they become audible. In addition, this rMAC is embedded and stored silently inside the audio media by means of audio watermarking technology. This approach allows the authentication code to survive the above-mentioned admissible post-processing operations and remain available for integrity verification at a later date. For this, an existing secret-key dependent audio watermarking algorithm is used and enhanced in this thesis work. To some extent, the dependency of the rMAC and of the watermarking processing on a secret key also allows authenticating the origin of a protected audio file. To elaborate on this security aspect, this work also estimates the brute-force effort of an adversary attacking the combined rMAC-watermarking approach. The experimental results show that the proposed method provides good distinction and classification performance for authentic versus doctored audio content. It also allows the temporal localization of audible data modification within a protected audio file. The experimental evaluation finally provides recommendations about technical configuration settings of the combined watermarking-hashing approach. Beyond the main topic of perception-based data integrity and authenticity for audio, this PhD work provides new general findings in the fields of audio fingerprinting and digital watermarking. The main contributions of this PhD were published and presented mainly at conferences on multimedia security; these publications have been cited by a number of other authors and hence had some impact on their work.
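
    A minimal sketch of the verification step described above, assuming a fixed-length rMAC: the code recomputed from the received audio is compared bit-by-bit against the code recovered from the watermark (both obtained by functions this sketch does not define), and only mismatches beyond a tolerance are flagged as audible tampering. The code length and tolerance are illustrative assumptions, not the thesis's parameters.

```cpp
#include <bitset>
#include <cstddef>

constexpr std::size_t RMAC_BITS = 128;   // illustrative code length

// Number of differing bits between the recomputed and the embedded rMAC.
std::size_t hamming_distance(const std::bitset<RMAC_BITS>& a,
                             const std::bitset<RMAC_BITS>& b) {
    return (a ^ b).count();
}

// Perception-based decision: small mismatches are attributed to admissible
// post-processing (e.g. high-quality lossy compression); larger ones are
// reported as integrity breaches.
bool perceptually_authentic(const std::bitset<RMAC_BITS>& recomputed_rmac,
                            const std::bitset<RMAC_BITS>& embedded_rmac,
                            std::size_t tolerance = 8) {
    return hamming_distance(recomputed_rmac, embedded_rmac) <= tolerance;
}
```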

    Integration of biometrics and steganography: A comprehensive review

    The use of an individual's biometric characteristics to advance authentication and verification technology beyond the current dependence on passwords has been the subject of extensive research for some time. Since such physical characteristics cannot be hidden from the public eye, the security of digitised biometric data becomes paramount to avoid the risk of substitution or replay attacks. Biometric systems have readily embraced cryptography to encrypt the data extracted from the scanning of anatomical features. A significant amount of research has also gone into the integration of biometrics with steganography to add a layer to the defence-in-depth security model, which has the potential to augment both access control parameters and the secure transmission of sensitive biometric data. However, despite these efforts, the amalgamation of biometric and steganographic methods has failed to transition from the research lab into real-world applications. In light of this review of both academic and industry literature, we suggest that future research should focus on identifying an acceptable level of steganographic embedding for biometric applications, securing the exchange of steganography keys, identifying and addressing legal implications, and developing industry standards.

    An improvement of RGB color image watermarking technique using ISB stream bit and Hadamard matrix

    In the past half century, the advancement of internet technology has been rapid and widespread, providing an efficient platform for human communication and other digital applications. Nowadays, everyone can easily access, copy, modify and distribute digital content for personal or commercial gain, so good copyright protection is required to discourage illicit activities. One way is to watermark the assets by embedding an owner's identity that can later be used for authentication. Thus far, many watermarking techniques have been proposed, focusing on improving three standard measures: visual quality (imperceptibility), robustness and capacity. Although their performance is encouraging, there is still plenty of room for improvement. This study therefore proposes a new watermarking technique using a Least Significant Bit (LSB) insertion approach coupled with a Hadamard matrix. The technique involves four main stages: first, the cover image is decomposed into three separate channels, Red, Green and Blue; second, the Blue channel is chosen and converted into an eight-bit stream; third, the second least significant bit is selected from the bit stream for embedding; and finally, to increase imperceptibility, a Hadamard matrix is used to find the best pixels of the cover image for the embedding task. Experimental results on a standard dataset show an average PSNR greater than 58 dB, which indicates that the watermarked image is visually identical to its original. However, the proposed technique suffers from Gaussian and Poisson noise attacks.
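
    A minimal sketch of the core embedding operation described above, writing the watermark into the second least significant bit of the blue channel. The pixel struct, helper names and the pre-supplied position list are illustrative assumptions; the Hadamard-based pixel selection itself is not reproduced here.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };    // assumed 8-bit pixel layout

// Write one watermark bit into bit 1 (the second least significant bit)
// of the blue component: clear it with the 0xFD mask, then set it if needed.
inline void embed_bit(RGB& px, bool bit) {
    px.b = static_cast<std::uint8_t>((px.b & 0xFD) | (bit ? 0x02 : 0x00));
}

// Recover that bit at extraction time.
inline bool extract_bit(const RGB& px) {
    return (px.b & 0x02) != 0;
}

// Embed a bit string at pre-selected pixel indices; in the paper these
// positions would be chosen with the help of a Hadamard matrix.
void embed_watermark(std::vector<RGB>& image,
                     const std::vector<bool>& bits,
                     const std::vector<std::size_t>& positions) {
    for (std::size_t i = 0; i < bits.size() && i < positions.size(); ++i)
        embed_bit(image[positions[i]], bits[i]);
}
```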

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book compiling peer-reviewed papers on advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. The multimedia signals considered include image, video, audio and character-recognition data, as well as the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification and character recognition. Academics and colleagues interested in these topics will find it a worthwhile read.

    Image Evolution Analysis Through Forensic Techniques


    Applications de la reprĂ©sentation parcimonieuse perceptuelle par graphe de dĂ©charges (Spikegramme) pour la protection du droit d’auteur des signaux sonores

    Every year, global music piracy causes billions of dollars in losses to the economy, to jobs and to workers' earnings, as well as millions of dollars in lost tax revenue. Most music piracy is due to the rapid growth and ease of use of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed to protect copyright and to localize the instants at which an audio signal has been tampered with. In this thesis, we use the spikegram, a bio-inspired sparse representation, to design a novel audio tamper localization method, a new audio copyright protection method, and a new perceptual attack against audio watermarking systems.
    First, we propose a tamper localization method for audio signals that combines a Modified Spread Spectrum (MSS) approach with a sparse representation. Perceptual Matching Pursuit (PMP) [Hossein Najaf-Zadeh, 2008] is used to compute the spikegram of the input audio, a sparse representation that is invariant to time shifts [E. C. Smith, 2006] and that accounts for auditory masking, together with two-dimensional masking thresholds. An authentication code (which includes an Identity Number, ID) is inserted into the sparse coefficients and, for high watermarking quality, multiplied with the masking thresholds. The watermarked signal is re-synthesized from the modified coefficients and sent to the decoder. At the decoder, the IDs associated with intact segments are detected correctly, whereas the ID of a tampered segment is mis-detected or not detected, which localizes the tampering. The MSS embedding provides a high capacity in terms of embedded watermark bits, and the method can still detect tampered sections when the encoder and decoder are desynchronized. The semi-fragile watermarking is evaluated by the bit error rate (the number of erroneous bits divided by the total number of embedded bits) under several attacks, and a mean opinion score (MOS) test is used to measure the quality of the watermarked signals. Compared with the state of the art, the proposed method has the lowest error rate in detecting tampered frames (when only one frame is tampered) while preserving signal quality.
    Next, we introduce a new audio copyright protection technique based on the spikegram representation and two dictionaries of gammatone kernels, referred to as the Two-Dictionary Approach (TDA). The spikegram encodes the host signal with a dictionary of gammatone filters; the watermarking dictionary is then selected according to the input bit to be embedded and the signal content, and the watermark bits are inserted into the phase of the selected gammatone kernels. In contrast to traditional phase embedding methods that modify the phase of the signal's Fourier coefficients, the watermark is embedded only into kernels with high amplitudes, since masked (non-meaningful) gammatones have already been removed. Two embedding methods are proposed: one based on embedding into the sign of the gammatones (one-dictionary method) and one based on embedding into both the sign and the phase of the gammatone kernels (two-dictionary method). The TDA is error-free in the absence of attacks, and the decorrelation of the watermarking kernels enables the design of a highly robust audio watermarking scheme. Experiments show the best robustness for the proposed method when the watermarked signal is corrupted by 32 kbps MP3 compression with a payload of 56.5 bps, whereas the state-of-the-art payload for watermarking robust to 32 kbps MP3 is below 50.3 bps. The method is also robust to the new Unified Speech and Audio Coding codec (24 kbps USAC, linear predictive and Fourier-domain modes) with payloads between 5 and 15 bps, and to a variety of signal processing transforms while preserving quality.
    Finally, we use the spikegram to propose three new perceptual attacks, which we compare with recent attacks such as 32 kbps MP3 and 24 kbps USAC: the PMP attack, the inaudible noise attack, and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and re-synthesized with a spikegram. In the inaudible noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, each frame of the spikegram representation is, when possible, replaced with a combination of similar frames located in other parts of the spikegram. Evaluated against a spread spectrum watermark decoder, the PMP and inaudible noise attacks are roughly as effective as the 32 kbps MP3 attack, while the sparse replacement attack reduces the normalized correlation of the spread spectrum decoder by a greater factor than 32 kbps MP3 or 24 kbps USAC.
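
    A minimal sketch of the general additive spread-spectrum idea the abstract builds on: one watermark bit is spread over the sparse coefficients with a key-derived ±1 sequence, scaled by the masking thresholds, and recovered by correlation. The scaling factor, the PRNG-based sequence and the function names are illustrative assumptions and do not reproduce the thesis's MSS or spikegram code.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Key-derived bipolar spreading sequence p_i in {-1, +1}.
std::vector<int> spreading_sequence(std::uint64_t key, std::size_t n) {
    std::mt19937_64 prng(key);
    std::vector<int> p(n);
    for (auto& v : p) v = (prng() & 1u) ? 1 : -1;
    return p;
}

// Embed one bit b into sparse coefficients c, weighted by the per-coefficient
// masking thresholds m so the change stays perceptually hidden.
void embed_bit(std::vector<double>& c, const std::vector<double>& m,
               const std::vector<int>& p, bool b, double alpha = 0.1) {
    const double s = b ? 1.0 : -1.0;
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] += alpha * m[i] * p[i] * s;
}

// Blind detection: correlate the (possibly attacked) coefficients with the
// same key-derived sequence and take the sign of the correlation.
bool detect_bit(const std::vector<double>& c, const std::vector<int>& p) {
    double corr = 0.0;
    for (std::size_t i = 0; i < c.size(); ++i) corr += c[i] * p[i];
    return corr > 0.0;
}
```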

    Resiliency Assessment and Enhancement of Intrinsic Fingerprinting

    Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal data processing history and determine data integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can appear in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. This dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content. To mitigate such a mismatch, we propose schemes to incorporate image content into training image selection and significantly improve the identification performance. We also consider the effect of post-processing on intrinsic fingerprinting, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely adversarial actions that intentionally manipulate intrinsic fingerprints. We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification, and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect regards the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks. We propose to use the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear and shift-invariant (LSI) and non-LSI operations.
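
    For the last point, a common way to form such a frequency-response fingerprint is the least-squares (cross-spectrum) estimate over paired input/output DFT blocks, shown below as a generic illustration rather than the dissertation's exact formulation.

```latex
% Generic least-squares estimate of an LSI operation's frequency response
% at bin k from paired input/output DFT blocks X_n, Y_n.
\[
  \hat{H}(\omega_k) \;=\;
  \frac{\sum_{n} Y_n(\omega_k)\,\overline{X_n(\omega_k)}}
       {\sum_{n} \bigl|X_n(\omega_k)\bigr|^{2}}
\]
```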

    DRONE DELIVERY OF CBNRECy – DEW WEAPONS Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD)

    Drone Delivery of CBNRECy – DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD) is our sixth textbook in a series covering the world of UASs and UUVs. This textbook takes on a whole new purview for UAS / CUAS / UUV (drone) systems: how they can be used to deploy Weapons of Mass Destruction and Disruption against CBRNE and civilian targets of opportunity. We are concerned with the future use of these inexpensive devices and their availability to maleficent actors. Our work suggests that airborne UASs and underwater UUVs will be the future of military and civilian terrorist operations; they can deliver a huge punch for a low investment while minimizing human casualties.