
    SecMon: End-to-End Quality and Security Monitoring System

    Voice over Internet Protocol (VoIP) is becoming an increasingly available and popular way for Internet users to communicate. This also applies to Peer-to-Peer (P2P) systems, and merging the two has already proven successful (e.g. Skype). Although the existing VoIP standards provide assurances of security and Quality of Service (QoS), these features are usually optional and supported by only a limited number of implementations. As a result, the lack of mandatory and widely applicable QoS and security guarantees makes contemporary VoIP systems vulnerable to attacks and network disturbances. In this paper we address these issues and propose the SecMon system, which simultaneously provides a lightweight security mechanism and improves the quality parameters of the call. SecMon is intended especially for VoIP service over P2P networks, and its main advantage is that it provides authentication, data integrity services, adaptive QoS and (D)DoS attack detection. Moreover, the SecMon approach is a low-bandwidth solution that is transparent to users and possesses a self-organizing capability. These features are accomplished mainly by utilizing two information hiding techniques: digital audio watermarking and network steganography. These techniques create covert channels that serve as transport channels for lightweight QoS measurement results. The metrics are aggregated in a reputation system that enables selection of the best route path in the P2P network; the reputation system also helps to mitigate (D)DoS attacks, maximize performance and increase transmission efficiency in the network.
    Comment: Paper was presented at the 7th international conference IBIZA 2008: On Computer Science - Research And Applications, Kazimierz Dolny, Poland, 31.01-2.02.2008; 14 pages, 5 figures
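
    The abstract describes, at a high level, covert channels carrying QoS measurements that feed a reputation system used for route selection. As a rough, hypothetical illustration of that last step only (not the authors' SecMon design), the sketch below aggregates per-peer delay/loss reports into reputation scores and picks the best candidate path; the delay/loss-to-quality mapping, the EWMA smoothing and the "worst hop" rule are assumptions.

```python
# Hypothetical sketch: aggregate covert-channel QoS reports into peer reputations
# and choose a route. The quality mapping, the EWMA smoothing and the "worst hop"
# path rule are illustrative assumptions, not the SecMon scheme.
from dataclasses import dataclass


@dataclass
class PeerReputation:
    score: float = 0.5   # start neutral
    alpha: float = 0.2   # EWMA smoothing factor (assumed)

    def update(self, delay_ms: float, loss_rate: float) -> None:
        # Map one QoS report to [0, 1]: low delay and low loss -> high quality.
        quality = max(0.0, 1.0 - delay_ms / 400.0) * (1.0 - loss_rate)
        self.score = (1 - self.alpha) * self.score + self.alpha * quality


class ReputationSystem:
    def __init__(self) -> None:
        self.peers: dict[str, PeerReputation] = {}

    def report(self, peer_id: str, delay_ms: float, loss_rate: float) -> None:
        self.peers.setdefault(peer_id, PeerReputation()).update(delay_ms, loss_rate)

    def best_path(self, candidate_paths: list[list[str]]) -> list[str]:
        # A path is only as good as its worst hop (assumed selection rule).
        def path_score(path: list[str]) -> float:
            return min(self.peers.get(p, PeerReputation()).score for p in path)
        return max(candidate_paths, key=path_score)


if __name__ == "__main__":
    rs = ReputationSystem()
    rs.report("peerA", delay_ms=40, loss_rate=0.01)
    rs.report("peerB", delay_ms=250, loss_rate=0.15)   # degraded or flooded node
    rs.report("peerC", delay_ms=60, loss_rate=0.02)
    print(rs.best_path([["peerA", "peerB"], ["peerA", "peerC"]]))  # ['peerA', 'peerC']
```

    In such a scheme, a peer whose reported QoS collapses under a (D)DoS flood is naturally routed around, which is the mitigation effect the abstract refers to.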

    DeAR: A Deep-learning-based Audio Re-recording Resilient Watermarking

    Audio watermarking is widely used for tracing the source of leaks, and the robustness of the watermark determines the traceability of the algorithm. With the development of digital technology, audio re-recording (AR) has become an efficient and covert means of stealing secrets: the AR process can drastically destroy the watermark signal while preserving the original information. This puts forward a new requirement for audio watermarking, namely robustness to AR distortions. Unfortunately, none of the existing algorithms can effectively resist AR attacks due to the complexity of the AR process. To address this limitation, this paper proposes DeAR, a deep-learning-based audio re-recording resilient watermarking scheme. Inspired by DNN-based image watermarking, we pioneer a deep learning framework for audio carriers, on top of which the watermark signal can be effectively embedded and extracted. Meanwhile, to resist the AR attack, we carefully analyze the distortions that occur in the AR process and design a corresponding distortion layer to cooperate with the proposed watermarking framework. Extensive experiments show that the proposed algorithm resists not only common electronic-channel distortions but also AR distortions. Under the premise of high-quality embedding (SNR = 25.86 dB) and at a common re-recording distance (20 cm), the algorithm achieves an average bit recovery accuracy of 98.55%.
    Comment: Accepted by AAAI202
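
    The paper's network architecture and re-recording distortion model are not reproduced in the abstract; the following PyTorch sketch only illustrates the generic encoder / distortion-layer / decoder training pattern it refers to, with additive noise plus a moving-average low-pass standing in for real re-recording distortion. All layer sizes and constants are assumed for illustration, not taken from DeAR.

```python
# Illustrative encoder / distortion-layer / decoder pattern for learned audio
# watermarking. The layer sizes, the fidelity weight and the crude "re-recording"
# surrogate (moving-average low-pass + noise) are assumptions, not the DeAR model.
import torch
import torch.nn as nn

MSG_BITS, AUDIO_LEN = 16, 16000   # assumed payload size and 1 s of 16 kHz audio


class Encoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(AUDIO_LEN + MSG_BITS, 512), nn.ReLU(),
                                 nn.Linear(512, AUDIO_LEN), nn.Tanh())

    def forward(self, audio, msg):
        residual = self.net(torch.cat([audio, msg], dim=-1))
        return audio + 0.01 * residual        # keep the embedded watermark low-power


class Decoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(AUDIO_LEN, 512), nn.ReLU(),
                                 nn.Linear(512, MSG_BITS))

    def forward(self, audio):
        return self.net(audio)                # one logit per message bit


def distortion_layer(audio):
    """Differentiable stand-in for re-recording: low-pass (moving average) + noise."""
    kernel = torch.ones(1, 1, 9) / 9.0
    lowpassed = nn.functional.conv1d(audio.unsqueeze(1), kernel, padding=4).squeeze(1)
    return lowpassed + 0.005 * torch.randn_like(lowpassed)


enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)

audio = torch.randn(8, AUDIO_LEN)                       # stand-in batch of host audio
msg = torch.randint(0, 2, (8, MSG_BITS)).float()        # random watermark bits

watermarked = enc(audio, msg)
logits = dec(distortion_layer(watermarked))
loss = (nn.functional.binary_cross_entropy_with_logits(logits, msg)
        + 10.0 * nn.functional.mse_loss(watermarked, audio))   # decode + fidelity terms
loss.backward()
opt.step()
```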

    Modeling and frequency tracking of marine mammal whistle calls

    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2009.
    Marine mammal whistle calls present an attractive medium for covert underwater communications. High-quality models of the whistle calls are needed in order to synthesize natural-sounding whistles with embedded information. Since the whistle calls are composed of frequency-modulated harmonic tones, they are best modeled as a weighted superposition of harmonically related sinusoids. Previous research with bottlenose dolphin whistle calls has produced synthetic whistles that sound too “clean” for use in a covert communications system. Because of the sensitivity of the human auditory system, watermarking schemes that only slightly modify the fundamental frequency contour have good potential for producing natural-sounding whistles embedded with retrievable watermarks. Structured total least squares is used with linear prediction analysis to track the time-varying fundamental frequency and harmonic amplitude contours throughout a whistle call. Simulation and experimental results demonstrate the capability to accurately model bottlenose dolphin whistle calls and to retrieve embedded information from watermarked synthetic whistle calls. Different fundamental frequency watermarking schemes are proposed and compared on their ability to produce natural-sounding synthetic whistles and to yield suitable watermark detection and retrieval.
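
    As a toy illustration of the harmonic model and fundamental-frequency watermarking idea mentioned above (not the thesis' algorithm), the NumPy sketch below synthesizes a whistle as a weighted sum of harmonics of an F0 contour, embeds one bit per frame as a small F0 offset, and reads the bits back from short-time FFT peaks. Frame length, offset size and the contour itself are assumed values.

```python
# Toy harmonic-whistle synthesis with a fundamental-frequency (F0) watermark.
# The frame length, the +/-20 Hz bit offset and the 3-harmonic model are assumptions.
import numpy as np

FS, FRAME, N_HARM, DF = 96000, 4096, 3, 20.0   # sample rate, frame, harmonics, bit offset (Hz)


def synthesize(f0_contour, bits):
    """One frame per bit: shift the frame's F0 up (bit 1) or down (bit 0) by DF."""
    phase = np.zeros(N_HARM)
    frames = []
    for f0, bit in zip(f0_contour, bits):
        f = f0 + (DF if bit else -DF)
        t = np.arange(FRAME) / FS
        frame = sum((0.6 ** k) * np.cos(2 * np.pi * (k + 1) * f * t + phase[k])
                    for k in range(N_HARM))
        phase = (phase + 2 * np.pi * np.arange(1, N_HARM + 1) * f * FRAME / FS) % (2 * np.pi)
        frames.append(frame)
    return np.concatenate(frames)


def extract(signal, f0_contour):
    """Recover each bit by comparing the measured F0 peak with the nominal contour."""
    bits = []
    for i, f0 in enumerate(f0_contour):
        frame = signal[i * FRAME:(i + 1) * FRAME]
        spec = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
        freqs = np.fft.rfftfreq(FRAME, 1 / FS)
        band = (freqs > f0 - 200) & (freqs < f0 + 200)   # search near the fundamental
        peak = freqs[band][np.argmax(spec[band])]
        bits.append(1 if peak > f0 else 0)
    return bits


f0_contour = np.linspace(8000, 12000, 12)                # a rising, dolphin-like sweep
bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
print(extract(synthesize(f0_contour, bits), f0_contour) == bits)   # True
```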

    Wavelet-Based Audio Embedding & Audio/Video Compression

    With the decline in military spending, the United States relies heavily on stateside support, and communications have never been more important: high-quality audio and video capabilities are a must. Watermarking, traditionally used for copyright protection, is used here in a new way: an efficient wavelet-based watermarking technique embeds audio information into a video signal. Several highly effective compression techniques are then applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, first-difference coding, and Huffman coding. To demonstrate the potential of this audio-embedding audio/video compression system, an audio signal is embedded into a video signal and the combined signal is compressed. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB, and the audio signal is extracted without error.
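
    The thesis details its own embedding and compression chain; purely as a generic sketch of hiding audio bits in the wavelet domain of a video frame, the code below embeds a bit string and reads it back. It assumes the PyWavelets package, a Haar transform and parity quantization of the diagonal detail coefficients, none of which are claimed to match the original method.

```python
# Generic wavelet-domain embedding sketch: hide audio bits in a video frame's Haar
# diagonal detail coefficients via parity quantization. The wavelet, the step size
# DELTA and the choice of sub-band are illustrative assumptions.
import numpy as np
import pywt

DELTA = 8.0   # quantization step: larger -> more robust, more visible


def embed(frame, bits):
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    flat = cD.flatten()
    for i, bit in enumerate(bits):
        q = int(np.round(flat[i] / DELTA))
        if q % 2 != bit:              # force the coefficient's parity to carry the bit
            q += 1
        flat[i] = q * DELTA
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), "haar")


def extract(frame, n_bits):
    _, (_, _, cD) = pywt.dwt2(frame, "haar")
    flat = cD.flatten()
    return [int(np.round(flat[i] / DELTA)) % 2 for i in range(n_bits)]


rng = np.random.default_rng(0)
video_frame = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in frame
audio_bits = [int(b) for b in rng.integers(0, 2, size=64)]          # stand-in audio payload
marked = embed(video_frame, audio_bits)
print(extract(marked, 64) == audio_bits)   # True: the bits survive the round trip
```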

    Employing Psychoacoustic Model for Digital Audio Watermarking

    This thesis discusses digital audio watermarking that employs a psychoacoustic model to make the embedded watermark inaudible to the audience. Because digital media can be distributed easily without any loss of information, the intellectual property of music creators and distributors can be affected. To prevent this, we propose using a spread-spectrum technique and a psychoacoustic model for the embedding process, and zero-forcing equalization, detection, and Wiener filtering for the extraction process. Three audio samples, categorized as quiet, moderate, and noisy signals, were chosen for the experiment. The findings show that our watermarking scheme achieves its intended purposes: to test digital audio watermarking employing a psychoacoustic model, to embed messages of different lengths in order to test the accuracy of the extracted data, and to study the suitability of using a hash function to verify modification attacks.
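
    The full chain described above (psychoacoustic model, zero-forcing equalization, Wiener filtering) is not reproduced here; the sketch below illustrates only the spread-spectrum part, spreading one bit per frame with a pseudo-random sequence, scaling it by a frame-RMS proxy that stands in for a true psychoacoustic masking threshold, and detecting bits by correlation. All constants are assumptions.

```python
# Spread-spectrum audio watermarking sketch: one bit per frame, spread by a shared
# pseudo-noise sequence and detected by correlation with it. The frame-RMS scaling
# is only a crude stand-in for a real psychoacoustic masking model, and the
# strength value is an assumption.
import numpy as np

FRAME = 4096
rng = np.random.default_rng(42)
pn = rng.choice([-1.0, 1.0], size=FRAME)        # shared pseudo-noise sequence (the key)


def embed(audio, bits, strength=0.1):
    marked = audio.copy()
    for i, bit in enumerate(bits):
        seg = slice(i * FRAME, (i + 1) * FRAME)
        # Scale with frame RMS so the watermark is quieter in quiet passages.
        alpha = strength * np.sqrt(np.mean(audio[seg] ** 2) + 1e-12)
        marked[seg] += (1.0 if bit else -1.0) * alpha * pn
    return marked


def extract(audio, n_bits):
    bits = []
    for i in range(n_bits):
        seg = audio[i * FRAME:(i + 1) * FRAME]
        bits.append(1 if np.dot(seg, pn) > 0 else 0)
    return bits


host = 0.1 * rng.standard_normal(FRAME * 8)     # stand-in for a real audio signal
payload = [1, 0, 0, 1, 1, 0, 1, 0]
print(extract(embed(host, payload), 8) == payload)   # True for this toy signal
```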