
    Attention Driven Solutions for Robust Digital Watermarking Within Media

    As digital technologies have expanded dramatically over the last decade, content recognition now plays a major role in the control of media. Of the systems currently available, digital watermarking provides a robust, maintainable solution for enhancing media security. The two main properties of digital watermarking, imperceptibility and robustness, conflict with each other, but by employing visual attention-based mechanisms within the watermarking framework, highly robust watermarking solutions are obtainable while high media quality is also maintained. This thesis first provides suitable bottom-up saliency models for raw images and video. The image and video saliency algorithms are estimated directly within the wavelet domain for enhanced compatibility with the watermarking framework. By combining colour, orientation and intensity contrasts in the image model, and globally compensated object motion in the video model, novel wavelet-based visual saliency algorithms are provided. The work extends these saliency models into a unique visual attention-based watermarking scheme by increasing the watermark weighting parameter within visually uninteresting regions. An increase in watermark robustness of up to 40% against various filtering attacks, JPEG2000 and H.264/AVC compression is obtained while the media quality is maintained, as verified by various objective and subjective evaluation tools. As most video sequences are stored in an encoded format, this thesis also studies watermarking schemes in the compressed domain. First, the work provides a compressed-domain saliency model formulated directly within the HEVC codec, utilizing coding decisions such as block partition size, residual magnitude, intra-frame angular prediction mode and motion vector difference magnitude. Large computational savings, of 50% or greater, are obtained compared with existing methodologies, as the saliency maps are generated from partially decoded bitstreams.
Finally, the saliency maps formulated within the compressed HEVC domain are studied within the watermarking framework. A joint encoder scheme and a frame-domain watermarking scheme are proposed, embedding data into the quantised transform residual data or the wavelet coefficients, respectively, in regions that exhibit low visual salience.
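
    The core idea of the attention-based embedding above, scaling the watermark strength by inverse saliency so that visually uninteresting regions carry a stronger mark, can be sketched as follows. This is a minimal illustrative model, not the thesis's actual algorithm: the additive spread-spectrum rule, the `base_alpha` strength and the synthetic coefficient/saliency data are assumptions for demonstration only.

```python
import numpy as np

def embed_saliency_weighted(coeffs, watermark, saliency, base_alpha=0.05):
    """Additively embed a +/-1 watermark into transform coefficients,
    scaling the strength by inverse saliency: low-saliency (visually
    uninteresting) regions receive a stronger watermark."""
    # Per-coefficient strength: strongest where saliency is lowest.
    alpha = base_alpha * (1.0 - saliency)
    return coeffs + alpha * np.abs(coeffs) * watermark

def detect_correlation(received, original, watermark):
    """Informed detector: correlate the embedding residual with the
    candidate watermark; a clearly positive score indicates presence."""
    residual = received - original
    return float(np.sum(residual * watermark))

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 10, size=(64, 64))         # stand-in wavelet subband
watermark = rng.choice([-1.0, 1.0], size=(64, 64))
saliency = rng.uniform(0, 1, size=(64, 64))       # normalised saliency map

marked = embed_saliency_weighted(coeffs, watermark, saliency)
score = detect_correlation(marked, coeffs, watermark)
print(score > 0)   # correlation with the true watermark is positive
```

In a real scheme the saliency map would come from the wavelet-domain attention model and the marked coefficients would be inverse-transformed back to pixels.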

    Watermarking of the MPEG-4 AVC Compressed Stream

    This thesis addresses MPEG-4 AVC stream watermarking and considers two challenges, one theoretical and one applicative, across two application domains: ownership protection and content integrity verification. From the theoretical point of view, the main challenge is to develop a unitary watermarking framework (insertion/detection) able to serve both applications in the compressed domain. From the methodological point of view, the challenge is to instantiate this theoretical framework to serve the targeted applications. The first main contribution consists in building a theoretical framework for multi-symbol watermarking based on quantization index modulation (m-QIM). The insertion rule is designed analytically by extending the binary QIM rule, and the detection rule is optimised so as to ensure a minimal probability of error under additive white Gaussian noise attacks. It is thus demonstrated that the data payload can be increased by a factor of log2(m) for a prescribed transparency and additive Gaussian noise power. A data payload of 150 bits per minute, i.e. about 20 times larger than the limit imposed by the DCI standard, is obtained. The second main contribution consists in specifying a preprocessing MPEG-4 AVC shaping operation that eliminates the intra-frame drift effect, i.e. the distortion spread in the compressed stream caused by the MPEG encoding paradigm. The drift-distortion propagation problem in MPEG-4 AVC is expressed algebraically and the corresponding system of equations is solved under drift-free constraints. The drift-free shaping yields a gain in transparency of 2 dB in PSNR.
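
    The m-QIM principle, each of m symbols selecting a shifted quantizer so that the payload grows by log2(m) bits per sample, can be sketched as follows. This is a generic QIM illustration under assumed parameters (`delta`, `m`, a Gaussian host signal), not the thesis's exact insertion/detection rules.

```python
import numpy as np

def mqim_embed(x, symbols, m, delta):
    """Embed one of m symbols per host sample via quantization index
    modulation: each symbol selects a quantizer of step delta shifted
    by symbol * delta / m."""
    shift = symbols * delta / m
    return np.round((x - shift) / delta) * delta + shift

def mqim_detect(y, m, delta):
    """Minimum-distance detection: pick the symbol whose shifted
    quantizer lattice lies closest to the received sample."""
    cands = np.arange(m) * delta / m                             # (m,)
    nearest = np.round((y[..., None] - cands) / delta) * delta + cands
    return np.argmin(np.abs(y[..., None] - nearest), axis=-1)

rng = np.random.default_rng(1)
m, delta = 4, 8.0                    # 4 symbols -> log2(4) = 2 bits/sample
x = rng.normal(0, 50, size=1000)     # host signal (e.g. transform coefficients)
msg = rng.integers(0, m, size=1000)

y = mqim_embed(x, msg, m, delta)
noisy = y + rng.normal(0, 0.5, size=1000)   # mild additive Gaussian attack
recovered = mqim_detect(noisy, m, delta)
print((recovered == msg).mean())            # fraction of symbols recovered
```

Without noise the detection is exact; under noise, errors appear once the perturbation exceeds half the inter-lattice spacing delta/(2m), which is the robustness/payload trade-off the m-QIM analysis formalises.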

    Reduced reference image and video quality assessments: review of methods

    With the growing demand for image and video-based applications, the need for consistent quality assessment metrics for images and video has increased. Different approaches have been proposed in the literature to estimate the perceptual quality of images and videos. These approaches can be divided into three main categories: full-reference (FR), reduced-reference (RR) and no-reference (NR). In RR methods, instead of providing the original image or video as a reference, certain features of the original (e.g., texture or edges) are provided for quality assessment. During the last decade, RR-based quality assessment has been a popular research area for a variety of applications such as social media, online games, and video streaming. In this paper, we present a review and classification of the latest research on RR-based image and video quality assessment. We also summarize the different databases used in the field of 2D and 3D image and video quality assessment. This paper will help specialists and researchers stay well informed about recent progress in RR-based image and video quality assessment. The review and classification presented here will also be useful for gaining an understanding of multimedia quality assessment and the state-of-the-art approaches used for analysis. In addition, it will help the reader select appropriate quality assessment methods and parameters for their respective applications.
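
    The reduced-reference workflow surveyed here can be sketched in a few lines: only a compact feature vector of the reference travels with the content, and quality is scored by comparing feature statistics at the receiver. The gradient-magnitude histogram and the 1 minus L1-distance score below are illustrative assumptions, not a specific method from the survey.

```python
import numpy as np

def rr_features(img, bins=16):
    """Compact reduced-reference descriptor: a normalised histogram of
    gradient magnitudes. Only these few numbers, not the full image,
    accompany the content to the receiver."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def rr_quality(ref_feat, dist_img, bins=16):
    """Score in (0, 1]: 1.0 means the distorted image matches the
    reference statistics exactly; lower means more distortion."""
    dist_feat = rr_features(dist_img, bins)
    return 1.0 - 0.5 * np.abs(ref_feat - dist_feat).sum()

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(128, 128)).astype(float)
noisy = np.clip(reference + rng.normal(0, 25, reference.shape), 0, 255)

feat = rr_features(reference)            # the only side information sent
q_same = rr_quality(feat, reference)     # identical image scores 1.0
q_noisy = rr_quality(feat, noisy)        # degraded image scores lower
print(q_same, q_noisy)
```

Practical RR metrics use richer features (wavelet statistics, phase structures), but the pattern, transmit features, compare distributions, is the same.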

    Virtual Eye – Revolutionizing Vision Assistance For People With Disabilities

    Visually challenged individuals face numerous challenges in their daily lives. They have difficulty reading printed materials such as books, magazines, and newspapers, a limitation that can significantly affect their education, since they may not have access to all the materials they need to learn. Moving around in unfamiliar places can be a daunting task, digital and printed materials are often unavailable in accessible formats, and identifying objects can be difficult and frustrating, especially when they are alone. To address these issues, we are developing a mobile application for visually challenged individuals that provides a range of features such as text-to-speech, speech-to-text, image-to-audio, and PDF-to-audio conversion. It enables visually challenged individuals to access information, read books, identify objects, communicate, and navigate with ease and independence. The app's user-friendly interface can be operated both manually and by voice command, making it easy to use for people with varying levels of technical expertise. Overall, the Virtual Eye application is an essential tool for visually challenged individuals: with it, they can access information, communicate, and identify objects without the need for a third party, enhancing their quality of life and sense of autonomy.

    Efficient and Robust Video Steganography Algorithms for Secure Data Communication

    Over the last two decades, the science of secretly embedding and communicating data has gained tremendous significance due to technological advances in communication and digital content. Steganography is the art of concealing secret data within a media carrier such as text, audio, image, or video data in order to build covert communication between authorised parties. Nowadays, video steganography techniques are important in many video-sharing and social networking applications, such as live streaming, YouTube, Twitter, and Facebook, because of noteworthy developments in video delivery over the Internet. The performance of any steganography method ultimately relies on its imperceptibility, hiding capacity, and robustness against attacks. Although many video steganography methods exist, several of them lack preprocessing stages; weak security, low embedding capacity, limited imperceptibility, and poor robustness against attacks are other issues that affect these algorithms. This dissertation investigates and analyzes cutting-edge video steganography techniques in both compressed and raw domains. Moreover, it provides solutions to the aforementioned problems by proposing new and effective methods for digital video steganography. The key objectives of this research are to develop: 1) a highly secure video steganography algorithm based on error correcting codes (ECC); 2) an increased-payload video steganography algorithm in the discrete wavelet domain based on ECC; 3) a novel video steganography algorithm based on Kanade-Lucas-Tomasi (KLT) tracking and ECC; 4) a robust video steganography algorithm in the wavelet domain based on KLT tracking and ECC; 5) a new video steganography algorithm based on multiple object tracking (MOT) and ECC; and 6) a robust and secure video steganography algorithm in the discrete wavelet and discrete cosine transform domains based on MOT and ECC.
The experimental results from our research demonstrate that the proposed algorithms achieve higher embedding capacity as well as better imperceptibility of stego videos. Furthermore, the preprocessing stages increase the security and robustness of the proposed algorithms against attacks when compared to state-of-the-art steganographic methods.
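
    The role of the error correcting codes in these algorithms, protecting the secret bits so that channel or attack errors can be corrected after extraction, can be illustrated with a Hamming(7,4) code and simple LSB embedding. The LSB carrier and this particular code are assumptions for demonstration; the dissertation's schemes embed in wavelet/DCT domains with their own ECC choices.

```python
import numpy as np

# Generator and parity-check matrices of the classic Hamming(7,4) code,
# systematic form: codeword = [d1 d2 d3 d4 p1 p2 p3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def hamming_encode(nibble):
    return (nibble @ G) % 2

def hamming_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    syndrome = (H @ code) % 2
    if syndrome.any():
        # the syndrome equals exactly one column of H -> error position
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        code = code.copy()
        code[err] ^= 1
    return code[:4]

secret = np.array([1, 0, 1, 1])
codeword = hamming_encode(secret)

# "Embed" the codeword in the LSBs of 7 cover samples (stand-in pixels).
cover = np.array([200, 13, 57, 88, 140, 9, 250])
stego = (cover & ~1) | codeword

# Simulate a one-bit channel/attack error, then extract and decode.
attacked = stego.copy()
attacked[2] ^= 1                 # flips the embedded bit at position 2
recovered = hamming_decode(attacked & 1)
print(recovered)                 # matches the original secret bits
```

Stronger codes (BCH, Reed-Solomon) follow the same encode-embed-extract-decode pattern while tolerating more errors per block.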

    Perceptual Video Quality Assessment and Enhancement

    With the rapid development of networked visual communication technologies, digital video has become ubiquitous and indispensable in our everyday lives. Video acquisition, communication, and processing systems introduce various types of distortions, which may have a major impact on the video quality perceived by human observers. Effective and efficient objective video quality assessment (VQA) methods that can predict perceptual video quality are highly desirable in modern visual communication systems for performance evaluation, quality control and resource allocation purposes. Moreover, perceptual VQA measures may also be employed to optimize a wide variety of video processing algorithms and systems for the best perceptual quality. This thesis explores several novel ideas in the areas of video quality assessment and enhancement. Firstly, by considering a video signal as a 3D volume image, we propose a 3D structural similarity (SSIM) based full-reference (FR) VQA approach, which also incorporates local information content and local distortion-based pooling methods. Secondly, a reduced-reference (RR) VQA scheme is developed by tracing the evolution of local phase structures over time in the complex wavelet domain. Furthermore, we propose a quality-aware video system which combines spatial and temporal quality measures with a robust video watermarking technique, such that RR-VQA can be performed without transmitting RR features via an ancillary lossless channel. Finally, a novel strategy for enhancing video denoising algorithms, namely poly-view fusion, is developed by examining a video sequence as a 3D volume image from multiple (front, side, and top) views. This leads to significant and consistent gains in terms of both peak signal-to-noise ratio (PSNR) and SSIM, especially at high noise levels.
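
    The poly-view fusion idea, running a 2D denoiser over the front, side and top slicings of the 3D video volume and fusing the results, can be sketched as follows. The box-filter denoiser and plain averaging fusion are simplifying assumptions; any 2D denoiser and weighted fusion could take their place.

```python
import numpy as np

def denoise_2d(frame, k=3):
    """Toy 2D denoiser: a k x k box filter (stand-in for any 2D method)."""
    pad = k // 2
    p = np.pad(frame, pad, mode='edge')
    out = np.zeros_like(frame)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def denoise_view(vol, axis):
    """Apply the 2D denoiser slice-by-slice along one viewing axis."""
    slices = [denoise_2d(s) for s in np.moveaxis(vol, axis, 0)]
    return np.moveaxis(np.stack(slices), 0, axis)

def poly_view_fusion(vol):
    """Fuse the front, side and top views of the 3D video volume."""
    return sum(denoise_view(vol, ax) for ax in range(3)) / 3.0

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 32), (16, 32, 1))   # smooth 3D volume
noisy = clean + rng.normal(0, 0.2, clean.shape)

single = denoise_view(noisy, 0)          # front view only
fused = poly_view_fusion(noisy)
mse_single = float(np.mean((single - clean) ** 2))
mse_fused = float(np.mean((fused - clean) ** 2))
print(mse_fused < mse_single)   # fusion averages out more of the noise
```

The gain comes from the three views leaving largely independent residual noise, so averaging them suppresses it further, which is the mechanism behind the reported PSNR/SSIM improvements.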

    Visual Saliency Estimation Via HEVC Bitstream Analysis

    Since information technology began developing dramatically in the 1950s, digital images and video have become ubiquitous. In the last decade, image and video processing has become more and more popular in biomedical, industrial, artistic and other fields, and progress has been made in visual information display, storage and transmission. The attendant problem is that video processing tasks in the time domain become particularly arduous. Based on a study of existing compressed-domain video saliency detection models, a new saliency estimation model for video based on High Efficiency Video Coding (HEVC) is presented. First, the relevant features are extracted from the HEVC encoded bitstream. A naive Bayesian model is used to train and test these features against the original YUV videos and ground truth. The intra-frame saliency map is obtained by training and testing intra features, and the inter-frame saliency map is obtained by combining intra saliency with motion vectors. The area under the ROC curve of the proposed intra model is 0.9561. Other classification methods, such as support vector machines (SVM), k-nearest neighbours (KNN) and decision trees, are evaluated for comparison. The effect of varying the compression ratio on the estimated saliency has also been analysed.
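
    A Gaussian naive Bayes classifier over per-block bitstream features, the kind of model used here for intra saliency, can be sketched as follows. The two features and their synthetic distributions are invented stand-ins for the actual HEVC features (block partition size, residual magnitude, etc.), and the classifier is a minimal from-scratch implementation rather than a library call.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum_f log N(x_f; mu_cf, var_cf) for each class c
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(np.log(self.prior)[None] + ll, axis=1)]

# Synthetic stand-ins for per-block bitstream features:
# [log2 block size, residual magnitude]. Salient blocks tend to be
# small with large residuals; non-salient blocks large and smooth.
rng = np.random.default_rng(4)
n = 500
salient = np.column_stack([rng.normal(3, 0.5, n), rng.normal(40, 8, n)])
non_salient = np.column_stack([rng.normal(5, 0.5, n), rng.normal(10, 5, n)])
X = np.vstack([salient, non_salient])
y = np.array([1] * n + [0] * n)

model = GaussianNB().fit(X, y)
acc = (model.predict(X) == y).mean()
print(acc)   # well-separated features give high accuracy
```

Predicting a saliency label per coding block and arranging the posteriors spatially yields the intra-frame saliency map; no pixel decoding is required, which is where the computational saving comes from.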

    Fast and Efficient Foveated Video Compression Schemes for H.264/AVC Platform

    Some fast and efficient foveated video compression schemes for the H.264/AVC platform are presented in this dissertation. The exponential growth in networking technologies and the widespread use of video-based multimedia information over the Internet for mass communication applications, such as social networking, e-commerce and education, have greatly promoted the development of video coding. Recently, foveated image and video compression schemes have been in high demand, as they not only match the perception of the human visual system (HVS) but also yield higher compression ratios: important or salient regions are compressed with higher visual quality, while non-salient regions are compressed with a higher compression ratio. Among the foveated video compression developments of the last few years, saliency detection based foveated schemes are a key area of intense research. Keeping this in mind, we propose two multi-scale saliency detection schemes: (1) multi-scale phase spectrum based saliency detection (FTPBSD); and (2) sign-DCT multi-scale pseudo-phase spectrum based saliency detection (SDCTPBSD). In the FTPBSD scheme, a saliency map is determined using the phase spectrum of a given image/video with a unity magnitude spectrum. The proposed SDCTPBSD method, on the other hand, uses the sign information of the discrete cosine transform (DCT), also known as the sign-DCT (SDCT), which resembles the response of the receptive field neurons of the HVS. A bottom-up spatio-temporal saliency map is obtained by a linear weighted sum of the spatial and temporal saliency maps. Based on these saliency detection techniques, foveated video compression (FVC) schemes (FVC-FTPBSD and FVC-SDCTPBSD) are developed to further improve compression performance. Moreover, the 2D discrete cosine transform (2D-DCT) is widely used in various video coding standards for block-based transformation of spatial data.
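
    The phase-spectrum idea behind FTPBSD, keeping only the phase of the Fourier transform by forcing a unity magnitude spectrum, can be sketched as follows. The smoothing step and the test image are illustrative assumptions; the dissertation's multi-scale formulation is more elaborate.

```python
import numpy as np

def phase_spectrum_saliency(img):
    """Saliency from the phase spectrum alone: take the FFT, force a
    unity magnitude spectrum, invert, and square. Structure that breaks
    the image's redundancy survives; smooth regions do not."""
    f = np.fft.fft2(img.astype(float))
    phase_only = np.fft.ifft2(np.exp(1j * np.angle(f)))
    sal = np.abs(phase_only) ** 2
    # light smoothing via a few separable box-filter passes
    for _ in range(3):
        sal = (np.roll(sal, 1, 0) + sal + np.roll(sal, -1, 0)) / 3
        sal = (np.roll(sal, 1, 1) + sal + np.roll(sal, -1, 1)) / 3
    return sal / sal.max()

# A flat image with one bright square: the square should dominate saliency.
img = np.zeros((64, 64))
img[28:36, 28:36] = 255.0
sal = phase_spectrum_saliency(img)
print(sal[24:40, 24:40].mean(), sal.mean())
```

A foveated encoder would then allocate finer quantisation to blocks where this map is high and coarser quantisation elsewhere.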
However, for blocks with directional features, the 2D-DCT offers sub-optimal performance and may not be able to represent the video data efficiently with few coefficients, which deteriorates the compression ratio. Various directional transform schemes have been proposed in the literature for efficiently encoding such directional featured blocks; however, these schemes suffer from issues such as the 'mean weighting defect', the use of a large number of DCTs, and a large number of scanning patterns. We propose a directional transform scheme based on a direction-adaptive fixed-length discrete cosine transform (DAFL-DCT) for intra- and inter-frame coding to achieve higher coding efficiency for directional featured blocks. The proposed DAFL-DCT has two encoding modes: (1) a direction-adaptive fixed-length high-efficiency (DAFL-HE) mode for higher compression performance; and (2) a direction-adaptive fixed-length low-complexity (DAFL-LC) mode for low complexity with a fair compression ratio. On the other hand, motion estimation (ME) exploits the temporal correlation between video frames and yields significant improvements in compression ratio while sustaining high visual quality in video coding. Block-matching motion estimation (BMME) is the most popular approach due to its simplicity and efficiency. However, real-world video sequences may contain slow, medium and/or fast motion activities, and a single search pattern does not prove efficient in finding the best-matched block for all motion types. In addition, most BMME schemes assume a uni-modal error surface, whereas real-world video sequences may exhibit a large number of local minima within a search window and thus possess a multi-modal error surface (MES). Hence, the following uni-modal and multi-modal error surface based motion estimation schemes are developed.
(1) a direction-adaptive motion estimation (DAME) scheme; and (2) a pattern-based modified particle swarm optimization motion estimation (PMPSO-ME) scheme. Subsequently, various fast and efficient foveated video compression schemes are developed by combining these techniques to further improve video coding performance while maintaining high visual quality in salient regions. All schemes are incorporated into the H.264/AVC video coding platform, and various experiments have been carried out on the H.264/AVC joint model reference software (version JM 18.6). Using various benchmark metrics, the proposed schemes are compared with existing competitive schemes in terms of rate-distortion curves, Bjontegaard metrics (BD-PSNR, BD-SSIM and BD-bitrate), encoding time, number of search points and subjective evaluation to derive an overall conclusion.
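
    Block-matching motion estimation as discussed above can be illustrated with an exhaustive full search over a small window, the baseline that fast schemes like DAME and PMPSO-ME aim to approximate at far lower cost. The block size, search range and SAD criterion are conventional choices, not the dissertation's specific configuration.

```python
import numpy as np

def full_search_bmme(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of the current frame,
    scan every candidate offset in a +/-search window of the reference
    frame and keep the motion vector with the minimum SAD."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue        # candidate falls outside the frame
                    sad = np.abs(cur_blk - ref[y:y + block, x:x + block]).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

# Reference frame with texture; current frame = reference shifted by (2, 3).
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)

mvs = full_search_bmme(ref, cur, block=8, search=4)
print(mvs[1, 1])   # interior blocks recover the global shift (-2, -3)
```

The full search evaluates (2·search+1)² candidates per block; fast pattern-based and swarm-based schemes visit only a small subset of these points, which is where their encoding-time savings come from.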