
    Efficient simultaneous encryption and compression of digital videos in computationally constrained applications

    This thesis is concerned with secure video transmission over open and wireless network channels. Such transmission would facilitate adequate interaction among trusted entities in computationally constrained applications, such as disaster/conflict zones, secure airborne transmission of video for intelligence, security or surveillance purposes, and secure video communication for law-enforcement agencies in crime fighting or proactive forensics. Video content is generally so large, and so vulnerable to eavesdropping when transmitted over open network channels, that compression and encryption become essential for storage and/or transmission. In terms of security, wireless channels are more vulnerable than other kinds of media to a variety of attacks and eavesdropping. Since wireless communication is the main mode in the above applications, protecting video transmissions from unauthorized access over such channels is a must. The main, multi-faceted challenge in implementing such a task stems from the competing, and to some extent conflicting, requirements of a number of standard control factors: constrained bandwidth, reasonably high image quality at the receiving end, execution time, and robustness against security attacks. Applying compression and encryption simultaneously is a very tough challenge because compression ratio, time complexity, security and quality must all be optimized at once.

    Several image/video compression schemes, such as JPEG, MPEG and JPEG2000, provide reasonable compression while attempting to maintain image quality. The main approach to video compression is to detect and remove spatial correlation within video frames as well as temporal correlation across frames; temporal correlation is expected to be most evident across sequences of frames captured within a short period of time (often a fraction of a second). Correlation can be measured in terms of similarity between blocks of pixels. Frequency-domain transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) have both been used to restructure the frequency content (coefficients) so that such similarity can be detected efficiently; JPEG and MPEG use the DCT, while JPEG2000 uses the DWT. Removing spatial/temporal correlation then amounts to encoding only one block from each class of equivalent (i.e. similar) blocks and remembering the positions of all other blocks within the equivalence class. JPEG2000-compressed images achieve higher image quality than JPEG at the same compression ratios; DCT-based coding suffers from noticeable distortion at high compression ratios, but when the DCT is applied to a block it is easy to isolate the significant coefficients from the non-significant ones.

    Efficient video encryption in computationally constrained applications is a challenge in its own right. It has long been recognised that selective encryption is the only viable approach to dealing with the overwhelming file size, and the selection can be made in the spatial or the frequency domain. The efficiency of simultaneous compression and encryption is a good reason to apply selective encryption in the frequency domain. In this thesis we develop a hybrid of the DWT and the DCT for improved image/video compression in terms of image quality, compression ratio, bandwidth, and efficiency.
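    As a concrete illustration of the hybrid-transform idea, the following sketch applies a one-level Haar DWT to a frame and then a blockwise DCT to one of the detail sub-bands. It is a minimal sketch only: the Haar wavelet, the 8x8 block size and the choice of sub-band are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def haar_dwt2(frame):
    """One-level 2D Haar DWT: returns the LL, LH, HL and HH sub-bands."""
    a = frame[0::2, 0::2].astype(float)   # top-left pixel of each 2x2 block
    b = frame[0::2, 1::2].astype(float)
    c = frame[1::2, 0::2].astype(float)
    d = frame[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0            # approximation (low-low)
    lh = (a + b - c - d) / 2.0            # horizontal detail
    hl = (a - b + c - d) / 2.0            # vertical detail
    hh = (a - b - c + d) / 2.0            # diagonal detail
    return ll, lh, hl, hh

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def blockwise_dct(band, n=8):
    """Apply an n x n 2D DCT to each block of a sub-band."""
    h = band.shape[0] - band.shape[0] % n   # crop to a multiple of n
    w = band.shape[1] - band.shape[1] % n
    t = dct_matrix(n)
    out = band[:h, :w].copy()
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i+n, j:j+n] = t @ out[i:i+n, j:j+n] @ t.T
    return out

frame = np.random.randint(0, 256, (64, 64))   # stand-in for a video frame
ll, lh, hl, hh = haar_dwt2(frame)
lh_dct = blockwise_dct(lh)   # large-magnitude coefficients are the significant ones
```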
    We shall also investigate other techniques with properties similar to the DCT in terms of representing significant wavelet coefficients: the statistical properties of the high-frequency wavelet sub-bands provide one such approach, and we also propose phase sensing as an alternative, very efficient scheme. In our investigations, simultaneous compression and encryption aim at finding the best way of applying the two tasks in parallel, by selecting some wavelet sub-bands for encryption and compressing the others. Since most spatial/temporal correlation appears in the high-frequency wavelet sub-bands, while the LL sub-band of a wavelet-transformed image approximates the original image, we select the LL sub-band data for encryption and the non-LL high-frequency sub-band coefficients for compression. We also follow the common practice of using stream ciphers to meet the efficiency requirements of real-time transmission. For keystream generation we investigated a number of schemes; the ultimate choice depends on robustness to attacks. The still images (i.e. the reference frames, RFs) are compressed with a modified EZW wavelet scheme by applying the DCT to blocks of the wavelet sub-bands, selecting appropriate thresholds for determining the significance of coefficients, and encrypting only the EZW thresholds with a simple 10-bit LFSR cipher. This scheme is reasonably efficient in terms of processing time, compression ratio and image quality, as well as security robustness against statistical and frequency attacks. However, many improvements were identified as necessary to achieve the objectives of the thesis.

    Through a process of refinement we developed and tested three secure, efficient video compression schemes, each improving on the performance of its predecessor. Extensive experiments were conducted to test the performance of the scheme at each refinement stage in terms of efficiency, compression ratio, image quality, and security robustness. Depending on the aspect of compression that needed improvement at each step, we replaced the previous block coding scheme with a more appropriate one from among the three schemes mentioned above (i.e. DCT, edge sensing and phase sensing), for the reference frames or the non-reference ones. In successive refinement steps we encrypt a slightly expanded LL sub-band using increasingly secure stream ciphers with different approaches to keystream generation. In the first refinement step, encryption uses two LFSRs seeded with three secret keys to scramble the significant wavelet LL-coefficients multiple times. In the second, the encryption algorithm uses an LFSR to scramble the wavelet coefficients of the edges extracted from the low-frequency sub-band; these edges are mapped from the high-frequency sub-bands using different thresholds. Finally, we use a version of the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters of the LL sub-band. Our empirical results show that the refinement process achieves the ultimate objective of the thesis: an efficient, secure video compression scheme that is scalable in terms of frame size at about 100 fps, offering high compression, reasonable quality, and resistance to statistical, frequency and brute-force attacks at a low computational cost.
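    The abstract does not give the cipher parameters, so the sketch below only illustrates the flavour of lightweight keystream generation it describes: a 10-bit LFSR (the feedback taps, corresponding to x^10 + x^7 + 1, are an assumption) combined with a chaotic logistic map, with the keystream XORed against coefficient bytes so that applying it twice decrypts.

```python
import itertools

def lfsr_bits(seed, taps=(10, 7), nbits=10):
    """Fibonacci LFSR over GF(2); yields one keystream bit per step."""
    state = seed & ((1 << nbits) - 1)
    assert state, "LFSR seed must be non-zero"
    while True:
        yield state & 1
        fb = 0
        for t in taps:                      # XOR the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))

def logistic_bits(x0=0.611, r=3.99, burn=100):
    """Keystream bits from the chaotic logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn):                   # discard the transient
        x = r * x * (1 - x)
    while True:
        x = r * x * (1 - x)
        yield 1 if x > 0.5 else 0

def keystream(n):
    """n bytes of keystream: LFSR bits XORed with logistic-map bits."""
    bits = [a ^ b for a, b in itertools.islice(
        zip(lfsr_bits(0x2A5), logistic_bits()), 8 * n)]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, 8 * n, 8))

data = bytes(range(16))                     # stand-in for LL-sub-band parameters
ks = keystream(len(data))
cipher = bytes(d ^ k for d, k in zip(data, ks))
assert bytes(c ^ k for c, k in zip(cipher, ks)) == data   # XOR again decrypts
```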
    Since image quality fluctuates with video complexity, in the conclusion we recommend an adaptive implementation of our scheme. Although this thesis does not deal with transmission tasks, the efficiency achieved in video encryption and compression time, as well as in compression ratios, should be sufficient for real-time secure transmission of video using commercially available mobile computing devices.

    Wireless triple play system

    Dissertation for the degree of Master in Electrical and Computer Engineering. Triple play is a service that combines three types of services, voice, data and multimedia, over a single communication channel, for a price lower than the total price of the individual services. However, there is no standard for provisioning triple play services; rather, they are provisioned individually, since the requirements of each service are quite different. The digital revolution has helped to create and deliver high-quality media solutions. One of the most demanding services is Video on Demand (VoD), which implies a dedicated streaming channel for each user in order to provide the usual media player commands (such as pause and fast forward). The personalized products developed by most multimedia companies do not always fulfil users' needs and are far from cheap. The goal of this project was to create a reliable and scalable triple play solution that works over a Wireless Local Area Network (WLAN), fully capable of dealing with state-of-the-art multimedia technologies while resorting only to open-source tools. The project was designed as a transparent web environment, using only web technologies to maximize the potential of the services: HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and JavaScript were used to develop the applications. Both administration and user interfaces were developed, to manage all video content and to view it in a rich and appealing application, providing a proof of concept. The developed prototype was tested in a WLAN with up to four clients, and the Quality of Service (QoS) and Quality of Experience (QoE) were measured for several combinations of active services. In the end, the developed prototype proved capable of dealing with the problems of WLAN technologies and of successfully delivering all the proposed services with high QoE.
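    One common way to support per-user seeking with plain web technologies is HTTP range requests, which let an HTML5 video player pause and jump within a stream. The minimal Python server below is a hedged sketch of this mechanism, not the project's actual implementation; the file name and port are hypothetical.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

VIDEO = "movie.mp4"   # hypothetical media file served to each client

class RangeHandler(BaseHTTPRequestHandler):
    """Serve byte ranges so HTML5 <video> clients can pause and seek."""
    def do_GET(self):
        size = os.path.getsize(VIDEO)
        rng = self.headers.get("Range")          # e.g. "bytes=1000-"
        start, end = 0, size - 1
        if rng and rng.startswith("bytes="):
            s, _, e = rng[len("bytes="):].partition("-")
            start = int(s) if s else 0
            end = int(e) if e else size - 1
        length = end - start + 1
        self.send_response(206 if rng else 200)  # 206 = Partial Content
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Accept-Ranges", "bytes")
        self.send_header("Content-Length", str(length))
        if rng:
            self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        self.end_headers()
        with open(VIDEO, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(length))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RangeHandler).serve_forever()
```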

    Multimedia

    The ubiquitous and effortless digital data capture and processing capabilities offered by the majority of today's devices have led to an unprecedented penetration of multimedia content into our everyday lives. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentlessly changing requirements of both users and systems. Advances in Multimedia provides readers with an overview of this ever-growing field by bringing together research studies and surveys from different subfields that highlight these important aspects. Some of the main topics the book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics of multimedia, bridging the semantic gap for multimedia content, and novel multimedia applications.

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given a system's limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In such application-specific systems, where the content of interest in the multimedia data is known and well defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems, and I show that the key to designing efficient compression techniques in all three cases is to prioritize the content of interest in the data; its definition depends on the application. First, I show that for microscopy videos the content of interest is the set of spatial regions in the video frame whose pixels contain more than just noise. Keeping the data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission are possible by prioritizing the content of interest: I designed custom image compression techniques that preserve the edges in a binary image, or the foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving-object compression method that prioritizes the content of interest.
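    As a hedged sketch of the prioritization idea for microscopy video (not the dissertation's actual codec), the snippet below flags blocks whose variance exceeds an assumed noise floor as content of interest, keeps those blocks untouched, and coarsely quantizes everything else; the block size, noise threshold and quantization step are illustrative assumptions.

```python
import numpy as np

def prioritize_blocks(frame, block=16, noise_var=25.0, coarse_step=32):
    """Keep high-variance (content) blocks intact; quantize noise-only blocks."""
    h = frame.shape[0] - frame.shape[0] % block
    w = frame.shape[1] - frame.shape[1] % block
    out = frame.astype(float).copy()
    roi = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = out[i:i + block, j:j + block]
            if blk.var() > noise_var:        # likely real content: keep as-is
                roi[i // block, j // block] = True
            else:                            # noise-only: quantize coarsely
                out[i:i + block, j:j + block] = np.round(blk / coarse_step) * coarse_step
    return np.clip(out, 0, 255).astype(np.uint8), roi

frame = np.random.randint(0, 256, (128, 128)).astype(np.uint8)  # stand-in frame
compressed, roi_mask = prioritize_blocks(frame)   # roi_mask flags content blocks
```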

    Video enhancement: content classification and model selection

    The purpose of video enhancement is to improve subjective picture quality. The field includes a broad range of research topics, such as removing noise from video, highlighting specified features, and improving the appearance or visibility of video content. The common difficulty in this field is how to make images or videos subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and redesigns of the algorithm, which is very time-consuming. Researchers have attempted to design a video quality metric to replace subjective assessment, but so far this has not been successful. As a way to avoid heuristics in enhancement algorithm design, least-mean-square methods have received considerable attention: they optimize filter coefficients automatically by minimizing, through training, the difference between processed videos and desired versions. However, these methods are optimal only on average, not locally. To solve this problem, one can apply the least-mean-square optimization to individual categories classified by local image content. The most interesting example is Kondo's concept of local content adaptivity for image interpolation, which we found could be generalized into an ideal framework for content-adaptive video processing. We identify two parts in the concept: content classification and adaptive processing. By exploring new classifiers for the content classification and new models for the adaptive processing, we have generalized the framework to more enhancement applications.

    For the content classification part, new classifiers have been proposed for different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier has been proposed based on the combination of local structure and contrast, which does not require detection of the coding block grid. For focal blur, we have proposed a novel edge-based local blur estimation method, which does not require edge orientation detection and gives more robust blur estimates. With these classifiers, the proposed framework has been extended to coding-artifact-robust enhancement and blur-dependent enhancement. As content adaptivity extends to more image features, the number of content classes can increase significantly; we show that it is possible to reduce the number of classes without sacrificing much performance. For the model selection part, we have introduced several nonlinear filters into the proposed framework, and we have also proposed a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter with those of least-mean-square optimization. With these nonlinear filters, the proposed framework shows better performance than with linear filters. Furthermore, we have shown a proof of concept for a trained approach to contrast enhancement through supervised learning: transfer curves are optimized based on the classification of global or local image content. This showed that it is possible to obtain the desired effect by learning from other, computationally expensive enhancement algorithms or from expert-tuned examples. Looking back, the thesis presents a single versatile framework for video enhancement applications: it widens the application scope by including new content classifiers and new processing models, and offers scalability with solutions to reduce the number of classes, which can greatly accelerate algorithm design.
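    A minimal sketch of the content-adaptive least-mean-square idea discussed above, in the spirit of Kondo's approach but not the thesis's exact implementation: 3x3 patches are classified by 1-bit ADRC (thresholding each pixel against the patch mean), and a separate least-squares filter is fitted per class; the aperture size and classifier are assumptions.

```python
import numpy as np

PATCH = 3   # 3x3 aperture -> 2**9 = 512 structure classes

def adrc_class(patch):
    """1-bit ADRC: threshold each pixel against the patch mean."""
    bits = (patch.ravel() > patch.mean()).astype(int)
    return int("".join(map(str, bits)), 2)

def train_class_filters(degraded, target):
    """Fit one least-squares filter per content class."""
    r = PATCH // 2
    samples = {}                         # class -> ([patches], [target pixels])
    h, w = degraded.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            p = degraded[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            xs, ys = samples.setdefault(adrc_class(p), ([], []))
            xs.append(p.ravel())
            ys.append(float(target[i, j]))
    filters = {}
    for c, (xs, ys) in samples.items():
        if len(xs) >= PATCH * PATCH:     # enough samples to fit 9 coefficients
            coeffs, *_ = np.linalg.lstsq(np.array(xs), np.array(ys), rcond=None)
            filters[c] = coeffs          # applied at runtime per classified patch
    return filters

target = np.random.rand(64, 64) * 255
degraded = target + np.random.randn(64, 64) * 10   # simulated degradation
filters = train_class_filters(degraded, target)
```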

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark or message in a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered an appealing technology for complementing conventional cryptographic processes in the field of multimedia security, by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking emphasizes the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium carries a hidden message and, if possible, at recovering that hidden message. It can be used to measure the security of information hiding techniques, meaning that a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS) but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking this trade-off into account; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models, and to introduce a universal 3D steganalytic method under this framework.

    The thesis is organized as follows. Chapter 1 describes in detail the background to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals: Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information; motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model; the use of PCA makes the watermarking method robust against common 3D operations such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, it can be used as a benchmark for measuring the anti-steganalysis performance of existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes the thesis and suggests some potential directions for future work.
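    As a hedged illustration of the Chapter 6 idea of modifying a distance histogram (the centroid reference point, uniform bins and additive mean-shifting rule below are simplifying assumptions, not the thesis's exact construction), the sketch encodes one bit per bin by nudging the normalized mean of the bin's vertex distances above or below 0.5.

```python
import numpy as np

def embed_bits(vertices, bits, strength=0.05):
    """Encode one bit per distance-histogram bin by shifting the bin's mean."""
    center = vertices.mean(axis=0)               # assumed reference point
    d = np.linalg.norm(vertices - center, axis=1)
    edges = np.linspace(d.min(), d.max() + 1e-9, len(bits) + 1)
    d_new = d.copy()
    for k, bit in enumerate(bits):
        in_bin = (d >= edges[k]) & (d < edges[k + 1])
        if not in_bin.any():
            continue
        lo, hi = edges[k], edges[k + 1]
        u = (d[in_bin] - lo) / (hi - lo)          # normalize the bin to [0, 1]
        u = np.clip(u + (strength if bit else -strength), 0.0, 1.0)
        d_new[in_bin] = lo + u * (hi - lo)        # mean pushed above/below 0.5
    dirs = (vertices - center) / np.maximum(d, 1e-12)[:, None]
    return center + dirs * d_new[:, None]         # move vertices radially

verts = np.random.rand(500, 3)                    # stand-in vertex cloud
marked = embed_bits(verts, [1, 0, 1, 1, 0])
# detection sketch: recompute the bins and read each bit as mean(u) > 0.5
```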

    Smart Sensor Technologies for IoT

    Recent developments in wireless networks and devices have led to novel services that will utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent a new trend in mobile services: a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using the Global Navigation Satellite System (GNSS) chips integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates, due to multipath propagation and signal blockage, and integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions for position estimation should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in the issue cover various aspects of smart sensor technologies for IoT.