
    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originated from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

    Improving the Rate-Distortion Performance in Distributed Video Coding

    Distributed video coding is a coding paradigm that allows encoding of video frames at a complexity substantially lower than that of conventional video coding schemes. This feature makes it suitable for emerging applications such as wireless surveillance video and mobile camera phones. In distributed video coding, a subset of frames in the video sequence, known as the key frames, is encoded using a conventional intra-frame encoder, such as H.264/AVC in intra mode, and then transmitted to the decoder. The remaining frames, known as the Wyner-Ziv frames, are encoded based on the Wyner-Ziv principle using channel codes, such as LDPC codes. In transform-domain distributed video coding, each Wyner-Ziv frame undergoes a 4x4 block DCT and the resulting DCT coefficients are grouped into DCT bands. The bitplanes corresponding to each DCT band are encoded by a channel encoder, for example an LDPCA encoder, one after another. The resulting error-correcting bits are retained in a buffer at the encoder and transmitted incrementally as requested by the decoder. At the decoder, the key frames are first decoded. The decoded key frames are then used to generate a side information frame as an initial estimate of the corresponding Wyner-Ziv frame, usually by employing an interpolation method. The difference between a DCT band in the side information frame and the corresponding band in the Wyner-Ziv frame, referred to as the correlation noise, is often modeled by a Laplacian distribution. Soft-input information for each bit in a bitplane is obtained using this correlation noise model and the corresponding DCT band of the side information frame. The channel decoder then uses this soft-input information, along with error-correcting bits sent by the encoder, to decode the bitplanes of each DCT band in each of the Wyner-Ziv frames.
Hence, accurate estimation of the correlation noise model parameter(s) and generation of high-quality side information are required for reliable soft-input information at the decoder, which in turn leads to more efficient decoding. Consequently, fewer error-correcting bits need to be transmitted from the encoder to the decoder, leading to better compression efficiency and rate-distortion performance. The correlation noise is not stationary, and its statistics vary within each Wyner-Ziv frame and within its corresponding DCT bands. Hence, it is difficult to find an accurate model for the correlation noise and to estimate its parameters precisely at the decoder. Moreover, in existing schemes the parameters of the correlation noise for each DCT band are estimated before the decoder starts to decode the bitplanes of that band and are kept unchanged during the decoding process. Another problem of concern is that, since the side information frame is generated at the decoder by temporal interpolation between previously decoded frames, its quality is generally poor when the motion between frames is non-linear. Hence, generating high-quality side information is a challenging problem. This thesis is concerned with accurate estimation of the correlation noise model parameters and with improving the quality of the side information, from the standpoint of improving the rate-distortion performance of distributed video coding. A new scheme is proposed for the estimation of the correlation noise parameters wherein the decoder decodes all the bitplanes of a DCT band in a Wyner-Ziv frame simultaneously and then refines the parameters of the band's correlation noise model in an iterative manner.
This process is carried out on an augmented factor graph using a new recursive message-passing algorithm, with the side information generated once and kept unchanged during the decoding of the Wyner-Ziv frame. Extensive simulations show that the proposed decoder leads to improved rate-distortion performance in comparison to the original DISCOVER codec and to another DVC codec employing side-information frame refinement, particularly for video sequences with high motion content. In the second part of this work, a new algorithm for the generation of the side information is proposed, which refines the initial side information frame using the additional information obtained after decoding the previous DCT bands of a Wyner-Ziv frame. Simulations demonstrate that the proposed algorithm provides performance superior to that of schemes employing other side-information refinement mechanisms. Finally, it is shown that incorporating the proposed side-information refinement algorithm into the decoder proposed in the first part of the thesis leads to a further improvement in the rate-distortion performance of the DVC codec.
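The soft-input computation described above can be sketched concretely: given a side-information DCT coefficient and a Laplacian correlation-noise parameter, the decoder forms a log-likelihood ratio (LLR) for each bitplane bit by integrating the Laplacian over the quantization bins whose indices carry a 0 or a 1 in that bitplane. A minimal illustration, where the bin layout, bit ordering and parameter values are assumptions rather than the thesis's exact configuration:

```python
import numpy as np

def laplace_cdf(x, alpha):
    # CDF of a zero-mean Laplacian with parameter alpha (scale b = 1/alpha)
    return np.where(x < 0, 0.5 * np.exp(alpha * x), 1.0 - 0.5 * np.exp(-alpha * x))

def bitplane_llrs(y, bit_index, n_bits=4, step=16.0, alpha=0.1):
    """Soft input (LLR) for one bitplane of a uniformly quantized coefficient.

    y: side-information coefficient value
    bit_index: 0 = most significant bit of the quantization index
    Returns log(P(bit=0)/P(bit=1)) under x = y + n, n ~ Laplace(0, 1/alpha).
    """
    p0 = p1 = 0.0
    for idx in range(2 ** n_bits):
        lo, hi = idx * step, (idx + 1) * step
        # probability mass that the true coefficient x falls into this bin
        p_bin = laplace_cdf(hi - y, alpha) - laplace_cdf(lo - y, alpha)
        if (idx >> (n_bits - 1 - bit_index)) & 1 == 0:
            p0 += p_bin
        else:
            p1 += p_bin
    return np.log(max(p0, 1e-12) / max(p1, 1e-12))
```

A channel decoder such as an LDPCA decoder would consume these LLRs together with the incrementally transmitted error-correcting bits; a more accurate `alpha` yields more confident LLRs and thus fewer requested bits.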

    On robust and secure wireless communication system design using software-defined radios

    This dissertation is composed of three parts: airborne multiple-input multiple-output (MIMO) communications, physical-layer authentication, and software radio design for the DARPA Spectrum Challenge. A common theme across the three distinct problems is the system perspective adopted throughout this dissertation. Instead of considering isolated issues within these problems, we provide a holistic design approach and have implemented all three systems on the GNU Radio/USRP (Universal Software Radio Peripheral) platform. In the first part, we develop a MIMO communication system for airborne platforms. MIMO communication has long been considered suitable only for environments rich in scatterers; this, unfortunately, is not the case for airborne platforms. However, the lack of scattering can be compensated by the large aperture of the airborne MIMO platform, as corroborated by our careful analysis using real measurement data. Our analysis of airborne MIMO channels leads to the development of a variable-rate MIMO transceiver architecture. This architecture is numerically shown to improve the bit error rate (BER) over conventional transceiver architectures developed for rich-scattering environments. A software-radio-based MIMO system is then implemented to demonstrate experimentally the efficacy of the developed architecture. In the second part, we develop a physical-layer authentication scheme as a countermeasure to the primary user emulation attack (PUEA) in cognitive radio (CR) networks. In this attack, a malicious user emulates the signal characteristics of the primary user (PU) when it is silent, which prevents unsuspecting secondary users (SUs) from utilizing the network. The developed physical-layer authentication is based on embedding cryptographic hash signatures, referred to as authentication tags, within the PU's signal constellations.
The embedding is performed such that legacy receivers are not affected. We analyze the scheme using the fast-fading Rayleigh channel model and present an optimal scheme for embedding signals in the PU's constellations that minimizes the tag BER. Experimental results corroborate our theoretical claims, thereby establishing that reliable authentication can be achieved without sacrificing signal quality at the primary receivers. In the final part, we describe in detail our design of software radios developed as part of the DARPA Spectrum Challenge (DSC), a year-long competition that started in January 2013 and concluded in March 2014 with the final tournament held at the DARPA headquarters in Arlington, VA. The DSC comprised two tournaments, competitive and cooperative. In the competitive mode, two radio pairs, each composed of a transmitter and a receiver, are pitted against each other to transmit the largest amount of data error-free while operating concurrently in the same frequency band. In the cooperative mode, three radio pairs have to share a frequency band cooperatively, the goal being to maximize the throughput of all three pairs. We describe the design of our software radio system, which integrates key technologies crucial for operating in an environment that allows no user coordination or spectrum pre-planning: spectrum sensing, adaptive transmission in both spectrum utilization and transmission rate, opportunistic jamming, and sliding-window feedback. The developed radio is robust in the presence of unknown interference and achieves the desired balance between throughput and reliability in an uncoordinated transmission environment.
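The tag-embedding idea can be illustrated with a small sketch: a hash-derived authentication tag is superimposed on the data constellation at low power, so legacy receivers decode the data unchanged while a key-holding receiver can verify the tag. Function names, the power-split parameter `rho`, QPSK data symbols and the use of SHA-256 are illustrative assumptions, not the dissertation's exact scheme:

```python
import hashlib
import numpy as np

def embed_tag(bits, key, rho=0.985):
    """Superimpose a cryptographic authentication tag on QPSK data symbols.

    rho controls the power split: the data component keeps rho^2 of the
    power and the tag gets 1 - rho^2, so a legacy receiver still makes the
    right hard decisions on the data with only a slight SNR loss.
    """
    bits = np.asarray(bits).reshape(-1, 2)  # two bits per QPSK symbol
    data_syms = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

    # tag = hash of (message bits, shared secret key), mapped to BPSK
    digest = hashlib.sha256(bits.tobytes() + key).digest()
    tag_bits = np.unpackbits(np.frombuffer(digest, np.uint8))[: len(data_syms)]
    tag_syms = 1.0 - 2.0 * tag_bits

    return rho * data_syms + np.sqrt(1 - rho ** 2) * tag_syms
```

With `rho` close to 1 the tag component is far below the data component, so the hard-decision regions of the legacy receiver are unaffected, while a receiver that knows `key` can regenerate the tag and correlate it against the residual.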

    Research and developments of distributed video coding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realization of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, even though the decoder's complexity has a direct influence on the system's output. The first period of this research targets optimising the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: optimising the input block size, the side information generation, the side information refinement process and the feedback channel. Transform-domain WZ video coding (TDWZ) has distinctly superior performance to PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle period of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions and thus provide even higher coding performance. In the next step of this research, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is also investigated.
In particular, three types of transform-domain DMVC frameworks are investigated: transform-domain DMVC using TDWZ based on 2D DCT, transform-domain DMVC using TDWZ based on 3D DCT, and transform-domain residual DMVC using TDWZ based on 3D DCT. One important application of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises the protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved by the mutual efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ with an optimised decoder. Secondly, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
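The 3D DCT step described above can be sketched as a separable transform applied along the temporal axis as well as the two spatial axes: temporal redundancy is compacted into low-frequency coefficients without any motion estimation at the encoder. A minimal illustration, where the block size and the orthonormal DCT-II normalization are assumptions:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II transform matrix of size n x n
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def dct3(cube):
    """Separable 3D DCT of a (frames, rows, cols) block.

    Applying the 1D DCT along the frame axis too means a group of similar
    frames collapses into one strong temporal-DC plane plus near-zero
    higher planes -- the temporal redundancy removal the thesis exploits.
    """
    t, r, c = (dct_matrix(n) for n in cube.shape)
    return np.einsum('ai,bj,ck,ijk->abc', t, r, c, cube)
```

Because each 1D transform is orthonormal, the 3D transform preserves energy, and a static scene concentrates all of it in the first temporal plane.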

    Integrating spinal codes into wireless systems

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 85-88). Rateless spinal codes [47] promise performance gains for future wireless systems. These gains can be realized in the form of higher data rates, longer operational ranges, reduced power consumption, and greater reliability. This is due in part to the manner in which rateless codes exploit the instantaneous characteristics of the wireless medium, including unpredictable fluctuations. By contrast, traditional rated codes can accommodate variability only by making overly conservative assumptions. Before spinal codes reach practical deployment, they must be integrated into the networking stacks of real devices, and they must be instantiated in compact, efficient silicon. This thesis addresses fundamental challenges in each of these two areas, covering a body of work reported in previous publications by this author and others [27, 26]. On the networking side, this thesis explores a rateless analogue of link-layer retransmission schemes, capturing the idea of rate adaptation and generalizing the approach of hybrid ARQ/incremental redundancy systems such as LTE [29]. On the silicon side, this thesis presents the development of a VLSI architecture that exploits the inherent parallelism of the spinal decoder. By Peter Anthony Iannucci, S.M.
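The rateless link-layer loop the thesis explores can be sketched as follows: the transmitter keeps emitting coded symbols in chunks, the receiver attempts to decode after every chunk and acknowledges on success, and the realized rate thus adapts to the channel automatically, generalizing hybrid ARQ/incremental redundancy. The sketch below uses a random linear fountain code over GF(2) as a stand-in for the spinal code's hash-chain encoder; the loop structure, not the particular code, is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_symbol(msg_bits):
    """One rateless coded symbol: a random GF(2) linear combination of the
    message bits (random fountain code, standing in for a spinal encoder)."""
    mask = rng.integers(0, 2, size=len(msg_bits))
    return mask, int((mask @ msg_bits) % 2)

def gf2_solve(A, b):
    # Gaussian elimination over GF(2); returns a solution or None if rank-deficient
    A, b = A.copy() % 2, b.copy() % 2
    n, row, piv = A.shape[1], 0, []
    for col in range(n):
        r = next((i for i in range(row, len(A)) if A[i, col]), None)
        if r is None:
            return None  # not enough information yet
        A[[row, r]] = A[[r, row]]; b[[row, r]] = b[[r, row]]
        for i in range(len(A)):
            if i != row and A[i, col]:
                A[i] ^= A[row]; b[i] ^= b[row]
        piv.append(col); row += 1
    x = np.zeros(n, dtype=int)
    x[piv] = b[:len(piv)]
    return x

def rateless_link(msg_bits, chunk=8, max_chunks=32):
    """Send coded symbols chunk by chunk; the receiver tries to decode after
    each chunk and ACKs on success, so the rate adapts automatically."""
    rows, vals = [], []
    for _ in range(max_chunks):
        for _ in range(chunk):
            m, v = encode_symbol(msg_bits)
            rows.append(m); vals.append(v)
        decoded = gf2_solve(np.array(rows), np.array(vals))
        if decoded is not None:
            return decoded, len(vals)  # ACK: symbols used = realized rate
    return None, len(vals)
```

On a good channel the first chunk usually suffices; on a bad one the loop simply keeps accumulating redundancy, which is exactly the behaviour rated codes can only approximate with conservative margins.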

    Hand-based multimodal identification system with secure biometric template storage

    This study proposes a biometric system for personal identification based on three biometric characteristics from the hand, namely the palmprint, finger surfaces and hand geometry. A protection scheme is applied to the biometric template data to guarantee its revocability, security and diversity among different biometric systems. An error-correcting code (ECC), a cryptographic hash function (CHF) and a binarisation module form the core of the template protection scheme. Since the ECC and CHF operate on binary data, an additional feature binarisation step is required. This study proposes: (i) a novel identification architecture that uses hand geometry as a soft biometric to accelerate the identification process and ensure the system's scalability; and (ii) a new feature binarisation technique that guarantees that the Hamming distance between transformed binary features is proportional to the difference between their real values. The proposed system achieves promising recognition and speed performance on two publicly available hand image databases.
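The binarisation property required above, Hamming distance proportional to the difference between the real feature values, is satisfied for example by a unary (thermometer) code: quantize the feature to one of `n_bits` levels and set that many low bits. A minimal sketch; the paper's actual mapping and parameters may differ:

```python
import numpy as np

def thermometer_binarize(value, vmin=0.0, vmax=1.0, n_bits=16):
    """Unary (thermometer) code for a real-valued feature.

    The Hamming distance between two codes equals the difference of their
    quantized levels, so it is proportional to the difference of the
    underlying real values -- the property the ECC/CHF pipeline needs.
    """
    q = int(round((value - vmin) / (vmax - vmin) * n_bits))
    q = max(0, min(n_bits, q))  # clamp to the representable range
    return np.array([1] * q + [0] * (n_bits - q), dtype=np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))
```

A plain binary (radix-2) encoding would not work here: adjacent levels such as 7 (`0111`) and 8 (`1000`) differ in every bit, breaking the proportionality that the error-correcting code relies on.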

    Multimedia

    The now-ubiquitous and effortless digital data capture and processing capabilities offered by the majority of devices have led to an unprecedented penetration of multimedia content into our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentless change of requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together various research studies and surveys from different subfields that point out such important aspects. Some of the main topics that this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, semantic-gap bridging for multimedia content, and novel multimedia applications.

    Secure and efficient storage of multimedia content in public cloud environments using joint compression and encryption

    Cloud Computing is a paradigm with many still-unexplored areas, ranging from the technological component to the definition of new business models, but it is revolutionizing the way we design, implement and manage the entire information technology infrastructure. Infrastructure as a Service is the delivery of computing infrastructure, typically a virtual data center, along with a set of APIs that allow applications to control, automatically, the resources they wish to use. The choice of service provider, and how that provider applies its business model, may lead to higher or lower costs in the operation and maintenance of applications. In this sense, this work carries out a literature review on Cloud Computing and on the secure storage and transmission of multimedia content in public cloud environments using lossless compression, and implements such a system by building an application that manages data in public cloud environments (Dropbox and MEO Cloud). The application built during this dissertation meets the objectives set. It provides the user with a wide range of data-management functions in public cloud environments; the user only has to log in to the system with his/her credentials, after which an access token is generated through the OAuth 1.0 authorization protocol. This token is generated only with the consent of the user and allows the application to access the user's data/files without having to use the credentials. With this token the framework can operate and unlock the full potential of its functions. The application also offers the user compression and encryption functions so that his/her cloud storage system can be used securely. The compression function uses the LZMA compression algorithm; the user only needs to choose the files to be compressed.
For encryption, the AES (Advanced Encryption Standard) algorithm is used with a 128-bit symmetric key defined by the user. The research is organized into two distinct and complementary parts: the first part consists of the theoretical foundation, and the second part is the development of the computer application in which the data are managed, compressed, stored and transmitted in various cloud computing environments. The theoretical framework is organized into two chapters: chapter 2, Background on Cloud Storage, and chapter 3, Data Compression. Through the theoretical foundation we sought to demonstrate the relevance of the research, convey some of the pertinent theories and introduce, whenever possible, existing research in the area. The second part of the work was devoted to the development of the application in a cloud environment. We show how the application was built and present its features, advantages and data-safety standards. Finally, we reflect on the results in light of the theoretical framework of the first part and the development of the platform. We believe that the resulting work is positive and fits the goals we set out to achieve. This research has some limitations: the time for completion was scarce, and the platform could benefit from the implementation of other features. In future research it would be appropriate to continue the project, expanding the capabilities of the application, testing its operation with other users and carrying out comparative tests.
Fundação para a Ciência e a Tecnologia (FCT)
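The compress-then-encrypt pipeline described in the abstract above can be sketched as follows. Note one deliberate stand-in: a SHA-256 counter-mode keystream replaces AES-128-CTR so the sketch stays dependency-free; in practice one would use a real AES implementation, as the dissertation does. The ordering is the point: ciphertext is essentially incompressible, so compression must precede encryption.

```python
import hashlib
import lzma

def keystream(key, nonce, length):
    """SHA-256 in counter mode -- a stdlib stand-in for AES-128-CTR.
    (Assumption for illustration only; swap in a real AES library.)"""
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return bytes(out[:length])

def protect(data, key, nonce):
    """Compress first, then encrypt: encrypting first would leave LZMA
    nothing to compress, since good ciphertext looks random."""
    packed = lzma.compress(data)
    ks = keystream(key, nonce, len(packed))
    return bytes(a ^ b for a, b in zip(packed, ks))

def recover(blob, key, nonce):
    # XOR with the same keystream undoes the encryption, then decompress
    ks = keystream(key, nonce, len(blob))
    return lzma.decompress(bytes(a ^ b for a, b in zip(blob, ks)))
```

As with any stream cipher, the (key, nonce) pair must never be reused across files; here the nonce plays the role of AES-CTR's initial counter block.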