
    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and poses challenges in a wide variety of applications across science and engineering. Classical signal processing techniques have largely relied on mathematical models that are linear, local, stationary, and Gaussian, and have favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Critical Data Compression

    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than a JPEG2000 code of equal length. Computer-generated images typically compress better using this method than through direct lossy coding, as do many black and white photographs and most color photographs at sufficiently high quality levels. Examples applying the method to audio and video coding are also demonstrated. Since two-part codes are efficient for both periodic and chaotic data, concatenations of roughly similar objects may be encoded efficiently, which leads to improved inference. Applications to artificial intelligence are demonstrated, showing that signals using an economical lossless code have a critical level of redundancy, which leads to better description-based inference than signals which encode either insufficient data or too much detail. Comment: 99 pages, 31 figures.
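    The core split between exactly coded significant bits and statistically modelled residual bits can be illustrated with a small sketch. The bit-plane split and uniform residual model below are assumptions chosen for illustration; the paper's actual two-part codec pairs JPEG2000 with PAQ8l rather than a raw bit split.

```python
import numpy as np

def split_significant_bits(samples: np.ndarray, keep_bits: int = 4):
    """Split 8-bit samples into significant bits (to be coded losslessly)
    and residual low-order bits (to be modelled as noise or lossy-coded)."""
    shift = 8 - keep_bits
    return samples >> shift, samples & ((1 << shift) - 1)

def recombine(significant: np.ndarray, residual: np.ndarray, keep_bits: int = 4):
    """Reconstruct samples from exact significant bits plus a decoded or sampled residual."""
    shift = 8 - keep_bits
    return (significant << shift) | residual

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint16)
sig, res = split_significant_bits(img)
noise = rng.integers(0, 1 << 4, size=img.shape, dtype=np.uint16)  # stand-in noise model
approx = recombine(sig, noise)  # matches img in the high-order bits, noise-like below
```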

    Learning to compress and search visual data in large-scale systems

    The problem of high-dimensional and large-scale representation of visual data is addressed from an unsupervised learning perspective. The emphasis is put on discrete representations, where the description length can be measured in bits and hence the model capacity can be controlled. The algorithmic infrastructure is developed based on the synthesis and analysis prior models, whose rate-distortion properties, as well as capacity vs. sample complexity trade-offs, are carefully optimized. These models are then extended to multiple layers, namely the RRQ and the ML-STC frameworks, the latter of which is further evolved into a powerful deep neural network architecture with fast and sample-efficient training and discrete representations. Three important applications of the developed algorithms are then presented. First, the problem of large-scale similarity search in retrieval systems is addressed, where a double-stage solution is proposed, leading to faster query times and more compact database storage. Second, the problem of learned image compression is targeted, where the proposed models can capture more redundancies from the training images than conventional compression codecs. Finally, the proposed algorithms are used to solve ill-posed inverse problems. In particular, the problems of image denoising and compressive sensing are addressed with promising results. Comment: PhD thesis dissertation.
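    A minimal numpy sketch of the residual-quantization idea underlying multi-layer models such as RRQ is given below; the random codebooks and greedy nearest-codeword search are placeholders for illustration, not the trained models or the ML-STC training procedure from the thesis.

```python
import numpy as np

def residual_quantize(x: np.ndarray, codebooks):
    """Greedy multi-layer residual quantization: each layer encodes the residual
    left by the previous layers, so the code grows by one index (a few bits) per layer."""
    codes, reconstruction = [], np.zeros_like(x)
    for C in codebooks:                                   # C has shape (K, dim)
        residual = x - reconstruction
        idx = int(np.argmin(((residual - C) ** 2).sum(axis=1)))
        codes.append(idx)
        reconstruction = reconstruction + C[idx]
    return codes, reconstruction

rng = np.random.default_rng(1)
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]  # 3 layers of 16 codewords each
x = rng.normal(size=8)
codes, x_hat = residual_quantize(x, codebooks)            # 3 indices describe x approximately
```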

    A method for protecting and controlling access to JPEG2000 images

    The image compression standard JPEG2000 brings not only powerful compression performance but also new functionality unavailable in previous standards, such as region of interest, scalability, and random access to image data, through a flexible code-stream description of the image. ISO/IEC JTC1/SC29/WG1, the ISO Committee working group for JPEG2000 standardization, is currently defining additional parts to the standard that will allow extended functionalities. One of these extensions is Part 8, JPSEC - JPEG2000 security, which deals with the protection and access control of the JPEG2000 code-stream. This paper reports on the JPSEC activities, detailing the three core experiments in progress to supply the JPEG2000 ISO Committee with the appropriate protection technology. These core experiments focus on the protection of the code-stream itself and on the overall security infrastructure that is needed to manage the access rights of users and applications to that protected code-stream. The encryption/scrambling process operates on the JPEG2000 code-stream in such a way that only the packets which contain image data are encrypted; all other code-stream data remain in the clear. This paper also gives details of one of the JPSEC proposed solutions for the security infrastructure - OpenSDRM (Open and Secure Digital Rights Management) [16], which provides security and rights management from the content provider to the final content user. A use case where this security infrastructure was successfully used is also provided.
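    The selective-protection idea (encrypting only the packets that carry image data while leaving markers and headers readable) can be sketched as below. The packet dictionaries and the is_image_data flag are assumptions standing in for a real JPEG2000 code-stream parser, and AES-GCM is used here only as an example cipher, not necessarily the one adopted by JPSEC.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_codestream(packets, key):
    """Encrypt only the segments flagged as image-data packets; header and marker
    segments are passed through in the clear so the code-stream stays parsable."""
    aes = AESGCM(key)
    protected = []
    for pkt in packets:
        if pkt["is_image_data"]:
            nonce = os.urandom(12)
            pkt = {**pkt, "payload": nonce + aes.encrypt(nonce, pkt["payload"], None)}
        protected.append(pkt)
    return protected

key = AESGCM.generate_key(bit_length=128)
packets = [{"is_image_data": False, "payload": b"header marker segment"},
           {"is_image_data": True,  "payload": b"packet body bytes"}]
secure = protect_codestream(packets, key)
```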

    Literature Study On Cloud Based Healthcare File Protection Algorithms

    With the rapid development of computing and cloud technology, the trend in recent years is to outsource information storage to cloud-based services, which provide large storage space. Cloud-based service providers such as Dropbox and Google Drive offer users vast, low-cost storage. In this project we present a protection method that encrypts and decrypts files to provide an enhanced level of protection. To encrypt a file uploaded to the cloud, we use a double encryption technique: the file is encrypted twice, one algorithm after the other. The file is first encrypted using the AES algorithm, and this encrypted file is then encrypted again using the RSA algorithm; the corresponding keys are generated during the execution of the algorithms. This is done to increase the security level. The parameters considered are security level, speed, data confidentiality, data integrity, and ciphertext size. Our approach is more efficient because it satisfies all of these parameters, whereas conventional methods fail to do so. The cloud used is Dropbox, which stores the content of the file in its encrypted form produced by the AES and RSA algorithms; the corresponding generated key can be used to decrypt the file. The double encryption technique is applied while the file is being uploaded.
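    A hedged sketch of a two-layer encryption pipeline is shown below, using the Python cryptography package. Note one deliberate simplification: instead of RSA-encrypting the whole AES ciphertext (impractical for large files, since RSA can only encrypt a few hundred bytes at a time), the RSA layer here wraps the AES key, the standard hybrid variant; the function and variable names are illustrative only.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def double_encrypt(plaintext: bytes, rsa_public_key):
    """Layer 1: AES-GCM encrypts the file. Layer 2: RSA-OAEP encrypts the AES key."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = rsa_public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext   # these three items go to the cloud store

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, nonce, ct = double_encrypt(b"healthcare record contents", private_key.public_key())
```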

    A Study on the Usage of Cross-Layer Power Control and Forward Error Correction for Embedded Video Transmission over Wireless Links

    Cross-layering is a design paradigm for overcoming the limitations deriving from the ISO/OSI layering principle, thus improving the performance of communications in specific scenarios, such as wireless multimedia communications. However, most available solutions are based on empirical considerations and do not provide a theoretical background supporting such approaches. This paper provides an analytical framework for the study of single-hop video delivery over a wireless link, enabling cross-layer interactions for performance optimization through power control and forward error correction (FEC), and offering a useful tool to determine the potential gain deriving from the employment of such a design paradigm. The analysis is performed using rate-distortion information of an embedded video bitstream jointly with a Lagrangian power minimization approach. Simulation results underline that cross-layering can provide significant improvements in specific environments and that the proposed approach is able to capitalize on the advantages deriving from its deployment.
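    The Lagrangian power minimization can be illustrated with a toy allocation loop: for each video layer, pick the transmit power that minimizes expected distortion plus lambda times power. The exponential loss-versus-power curve and the per-layer distortion gains below are placeholder models, not the rate-distortion data or channel model used in the paper.

```python
import numpy as np

def lagrangian_power_allocation(distortion_gains, loss_prob, powers, lam):
    """For each layer choose the power minimizing J = D_loss * P_loss(power) + lam * power."""
    allocation = []
    for gain in distortion_gains:                     # distortion incurred if this layer is lost
        costs = [gain * loss_prob(p) + lam * p for p in powers]
        allocation.append(float(powers[int(np.argmin(costs))]))
    return allocation

powers = np.linspace(0.1, 1.0, 10)                    # candidate transmit powers (arbitrary units)
loss = lambda p: np.exp(-4.0 * p)                     # assumed packet-loss vs. power curve
best = lagrangian_power_allocation([40.0, 15.0, 5.0], loss, powers, lam=20.0)
# more important layers (larger distortion gain) are assigned higher transmit power
```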

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.
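    As a flavor of technique (4), a simplified copy-move check is sketched below: it flags exactly repeated pixel blocks by grouping block contents. This is an illustrative toy under strong assumptions; the detector described in the paper matches statistical block features so that duplicates survive recompression and small edits.

```python
import numpy as np

def find_duplicated_blocks(gray: np.ndarray, block: int = 16):
    """Return pairs of block positions whose pixel contents are exactly identical."""
    seen, duplicates = {}, []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = gray[y:y + block, x:x + block].tobytes()
            if key in seen:
                duplicates.append((seen[key], (y, x)))   # earlier position, repeated position
            else:
                seen[key] = (y, x)
    return duplicates

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[32:48, 16:32] = img[0:16, 0:16]                      # plant a duplicated region
print(find_duplicated_blocks(img))                       # reports the planted pair
```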

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given a system's limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data; the definition of the content of interest depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions of the video frame whose pixels contain more than just noise. Keeping the data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission is possible by prioritizing content of interest. I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest. Doctor of Philosophy dissertation.
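    For the microscopy case, a minimal sketch of separating noise-only regions from content of interest is given below: block variance is compared against an estimated noise floor, and only blocks above it would be kept at high quality. The blockwise variance test and the median-based noise estimate are illustrative assumptions, not the dissertation's actual detector.

```python
import numpy as np

def content_mask(frame: np.ndarray, block: int = 8, k: float = 2.0):
    """Mark blocks whose variance exceeds k times the estimated noise floor;
    True entries are 'content of interest' blocks to preserve at high quality."""
    h, w = frame.shape
    cropped = frame[:h - h % block, :w - w % block].astype(float)
    blocks = cropped.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    variances = blocks.var(axis=(2, 3))
    noise_floor = np.median(variances)        # assumes most blocks contain only noise
    return variances > k * noise_floor

rng = np.random.default_rng(3)
frame = rng.normal(0.0, 1.0, size=(64, 64))              # background noise
frame[8:24, 8:24] += 10.0 * rng.normal(size=(16, 16))    # a high-variance "cell" region
mask = content_mask(frame)                                # True roughly over the cell region
```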

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency information and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended to be a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
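    As a concrete taste of the lifting construction mentioned above, a single level of the (unnormalized) Haar DWT is sketched below with a predict step followed by an update step; this is a generic textbook example, not code from any particular chapter of the book.

```python
import numpy as np

def haar_lifting_forward(signal: np.ndarray):
    """One level of the unnormalized Haar DWT via lifting:
    predict the odd samples from the even ones (detail), then update the evens (approximation)."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even              # predict step: high-pass coefficients
    approx = even + detail / 2       # update step: low-pass coefficients (pairwise means)
    return approx, detail

def haar_lifting_inverse(approx: np.ndarray, detail: np.ndarray):
    """Undo the lifting steps in reverse order and re-interleave the samples."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
a, d = haar_lifting_forward(x)
assert np.allclose(haar_lifting_inverse(a, d), x)   # perfect reconstruction
```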