    Reconfigurable rateless codes

    We propose novel reconfigurable rateless codes that are capable not only of varying the block length but also of adaptively modifying their encoding strategy, incrementally adjusting their degree distribution according to the prevailing channel conditions, without channel state information being available at the transmitter. In particular, we characterize a reconfigurable rateless code designed for the transmission of 9,500 information bits that performs approximately 1 dB away from the capacity of the discrete-input continuous-output memoryless channel (DCMC) over a diverse range of channel signal-to-noise ratios (SNRs).
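    As a concrete illustration of the encoding mechanics, the following minimal Python sketch shows an LT-style rateless encoder whose degree distribution can be swapped at run time. The two distributions and all parameters are hypothetical, chosen for illustration only; the paper's actual distributions, and the way the transmitter adapts them without channel state information, are not reproduced here.

        import random

        def lt_encode_symbol(source_blocks, degree_dist):
            """Sample a degree, pick that many source blocks, and XOR them."""
            degrees, probs = zip(*degree_dist)
            d = random.choices(degrees, weights=probs, k=1)[0]
            chosen = random.sample(range(len(source_blocks)), d)
            symbol = 0
            for i in chosen:
                symbol ^= source_blocks[i]
            return chosen, symbol

        # Hypothetical distributions: denser combinations when the channel is
        # believed poor, sparser ones when it is believed good. Reconfiguration
        # amounts to switching (or incrementally reshaping) this table.
        DIST_POOR_CHANNEL = [(1, 0.05), (2, 0.45), (3, 0.30), (8, 0.20)]
        DIST_GOOD_CHANNEL = [(1, 0.10), (2, 0.60), (4, 0.30)]

        blocks = [random.getrandbits(8) for _ in range(100)]
        idx, enc = lt_encode_symbol(blocks, DIST_POOR_CHANNEL)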

    Combining Fractal Coding and Orthogonal Linear Transforms


    Compressive sensing based imaging via belief propagation

    Multiple description coding (MDC) using Compressive Sensing (CS) mainly aims at restoring an image from a small subset of samples with reasonable accuracy, using an iterative message-passing decoding algorithm commonly known as Belief Propagation (BP). The CS technique can accurately recover any compressible or sparse signal from far fewer non-adaptive, randomized linear projection samples than the Nyquist rate specifies. In this work, we demonstrate how CS-based encoding generates measurements from the sparse image signal and the measurement matrix, and then how a BP decoding algorithm reconstructs the image from the generated measurements. Our CS-BP algorithm assumes that all the unknown variables have the same prior distribution, since no side information is available when the decoding process is initiated. Thus, we show that this algorithm is effective even in the absence of side information.
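    A minimal sketch of the encoding step described above, assuming a Gaussian random measurement matrix and an exactly sparse signal (both illustrative choices, not the paper's exact setup):

        import numpy as np

        n, m, k = 1024, 256, 30       # signal length, measurements (m << n), sparsity
        rng = np.random.default_rng(0)

        x = np.zeros(n)               # sparse image signal (e.g. transform coefficients)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        y = Phi @ x                   # measurements handed to the BP decoder

    A CS-BP decoder would then run message passing on the factor graph induced by Phi, initialising every unknown with the same prior, precisely because no side information is available when decoding starts.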

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, easy information acquisition, and growing data rates, efficient data handling becomes a critical challenge. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to use these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with ever-growing requirements for massive storage capacity. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnosis accuracy, exact reconstruction with no loss in quality must be guaranteed, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks with state-of-the-art results, including data compression, tremendous opportunities for contributions open up. While considerable efforts have been made to address lossy compression using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Such 3D local sampling efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences from the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16 bit-depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood, utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards.
    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16 bit-depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme is evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
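    The prediction-plus-residual pipeline at the heart of the thesis can be sketched as follows; a trivial causal-mean predictor stands in for the trained LSTM, and the volume size and neighbourhood are illustrative only.

        import numpy as np

        def predict(vol, z, y, x):
            """Predict a voxel from three already-decoded causal neighbours."""
            return (int(vol[z-1, y, x]) + int(vol[z, y-1, x]) + int(vol[z, y, x-1])) // 3

        rng = np.random.default_rng(0)
        vol = rng.integers(0, 2**16, size=(8, 8, 8), dtype=np.uint16)  # 16-bit volume

        residuals = []  # boundary voxels would use a fallback coder in practice
        for z in range(1, vol.shape[0]):
            for y in range(1, vol.shape[1]):
                for x in range(1, vol.shape[2]):
                    residuals.append(int(vol[z, y, x]) - predict(vol, z, y, x))

    The residuals, which have much lower entropy than the raw voxels when the predictor is good, are what the arithmetic coder compresses; the decoder repeats the identical predictions and adds the residuals back, which is what guarantees lossless reconstruction.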

    Error concealment techniques for H.264/MVC encoded sequences

    This work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union - European Social Fund (ESF 1.25).
    The H.264/MVC standard offers good compression ratios for multi-view sequences by exploiting spatial, temporal and inter-view image dependencies. This works well in error-free channels; however, in the event of transmission errors, distorted macroblocks propagate and degrade the user's quality of experience. This paper reviews state-of-the-art error concealment solutions and proposes a low-complexity concealment method that can be used with multi-view video coding. The error resilience techniques used to aid error concealment are also identified. The results obtained demonstrate that good multi-view video reconstruction can be achieved with this approach.
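    The abstract does not spell out the proposed method, but a representative low-complexity baseline is temporal replacement: copy the collocated macroblock from the previous frame of the same view. A minimal sketch, with an illustrative 16x16 macroblock grid:

        import numpy as np

        MB = 16  # macroblock size in pixels

        def conceal_temporal(curr, prev, lost_blocks):
            """Overwrite lost macroblocks with collocated blocks from the previous frame."""
            out = curr.copy()
            for by, bx in lost_blocks:
                out[by*MB:(by+1)*MB, bx*MB:(bx+1)*MB] = \
                    prev[by*MB:(by+1)*MB, bx*MB:(bx+1)*MB]
            return out

    An inter-view variant would instead copy disparity-compensated blocks from a neighbouring camera, exploiting the same dependencies the codec uses for compression.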

    Securing Coding-Based Cloud Storage Against Pollution Attacks

    The widespread diffusion of distributed and cloud storage solutions has dramatically changed the way users, system designers, and service providers manage their data. Outsourcing data to remote storage indeed provides many advantages in terms of both capital and operational costs. The security of data outsourced to the cloud, however, still represents one of the major concerns for all stakeholders. Pollution attacks, whereby a set of malicious entities attempts to corrupt stored data, are one of the many risks that affect cloud data security. In this paper we deal with pollution attacks in coding-based block-level cloud storage systems, i.e., systems that use linear codes to fragment, encode, and disperse virtual disk sectors across a set of storage nodes to achieve the desired level of redundancy and to improve reliability and availability without sacrificing performance. Unfortunately, the effects of a pollution attack on linear coding can be disastrous: a single polluted fragment can propagate pervasively through the decoding phase, compromising the whole sector. In this work we show that, using rateless codes, we can design an early pollution detection algorithm able to spot the presence of an attack while fetching data from cloud storage during normal disk reading operations. The alarm triggers a procedure that locates the polluting nodes using the proposed detection mechanism along with statistical inference. The performance of the proposed solution is analysed from several angles using both analytical modelling and accurate simulation driven by real disk traces. Our results show that the proposed approach is very robust and able to effectively isolate the polluters, even in harsh conditions, provided that enough data redundancy is used.
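    The early-detection idea can be sketched in a few lines: because rateless codes supply more coded fragments than the k strictly needed to decode, the reader can test redundant fragments for consistency while reading. The XOR-based code and all sizes below are illustrative stand-ins, not the paper's construction.

        import random

        def encode(fragments, coeff_rows):
            """XOR-combine source fragments according to binary coefficient rows."""
            out = []
            for row in coeff_rows:
                acc = 0
                for bit, frag in zip(row, fragments):
                    if bit:
                        acc ^= frag
                out.append(acc)
            return out

        src = [random.getrandbits(32) for _ in range(4)]                      # k = 4 source fragments
        rows = [[random.getrandbits(1) for _ in range(4)] for _ in range(6)]  # 6 > k coded fragments
        stored = encode(src, rows)

        stored[2] ^= 0xDEAD                 # a polluter corrupts one stored fragment
        print(encode(src, rows) == stored)  # False: the redundant equations expose the attack

    In the real system the decoder first recovers the sector from k fragments and then re-encodes it to test the remaining ones; repeating this test over many reads gives the statistical evidence used to localise the polluting nodes.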

    Transfer learning of deep neural network representations for fMRI decoding

    Background: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNNs) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. New method: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data for the purpose of decoding. By adopting Reduced Rank Regression with Ridge Regularisation, we establish a multivariate link between the imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, taken on different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data acquired while watching static images. Results: The fc7 features could be significantly reconstructed from the imaging data and led to significant decoding performance. Comparison with existing methods: The decoding based on reconstructed fc7 features outperformed the decoding based on imaging data alone. Conclusions: In this work we show how to improve fMRI-based decoding by benefiting from the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by means of an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data into a space designed for visual object discrimination, which is more manageable from a dimensionality point of view.
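    A minimal sketch of the regression step, assuming the standard formulation of reduced-rank ridge regression (fit the multivariate ridge solution, then truncate the fitted values to their top singular directions); all dimensions and hyperparameters are illustrative.

        import numpy as np

        def reduced_rank_ridge(X, Y, lam=10.0, rank=20):
            n, p = X.shape
            # Full-rank multivariate ridge solution.
            B = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)
            # Keep only the top singular directions of the fitted values.
            _, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
            Vr = Vt[:rank].T
            return B @ Vr @ Vr.T          # rank-constrained coefficient matrix

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 50))     # fMRI responses (samples x voxels)
        Y = rng.standard_normal((200, 4096))   # fc7 activations for the same stimuli
        B = reduced_rank_ridge(X, Y)
        Y_hat = X @ B                          # reconstructed fc7 features used for decoding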

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools providing both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools would be used:
    - metadata-based coding tools;
    - new spatiotemporal decompositions;
    - new prediction schemes.
    Although the initial goal was to develop a single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Towards practical distributed video coding

    Multimedia is increasingly becoming a utility rather than mere entertainment. The range of video applications has grown, and some are becoming indispensable to modern lifestyles. Video surveillance is one area that has attracted a significant amount of focus and has also benefited from considerable research and development effort. However, there is still a notable technological gap between an ideal video surveillance platform and the available solutions, mainly in terms of the encoder/decoder complexity balance and the associated design costs. In this thesis, we focus on an emerging technology, Distributed Video Coding (DVC), which is ideally suited to the video surveillance scenario and fits many other potential applications too.