337 research outputs found

    Information Systems: Secure Access and Storage in the Age of Cloud Computing

    Given that cloud computing is a remotely accessed service, the connection between provider and customer needs to be adequately protected against all known security risks. In order to ensure this, an open and clear specification of all standards, algorithms and security protocols adopted by the cloud provider is required. In this paper, we review current issues concerned with security threats to cloud computing and present a solution based on our unique patented compression-encryption method. The method provides highly efficient data compression, where a unique symmetric key is generated as part of the compression process and depends on the characteristics of the data. Without the key, the data cannot be decompressed. We focus on threat prevention by cryptography that, if properly implemented, is virtually impossible to break directly. Our security by design is based on two principles: first, defence in depth, where the proposed design is such that more than one subsystem needs to be violated to obtain both the data and their key; second, the principle of least privilege, where an attacker may gain access to only part of the system. The paper highlights the benefits of the solution, which include high compression ratios, lower bandwidth requirements, faster data transmission and response times, reduced storage space, and lower energy consumption, among others.
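    The patented method itself is not disclosed in the abstract, but the general compress-then-encrypt idea with a data-dependent symmetric key can be sketched roughly as follows. The choice of zlib and Fernet, and the key derivation from a hash of the data, are assumptions for illustration only, not the authors' algorithm.

    ```python
    # Rough sketch only: a stand-in compressor plus a key derived from the data itself,
    # so the compressed stream cannot be recovered without that key.
    import base64
    import hashlib
    import zlib
    from cryptography.fernet import Fernet  # assumed available; any symmetric cipher would do

    def compress_and_encrypt(data: bytes) -> tuple[bytes, bytes]:
        compressed = zlib.compress(data, 9)                      # stand-in for the patented compressor
        # Derive a symmetric key from the data's own content (illustrative; the real
        # method ties key generation to the compression process itself).
        key = base64.urlsafe_b64encode(hashlib.sha256(data).digest())
        ciphertext = Fernet(key).encrypt(compressed)
        return ciphertext, key                                    # store/transmit the two separately

    def decrypt_and_decompress(ciphertext: bytes, key: bytes) -> bytes:
        return zlib.decompress(Fernet(key).decrypt(ciphertext))
    ```

    Keeping the ciphertext and the key in separate subsystems mirrors the defence-in-depth principle mentioned above: compromising either one alone yields nothing usable.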

    Summative Stereoscopic Image Compression using Arithmetic Coding

    Image compression aims at reducing the number of bits required to represent an image in order to save storage space and speed up transmission over the network. The reduction in size makes it possible to store more images on disk and shortens transfer times over the data network. A stereoscopic image is a three-dimensional (3D) image perceived by the human brain as the fusion of two images sent to the left and right eyes with distinct phases. However, storing these images takes twice the space of a single image, hence the motivation for this novel approach, Summative Stereoscopic Image Compression using Arithmetic Coding (S2ICAC), in which the difference and average of the stereo pair images are calculated, quantized in the lossy case and left unquantized in the lossless case, and arithmetic coding is applied. The experimental analysis indicates that the proposed method achieves a high compression ratio and a high PSNR value. The proposed method is also compared with the JPEG 2000 Position Based Coding Scheme (JPEG 2000 PBCS) and Stereoscopic Image Compression using Huffman Coding (SICHC). From the experimental analysis, it is observed that S2ICAC outperforms both JPEG 2000 PBCS and SICHC.
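    A minimal sketch of the sum/difference decomposition described above, assuming 8-bit grayscale stereo pairs as NumPy arrays. The arithmetic-coding stage and the exact quantizer of the paper are not reproduced; a single divisor q stands in for the lossy mode (q = 1 keeps the transform reversible).

    ```python
    import numpy as np

    def stereo_encode(left: np.ndarray, right: np.ndarray, q: int = 1):
        """Average/difference decomposition; q > 1 quantizes (lossy), q = 1 is lossless."""
        L, R = left.astype(np.int32), right.astype(np.int32)
        avg = (L + R) // 2            # average image: resembles either view
        diff = L - R                  # difference image: mostly near zero, entropy-codes well
        return avg // q, diff // q    # both streams would then go to the arithmetic coder

    def stereo_decode(avg_q: np.ndarray, diff_q: np.ndarray, q: int = 1):
        avg, diff = avg_q * q, diff_q * q
        s = 2 * avg + (diff & 1)      # parity of the difference restores the bit lost by // 2
        left, right = (s + diff) // 2, (s - diff) // 2
        return (np.clip(left, 0, 255).astype(np.uint8),
                np.clip(right, 0, 255).astype(np.uint8))
    ```

    With q = 1 the reconstruction is exact; with q > 1 the decomposition becomes the lossy variant and the output is only an approximation of the original pair.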

    ADAPTIVE AND SECURE DISTRIBUTED SOURCE CODING FOR VIDEO AND IMAGE COMPRESSION

    Distributed Video Coding (DVC) is rapidly gaining popularity as a low-cost, robust video coding solution that reduces video encoding complexity. DVC is built on Distributed Source Coding (DSC) principles, where the correlation between the sources to be compressed is exploited at the decoder side. In the case of DVC, a current frame available only at the encoder is estimated at the decoder with side information (SI) generated from other frames available at the decoder. The inter-frame correlation in DVC is then exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the SI frame. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: online estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform online estimation techniques at the cost of increased decoding complexity. In order to exploit the robustness of DVC code designs, I integrate particle filtering with standard belief propagation (BP) decoding for inference on one joint factor graph to estimate the correlation between the source and the side information. Correlation estimation is performed OTF, as it is carried out jointly with decoding of the graph-based DSC code. Moreover, I demonstrate the proposed scheme within state-of-the-art DVC systems, which are transform-domain based with a feedback channel for rate adaptation. Experimental results show that the proposed system gives a significant performance improvement compared to the benchmark state-of-the-art DISCOVER codec (including its correlation estimation) and the case without dynamic particle-filtering tracking, owing to improved knowledge of timely correlation statistics via the combination of joint bit-plane decoding and particle-based BP tracking. Although sampling-based (e.g., particle filtering) OTF correlation estimation improves DVC performance, it also introduces significant computational overhead and increases the decoding delay. Therefore, I tackle this difficulty through a low-complexity adaptive DVC scheme using deterministic approximate inference, where correlation estimation is again performed OTF jointly with decoding of the factor-graph-based DVC code, but with much lower complexity. The proposed adaptive DVC scheme is based on expectation propagation (EP), which generally offers a better trade-off between accuracy and complexity among deterministic approximate inference methods. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and the other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than the sampling method. Finally, I extend the concept of DVC (i.e., exploiting inter-frame correlation at the decoder side) to the compression of biomedical imaging data (e.g., CT sequences) in a lossless setup, where each slice of a CT sequence is analogous to a frame of a video sequence.
    Besides compression efficiency, another important concern with biomedical imaging data is privacy and security. Ideally, biomedical data should be kept in a secure manner (i.e., encrypted). An intuitive approach is to compress the encrypted biomedical data directly. Unfortunately, traditional compression algorithms (which remove redundancy by exploiting the structure of the data) fail to handle encrypted data. The reason is that encrypted data appear random and lack the structure present in the original data. The "best" practice has been to compress the data before encryption; however, this is not appropriate for privacy-related scenarios (e.g., biomedical applications), where one wants to process data while keeping them encrypted and safe. In this dissertation, I develop a Secure Privacy-presERving Medical Image CompRessiOn (SUPERMICRO) framework based on DSC, which makes compression of the encrypted data possible without compromising security or compression efficiency. The approach guarantees data transmission and storage in a privacy-preserving manner. I tested the proposed framework on two CT image sequences and compared it with state-of-the-art JPEG 2000 lossless compression. Experimental results demonstrate that the SUPERMICRO framework provides enhanced security and privacy protection, as well as high compression performance.
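    The point about encrypted data being incompressible is easy to reproduce: a conventional compressor exploits structure in plaintext, but ciphertext from a good cipher is statistically indistinguishable from random bytes. In the small demonstration below, random bytes are used as a stand-in for ciphertext rather than invoking any specific cipher.

    ```python
    import os
    import zlib

    plain = bytes(range(256)) * 4096               # highly structured "image-like" data (1 MiB)
    cipher_like = os.urandom(len(plain))           # proxy for the encrypted version

    print(len(zlib.compress(plain)) / len(plain))        # small ratio: redundancy removed
    print(len(zlib.compress(cipher_like)) / len(plain))  # ~1.0: no structure to exploit
    ```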

    Robust watermarking for magnetic resonance images with automatic region of interest detection

    Medical image watermarking requires special considerations compared to ordinary watermarking methods. The first issue is the detection of an important area of the image, called the Region of Interest (ROI), prior to starting the watermarking process. Most existing ROI detection procedures are manual, while in automated methods robustness against intentional or unintentional attacks has not been considered extensively. The second issue is the robustness of the embedded watermark against different attacks. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this thesis addresses these issues by developing automatic ROI detection for magnetic resonance images that is robust against attacks, particularly salt and pepper noise, and by designing a new watermarking method that can withstand high-density salt and pepper noise. In the ROI detection part, a combination of several algorithms, such as morphological reconstruction, adaptive thresholding and labelling, is utilized. A noise-filtering algorithm and a window-size correction block are then introduced for further enhancement. The performance of the proposed ROI detection is evaluated by computing the Comparative Accuracy (CA). In the watermarking part, a combination of a spatial method, channel coding and noise-filtering schemes is used to increase robustness against salt and pepper noise. The quality of the watermarked image is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of the Bit Error Rate (BER). Based on the experiments, the CA under eight different attacks (speckle noise, average filter, median filter, Wiener filter, Gaussian filter, sharpening filter, motion, and salt and pepper noise) is between 97.8% and 100%. The CA under different densities of salt and pepper noise (10%-90%) is in the range of 75.13% to 98.99%. In the watermarking part, the performance of the proposed method under different densities of salt and pepper noise, measured by total PSNR, ROI PSNR, total SSIM and ROI SSIM, improves from the ranges of 3.48-23.03 dB, 3.5-23.05 dB, 0-0.4620 and 0-0.5335 to 21.75-42.08 dB, 20.55-40.83 dB, 0.5775-0.8874 and 0.4104-0.9742 respectively. In addition, the BER is reduced to the range of 0.02% to 41.7%. To conclude, the proposed method significantly improves on the performance of existing medical image watermarking methods.
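    As an illustration of the kind of ROI-detection pipeline described above (noise filtering, morphological reconstruction, adaptive thresholding and labelling), the sketch below uses SciPy and scikit-image. The parameter values and the largest-region heuristic are assumptions for illustration, not the thesis' actual settings.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.filters import threshold_local
    from skimage.measure import label, regionprops
    from skimage.morphology import reconstruction

    def detect_roi(mr_slice: np.ndarray) -> np.ndarray:
        img = median_filter(mr_slice, size=3)                    # suppress salt & pepper noise
        seed = np.copy(img)
        seed[1:-1, 1:-1] = img.max()
        filled = reconstruction(seed, img, method='erosion')     # fill holes inside the anatomy
        mask = filled > threshold_local(filled, block_size=51)   # adaptive thresholding
        labels = label(mask)                                     # connected-component labelling
        if labels.max() == 0:
            return mask
        largest = max(regionprops(labels), key=lambda r: r.area)
        return labels == largest.label                           # keep the largest region as the ROI
    ```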

    Edge Intelligence : Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing while protecting the privacy and security of the data and the users. Although it has emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state-of-the-art solutions by examining the research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions.

    Novel methods of image compression for 3D reconstruction

    Data compression techniques are widely used in the transmission and storage of 2D images, video and 3D data structures. The thesis addresses two aspects of data compression, 2D images and 3D structures, by focusing the research on the problem of compressing structured-light images for 3D reconstruction. It is useful, then, to describe the research by separating the compression of 2D images from the compression of 3D data. Concerning image compression, there are many types of techniques, among the most popular being JPEG and JPEG2000. The thesis addresses different types of discrete transformations (DWT, DCT and DST) that are combined in particular ways and followed by the Matrix Minimization algorithm, which achieves high compression ratios by converting groups of data into a single value. This is an essential step in achieving compression ratios of up to 99%. It is demonstrated that the approach is superior to both JPEG and JPEG2000 for compressing 2D images used in 3D reconstruction. The approach has also been tested on compressing natural or generic 2D images, mainly through DCT followed by Matrix Minimization and arithmetic coding. Results show that the method is superior to JPEG in terms of compression ratios and image quality, and equivalent to JPEG2000 in terms of image quality. Concerning the compression of 3D data structures, the Matrix Minimization algorithm is used to compress geometry and connectivity represented by a list of vertices and a list of triangulated faces. It is demonstrated that the method can compress vertices very efficiently compared with other 3D formats. Here the Matrix Minimization algorithm converts each vertex (X, Y and Z) into a single value without the use of any prior discrete transformation (as used for 2D images) and without using any coding algorithm. Concerning connectivity, the triangulated face data are also compressed with the Matrix Minimization algorithm followed by arithmetic coding, yielding a stream of compressed data. Results show compression ratios close to 95%, which are far superior to compression with other 3D techniques. The compression methods presented in this thesis are defined as per-file compression. The keys used for compression depend on the data to be compressed; thus, each file generates its own set of compression keys and its own set of unique data. This feature enables applications in the security domain for the safe transmission and storage of data. The generated keys, together with the set of unique data, can be regarded as an encryption key for the file since, without this information, the file cannot be decompressed.
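    The grouping idea behind the Matrix Minimization step can be illustrated very roughly as follows: each triplet of values is combined into a single value using per-file keys, and a table of the unique triplets is retained so the mapping can be reversed. This toy sketch (which ignores key-value collisions) only illustrates the concept of data-dependent keys plus unique data; it is not the patented algorithm.

    ```python
    import random

    def encode_triplets(values, seed=12345):
        rng = random.Random(seed)
        keys = [rng.uniform(0.1, 1.0) for _ in range(3)]              # per-file compression keys
        triplets = [tuple(values[i:i + 3]) for i in range(0, len(values) - 2, 3)]
        encoded = [keys[0] * a + keys[1] * b + keys[2] * c for a, b, c in triplets]
        table = {round(e, 9): t for e, t in zip(encoded, triplets)}   # "unique data" lookup table
        return encoded, keys, table                                    # all parts needed to decode

    def decode_triplets(encoded, table):
        out = []
        for e in encoded:
            out.extend(table[round(e, 9)])                             # reverse each single value
        return out
    ```

    As in the thesis, someone holding only the encoded stream, without the keys and the unique-data table, has no practical way to recover the original values.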

    FPGA based technical solutions for high throughput data processing and encryption for 5G communication: A review

    Field programmable gate array (FPGA) devices are ideal solutions for high-speed processing applications, given their flexibility, parallel processing capability, and power efficiency. In this review paper, we first present an overview of the key applications of FPGA-based platforms in 5G networks and systems, which exploit the improved performance offered by such devices. FPGA-based implementations of cloud radio access network (C-RAN) accelerators, network function virtualization (NFV)-based network slicers, cognitive radio systems, and multiple-input multiple-output (MIMO) channel characterizers are the main applications considered that can benefit from the high processing rate, power efficiency and flexibility of FPGAs. Furthermore, implementations of encryption/decryption algorithms on the Xilinx Zynq UltraScale+ MPSoC ZCU102 FPGA platform are discussed, and we then introduce our high-speed and lightweight implementation of the well-known AES-128 algorithm, developed on the same FPGA platform, comparing it with similar solutions already published in the literature. The comparison results indicate that our AES-128 implementation enables efficient hardware usage for a given data rate (up to 28.16 Gbit/s), resulting in higher efficiency (8.64 Mbps/slice) than the other solutions considered. Finally, applications of the ZCU102 platform for high-speed processing are explored, such as image and signal processing, visual recognition, and hardware resource management.
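    Assuming the efficiency figure is simply throughput divided by the number of occupied FPGA slices, the two numbers quoted above imply a footprint of roughly 3,260 slices:

    ```python
    # Back-of-the-envelope check of the quoted efficiency metric (throughput / slices).
    throughput_mbps = 28.16 * 1000          # 28.16 Gbit/s expressed in Mbit/s
    efficiency_mbps_per_slice = 8.64        # as reported
    print(throughput_mbps / efficiency_mbps_per_slice)   # ~3259 slices implied
    ```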

    Image acquisition and storage for medical imaging systems

    Image Acquisition and Storage for Medical Imaging Systems investigates the issues and requirements involved in developing a medical imaging system for the dental industry. Research was conducted by studying image acquisition and digitization systems, image file format standards, and image distribution techniques in a medical facility. Furthermore, future trends in the medical imaging industry were identified. From these studies, a medical imaging system called the Miniature Image and Data Acquisition System (MIDAS) was created. MIDAS is an intraoral camera imaging system with the capability to capture images of a patient's teeth and gums, track images together with patient data, and distribute images and data over a Local Area Network (LAN). These capabilities match or exceed those found in most intraoral camera systems.