
    Secure steganography, compression and diagnoses of electrocardiograms in wireless body sensor networks

    The use of e-health applications is increasing in the modern era. Remote monitoring of cardiac patients is an important example of these applications. Diagnosing cardiac disease in time is of crucial importance to saving patients' lives. More than 3.5 million Australians suffer from long-term cardiac disease, so ideally a continuous cardiac monitoring system would be available for this large number of patients. However, health-care providers lack the technology required to achieve this objective. Cloud services can fill this technology gap, but three main problems prevent health-care providers from using them: privacy, performance and accuracy of diagnosis. This thesis addresses all three.

    To provide strong privacy protection, two steganography techniques are proposed. Both achieve promising results in terms of security and distortion: the difference between the original and the watermarked ECG signals is less than 1%, so the watermarked signal can still be used for diagnosis, while only authorized persons who hold the required security information can extract the secret data hidden in the ECG signal.

    To solve the performance problem of storing huge amounts of ECG data in the cloud, two compression techniques are introduced: a fractal-based lossy technique and a Gaussian-based lossless technique. The thesis shows that fractal models can be used efficiently for lossy ECG compression; the proposed fractal technique is also multiprocessing-ready, making it well suited to implementation inside a cloud that offers multiprocessing capability, and it achieves a high compression ratio with low distortion. The Gaussian lossless technique likewise provides a high compression ratio. Moreover, because the compressed files are stored in the cloud, its services should offer automatic diagnosis; to avoid additional processing overhead, they should be able to diagnose compressed ECG files without a decompression stage. Accordingly, the proposed Gaussian compression allows the resulting compressed files to be diagnosed directly.

    To exploit this homomorphic property of the Gaussian compression algorithm, the thesis introduces a new diagnosis technique for detecting life-threatening cardiac diseases such as Ventricular Tachycardia and Ventricular Fibrillation. The technique is applied directly to the compressed ECG files, without a decompression stage, and achieves accuracy near 100% for detecting Ventricular Arrhythmia and 96% for detecting Left Bundle Branch Block. Finally, we believe that this thesis takes the first steps towards encouraging health-care providers to use cloud services, although the journey is still long.
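
    As a concrete illustration of the low-distortion steganography claim, the following minimal sketch embeds secret bits in the least significant bits of quantized ECG samples and measures the percentage root-mean-square difference (PRD) between the original and watermarked signals. The LSB scheme, function names and sample values are illustrative assumptions for this listing, not the thesis's actual technique.

        # A minimal LSB-embedding sketch (assumed scheme, not the thesis's method).
        import numpy as np

        def embed(samples: np.ndarray, bits: list) -> np.ndarray:
            """Hide one secret bit in the LSB of each leading sample."""
            out = samples.copy()
            out[:len(bits)] = (out[:len(bits)] & ~1) | np.array(bits)
            return out

        def extract(samples: np.ndarray, n: int) -> list:
            return list(samples[:n] & 1)

        def prd(original: np.ndarray, marked: np.ndarray) -> float:
            """Percentage root-mean-square difference, a usual ECG distortion measure."""
            diff = original.astype(float) - marked.astype(float)
            return 100 * np.sqrt(np.sum(diff ** 2) / np.sum(original.astype(float) ** 2))

        ecg = np.array([1012, 1018, 1025, 1040, 1100, 1400, 900, 1010])  # toy 12-bit samples
        secret = [1, 0, 1, 1]
        marked = embed(ecg, secret)
        assert extract(marked, len(secret)) == secret
        print(f"PRD = {prd(ecg, marked):.4f}%")  # comfortably under 1% here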

    Secure Compression: Theory & Practice

    Encryption and compression are frequently used together in both network and storage systems, for example in TLS. Despite this, there has been no formal framework for analyzing these combined systems; moreover, such systems are usually just a simple chaining of compression followed by encryption. In this work, we present the first formal framework for proving security in combined compression-encryption schemes and relate it to the traditional notion of semantic security; we call this entropy-restricted semantic security. Additionally, we present a new, efficient cipher, called the squeeze cipher, that combines compression and encryption into a single primitive and provably achieves our entropy-restricted security.
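
    For context, the chained design that this framework analyzes can be sketched in a few lines: compress first, then encrypt the compressed bytes. The sketch below uses zlib plus Fernet from the third-party cryptography package as a stand-in cipher; the paper's squeeze cipher, which fuses the two steps into one primitive, is not reproduced here.

        # Compress-then-encrypt chaining (a sketch, not the squeeze cipher).
        import zlib
        from cryptography.fernet import Fernet  # pip install cryptography

        key = Fernet.generate_key()
        cipher = Fernet(key)

        def seal(plaintext: bytes) -> bytes:
            return cipher.encrypt(zlib.compress(plaintext))

        def unseal(token: bytes) -> bytes:
            return zlib.decompress(cipher.decrypt(token))

        msg = b"highly redundant plaintext " * 32
        token = seal(msg)
        assert unseal(token) == msg
        # The ciphertext length reveals the compressed size, which is precisely
        # the leakage that motivates an entropy-restricted notion of security.
        print(len(msg), "->", len(token))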

    Robust data protection and high efficiency for IoTs streams in the cloud

    Remotely generated streams of Internet of Things (IoT) data have become a vital category upon which many applications rely. Smart meters collect readings for household activities such as power and gas consumption every second; the readings are transmitted wirelessly through various channels and public hops to the operation centres. Because of the unusually large stream sizes, the operation centres use cloud servers, where various entities process the data in real time for billing and power management. Similarly, in smart pipe projects, oil pipes are continuously monitored using sensors, and the collected streams are sent to the public cloud for real-time fault detection. Many other such applications could make the world a more convenient place, contributing to climate change mitigation and transportation improvement, to name a few. Despite their obvious advantages, these applications raise unique challenges: finding a suitable balance between guaranteeing stream security, such as privacy, authenticity and integrity, and not hindering direct operations on those streams, while also handling data management issues such as the volume of protected streams during transmission and storage. These challenges become more complicated when the streams reside on third-party cloud servers. In this thesis, several novel techniques are introduced to address these problems.

    We begin by protecting the privacy and authenticity of transmitted readings without disrupting direct operations. We propose two steganography techniques that rely on different mathematical security models. The results are promising: only an approved party holding the required security tokens can retrieve the hidden secret, and the distortion, measured as the difference between the original and protected readings, is almost zero. This means the streams can be used in their protected form at intermediate hops or on third-party servers. We then improve the integrity of the transmitted protected streams, which are prone to intentional or unintentional noise, by proposing a steganographic technique based on secure error detection and correction. This allows legitimate recipients to (1) detect and recover any noise loss from the hidden sensitive information without privacy disclosure, and (2) remedy the received protected readings using the corrected version of the hidden secret data. The experiments show that our technique has robust recovery capabilities (Root Mean Square (RMS) error < 0.01%, Bit Error Rate (BER) = 0 and PRD < 1%). To reduce the volume of transmitted protected streams, two lossless compression algorithms for IoT readings are introduced, ensuring the volume of protected readings at intermediate hops is reduced without revealing the hidden secrets. The first uses a Gaussian approximation function to represent IoT streams with a few parameters, regardless of the roughness of the signal. The second reduces the randomness of the IoT streams by splitting them into a smaller finite field, which enhances repetition and avoids floating-point rounding errors. Under the same conditions, both of our techniques were superior to existing models mathematically (the entropy was halved) and empirically (the achieved ratio was 3.8:1 to 4.5:1).
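
    One plausible reading of the finite-field splitting idea is byte-plane separation: each 16-bit reading is split into a high-byte stream and a low-byte stream, so each stream has a smaller alphabet and more repetition before a standard lossless coder, with no floating-point arithmetic involved. The sketch below is this editor's illustration, not the thesis's exact transform.

        # Byte-plane splitting before lossless coding (assumed interpretation).
        import zlib

        readings = [40000 + (i * 37) % 256 for i in range(4096)]  # toy smart-meter values

        hi = bytes(r >> 8 for r in readings)    # high bytes: nearly constant
        lo = bytes(r & 0xFF for r in readings)  # low bytes: small, repetitive alphabet

        split = len(zlib.compress(hi)) + len(zlib.compress(lo))
        raw = b"".join(r.to_bytes(2, "big") for r in readings)
        print("split planes:", split, "bytes; interleaved:", len(zlib.compress(raw)), "bytes")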
    We were driven by the question ‘Can the size of multiple incoming compressed protected streams be further reduced on the cloud without decompression?’, to overcome the issue of vast quantities of compressed and protected IoT streams residing in the cloud. A novel lossless size-reduction algorithm was introduced to prove that already-compressed protected IoT readings can be reduced further. This is achieved by employing similarity measurements to classify the compressed streams into subsets, reducing the effect of uncorrelated compressed streams; the values of every subset are then treated independently for further reduction. Both mathematical and empirical experiments demonstrated the improvement in entropy (reduced by almost 50%) and the resulting size reduction (up to 2:1).
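
    The classify-then-reduce step can be pictured with a small sketch: group already-compressed blobs by a cheap similarity measure over their byte histograms, then run a second lossless pass over each group so that correlated blobs share match context. The grouping criterion and the choice of zlib for the second pass are assumptions made for illustration.

        # Similarity-grouped second-pass reduction of compressed blobs (a sketch).
        import zlib
        from collections import Counter

        def similarity(a: Counter, b: Counter) -> float:
            """Intersection-over-union of two byte histograms."""
            union = sum((a | b).values())
            return sum((a & b).values()) / union if union else 0.0

        def regroup(blobs: list, cutoff: float = 0.5) -> list:
            groups = []  # (running histogram, member blobs)
            for blob in blobs:
                h = Counter(blob)
                for gh, members in groups:
                    if similarity(gh, h) >= cutoff:
                        members.append(blob)
                        gh.update(h)
                        break
                else:
                    groups.append((h, [blob]))
            # In practice per-blob lengths are stored alongside so the
            # concatenation can be split back into the original streams.
            return [zlib.compress(b"".join(members), 9) for _, members in groups]

        streams = [zlib.compress(b"temp:21.5;" * 50 + bytes([i])) for i in range(8)]
        print(sum(map(len, streams)), "->", sum(map(len, regroup(streams))))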

    Compcrypt: lightweight ANS-based compression and encryption

    Compression is widely used in Internet communication to save communication time and bandwidth. The asymmetric numeral system (ANS), recently invented by Jarek Duda, offers improved efficiency and close-to-optimal compression, and has been deployed by major IT companies such as Facebook, Google and Apple. Compression by itself does not provide any security (such as confidentiality or authentication of transmitted data). An obvious solution to this problem is to encrypt the compressed bitstream, but this requires two algorithms: one for compression and one for encryption. In this work, we investigate natural properties of ANS that allow authenticated encryption to be incorporated using as little cryptography as possible. We target low-security communication such as the transmission of data from IoT devices and sensors. In particular, we propose three solutions for joint compression and encryption (compcrypt). All of them use a pseudorandom bit generator (PRBG) based on lightweight stream ciphers. The first solution applies state jumps controlled by the PRBG. The second employs two ANS algorithms, with compression switching between the two; the switch is controlled by a PRBG bit. The third compcrypt modifies the encoding function of ANS depending on PRBG bits. The security and efficiency of the proposed compcrypt algorithms are evaluated.
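
    The second solution (switching between two ANS coders under PRBG control) can be sketched with a toy range-ANS coder in which a keyed bit stream selects one of two symbol tables per step. The rANS variant, the SHA-256-based bit generator standing in for a lightweight stream cipher, and all names below are assumptions for illustration; the paper's constructions differ in detail.

        # Toy "compcrypt" sketch: two rANS tables, switch chosen by a PRBG bit.
        import hashlib

        def prbg_bits(key: bytes, n: int) -> list:
            """Pseudorandom bits (stand-in for a lightweight stream cipher)."""
            bits, counter = [], 0
            while len(bits) < n:
                block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
                bits.extend((byte >> i) & 1 for byte in block for i in range(8))
                counter += 1
            return bits[:n]

        # Two alternative frequency tables over the same alphabet and total.
        TABLES = [{"freq": {"a": 3, "b": 3, "c": 2}}, {"freq": {"a": 2, "b": 3, "c": 3}}]
        for t in TABLES:
            t["total"], acc, t["cum"] = sum(t["freq"].values()), 0, {}
            for s in sorted(t["freq"]):
                t["cum"][s] = acc
                acc += t["freq"][s]

        def encode(msg: str, key: bytes) -> int:
            x = 1  # Python big ints stand in for streaming renormalization
            for s, b in zip(msg, prbg_bits(key, len(msg))):
                t = TABLES[b]
                x = (x // t["freq"][s]) * t["total"] + t["cum"][s] + x % t["freq"][s]
            return x

        def decode(x: int, n: int, key: bytes) -> str:
            out = []
            # ANS is last-in first-out, so consume the PRBG bits in reverse.
            for b in reversed(prbg_bits(key, n)):
                t = TABLES[b]
                slot = x % t["total"]
                s = next(k for k in t["freq"] if t["cum"][k] <= slot < t["cum"][k] + t["freq"][k])
                x = t["freq"][s] * (x // t["total"]) + slot - t["cum"][s]
                out.append(s)
            return "".join(reversed(out))

        state = encode("abacabb", b"secret-key")
        assert decode(state, 7, b"secret-key") == "abacabb"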

    Universal homophonic coding

    Redundancy in plaintext is a fertile source of attack on any encryption system. Compression before encryption reduces the redundancy in the plaintext, but this does not make a cipher more secure: the ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol to one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. The security of homophonic coding falls into the class of unconditionally secure ciphers. The main advantage of homophonic coding over pure source coding is that it provides security against both known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it does not depend on the source statistics.
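
    The homophone mapping is easy to sketch for a source with dyadic probabilities: give each symbol a number of fixed-length codewords proportional to its probability and pick among them uniformly at random, so the output is uniformly distributed. This static example is only a sketch; the dissertation's coder is adaptive and built on an arithmetic coder, which this does not reproduce.

        # Static homophonic substitution for a dyadic source (a sketch).
        import secrets

        # p(a)=4/8, p(b)=2/8, p(c)=1/8, p(d)=1/8: shares of the eight 3-bit codewords.
        HOMOPHONES = {"a": [0, 1, 2, 3], "b": [4, 5], "c": [6], "d": [7]}
        REVERSE = {code: sym for sym, codes in HOMOPHONES.items() for code in codes}

        def encode(msg: str) -> list:
            # A uniform pick among a symbol's homophones makes every codeword
            # appear with probability 1/8, i.e. the output looks random.
            return [secrets.choice(HOMOPHONES[s]) for s in msg]

        def decode(codes: list) -> str:
            return "".join(REVERSE[c] for c in codes)

        codes = encode("abacad")
        assert decode(codes) == "abacad"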

    Methods for Identifying Variation in Large-Scale Genomic Data

    The rise of next-generation sequencing has produced an abundance of data with almost limitless analysis applications. As sequencing technology decreases in cost and increases in throughput, the amount of available data is quickly outpacing improvements in processor speed, so analysis methods must also increase in scale to remain computationally tractable. At the same time, larger datasets and the availability of population-wide data offer a broader context with which to improve accuracy. This thesis presents three tools that improve the scalability of sequencing data storage and analysis. First, a lossy compression method for RNA-seq alignments offers extreme size reduction without compromising the downstream accuracy of isoform assembly and quantitation. Second, I describe a graph genome analysis tool that filters population variants for optimal aligner performance. Finally, I offer several methods for improving CNV segmentation accuracy, including borrowing strength across samples to overcome the limitations of low coverage. Together, these methods compose a practical toolkit for improving the computational power of genomic analysis.
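
    As a rough picture of borrowing strength across samples for CNV segmentation, one can normalize each sample's binned coverage by the per-bin median across all samples and flag runs where the log-ratio departs from zero. The scheme below is this editor's toy illustration, not one of the thesis's methods.

        # Cross-sample normalization then naive run-based CNV flagging (a sketch).
        import numpy as np

        def cnv_calls(coverage: np.ndarray, threshold: float = 0.5) -> list:
            """coverage: samples x bins matrix of read counts."""
            baseline = np.median(coverage, axis=0)            # per-bin expectation
            ratios = np.log2((coverage + 1) / (baseline + 1))
            calls = []
            for sample, row in enumerate(ratios):
                start = None
                for i, v in enumerate(np.append(row, 0.0)):   # sentinel flushes the last run
                    if abs(v) > threshold and start is None:
                        start = i
                    elif abs(v) <= threshold and start is not None:
                        calls.append((sample, start, i))      # half-open bin range
                        start = None
            return calls

        cov = np.array([[10, 11, 9, 22, 21, 10],
                        [9, 10, 11, 10, 9, 10],
                        [11, 9, 10, 9, 11, 9]])
        print(cnv_calls(cov))  # expect a gain call for sample 0 around bins 3-4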