
    The Weakness of WinRAR Encrypted Archives to Compression Side-Channel Attacks

    Arthur-Durett, Kristine, MS, Purdue University, December 2014. The weakness of WinRAR encrypted archives to compression side-channel attacks. Major Professor: Eugene Spafford. This paper explores the security of WinRAR encrypted archives. Previous works concerning potential attacks against encrypted archives are studied and evaluated for practical implementation. These attacks include passive actions examining the effects of compression ratios of archives and the files they contain, the study of temporary artifacts, and active man-in-the-middle attacks on communication between individuals. An extensive overview of the WinRAR software and the functions implemented within it is presented to aid in understanding the intricacies of attacks against archives. Several attacks are chosen from the literature to execute on WinRAR v5.10. Select file types are identified through the examination of compression ratios. The appearance of a file in an archive is determined through both the appearance of substrings in the known area of an archive and the comparison of compression ratios. Finally, the author outlines a revised version of an attack that takes advantage of the independence between the compression and encryption algorithms. While a previous version of this attack only succeeded in removing the encryption from an archive, the revised version is capable of fully recovering an original document from an encrypted, compressed archive. The advantages and shortcomings of these attacks are discussed and some countermeasures are briefly mentioned.
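    As a rough illustration of the compression-ratio channel described above (a sketch only: it uses Python's zlib as a stand-in for RAR's proprietary compressor, and the reference ratios are hypothetical placeholders rather than values from the thesis), a file type can be guessed from the ratio that a compressed-then-encrypted archive entry still reveals:

        import zlib

        def compression_ratio(data: bytes) -> float:
            """Compressed size over original size; lower means more compressible."""
            return len(zlib.compress(data, 9)) / len(data)

        # Hypothetical reference ratios, as would be measured beforehand on sample
        # files of each type with the same compressor settings (illustrative only).
        REFERENCE_RATIOS = {
            "plain text": 0.35,
            "uncompressed bitmap": 0.60,
            "already compressed (JPEG/ZIP)": 0.99,
        }

        def guess_file_type(observed_ratio: float) -> str:
            """Pick the type whose typical ratio is closest to the one leaked by the
            archive entry (compression happens before encryption, so sizes remain
            observable even when the content is not)."""
            return min(REFERENCE_RATIOS,
                       key=lambda t: abs(REFERENCE_RATIOS[t] - observed_ratio))

    The substring comparison mentioned above works on the same principle: appending a guessed string to material that already contains it barely increases the compressed size, because the repetition is absorbed by a back-reference.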

    Compressive sensing-based data uploading in time-driven public sensing applications

    Over the last few years, mobile phone technology has improved greatly. People gain and upload more and more information through their mobile phones in an easy way. Accordingly, a new sensing technology has emerged, referred to as public sensing (PS). The core idea behind PS is to exploit the crowdedness of smart mobile devices to opportunistically provide real-time sensor data considering spatial and environmental dimensions. Recently, PS has been applied in many different application scenarios, such as environmental monitoring, traffic analysis, and indoor mapping. However, PS applications face several challenges. One of the most prominent challenges is users' acceptance to participate in PS applications. In order to convince users to participate, several incentive mechanisms have been developed. However, the two main requirements which should be met by any PS application are users' privacy and the energy cost of running the PS application. In fact, there exist several energy consumers in PS applications. For example, many PS applications require the mobile devices to fix their position and frequently send this position data to the PS server. Similarly, the mobile devices waste energy when they receive sensing queries outside the sensing areas. However, the most energy-expensive task is to frequently acquire and send data to the PS server.
    In this thesis, we tackle the problem of energy consumption in a special category of PS applications in which the participating mobile devices are periodically queried for sensor data, such as acceleration and images. To reduce the energy overhead of uploading large amounts of information, we exploit the fact that processing approximately one thousand instructions consumes as much energy as transmitting one bit of information. Accordingly, we exploit data compression to reduce the number of bits transmitted from the participating mobile devices to the PS server. Although the technical literature offers many compression methods, such as derivative-based prediction, the cosine transform, and the wavelet transform, we designed a framework based on compressive sensing (CS) theory. In the last decade, CS has proven to be a promising candidate for compressing N-dimensional data. Moreover, it shows satisfactory results when used for inferring missing data. Accordingly, we exploit CS to compress 1D data (e.g. acceleration, gravity) and 2D data (e.g. images). To efficiently utilize the CS method on resource-constrained devices such as smart mobile devices, we start by identifying the most lightweight measurement matrices to be implemented on the mobile devices. We examine several matrices, such as the random measurement matrix, the random Gaussian matrix, and the Toeplitz matrix. Our analysis is mainly based on the recovery accuracy and the energy drawn from the mobile device's battery. Additionally, we perform a comparative study with other compressors, including the cosine transform and the lossless ZIP compressor. To further confirm that CS achieves high recovery accuracy, we implemented an activity recognition algorithm at the server side. To this end, we exploit the dynamic time warping (DTW) algorithm as a pattern matching tool between a set of stored patterns and the recovered data. Several experiments show the high accuracy of both CS and DTW in recovering several activities such as walking, running, and jogging. In terms of energy, CS significantly reduces battery consumption relative to the other baseline compressors.
    Finally, we demonstrate that the CS-based compression method can handle 2D data, i.e. images, as well as 1D data. The main challenge is to perform image encoding on the mobile devices despite the complex matrix operations between the image pixels and the sensing matrices. To overcome this problem, we divide the image into a number of cells and perform the encoding process on each cell individually, so that the compression is achieved iteratively. The evaluation results are promising for CS-based 2D compression in terms of both energy savings and recovery accuracy.
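    The CS pipeline above can be illustrated with a minimal numerical sketch (assumptions: the signal is taken to be sparse directly in the canonical basis, whereas real acceleration traces would first be sparsified, e.g. by a cosine transform, and recovery here uses a simple Orthogonal Matching Pursuit rather than the solvers evaluated in the thesis). The device would transmit only the m measurements y = Phi @ x produced by a random Gaussian matrix, and the server reconstructs the full signal:

        import numpy as np

        rng = np.random.default_rng(0)

        def omp(Phi, y, k):
            """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(Phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x_hat = np.zeros(Phi.shape[1])
            x_hat[support] = coef
            return x_hat

        n, m, k = 256, 64, 8                  # signal length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse signal
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)               # Gaussian matrix
        y = Phi @ x                           # only m values leave the phone (m << n)
        x_rec = omp(Phi, y, k)
        print(np.linalg.norm(x - x_rec) / np.linalg.norm(x))         # near-zero error

    Transmitting y instead of x shrinks the uploaded payload by a factor of roughly n/m, while the extra matrix multiplication on the device is comparatively cheap, which is the energy trade-off the thesis exploits.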

    The 1995 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Adaptive and power-aware mechanism in wireless sensor networks

    In a very short time, interest in Wireless Sensor Networks (WSN) and their applications has grown both within the academic community and in industry. At the same time, the complexity of the envisaged WSN-based systems has grown from a handful of homogeneous sensors to hundreds or thousands of devices, possibly differing in terms of capability, architecture, and operating system. Recently, some deployments of WSNs have been suggested in the literature that address both energy awareness and adaptability to environmental changes; in both cases limitations arise which prevent a long lifetime and/or the required Quality of Service (QoS) of the envisaged applications. Energy is one of the scarcest resources in WSNs; energy harvesting technologies are thus required to design credible autonomous sensor networks. In addition to energy harvesting technologies, energy saving mechanisms play an important role in reducing energy consumption in sensor nodes. Moreover, in wireless sensor networks the network topology may change over time due to permanent or transient node and communication faults, energy availability at the nodes (despite the possible presence of energy harvesting mechanisms), and environmental changes (e.g., a landslide phenomenon, the solar power density made available to the nodes, or the presence of vegetation subject to seasonal dynamics). The development of smart routing algorithms is hence a must for granting effective communication in large-scale wireless networks, combining adaptation ability with energy-aware aspects.
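    As a toy illustration of an energy-aware routing decision (the weighting below is an illustrative assumption, not the adaptive mechanism developed in the thesis), a node might pick its next hop by trading off the neighbours' residual energy against link cost:

        def select_next_hop(neighbours, alpha=0.6):
            """Pick the neighbour with the best trade-off between remaining battery
            and link cost (e.g. expected transmission count); alpha weights energy
            against cost. Each neighbour is a dict with 'id', 'residual_energy'
            and 'link_cost'."""
            max_energy = max(n["residual_energy"] for n in neighbours) or 1.0
            max_cost = max(n["link_cost"] for n in neighbours) or 1.0

            def score(n):
                return (alpha * n["residual_energy"] / max_energy
                        - (1.0 - alpha) * n["link_cost"] / max_cost)

            return max(neighbours, key=score)

        # Node B has less energy left but a much cheaper link than node A.
        candidates = [
            {"id": "A", "residual_energy": 8.0, "link_cost": 4.0},
            {"id": "B", "residual_energy": 5.0, "link_cost": 1.0},
        ]
        print(select_next_hop(candidates)["id"])  # -> B

    Re-evaluating such a choice as residual energies and link conditions change is one simple way to combine adaptation with energy awareness.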

    The contour tree image encoding technique and file format

    The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different; QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights, and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format.
    Science and Engineering Research Council
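    The contourization step can be sketched as plain connected-component labelling of equal-valued pixels (a minimal sketch only; the thesis's contour coder, tree construction, and merging rules are not reproduced here):

        from collections import deque
        import numpy as np

        def extract_plateaux(img: np.ndarray):
            """Label 4-connected regions of equal pixel value ('plateaux')."""
            labels = -np.ones(img.shape, dtype=int)
            next_label = 0
            for sy in range(img.shape[0]):
                for sx in range(img.shape[1]):
                    if labels[sy, sx] != -1:
                        continue
                    value = img[sy, sx]
                    labels[sy, sx] = next_label
                    queue = deque([(sy, sx)])
                    while queue:                       # flood-fill one plateau
                        y, x = queue.popleft()
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                                    and labels[ny, nx] == -1 and img[ny, nx] == value):
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
                    next_label += 1
            return labels, next_label

    Nesting these plateaux by spatial inclusion yields the contour tree; the lossy variant then merges nodes whose values differ by less than a visually derived threshold, lowering the tree's entropy at the cost of some false contouring.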

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    Accelerated information processing, communication, and storage are major requirements of the big-data era. With the extensive rise in data availability, easier information acquisition, and growing data rates, a critical challenge emerges in handling these data efficiently. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering the abilities of modern scanners, which annually produce higher-resolution and more densely sampled medical images, with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage would essentially be handled by an effective compression method. Since medical information is critical and plays an influential role in diagnosis accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including data compression, this opens tremendous opportunities for contributions. While considerable effort has been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as a 3D predictor for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities losslessly, compared to other state-of-the-art lossless compression standards.
    This work further investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16-bit depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
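    A minimal PyTorch sketch of the many-to-one prediction setup described above (the layer sizes, sequence length, and normalisation to [0, 1] are illustrative assumptions; the actual MedZip architecture and its arithmetic-coding backend are not reproduced here):

        import torch
        import torch.nn as nn

        class VoxelPredictor(nn.Module):
            """Many-to-one LSTM: predict a target voxel from a sequence of
            previously decoded neighbouring voxels."""
            def __init__(self, hidden_size: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                    batch_first=True)
                self.head = nn.Linear(hidden_size, 1)

            def forward(self, neighbour_seq):           # (batch, seq_len, 1)
                _, (h_n, _) = self.lstm(neighbour_seq)  # h_n: (1, batch, hidden)
                return self.head(h_n[-1])               # (batch, 1) predicted voxel

        model = VoxelPredictor()
        neighbours = torch.rand(8, 16, 1)   # 8 target voxels, 16 causal neighbours each
        targets = torch.rand(8, 1)          # ground-truth voxel intensities in [0, 1]
        residuals = targets - model(neighbours).detach()
        # The residuals (prediction errors) are what an arithmetic coder would
        # encode; the lower their entropy, the better the compression ratio.

    Lossless decoding then relies on the decoder running the identical predictor on the already-decoded neighbourhood and adding back the arithmetically coded residual.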