15 research outputs found

    Binary Differencing for Media Files


    Precise Delta Extraction Scheme for Reprogramming of Wireless Sensor Nodes

    In this paper, we present a precise delta extraction scheme and tool for use in wireless sensor network reprogramming. Our approach uses a novel algorithm based on set theory and the structure of the Executable and Linkable Format (ELF) to extract a delta from two distinct firmware images (the original and the modified). The delta consists of two sets of unique values: one indicates the addresses at which changes occurred, and the other carries the changed data content. In addition, we developed a set of metrics that reports the degree of modification relative to the original file. Compared with similar utilities reported in the literature, the scheme shows an appreciable capacity to reduce energy consumption as well as the amount of memory used during reprogramming. http://dx.doi.org/10.4314/njt.v35i1.2
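
    A minimal sketch of the (address, data) delta idea described above, assuming plain byte-level comparison; the paper's actual algorithm is set-theoretic and aware of ELF structure, which this does not reproduce:

```python
# Sketch: extract a delta as (address, changed-bytes) runs from two firmware
# images, plus a simple "degree of modification" metric. Hypothetical
# simplification of the scheme described in the abstract.

def extract_delta(original: bytes, modified: bytes):
    """Return ([(address, changed_bytes), ...], modification_ratio)."""
    delta, run_start, run_bytes = [], None, bytearray()
    length = max(len(original), len(modified))
    for addr in range(length):
        old = original[addr] if addr < len(original) else None
        new = modified[addr] if addr < len(modified) else None
        if new is not None and old != new:
            if run_start is None:
                run_start = addr          # a new run of changes begins
            run_bytes.append(new)
        elif run_start is not None:
            delta.append((run_start, bytes(run_bytes)))
            run_start, run_bytes = None, bytearray()
    if run_start is not None:
        delta.append((run_start, bytes(run_bytes)))
    changed = sum(len(data) for _, data in delta)
    return delta, (changed / length if length else 0.0)

old_fw = b"\x01\x02\x03\x04\x05\x06"
new_fw = b"\x01\xff\x03\x04\xee\x06"
patches, ratio = extract_delta(old_fw, new_fw)
print(patches)                  # [(1, b'\xff'), (4, b'\xee')]
print(f"{ratio:.0%} modified")  # 33% modified
```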

    Lossless Differential Compression for Synchronizing Arbitrary Single-Dimensional Strings

    Differential compression allows expressing a modified document as differences relative to another version of the document. A compressed string requires space proportional to the amount of change, irrespective of the original document sizes. The purpose of this study was to determine which algorithms are suitable for universal lossless differential compression for synchronizing two arbitrary documents, either locally or remotely. The two main problems in differential compression are finding the differences (differencing) and compactly communicating them (encoding). We discussed local differencing algorithms based on subsequence searching, hashtable lookups, suffix searching, and projection. We also discussed probabilistic remote algorithms based on both recursive comparison and characteristic polynomial interpolation of hashes computed from variable-length, content-defined substrings. We described various heuristics for approximating optimal algorithms, since arbitrarily long strings and memory limitations force discarding information. The discussion also covered compact delta encoding and in-place reconstruction. We presented results from empirical testing of the discussed algorithms. The conclusion was that multiple algorithms need to be integrated into a hybrid implementation that heuristically chooses among them based on an evaluation of the input data. Algorithms based on hashtable lookups are faster on average and require less memory, but algorithms based on suffix searching find the fewest differences. Interpolating characteristic polynomials was found to be too slow for general use. With remote hash comparison, content-defined chunks and recursive comparison can reduce protocol overhead. A differential compressor should be merged with a state-of-the-art non-differential compressor to enable more compact delta encoding. Input should be processed multiple times to allow a constant space bound without significant reduction in compression efficiency. The compression efficiency of current popular synchronizers could be improved, as our empirical testing showed that a non-differential compressor produced smaller files without having access to one of the two strings.
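
    As a concrete illustration of the hashtable-lookup family the thesis finds fastest on average, here is a hedged sketch in the spirit of tools like rsync and xdelta, not an algorithm taken from the thesis itself: index fixed-size blocks of the reference string, then walk the target emitting COPY and INSERT operations.

```python
# Sketch of hashtable-based differencing: index fixed-size blocks of the
# reference, then greedily encode the target as COPY (offset, length) and
# INSERT (literal bytes) operations. Real tools use rolling hashes and
# larger blocks; this is a simplified illustration.

BLOCK = 4  # deliberately tiny for demonstration

def diff(reference: bytes, target: bytes):
    index = {}
    for i in range(len(reference) - BLOCK + 1):
        index.setdefault(reference[i:i + BLOCK], i)   # first occurrence wins
    ops, literal, pos = [], bytearray(), 0
    while pos < len(target):
        src = index.get(target[pos:pos + BLOCK])
        if src is None:
            literal.append(target[pos]); pos += 1     # no match: buffer a literal
        else:
            length = BLOCK                            # extend the match greedily
            while (src + length < len(reference) and pos + length < len(target)
                   and reference[src + length] == target[pos + length]):
                length += 1
            if literal:
                ops.append(("INSERT", bytes(literal))); literal = bytearray()
            ops.append(("COPY", src, length)); pos += length
    if literal:
        ops.append(("INSERT", bytes(literal)))
    return ops

def apply_delta(reference: bytes, ops):
    out = bytearray()
    for op in ops:
        out += reference[op[1]:op[1] + op[2]] if op[0] == "COPY" else op[1]
    return bytes(out)

ref = b"the quick brown fox jumps over the lazy dog"
tgt = b"the quick red fox jumps over the lazy cat"
assert apply_delta(ref, diff(ref, tgt)) == tgt
```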

    Building blocks for the internet of things


    Securely accessing encrypted cloud storage from multiple devices

    Cloud storage services like Dropbox, Google Drive and OneDrive are increasingly popular. They allow users to synchronize and access data from multiple devices. However, the privacy of cloud data is a concern. Encrypting data on the client side before uploading it to cloud storage is an effective way to ensure data privacy. To allow data access from multiple devices, current solutions derive the encryption keys solely from user-chosen passwords, which results in low-entropy keys. In this thesis, we present OmniShare, the first scheme to allow client-side encryption with high-entropy keys combined with an intuitive key distribution mechanism enabling data access from multiple devices. It uses a combination of out-of-band channels and the cloud storage itself as a communication channel to ensure minimal and consistent user actions during key distribution. Furthermore, OmniShare allows the possibility of reducing communication overhead when updating encrypted data. OmniShare is freely available on popular platforms.
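
    The key-entropy contrast the abstract draws can be made concrete with a short sketch using the Python cryptography package; this is a generic illustration of client-side encryption with a randomly generated key versus a password-derived one, not OmniShare's actual protocol or key-distribution mechanism:

```python
# Password-derived keys are only as strong as the password, however good the
# KDF; a randomly generated 256-bit key has full entropy. Generic sketch,
# not OmniShare's actual protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_from_password(password: bytes, salt: bytes) -> bytes:
    # Entropy is bounded by the password, so a weak password yields a weak key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(password)

def fresh_high_entropy_key() -> bytes:
    # Full 256 bits from the OS CSPRNG; this key must then be distributed
    # to the user's other devices (OmniShare's problem) rather than re-derived.
    return AESGCM.generate_key(bit_length=256)

def encrypt_for_upload(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                        # unique per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

key = fresh_high_entropy_key()
blob = encrypt_for_upload(key, b"file contents")  # ciphertext safe to upload
```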

    Automatic backup system for a technical design program (Sistema de copia de seguridad automática para un programa de diseño técnico)

    Starting from the shortcomings in data safeguarding present in a technical design program, this work studies the techniques needed to address them and details their subsequent implementation as a library. In this way, data recovery, operation undo, and change history features are provided without noticeably affecting the application's performance.
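
    A hedged sketch of how the three features named above might fit together, using hypothetical names and a simple snapshot design (a real implementation would likely store deltas to limit memory use); this is not the library described in this work:

```python
# Hypothetical snapshot-based history: commit() records states for data
# recovery and the change log; undo() steps the cursor back. Not the
# library described above, just an illustration of the feature set.
import copy
import time

class DocumentHistory:
    def __init__(self, initial_state):
        self._snapshots = [(time.time(), copy.deepcopy(initial_state))]
        self._cursor = 0

    def commit(self, state):
        """Record a new state, discarding any redo branch beyond the cursor."""
        del self._snapshots[self._cursor + 1:]
        self._snapshots.append((time.time(), copy.deepcopy(state)))
        self._cursor += 1

    def undo(self):
        """Step back one state and return a copy of it."""
        if self._cursor > 0:
            self._cursor -= 1
        return copy.deepcopy(self._snapshots[self._cursor][1])

    def change_log(self):
        """Timestamps of every recorded change, oldest first."""
        return [ts for ts, _ in self._snapshots]

doc = {"shapes": []}
history = DocumentHistory(doc)
doc["shapes"].append("circle")
history.commit(doc)
doc = history.undo()            # back to the empty document
assert doc == {"shapes": []}
```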

    Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems

    High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop, 10^18 floating-point operations per second. On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance, as more and more time is taken away from the application to protect against or recover from failures. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, I examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, I evaluate the potential impact of rollback avoidance on these systems. I then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, I examine the feasibility of using this technique to protect against memory faults in kernel memory.
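
    The scaling pressure on checkpoint/restart described above can be illustrated with Young's classic first-order formula for the optimal checkpoint interval, t_opt ≈ sqrt(2 · δ · MTBF), where δ is the cost of writing one checkpoint; this is a well-known approximation used here for flavor, not a model taken from the thesis:

```python
# As node counts grow, system MTBF shrinks and the optimal checkpoint
# interval approaches the checkpoint cost itself, leaving little time for
# useful work. Young's formula assumes delta << MTBF, and its breakdown at
# extreme scale is exactly the motivation for rollback avoidance.
import math

def optimal_interval(checkpoint_cost_s: float, system_mtbf_s: float) -> float:
    """Young's approximation: t_opt = sqrt(2 * delta * MTBF)."""
    return math.sqrt(2 * checkpoint_cost_s * system_mtbf_s)

NODE_MTBF_S = 5 * 365 * 24 * 3600        # one node fails every 5 years
CHECKPOINT_COST_S = 600                  # 10 minutes to write a checkpoint

for nodes in (1_000, 10_000, 100_000):
    mtbf = NODE_MTBF_S / nodes           # failures arrive ~nodes times faster
    t = optimal_interval(CHECKPOINT_COST_S, mtbf)
    print(f"{nodes:>7} nodes: system MTBF {mtbf/3600:5.1f} h, "
          f"checkpoint every {t/60:5.1f} min")
```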