
    Achieving Obfuscation Through Self-Modifying Code: A Theoretical Model

    With the enormous amount of data and software available on networks, protecting online information is one of the most important tasks of this technological age. There is no such thing as safe computing, and it is inevitable that security breaches will occur. Security professionals and practices therefore focus on two areas: security, preventing a breach from occurring, and resiliency, minimizing the damage once a breach has occurred. One of the most important practices for adding resiliency to source code is obfuscation, a method of rewriting the code into a form that is virtually unreadable. This makes the code extremely hard for attackers to decipher, protecting intellectual property and reducing the amount of information gained by a malicious actor. Achieving obfuscation through self-modifying code, code that mutates during runtime, is a complicated but impressive undertaking that creates an incredibly robust obfuscating system. While a great deal of research is still ongoing, preliminary results suggest that applying self-modifying code to obfuscation may yield self-maintaining software capable of healing itself following an attack.
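
    The abstract stays at the level of a theoretical model, but the core mechanism can be illustrated with a toy sketch. The Python below (all names hypothetical, not from the paper) keeps a function only as an XOR-encrypted source string, decrypts and executes it on each call, and then re-encrypts it under a fresh random key, so both the key and the ciphertext mutate at runtime and the plaintext logic never persists between calls; real obfuscators operate on compiled code and are far more involved.

        import os

        def xor(data: bytes, key: bytes) -> bytes:
            """XOR a byte string against a repeating key."""
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        # The protected logic exists only in encrypted form between calls.
        _key = os.urandom(16)
        _blob = xor(b"def payload(x):\n    return x * x\n", _key)

        def call_payload(x):
            """Decrypt, execute, and re-encrypt the payload on every call."""
            global _key, _blob
            src = xor(_blob, _key)       # recover the plaintext source
            ns = {}
            exec(src.decode(), ns)       # materialize the function briefly
            result = ns["payload"](x)
            _key = os.urandom(16)        # mutate: fresh key on every call
            _blob = xor(src, _key)       # ciphertext never repeats
            return result

        print(call_payload(7))  # -> 49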

    Enhancing Data Security by Making Data Disappear in P2P Systems

    This paper describes the problem of securing data by making it disappear after some time limit, making it impossible for the data to be recovered by an unauthorized party. This method responds to the need to keep data secure and to protect the privacy of archived data on server, Cloud, and Peer-to-Peer architectures. Due to the distributed nature of these architectures, it is impossible to destroy the data completely. Instead, we store the data encrypted and then manage the key, which is easier to do because the key is small and can be hidden in a distributed hash table (DHT). Even if the keys in the DHT and the encrypted data were compromised, the data would still be secure. This paper describes existing solutions, points out their limitations, and suggests improvements with a new secure architecture. We implemented and evaluated this architecture on the Java platform and demonstrated that it is more secure than other architectures.
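
    A minimal sketch of the encrypt-then-manage-the-key idea follows. The paper's implementation is in Java; in the illustrative Python below a plain dict stands in for the DHT and a simple n-of-n XOR secret-sharing scheme stands in for the paper's key-hiding mechanism. The data is archived only as ciphertext, the key is split into shares scattered across nodes, and once any node expires its share the key, and with it the data, becomes unrecoverable.

        import os
        from cryptography.fernet import Fernet  # pip install cryptography

        def split_key(key: bytes, n: int) -> list[bytes]:
            """n-of-n XOR sharing: all n shares are needed to rebuild the key."""
            shares = [os.urandom(len(key)) for _ in range(n - 1)]
            last = key
            for s in shares:
                last = bytes(a ^ b for a, b in zip(last, s))
            return shares + [last]

        def join_key(shares: list[bytes]) -> bytes:
            key = shares[0]
            for s in shares[1:]:
                key = bytes(a ^ b for a, b in zip(key, s))
            return key

        # Encrypt the data once; only the ciphertext is archived.
        key = Fernet.generate_key()
        ciphertext = Fernet(key).encrypt(b"archived record")

        # Scatter key shares across nodes (a dict stands in for the DHT).
        dht = {f"node-{i}": s for i, s in enumerate(split_key(key, 5))}

        # Recovery works only while every share is still present ...
        assert Fernet(join_key(list(dht.values()))).decrypt(ciphertext) \
            == b"archived record"

        # ... once a node expires its share, the key cannot be rebuilt
        # and the ciphertext is permanently unreadable.
        dht.pop("node-3")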

    Reliable and Energy Efficient MLC STT-RAM Buffer for CNN Accelerators

    We propose a lightweight scheme in which the formation of a data block is changed so that it tolerates soft errors significantly better than the baseline. The key insight behind our work is that CNN weights are normalized between -1 and 1 after each convolutional layer, which leaves one bit unused in the half-precision floating-point representation. Taking advantage of this unused bit, we create a backup of the most significant bit to protect it against soft errors. Also, since in MLC STT-RAMs the cost of memory operations (read and write) and the reliability of a cell are content-dependent (some patterns draw larger current and take longer, while being more susceptible to soft errors), we rearrange the data block to minimize the number of costly bit patterns. Combining these two techniques provides the same level of accuracy as an error-free baseline while improving read and write energy by 9% and 6%, respectively.
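
    A bit-level sketch of the backup idea, under two assumptions not spelled out in the abstract: that the protected "most significant bit" is the fp16 sign bit, and that the spare bit is the exponent MSB (bit 14), which is always 0 for weights in [-1, 1]. Which copy to trust on a mismatch is also an illustrative choice here; the Python below simply treats the backup as authoritative.

        import numpy as np

        def encode(w: float) -> int:
            """Copy the fp16 sign bit (bit 15) into the spare exponent MSB (bit 14).

            For weights normalized into [-1, 1] the biased exponent never
            exceeds 0b01111, so bit 14 is guaranteed to be 0 before encoding.
            """
            bits = int(np.float16(w).view(np.uint16))
            sign = (bits >> 15) & 1
            return bits | (sign << 14)

        def decode(stored: int) -> np.float16:
            """Rebuild the word, trusting the backup copy of the sign bit.

            (Treating the backup as authoritative is an illustrative policy,
            not necessarily the paper's recovery rule.)
            """
            backup = (stored >> 14) & 1
            bits = (stored & 0x3FFF) | (backup << 15)
            return np.uint16(bits).view(np.float16)

        stored = encode(-0.8125)
        flipped = stored ^ (1 << 15)            # soft error flips the sign bit
        print(decode(stored), decode(flipped))  # both print -0.8125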

    Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories

    Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitude and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by ℓ. The constructions utilize known codes for symmetric errors over small alphabets to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet, whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
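
    The residue idea behind the construction can be shown on a toy instance: reduce each q-ary symbol modulo ℓ+1, decode the residue word with the small-alphabet symmetric-error code, and subtract the recovered error, which is exact because an asymmetric error of magnitude at most ℓ is fully determined by its residue. The Python sketch below uses ℓ = 1, t = 1, and the 3-fold binary repetition code as a deliberately tiny stand-in for the symmetric-error code; the paper's constructions use stronger codes.

        # Toy instance: correct t = 1 asymmetric error of magnitude at
        # most L = 1 over a q-ary alphabet. SIGMA is a code over the
        # small alphabet {0, ..., L} correcting one *symmetric* error.
        L = 1
        SIGMA = [(0, 0, 0), (1, 1, 1)]

        def decode(y):
            """Recover x from y = x + e, 0 <= e_i <= L, at most one e_i > 0."""
            psi = tuple(s % (L + 1) for s in y)   # residues of received word
            chi = min(SIGMA,
                      key=lambda c: sum(a != b for a, b in zip(c, psi)))
            eps = tuple((p - c) % (L + 1) for p, c in zip(psi, chi))
            return tuple(s - e for s, e in zip(y, eps))  # subtract the error

        x = (5, 3, 9)   # residues (1, 1, 1) lie in SIGMA, so x is a codeword
        y = (6, 3, 9)   # the channel adds +1 to the first symbol
        assert decode(y) == x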

    Concentric Layout: A New Scientific Data Layout for Matrix Data Sets in the Hadoop File System

    The data generated by scientific simulations, sensors, monitors, and optical telescopes has grown at a dramatic rate. In order to analyze this raw data efficiently in both time and space, a data preprocessing step is needed to achieve better performance in the data analysis phase. Current research shows an increasing trend of adopting the MapReduce framework for large-scale data processing. However, the data access patterns generally applied to scientific data sets are not directly supported by the current MapReduce framework. The gap between the requirements of analytics applications and the properties of the MapReduce framework motivates us to support these data access patterns within it. In our work, we studied the data access patterns in matrix files and propose a new concentric data layout to facilitate matrix data access and analysis in the MapReduce framework. The concentric data layout maintains the dimensional property at the chunk level: contrary to the contiguous data layout adopted by default in the current Hadoop framework, it stores the data of the same sub-matrix in one chunk, which matches well with matrix operations such as block-wise computation. The concentric data layout preprocesses the data beforehand and optimizes subsequent runs of the MapReduce application. Our experiments indicate that the concentric data layout improves overall performance, reducing execution time by 38% for a 16 GB file; it also relieves the data-overhead phenomenon and increases the effective data retrieval rate by 32% on average.
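
    A rough sketch of why chunking by sub-matrix helps (the parameters and function names below are illustrative, not the paper's): under the default row-major layout, reading one 64 x 64 sub-matrix spans many HDFS chunks, while a layout that keeps each sub-matrix inside a single chunk touches exactly one.

        def rowmajor_chunk(i, j, n, chunk_elems):
            """Chunk of element (i, j) when the n x n matrix is stored row by row."""
            return (i * n + j) // chunk_elems

        def block_chunk(i, j, n, b):
            """Chunk of element (i, j) when each b x b sub-matrix fills one chunk."""
            return (i // b) * (n // b) + (j // b)

        def chunks_touched(layout, cells):
            return len({layout(i, j) for i, j in cells})

        n, b = 1024, 64                                     # matrix and sub-matrix size
        sub = [(i, j) for i in range(b) for j in range(b)]  # one sub-matrix read

        print(chunks_touched(lambda i, j: rowmajor_chunk(i, j, n, b * b), sub))  # -> 16
        print(chunks_touched(lambda i, j: block_chunk(i, j, n, b), sub))         # -> 1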