
    A Programmable look-up table-based interpolator with nonuniform sampling scheme

    Interpolation is a useful technique for storing complex functions in limited memory: a few sampled values are stored in a memory bank, and the function values in between are calculated by interpolation. This paper presents a programmable Look-Up Table-based interpolator that uses a reconfigurable nonuniform sampling scheme: the sampled points are not uniformly spaced, and their distribution can be reconfigured to minimize the approximation error on specific portions of the interpolated function's domain. Switching on the fly between sets of precomputed configuration parameters and using different sampling schemes allows a wide range of functions to be interpolated while saving memory and minimizing the approximation error. As a case study, the proposed interpolator was used as the core of a programmable noise generator: output signals drawn from different probability density functions were produced for testing FPGA implementations of chaotic encryption algorithms. With the proposed method, the interpolation of a specific transformation function in a Gaussian noise generator reduced memory usage to 2.71% of that required by the traditional uniform sampling scheme, while keeping the approximation error below a threshold of 0.000030518.
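    As a rough illustration of the idea (not the paper's actual design), the sketch below stores a small nonuniform look-up table for a function that varies quickly near one end of its domain and interpolates linearly between the stored samples; the example function, grid spacing and table size are illustrative assumptions.

```python
# Minimal sketch of LUT-based interpolation with a nonuniform sampling scheme:
# breakpoints are packed densely where the target function changes fastest,
# so a small table keeps the approximation error low.
import bisect
import math


def build_lut(f, xs):
    """Store only the sampled points (xs, f(xs)) instead of the full function."""
    return list(xs), [f(x) for x in xs]


def interpolate(lut, x):
    """Piecewise-linear interpolation between the two nearest stored samples."""
    xs, ys = lut
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))          # clamp to a valid segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])


# Example: an exponential inverse-CDF-style transform, f(u) = -ln(u), changes
# fastest near u = 0, so a geometric grid places most of the samples there.
f = lambda u: -math.log(u)
grid = [2.0 ** (-k) for k in range(24, -1, -1)]   # 2^-24 ... 1.0, nonuniform
lut = build_lut(f, grid)
print(interpolate(lut, 0.37), f(0.37))            # interpolated vs exact value
```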

    Stream ciphers for secure display

    In any situation where private, proprietary or highly confidential material is being dealt with, the need to consider aspects of data security has grown ever more important. It is usual to secure such data from its source, over networks and on to the intended recipient. However, data security considerations typically stop at the recipient's processor, leaving connections to a display transmitting raw data which is increasingly in a digital format and of value to an adversary. With the progression to wireless display technologies, the prominence of this vulnerability is set to rise, making the implementation of 'secure display' increasingly desirable. Secure display takes aspects of data security right to the display panel itself, potentially minimising the cost, component count and thickness of the final product. Recent developments in display technologies should help make this integration possible. However, the processing of large quantities of time-sensitive data presents a significant challenge in such resource-constrained environments. Efficient high-throughput decryption is a crucial aspect of the implementation of secure display, and one for which the widely used and well understood block cipher may not be best suited. Stream ciphers present a promising alternative, and a number of strong candidate algorithms potentially offer the hardware speed and efficiency required. In the past, similar stream ciphers have suffered from algorithmic vulnerabilities. Although these new-generation designs have done much to respond to this concern, the relatively short 80-bit key lengths of some proposed hardware candidates, combined with ever-advancing computational power, lead the thesis to identify exhaustive search of the key space as a potential attack vector. To determine the protection afforded by such short key lengths, a unique hardware key search engine for stream ciphers is developed that makes use of an appropriate data element to improve search efficiency. Simulations of this system indicate that the proposed key lengths may be insufficient for applications where data is of long-term or high value. It is suggested that, for the concept of secure display to be accepted, a longer key length should be used.
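    To make the exhaustive-search concern concrete, the back-of-the-envelope sketch below estimates how long it would take to try every key in an 80-bit versus a 128-bit keyspace at a few assumed search rates; the rates are illustrative assumptions, not figures from the thesis.

```python
# Rough worst-case brute-force times for 80-bit vs 128-bit keys.
SECONDS_PER_YEAR = 365.25 * 24 * 3600


def years_to_exhaust(key_bits, keys_per_second):
    """Worst-case time to try every key, expressed in years."""
    return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR


# Illustrative search rates: roughly one hardware core, a large FPGA array,
# and a very large cluster (assumptions, not measured figures).
for rate in (1e9, 1e12, 1e15):
    print(f"{rate:.0e} keys/s -> 80-bit: {years_to_exhaust(80, rate):.2e} years, "
          f"128-bit: {years_to_exhaust(128, rate):.2e} years")
```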

    Advanced cryptographic system: design, architecture and FPGA implementation

    PhD thesis. The field programmable gate array (FPGA) is a powerful technology, and since its introduction broad prospects have opened up for engineers to creatively design and implement complete systems in various fields. One such area, with a long history in information and network security, is cryptography, which is considered in this thesis. The challenge for engineers is to design secure cryptographic systems that work efficiently on different platforms at the required levels of security. In addition, cryptographic functionalities have to be implemented with acceptable degrees of complexity while demanding lower power consumption. The present work is devoted to the design of an efficient block cipher that meets contemporary security requirements, and to the implementation of the proposed design on a configurable hardware platform. The cipher has been designed according to Shannon's principles and modern cryptographic theorems. It is an iterated symmetric-key block cipher based on the substitution-permutation network and the number theoretic transform, with variable key length, block size and word length. These parameters can remain undisclosed when determined by the system, making cryptanalysis almost impossible. The aim is to design a more secure, reliable and flexible system that can run as a ratified standard, with reasonable computational complexity, for a sufficient service time. Analyses are carried out on the transforms concerned, which belong to the number theoretic transform family, to evaluate their diffusion power, and the results confirm good performance in this respect, mostly with a minimum of 50%. The new Mersenne number transform and Fermat number transform were included in the design because their characteristics meet the basic requirements of modern cryptographic systems. A new 7×7 substitution box (S-box) is designed and its non-linear properties are evaluated, giving a maximum difference propagation probability of 2^-6 and a maximum input-output correlation of 2^-2.678. In addition, these parameters are calculated for all S-boxes belonging to the previous and current standard algorithms. Moreover, three extra S-boxes are derived from the new S-box and another three from the current standard, preserving the same non-linear properties by reordering the output elements. The robustness of the proposed cipher against differential and linear cryptanalysis is then considered, and it is proven that the algorithm is secure against such well-known attacks from round three onwards, regardless of block or key length. A number of test vectors are run to verify the correctness of the algorithm's implementation, and all results were promising. The tests included the known answer test, the multi-block message test and the Monte Carlo test. Finally, efficient hardware architectures for the proposed cipher have been designed and implemented using the FPGA System Generator platform. The implementations are run on the target device, a Xilinx Virtex 6 (XC6VLX130T-2FF484). Using a parallel loop-unrolling architecture, a high throughput of 44.9 Gbit/s is achieved with a power consumption of 1.83 W and 8030 slices for the encryption module with key and block lengths of 16×7 bits. A variety of outcomes is obtained when the cipher is implemented on different FPGA devices and for different block and key lengths. Ministry of Higher Education and Scientific Research in Iraq.
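    The two S-box figures quoted above are standard metrics derived from the difference distribution table (DDT) and the linear approximation table (LAT). The sketch below shows how such metrics are computed in general, using the well-known 4-bit PRESENT S-box as a stand-in, since the thesis's 7×7 S-box table is not reproduced here.

```python
# Compute the maximum difference propagation probability (from the DDT) and
# the maximum input-output correlation (from the LAT) of an S-box.
import math

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT
N = len(SBOX)                         # 2**n entries for an n-bit S-box


def max_diff_prob(sbox):
    """Largest DDT entry (excluding the trivial zero input difference) over 2**n."""
    best = 0
    for dx in range(1, N):
        counts = [0] * N
        for x in range(N):
            counts[sbox[x] ^ sbox[x ^ dx]] += 1
        best = max(best, max(counts))
    return best / N


def max_correlation(sbox):
    """Largest absolute correlation of any nontrivial linear approximation."""
    best = 0
    for a in range(1, N):             # input mask
        for b in range(1, N):         # output mask
            agree = sum(1 for x in range(N)
                        if bin(a & x).count("1") % 2 == bin(b & sbox[x]).count("1") % 2)
            best = max(best, abs(2 * agree - N))
    return best / N


print("max diff. propagation prob. = 2^%.3f" % math.log2(max_diff_prob(SBOX)))
print("max input-output correlation = 2^%.3f" % math.log2(max_correlation(SBOX)))
```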

    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise typical of dedicated hardware. Nevertheless, every application field places its own requirements on the computational architecture used. This paper provides an overview of the topics FPGAs have been used for in the last 15 years of research, and of why they have been chosen over other processing units such as CPUs.

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports; 7551)

    ReCoSoC is intended to be an annual meeting for presenting and discussing accumulated expertise as well as state-of-the-art research on SoC-related topics, through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multi-billion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Profiling side-channel attacks on cryptographic algorithms

    Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it a smart card, microprocessor, dedicated hardware or personal computer. Attacks based on power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, a device identical to the one under attack, which allows them to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used after profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret-key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters affects the efficiency of secret-key recovery in practice, as well as the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis, as an alternative to template or stochastic methods, is also conducted, with support vector machines, logistic regression and neural networks examined from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
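    As a rough illustration of the profiling approach discussed above (not the thesis's experimental setup), the sketch below builds one Gaussian template per Hamming-weight class on simulated traces and then ranks key guesses by accumulated log-likelihood; the leakage model, trace length and noise level are illustrative assumptions so the example runs end to end.

```python
# Minimal template attack on a single key byte, with simulated leakage of
# HW(plaintext ^ key) standing in for measured power traces.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
HW = np.array([bin(v).count("1") for v in range(256)])


def simulate(key, n, samples=5, noise=1.0):
    """Hypothetical device: a few trace points leak HW(pt ^ key) plus Gaussian noise."""
    pts = rng.integers(0, 256, n)
    leak = HW[pts ^ key][:, None] * np.linspace(0.5, 1.5, samples)
    return pts, leak + rng.normal(0, noise, (n, samples))


def build_templates(traces, pts, known_key):
    """Profiling phase: a (mean, covariance) template per Hamming-weight class."""
    labels = HW[pts ^ known_key]
    return {hw: (traces[labels == hw].mean(axis=0),
                 np.cov(traces[labels == hw], rowvar=False))
            for hw in range(9)}


def attack(templates, traces, pts):
    """Attack phase: accumulate log-likelihoods over the traces for every key guess."""
    scores = np.zeros(256)
    for k in range(256):
        for trace, hw in zip(traces, HW[pts ^ k]):
            mean, cov = templates[hw]
            scores[k] += multivariate_normal.logpdf(trace, mean, cov, allow_singular=True)
    return int(np.argmax(scores))


profile_pts, profile_traces = simulate(key=0x3C, n=20000)   # profiling device, key known
templates = build_templates(profile_traces, profile_pts, known_key=0x3C)
target_pts, target_traces = simulate(key=0xA7, n=50)        # target device, key unknown
print("recovered key:", hex(attack(templates, target_traces, target_pts)))
```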

    Real-Time Prefetching on Shared-Memory Multi-Core Systems

    In recent years, there has been a growing trend towards using multi-core processors in real-time systems to cope with the rising computation requirements of real-time tasks. Coupled with this, the rising memory requirements of these tasks push demand beyond what can be provided by small, private on-chip caches, requiring the use of larger, slower off-chip memories such as DRAM. Due to the cost, power requirements and complexity of these memories, they are typically shared between all of the tasks within the system. In order for the execution time of these tasks to be bounded, the response time of the memory and the interference from other tasks also need to be bounded. While there is a great deal of current research on bounding this interference, one popular method is to partition the available memory bandwidth between the processors in the system. As the number of processors increases, however, so does the worst-case blocking, and these worst-case blocking times quickly become prohibitive. It is difficult to further optimise the arbitration scheme; instead, this scaling problem needs to be approached from another angle. Prefetching has previously been shown to improve the execution time of tasks by speculatively issuing memory accesses ahead of time for items which may be useful in the near future, although prefetchers are typically not used in real-time systems due to their unpredictable nature. This work therefore presents a framework by which a prefetcher can be safely used alongside a composable memory arbiter, a predictable prefetching scheme, and, finally, a method by which this predictable prefetcher can be used to improve the worst-case execution time of a running task.
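    The scaling problem described above can be seen with a simple time-division-multiplexed (TDM) arbiter model, in which a request that just misses its slot must wait for every other processor's slot before being served; the slot and service times below are illustrative assumptions, not figures from the thesis.

```python
# Worst-case memory latency under a simple TDM arbiter, growing with core count.
def worst_case_latency(processors, slot_cycles, service_cycles):
    """Worst-case cycles from issuing a request until it completes."""
    blocking = (processors - 1) * slot_cycles   # all other cores' slots pass first
    return blocking + service_cycles


for p in (2, 4, 8, 16):
    print(f"{p:2d} cores: "
          f"{worst_case_latency(p, slot_cycles=40, service_cycles=40)} cycles")
```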