
    There and Back Again

    We present a programming pattern where a recursive function defined over a data structure traverses another data structure at return time. The idea is that the recursive calls get us "there" by traversing the first data structure and the returns get us "back again" while traversing the second data structure. We name this programming pattern of traversing one data structure at call time and another data structure at return time "There And Back Again" (TABA). The TABA pattern directly applies to computing symbolic convolutions and to multiplying polynomials. It also blends well with other programming patterns such as dynamic programming and traversing a list at double speed. We illustrate TABA and dynamic programming with Catalan numbers. We illustrate TABA and traversing a list at double speed with palindromes, and we obtain a novel solution to this traditional exercise. Finally, through a variety of tree traversals, we show how to apply TABA to data structures other than lists. A TABA-based function written in direct style makes full use of an ALGOL-like control stack and needs no heap allocation. Conversely, in a TABA-based function written in continuation-passing style and recursively defined over a data structure (traversed at call time), the continuation acts as an iterator over a second data structure (traversed at return time). In general, the TABA pattern saves one from accumulating intermediate data structures at call time.
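    To make the pattern concrete, here is a minimal Python sketch of the palindrome example, written under our own assumptions (list indexing rather than the paper's linked representation, and not code from the paper itself): the recursion advances a fast cursor at double speed at call time, and the comparisons between the front half and the back half happen at return time.

```python
def is_palindrome(xs):
    """There And Back Again: the recursion walks a fast cursor two steps per call
    (getting us 'there'); each return then compares one element from the front
    with one handed back from the rear (getting us 'back again')."""
    def walk(slow, fast):
        if fast == len(xs):            # even length: the middle has been reached
            return slow, True
        if fast == len(xs) - 1:        # odd length: skip the middle element
            return slow + 1, True
        back, ok = walk(slow + 1, fast + 2)                 # call time: 'there'
        # return time: compare the front element (slow) with the back element (back),
        # then move the back cursor one step further toward the end
        return back + 1, ok and xs[slow] == xs[back]
    _, result = walk(0, 0)
    return result

assert is_palindrome(list("abba")) and is_palindrome(list("aba"))
assert not is_palindrome(list("abca"))
```

    Note that no reversed copy of the list is built: the control stack alone carries the front-half positions back, which is exactly the "no heap allocation" property claimed for the direct-style version.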

    On the Development of Novel Encryption Methods for Conventional and Biometric Images

    Information security refers to the practice of protecting information from unauthorized access, use, disclosure, disruption and modification. Governments, the military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic media and transmitted across networks to other computers. Encryption addresses the need for confidentiality of information during storage and transmission. The widespread use of multimedia technology and the increasing transmission capacity of networks allow us to acquire information directly through images, and hence securing image data has become essential. Moreover, in recent years biometrics has been gaining popularity for security purposes in many applications. However, during communication and transmission over insecure network channels it runs the risk of being hacked, modified and reused. Hence, there is a strong need to protect biometric images during communication and transmission. In this thesis, attempts have been made to encrypt images efficiently and to enhance the security of biometric images during transmission. In the first contribution, three different key matrix generation methods (invertible, involutory, and permutation key matrix generation) have been proposed. The invertible and involutory key matrix generation methods solve the key matrix inversion problem in the Hill cipher, while the permutation key matrix generation method increases the Hill system's security. The conventional Hill cipher technique fails to encrypt images properly if the image consists of large areas covered with the same colour or gray level; it does not hide all features of the image, revealing patterns in the plaintext. Moreover, it can be easily broken with a known-plaintext attack, revealing weak security. To address these issues, two different techniques are proposed: the advanced Hill cipher algorithm and the H-S-X cryptosystem, which encrypt such images properly. Security analysis of both techniques shows improved encryption and decryption of images. In addition, the H-S-X cryptosystem introduces more diffusion and confusion to resist cryptanalysis. FPGA implementations of both proposed techniques have been modeled to show their effectiveness. An extended Hill cipher algorithm based on XOR and zigzag operations is designed to reduce both encryption and decryption time. This technique not only reduces the encryption and decryption time but also ensures no loss of data during the encryption and decryption process as compared to other techniques, and offers more resistance to intruder attacks. A hybrid cryptosystem combining the extended Hill cipher technique and the RSA algorithm has been implemented to solve the key distribution problem and to enhance security with reduced encryption and decryption time. Two distinct approaches for image encryption are proposed using chaos-based DNA coding along with either shifting and scrambling or a poker shuffle to create grand disorder between the pixels of the images. In the first approach, the results obtained from the chaos-based DNA coding scheme are shifted and scrambled to provide encryption.
In the second approach, the results obtained from chaos-based DNA coding encryption are passed through a poker shuffle operation to generate the final result. Simulation results suggest superior performance for encryption and decryption of images, and the results obtained have been compared and discussed. Later on, an FPGA implementation of the proposed cryptosystem has been performed. In another contribution, a modified Hill cipher is proposed which combines three techniques and takes advantage of all of them. To meet the demands of authenticity, integrity, and non-repudiation along with confidentiality, a novel hybrid method has been implemented. This method employs the proposed modified Hill cipher to provide confidentiality, while the produced message digest is encrypted with the private key of the RSA algorithm to achieve authenticity, integrity, and non-repudiation. To enhance the security of images, a biometric cryptosystem approach that combines cryptography and biometrics has been proposed. Under this approach, the image is encrypted with the help of a fingerprint and a password: a key generated from the combination of fingerprint and password is used for image encryption. This mechanism is seen to enhance the security of biometric images during transmission. Each proposed algorithm is studied separately, and simulation experiments are conducted to evaluate its performance. Security analyses are performed and the performance is compared with other competing schemes.
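    As background for the Hill cipher variants discussed above, the following Python sketch shows the classical Hill cipher on byte-valued pixels (modulo 256). The key matrix and helper names are illustrative assumptions, not the thesis's proposed generation schemes; the sketch also shows why a uniform image region encrypts to repeating blocks, the weakness the advanced Hill cipher and H-S-X cryptosystem are designed to remove.

```python
MOD = 256
KEY = [[3, 3], [2, 5]]            # det = 9 is odd, so the matrix is invertible mod 256
KEY_INV = [[29, 85], [142, 171]]  # KEY x KEY_INV = identity (mod 256)

def hill_block(matrix, block):
    """Multiply one 2-pixel block by a 2x2 matrix modulo 256."""
    return [(matrix[r][0] * block[0] + matrix[r][1] * block[1]) % MOD for r in range(2)]

def hill(pixels, matrix):
    """Apply the cipher block by block to a flat, even-length list of pixel values."""
    out = []
    for i in range(0, len(pixels), 2):
        out.extend(hill_block(matrix, pixels[i:i + 2]))
    return out

flat_region = [200] * 8                       # a uniform gray area of an image
cipher = hill(flat_region, KEY)
print(cipher)                                 # identical plaintext blocks -> identical ciphertext blocks
assert hill(cipher, KEY_INV) == flat_region   # lossless decryption with the inverse key
```

    The repeated ciphertext blocks visible in the printout are the pattern leakage mentioned in the abstract; an involutory key matrix would additionally make KEY its own inverse, removing the matrix-inversion step at decryption.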

    Adaptive and hybrid schemes for efficient parallel squaring and cubing units

    Squaring (X²) and cubing (X³) units are special cases of multiplication used in many applications, such as image compression, equalization, decoding and demodulation, 3D graphics, scientific computing, artificial neural networks, logarithmic number systems, and multimedia applications. They can also be an efficient way to compute other basic functions. Therefore, improving their performance is a goal for many researchers. This dissertation discusses modifications to algorithms for parallel squaring and cubing units in both signed and unsigned representations. After that, a truncation technique is applied to improve their performance. Each unit is modeled, and its area and delay are estimated using a linear evaluation model. A C program was written to generate Hardware Description Language files for each unit. These units are simulated and verified, and their area, delay, and power consumption are calculated and compared with those of previous approaches for both Virtex 5 Xilinx FPGA and IBM 65nm ASIC technologies.
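    The following Python sketch is an illustration under our own assumptions (not the dissertation's hardware) of the two ideas behind such units: the partial-product array of X·X can be folded because every cross term appears twice, and a truncated unit simply drops the low-weight columns of that array, trading accuracy for area and delay.

```python
def square_partial_products(x, bits=8):
    """Return (weight, bit) partial products of x*x in the folded form used by squarers."""
    b = [(x >> i) & 1 for i in range(bits)]
    pps = [(2 * i, b[i]) for i in range(bits)]        # diagonal terms b_i*b_i at weight 2i
    for i in range(bits):
        for j in range(i + 1, bits):
            pps.append((i + j + 1, b[i] & b[j]))      # folded cross terms: 2*b_i*b_j lands at weight i+j+1
    return pps

def accumulate(pps, drop_below=0):
    """Sum the partial products; dropping columns below 'drop_below' models a truncated unit."""
    return sum(bit << weight for weight, bit in pps if weight >= drop_below)

x = 13
assert accumulate(square_partial_products(x)) == x * x        # exact folded squarer: 169
print(accumulate(square_partial_products(x), drop_below=4))   # truncated squarer: 160, smaller array
```

    The folded array holds roughly half the terms of a generic multiplier's array, which is where the area and delay advantage of a dedicated squaring unit comes from; truncation then removes further columns at a bounded cost in accuracy.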

    Null convention logic circuits for asynchronous computer architecture

    For most of its history, computer architecture has been able to benefit from rapid scaling in semiconductor technology, resulting in continuous improvements to CPU design. During that period, synchronous logic has dominated because of its inherent ease of design and abundant tools. However, with the scaling of semiconductor processes into deep sub-micron and then to nano-scale dimensions, computer architecture is hitting a number of roadblocks such as high power and increased process variability. Asynchronous techniques can potentially offer many advantages compared to conventional synchronous design, including average-case rather than worst-case performance, robustness in the face of process and operating-point variability, and the ready availability of high-performance, fine-grained pipeline architectures. Of the many alternative approaches to asynchronous design, Null Convention Logic (NCL) has the advantage that its quasi-delay-insensitive behavior makes it relatively easy to set up complex circuits without the need for exhaustive timing analysis. This thesis examines the characteristics of an NCL-based asynchronous RISC-V CPU and analyses the problems with applying NCL to CPU design. While a number of university and industry groups have previously developed small 8-bit microprocessor architectures using NCL techniques, it is still unclear whether these offer any real advantages over conventional synchronous design. A key objective of this work has been to analyse the impact of larger word widths and more complex architectures on NCL CPU implementations. The research commenced by re-evaluating existing techniques for implementing NCL on programmable devices such as FPGAs. The little work that has been undertaken previously on FPGA implementations of asynchronous logic has been inconclusive and seems to indicate that asynchronous systems cannot be easily implemented in these devices. However, most of this work related to an alternative technique called bundled data, which is not well suited to FPGA implementation because of the difficulty in controlling and matching delays in a 'bundle' of signals. On the other hand, this thesis clearly shows that such applications are not only possible with NCL, but that there are some distinct advantages in being able to prototype complex asynchronous systems in a field-programmable technology such as the FPGA. A large part of the value of NCL derives from its architectural-level behavior, inherent pipelining, and optimization opportunities such as the merging of register and combinational logic functions. In this work, a number of NCL multiplier architectures have been analyzed to reveal the performance trade-offs between various non-pipelined, 1D and 2D organizations. Two-dimensional pipelining can easily be applied to regular architectures such as array multipliers in a way that is both high-performance and area-efficient. It was found that 2D pipelining for small networks such as multipliers is around 260% faster than the equivalent non-pipelined design. However, the design uses 265% more transistors, so the methodology is mainly of benefit where performance is strongly favored over area. A pipelined 32-bit x 32-bit signed Baugh-Wooley multiplier with Wallace-tree carry-save adders (CSA), which is representative of a real design used for CPUs and DSPs, was used to further explore this concept, as it is faster and has fewer pipeline stages than the normal array multiplier using ripple-carry adders (RCA).
It was found that 1D pipelining with ripple-carry chains is an efficient implementation option but becomes less so for larger multipliers, due to the completion logic, whose delay depends largely on the number of bits involved in the completion network. The average-case performance of ripple-carry adders was explored using random input vectors, and it was observed that it offers little advantage on the smaller multiplier blocks, but this particular timing characteristic of asynchronous design styles becomes increasingly important as word size grows. Finally, this research has resulted in the development of the first 32-bit asynchronous RISC-V CPU core. Called the Redback RISC, the architecture is a structure of pipeline rings composed of computational oscillations linked with flow-completeness relationships. It has been written using NELL, a commercial description/synthesis tool that outputs standard Verilog. The Redback has been analysed and compared to two approximately equivalent industry-standard 32-bit synchronous RISC-V cores (PicoRV32 and Rocket) that are already fabricated and used in industry. While the NCL implementation is larger than both commercial cores, it has similar performance and lower power compared to the PicoRV32. The implementation results were also compared against an existing NCL design tool flow (UNCLE), which showed how much the results of these implementation strategies differ. The Redback RISC achieved a similar level of throughput, 43% better power and 34% better energy compared to one of the synchronous cores under the same benchmark test and test conditions, such as input supply voltage. However, it was shown that area is the biggest drawback for NCL CPU design: the core is roughly 2.5× larger than the synchronous designs. On the other hand, its area is still 2.9× smaller than previous designs using the UNCLE tools. The area penalty is largely due to the unavoidable translation into a dual-rail topology when using the standard NCL cell library.
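    For readers unfamiliar with NCL, the following Python sketch is a behavioural illustration (our own assumptions, not material from the thesis) of the two ingredients the abstract relies on: dual-rail encoding with an explicit NULL state, and threshold gates whose hysteresis holds DATA until a complete NULL wavefront arrives. The completion detector corresponds to the completion logic discussed for the multipliers.

```python
# Dual-rail encoding of one bit: (0,0) = NULL, (1,0) = DATA0, (0,1) = DATA1; (1,1) is illegal.
NULL, DATA0, DATA1 = (0, 0), (1, 0), (0, 1)

class THmn:
    """Threshold gate with hysteresis: the output asserts once at least m inputs are 1
    and only deasserts when every input has returned to 0 (the NULL wavefront)."""
    def __init__(self, m):
        self.m = m
        self.out = 0
    def eval(self, *inputs):
        if sum(inputs) >= self.m:
            self.out = 1
        elif sum(inputs) == 0:
            self.out = 0
        return self.out            # otherwise the previous value is held

def data_complete(*dual_rail_signals):
    """A DATA wavefront is complete when every dual-rail signal carries DATA0 or DATA1."""
    return all(sum(sig) == 1 for sig in dual_rail_signals)

g = THmn(2)                        # TH23-style behaviour on three inputs
print(g.eval(1, 1, 0))             # 1: threshold reached
print(g.eval(1, 0, 0))             # 1: held high until a full NULL wavefront
print(g.eval(0, 0, 0))             # 0: reset by NULL
print(data_complete(DATA1, DATA0)) # True: the stage may fire
print(data_complete(DATA1, NULL))  # False: still waiting, so no timing assumption is needed
```

    The doubling of every signal into two rails in this encoding is also why the abstract reports a large area penalty for the dual-rail topology.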

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming-language topics. The logic programming paradigm and Prolog are thus the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology for both algorithms and procedures, within an overall philosophizing declarative stance. Not only this, but the dimension of the Fifth Generation Computer System related to strong AI, wherein Prolog took a major role, and its historical frame in the crucial dialectic between procedural and declarative paradigms, and between structuralist and empiricist biases, serves, in exemplary form, to treat head-on the philosophy of logic, language and science in the contemporary age as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes, of which we exemplify some here: the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, and, along parallel lines, the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon, and others on computability, thoroughly studied in detail, permit us to interpret the evolving realm of programming languages. The line from the lambda calculus to the Algol family, the declarative and procedural split with the C language and Prolog, and the ensuing branching, explosion and further delimitation of programming languages are thereupon inspected so as to relate them to the proper syntax, semantics and philosophical élan of logic programming and Prolog.

    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise associated with hardware. Nevertheless, every application field imposes special requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research and why they have been chosen over other processing units such as CPUs.

    On the Improving of Approximate Computing Quality Assurance

    Approximate computing (AC) has been predominantly recommended for implementation in error-tolerant applications, as it offers reduced resource usage, e.g., area and power, in exchange for a trade-off in output quality. However, AC has not yet been adopted in commercial designs, as it still falls short of providing good enough quality. Thus, continued research in the field of improving the quality of AC designs is indispensable. In this direction, a recent study exploited the use of machine learning (ML) to improve output quality. Nonetheless, the idea of quality assurance in AC designs could be improved in many aspects. In the work we present in this thesis, we propose a few practical methods to improve an ML-based quality assurance methodology, which consists of an ML model that selects the most suitable design from a library of AC circuits. For instance, we extend the library of AC designs used for the ML-based approach with larger data-path circuits. Larger designs, however, result in an exponential growth of complexity. Thus, we propose the use of data pre-processing to reduce this hurdle by prioritizing designs based on their physical properties. Another direction for improving AC circuit designs in general, and the ML-based model in particular, is design space exploration (DSE). We therefore propose a novel DSE that drastically reduces the design space based on the targets for area, latency and power of the AC circuit. Moreover, even with a narrowed design space, the number of AC designs to be assessed for their quality could be enormous. Thus, as part of this thesis, we propose a DSE that uses detailed mathematical modeling of designs to assess their quality. In another effort to improve quality assurance for AC designs, we introduce a highly reliable model with minimal overhead. This is achieved by using redundant AC modules to form an approximate quadruple modular redundancy (AQMR) design. The proposed AQMR is superior to exact triple modular redundancy (TMR), offering better reliability on top of the resource savings resulting from the implementation of AC.
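    To illustrate the kind of quality assessment such a methodology automates, here is a toy Python sketch that measures two common AC quality metrics, error rate and mean error distance, for an approximate adder against the exact result. The lower-part-OR adder used here is a generic textbook AC circuit chosen for illustration, not one of the designs in the thesis library.

```python
import random

def approx_add(a, b, k=4):
    """Lower-part-OR adder: exact addition on the upper bits, bitwise OR on the k low bits."""
    mask = (1 << k) - 1
    return (((a >> k) + (b >> k)) << k) | ((a & mask) | (b & mask))

def quality(trials=10_000, bits=8):
    """Error rate and mean error distance of approx_add over uniform random inputs."""
    errors = []
    for _ in range(trials):
        a, b = random.randrange(1 << bits), random.randrange(1 << bits)
        errors.append((a + b) - approx_add(a, b))    # non-negative: this adder only underestimates
    error_rate = sum(e > 0 for e in errors) / trials
    mean_error_distance = sum(errors) / trials
    return error_rate, mean_error_distance

print(quality())   # roughly (0.68, 3.75) for k = 4: the quality traded for a smaller carry chain
```

    A quality-assurance model of the kind described above would use metrics like these, together with area, latency and power estimates, to pick the most suitable circuit from the AC library for a given accuracy target.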

    Birth, growth and computation of pi to ten trillion digits


    Operators in the lexicon: on the negative logic of natural language

    LEI Universiteit Leiden, Theoretical and Experimental Linguistics