12 research outputs found

    Unified field multiplier for GF(p) and GF(2^n) with novel digit encoding

    Get PDF
    In recent years, there has been an increase in demand for unified field multipliers for Elliptic Curve Cryptography in the electronics industry, because they give customers the flexibility to choose between Prime (GF(p)) and Binary (GF(2^n)) Galois Fields. The ability to carry out arithmetic over both GF(p) and GF(2^n) in the same hardware also makes it possible to perform any cryptographic operation that requires both fields, so the unified field multiplier is relatively future-proof compared with multipliers that only perform arithmetic over a single chosen field. The security provided by the architecture is also very important: the longer the key length, the more susceptible the system is to differential power attacks due to the increased amount of data leakage. It is therefore beneficial to design scalable hardware, so that more data can be processed per cycle. A further advantage of a multiplier that can handle long word lengths is improved delay performance, because fewer cycles are needed; this is significant because typical elliptic curve cryptography involves key sizes of 160 bits. A novel unified field radix-4 multiplier using Montgomery Multiplication for GF(p) and GF(2^n) has been proposed. The design makes use of the unexploited state in the number representation for operation in GF(2^n), where all carries are suppressed. Addition is carried out using a modified (4:2) redundant adder that accommodates the extra 1* state. The proposed adder and partial product generator are capable of radix-4 operation, which reduces the number of computation cycles required, and the proposed adder is more scalable than existing designs.
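    As a rough illustration of the two modes such a unified multiplier must support, the following minimal sketch models bit-serial Montgomery multiplication over GF(p) and its carry-suppressed counterpart over GF(2^n). It is a word-level software model only, not the radix-4 redundant-adder hardware proposed in the thesis; the function names and the bit-serial (radix-2) formulation are illustrative assumptions.

```python
# Word-level software model of the two multiplication modes a unified
# GF(p)/GF(2^n) multiplier combines. Bit-serial (radix-2) for clarity;
# the proposed hardware operates in radix-4 with redundant addition.

def mont_mul_gfp(a, b, p, n_bits):
    """Montgomery multiplication over GF(p): returns a*b*2^(-n_bits) mod p."""
    s = 0
    for i in range(n_bits):
        s += ((a >> i) & 1) * b   # add the i-th partial product
        s += (s & 1) * p          # add p if s is odd, making s even
        s >>= 1                   # exact division by 2
    return s if s < p else s - p

def mont_mul_gf2n(a, b, f, n_bits):
    """Same dataflow over GF(2^n): all carries suppressed, '+' becomes XOR.

    f is the field polynomial as an integer, e.g. 0b1011 for x^3 + x + 1.
    Returns a*b*x^(-n_bits) mod f.
    """
    s = 0
    for i in range(n_bits):
        s ^= ((a >> i) & 1) * b   # carry-free partial product
        s ^= (s & 1) * f          # cancel the low bit with f
        s >>= 1
    return s

# Example: in GF(7) with R = 2^3, mont_mul_gfp(3, 4, 7, 3) == (3*4*8^-1) % 7
assert mont_mul_gfp(3, 4, 7, 3) == 5
```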

    Digital Control Techniques for DC/DC power converters

    Get PDF
    The newest generation of processors for embedded applications requires high-performance power supplies, which are now based on high-speed switching DC/DC converters. This constraint makes it very difficult to implement a real-time control system on low-performance controllers, which is why FPGAs are becoming increasingly popular for building real-time control systems. The main drawback of using FPGAs, however, is the need for a good knowledge of the internal architecture and the associated languages for application development. The main motivation of this work is therefore to develop a generic methodology for building control systems on FPGAs, taking as an example the digital control techniques used in the DC/DC converter domain. The work is divided into three parts. The first part introduces the linear models that describe the behaviour of Buck converters and the design of regulated control systems for these power supplies. The second part develops the theoretical material for a fixed-point architecture for linear controllers. Finally, the third part specifies and designs a reusable architecture inside the FPGA for the development of digital control systems
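    To make the fixed-point controller idea concrete, here is a minimal software sketch of a discrete linear compensator update in fixed-point arithmetic, of the kind one might prototype before mapping it onto an FPGA. The Q4.12 format, the coefficients, and the saturation width are illustrative assumptions, not values from the thesis.

```python
# Minimal fixed-point model of a discrete linear (PID-form) voltage-mode
# compensator for a Buck converter. All constants are illustrative.

FRAC_BITS = 12                      # Q4.12 fixed-point format (assumed)
SCALE = 1 << FRAC_BITS

def to_fix(x):
    """Convert a real value to Q4.12."""
    return int(round(x * SCALE))

def sat(x, bits=16):
    """Saturate to a signed 'bits'-wide register, as the hardware would."""
    hi = (1 << (bits - 1)) - 1
    lo = -(1 << (bits - 1))
    return max(lo, min(hi, x))

# Hypothetical coefficients from a discretized compensator design
B0, B1, B2 = to_fix(1.8), to_fix(-3.2), to_fix(1.5)

def make_controller():
    e1 = e2 = u1 = 0                # delayed error and output registers
    def step(e):
        nonlocal e1, e2, u1
        # u[k] = u[k-1] + b0*e[k] + b1*e[k-1] + b2*e[k-2]
        acc = (B0 * e + B1 * e1 + B2 * e2) >> FRAC_BITS
        u = sat(u1 + acc)
        e2, e1, u1 = e1, e, u
        return u                    # duty-cycle command in Q4.12
    return step

ctrl = make_controller()
duty = ctrl(to_fix(0.1))            # one update for a 0.1 V error sample
```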

    Dynamically and partially reconfigurable hardware architectures for high performance microarray bioinformatics data analysis

    Get PDF
    The field of Bioinformatics and Computational Biology (BCB) is a multidisciplinary field that has emerged due to the computational demands of current state-of-the-art biotechnology. BCB deals with the storage, organization, retrieval, and analysis of biological datasets, which have grown in size and complexity in recent years, especially after the completion of the human genome project. The advent of Microarray technology in the 1990s introduced the concept of the high-throughput experiment, a biotechnology that measures the gene expression profiles of thousands of genes simultaneously. As such, Microarrays require high computational power to extract the biological relevance from their high-dimensional data. Current general purpose processors (GPPs) have been unable to keep up with the increasing computational demands of Microarrays and have reached a limit in terms of clock speed. Consequently, Field Programmable Gate Arrays (FPGAs) have been proposed as a viable low-power solution to overcome the computational limitations of GPPs and other methods. The research presented in this thesis harnesses current state-of-the-art FPGAs and tools to accelerate some of the most widely used data mining methods for the analysis of Microarray data, in an effort to investigate the viability of the technology as an efficient, low-power, and economic solution. Three widely used methods were selected for the FPGA implementations: the unsupervised K-means clustering algorithm, and two supervised classification methods, namely the K-Nearest Neighbour (K-NN) and Support Vector Machine (SVM) classifiers. These methods are thought to benefit from parallel implementation. This thesis presents detailed designs and implementations of these three BCB applications on FPGA, captured in Verilog HDL, whose performance is compared with that of equivalent implementations running on GPPs. In addition to acceleration, the benefits of the dynamic partial reconfiguration (DPR) capability of modern Xilinx FPGAs are investigated with reference to the aforementioned data mining methods.
    Implementing K-means clustering on FPGA using a non-DPR design flow outperformed equivalent GPP and GPU implementations in terms of speed-up by two orders and one order of magnitude, respectively, while being eight times more power efficient than the GPP and four times more than the GPU implementation. As for energy efficiency, the FPGA implementation was 615 times more energy efficient than GPPs, and 31 times more than GPUs. Moreover, the FPGA implementation outperformed the GPP and GPU implementations in terms of speed-up as the dimensionality of the Microarray data increased. Additionally, the DPR implementations of K-means clustering showed speed-ups in partial reconfiguration time of ~5x and ~17x over full-chip reconfiguration for the single-core and eight-core implementations, respectively. Two architectures of the K-NN classifier were implemented on FPGA, namely A1 and A2. The K-NN implementation based on the A1 architecture achieved a speed-up of ~76x over an equivalent GPP implementation, whereas the A2 architecture achieved ~68x. Furthermore, the FPGA implementation outperformed the equivalent GPP implementation as the dimensionality of the data increased. In addition, the DPR implementations of the K-NN classifier achieved speed-ups in reconfiguration time of ~4x to ~10x over full-chip reconfiguration when reconfiguring a portion of the classifier or the complete classifier. Similarly, two architectures of the SVM classifier were implemented on FPGA, the first outperforming an equivalent GPP implementation by ~61x and the second by ~49x. As for the DPR implementation of the SVM classifier, it showed a speed-up of ~8x in reconfiguration time when reconfiguring the complete core or when exchanging it with a K-NN core to form a multi-classifier. These implementations clearly show FPGAs to be an efficacious, efficient, and economic solution for bioinformatics Microarray data analysis
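    As a point of reference for what the hardware cores parallelize, the following is a minimal software sketch of the K-means kernel: distance evaluation, assignment, and centroid update. The dataset shape, distance metric, and parameters are illustrative assumptions only, not the thesis configuration.

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Plain K-means; the distance/assignment step below is the part the
    FPGA implementations accelerate by evaluating dimensions in parallel."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        # distance of every point to every centroid (the dominant cost)
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                 # recompute centroids
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# toy stand-in for a Microarray expression matrix (samples x genes)
X = np.random.default_rng(1).normal(size=(100, 8))
labels, centroids = kmeans(X, k=3)
```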

    The Fifth NASA Symposium on VLSI Design

    Get PDF
    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design

    Rapid Digital Architecture Design of Computationally Complex Algorithms

    Get PDF
    Traditional digital design techniques can hardly keep up with the rising abundance of programmable circuitry found on recent Field-Programmable Gate Arrays. The novel Rapid Data Type-Agnostic Digital Design Methodology (RDAM) therefore elevates the design perspective of digital design engineers from the register-transfer level to the algorithmic level. It is founded on the capabilities of High-Level Synthesis tools. By consistently working with data type-agnostic source code, the RDAM brings significant simplifications to the fixed-point conversion of algorithms and the design of complex-valued architectures. Signal processing applications from the field of Compressed Sensing illustrate the efficacy of the RDAM in the context of multi-user wireless communications. For instance, a complex-valued digital architecture of Orthogonal Matching Pursuit with rank-1 updating has been successfully implemented and tested
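    For context, here is an algorithmic sketch of Orthogonal Matching Pursuit on a complex-valued toy problem, echoing the complex arithmetic the RDAM targets. For brevity it re-solves the least-squares problem from scratch each iteration rather than using the rank-1 updating scheme of the implemented architecture; all shapes and values are illustrative assumptions.

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover a 'sparsity'-sparse x from y ≈ A @ x by greedy selection."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        # least-squares fit on the enlarged support (rank-1 updating
        # in the hardware; recomputed from scratch here for clarity)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# complex-valued toy problem: 3-sparse signal, 32 x 64 sensing matrix
rng = np.random.default_rng(0)
A = (rng.normal(size=(32, 64)) + 1j * rng.normal(size=(32, 64))) / np.sqrt(32)
x_true = np.zeros(64, dtype=complex)
x_true[[3, 17, 40]] = [1 + 1j, -2, 0.5j]
x_hat = omp(A, A @ x_true, sparsity=3)
```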

    The Telecommunications and Data Acquisition Report

    Get PDF
    This publication, one of a series formerly titled The Deep Space Network Progress Report, documents DSN progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations. In addition, developments in Earth-based radio technology as applied to geodynamics, astrophysics and the radio search for extraterrestrial intelligence are reported

    The 1991 3rd NASA Symposium on VLSI Design

    Get PDF
    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2

    Data Service Outsourcing and Privacy Protection in Mobile Internet

    Get PDF
    Mobile Internet data have the characteristics of large scale, variety of patterns, and complex association. On the one hand, an efficient data processing model is needed to support data services; on the other hand, certain computing resources are needed to provide data security services. Because the resources of mobile terminals are limited, they cannot complete large-scale data computation and storage on their own, yet outsourcing to third parties may pose risks to user privacy. This monograph focuses on the key technologies of data service outsourcing and privacy protection, including existing methods of data analysis and processing, fine-grained data access control through effective user privacy protection mechanisms, and data sharing in the mobile Internet